## Chemistry and Chemical Reactivity (9th Edition)

a) $K_b=K_w/K_a=4.35\cdot10^{-4}$

b) It would fall between hydrogen carbonate and the hexaaquanickel ion: water is a weaker acid than the ion, and hydroxide is a stronger base than the molecule.

c) $K_a=x\cdot x/(0.015-x)$, where $x=[H_3O^+]=5.87\cdot10^{-7}$, so $pH=6.23$
# Learning Expected value

We have probably all played the game "Throwing Balls into the Basket". It is a simple game: you have to throw a ball into a basket from a certain distance. One day we were playing the game, but it was slightly different from the usual game. In our game, $$N$$ people were trying to throw balls into $$M$$ identical baskets. At each turn we all selected a basket and tried to throw a ball into it. After the game we saw that exactly $$S$$ balls were successful. Now I will be given the values of $$N$$ and $$M$$. For each player, the probability of throwing a ball into any basket successfully is $$P$$. Assume that there are infinitely many balls and that the probability of choosing any particular basket is $$1/M$$. If multiple people choose a common basket and throw their balls, we can assume that their balls will not conflict, and the probability of getting into a basket remains the same. I have to find the expected number of balls that have entered the baskets after $$K$$ turns. My question is: how can I find the expected number? I'm a novice at expected value, so a clear explanation would be very welcome.

• is this homework? if so please tag – chuse Jan 29 '15 at 16:03
• I don't understand something: you didn't know N, M before playing? How many balls does each player throw per turn, only one? If so, and the probabilities do not depend on the basket, and the balls do not interfere, the expected value is just KpN, K times the expected value of one turn. – chuse Jan 29 '15 at 16:08
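A quick way to convince yourself of the $KpN$ answer suggested in the comments is a small Monte Carlo check. This is a hypothetical sketch (the function name and sample values are mine, not part of the problem); note that $M$ never enters the computation, because basket choices don't affect the per-throw success probability:

```python
import random

def expected_successes(N, M, K, P, trials=10000):
    """Estimate the expected number of successful throws by simulation."""
    total = 0
    for _ in range(trials):
        for _turn in range(K):
            for _player in range(N):
                # The basket choice (1/M each) is irrelevant: balls never
                # conflict, so each throw succeeds independently with prob P.
                if random.random() < P:
                    total += 1
    return total / trials

# Linearity of expectation gives E[S] = K * N * P, regardless of M.
print(expected_successes(N=5, M=3, K=4, P=0.3))  # ~6.0 = 4 * 5 * 0.3
```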
# Moments and transition probability of Trinomial Tree

I just switched from using WP-LaTeX to MathJax, since Chrome will soon have built-in support for MathJax and it makes it easier to recover the TeX source for the reader. To test the functionality, I wanted to write down some equations befitting the occasion. I scraped through my notes to see if there was anything interesting and worth posting, and stumbled upon the Trinomial Tree ($\ddagger$). It is not that common, and I haven't seen much usage of Trinomial Trees outside finance (it is one of the methods for options pricing); nonetheless I thought it would be interesting to refresh these formulas. You never know when you might need them again.

A trinomial tree is a generalization of the binomial tree where instead of two we have three branches; let's call them Up ($u$), Middle ($m$) and Down ($d$). The transition at each step can then be defined as:

$$S(t + \Delta t) = S(t) \times u \quad \text{with probability } p_u$$

$$S(t + \Delta t) = S(t) \times 1 \quad \text{with probability } 1 - p_u - p_d$$

$$S(t + \Delta t) = S(t) \times d \quad \text{with probability } p_d$$

For simplicity's sake, let's assume that the transition magnitude is the same on each side, i.e.

$$u = e^{\sigma \sqrt{2 \Delta t}} \, , \quad d = e^{-\sigma \sqrt{2 \Delta t}} \, , \quad m = 1$$

Then the transition probabilities are given by

$$p_u = \left( \frac{e^{r \Delta t / 2} - e^{-\sigma\sqrt{\Delta t / 2}}}{e^{\sigma\sqrt{\Delta t / 2}} - e^{-\sigma\sqrt{\Delta t / 2}}} \right)^2$$

$$p_d = \left( \frac{e^{\sigma\sqrt{\Delta t / 2}} - e^{r \Delta t / 2}}{e^{\sigma\sqrt{\Delta t / 2}} - e^{-\sigma\sqrt{\Delta t / 2}}} \right)^2$$

$$p_m = 1 - p_u - p_d$$

From the above equations we can derive the **moments** as given below:

$$\mathbb{E}[S(t_{i+1})\,|\,S(t_i)] = e^{r \Delta t}S(t_i)$$

$$\mathrm{Var}[S(t_{i+1})\,|\,S(t_i)] = \Delta t\, S(t_i)^2 \sigma^2 + \mathcal{O}(\Delta t^{3/2})$$

$\ddagger$ Phelim Boyle in 1986. An excellent 23-slide background on trinomial trees, including other forms besides Boyle's. Written on January 30, 2013
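As a concrete companion to the formulas above, here is a minimal, illustrative Python pricer for a European call on this tree. It assumes the Boyle-style parametrization as reconstructed above; the function name and sample inputs are my own, not part of the original post:

```python
import math

def trinomial_price(S0, K, r, sigma, T, steps):
    """European call on a Boyle-style trinomial tree (illustrative sketch)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(2 * dt))   # up factor; d = 1/u, m = 1
    a = math.exp(r * dt / 2)
    b = math.exp(sigma * math.sqrt(dt / 2))
    pu = ((a - 1 / b) / (b - 1 / b)) ** 2     # transition probabilities
    pd = ((b - a) / (b - 1 / b)) ** 2
    pm = 1 - pu - pd
    # Terminal payoffs indexed by net up-moves j = -steps .. steps
    values = [max(S0 * u ** j - K, 0.0) for j in range(-steps, steps + 1)]
    disc = math.exp(-r * dt)
    for _ in range(steps):                    # backward induction
        values = [disc * (pd * values[i] + pm * values[i + 1] + pu * values[i + 2])
                  for i in range(len(values) - 2)]
    return values[0]

print(trinomial_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
# ~10.45, close to the Black-Scholes value for these inputs
```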
# Homework 1 Due by 11:59 PM on Wednesday, September 16, 2020 Getting your assignment: You can find template code for your submission at this GitHub Classroom link. All of the code you write should go in hw1.Rmd, and please knit the Markdown file in your completed submission. ## The Rescorla-Wagner Model The Rescorla-Wagner model, developed by Robert Rescorla and Allen Wagner in 1972, was extremely influential at the time of its publication because it was able to explain several puzzling findings in Pavlovian conditioning, especially the phenomenon of blocking. It has since been extended by researchers working in reinforcement learning to account for a number of other interesting phenomena. It is also the basis of the delta rule used for training simple neural networks, as you'll see later in the course. You'll work with the same simplified version of the model that you saw in class. The model describes the change in strength associated with a conditioned stimulus ($$\Delta V$$) with this equation: $\Delta V = \alpha \cdot \left(\lambda - V_{total}\right)$ #### Problem 2: Write rw_delta_v (1 point). You'll use the stub in the R Markdown file in the GitHub repository that looks like this: rw_delta_v <- function(Vtotal, alpha = .1, lambda = 1) { } One thing to notice is that function parameters in R can have defaults specified. If you don't pass in a value for a parameter, it gets the default value inside the function. ## Simple conditioning To get started, you'll simulate a simple conditioning experiment in which someone experiences 10 trials of positive reinforcement in response to a conditioned stimulus. You'll want to produce a plot of the strength of the conditioned stimulus over the course of these 10 trials so you can see the changes that the model predicts. There are lots of ways of setting up this experiment in R, and you're welcome to do it however you like. In case you'd like a hand getting started, here's one strategy: • Make a tibble with 2 columns: trial and V. The trial column will have the values $$0$$ through $$10$$, and the V column will start as all $$0$$s. • Write a for loop that iterates over the numbers $$1$$ through $$10$$—these will index into the rows of your tibble. Set the value of the V column in each row to the result of calling your rw_delta_v function on the value of V in the previous row. • Make a plot with trial on the x-axis and V on the y-axis. You can make one plot for all four of your simulations if you're feeling comfortable with the tidyverse, or 4 separate plots. ## Extinction Now let's see what happens if we take the reinforcer away. Set up a simulation where the participant is exposed to 10 trials of positive reinforcement in response to a conditioned stimulus, and then 10 trials in which they get no reinforcement ($$\lambda = 0$$). Then make a plot of $$V$$ over the course of the experiment. #### Problem 4: Try the same parameter values that you used above in this new experiment. How does the extinction curve depend on $$\alpha$$ and $$\lambda$$? (2 points). This should be a fairly straightforward extension of the code you wrote for the last problem. The critical thing will be to make sure that you are using the right value of the $$\lambda$$ parameter on each trial—remember, trials with no reinforcer have no reward, and thus $$\lambda = 0$$. ## Blocking Now you're ready to test the Rescorla-Wagner model's ability to account for the blocking phenomenon. In this simulation, you'll have two cues–x and y. 
For the first 10 trials, the simulated participant will be exposed to cue x and be positively reinforced. Then, on the subsequent 30 trials, both x and y will be present and the participant will get reinforced. One way to make this simulation work is to replace the V column with two new columns: Vx and Vy. Then include two more columns in your tibble, x_present and y_present, which indicate whether each cue is present on each trial. You then want to make sure that you simulate updating the weight ($$\Delta V$$) for all cues that are present. And make sure that $$V_{total}$$ has the right value on each trial! ## Conditioned inhibition Finally, you're ready to simulate a phenomenon we talked about in class but that you didn't see directly: conditioned inhibition. The setup is similar to blocking–first one cue (x) is presented and reinforced, and then x and another cue y appear together. But this time their combination is not reinforced. As a result, people (and the model) learn a negative value for y. Intuitively, y is the reason that x appearing did not lead to a positive reinforcer.
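The assignment itself is in R, but as a language-neutral illustration of the update rule, here is a minimal Python sketch of acquisition followed by extinction (the parameter values here are arbitrary choices, not prescribed by the assignment):

```python
def rw_delta_v(v_total, alpha=0.1, lam=1.0):
    """Rescorla-Wagner delta rule: change in associative strength."""
    return alpha * (lam - v_total)

# 10 reinforced trials (lambda = 1) followed by 10 extinction trials (lambda = 0)
V = [0.0]
for trial in range(20):
    lam = 1.0 if trial < 10 else 0.0
    V.append(V[-1] + rw_delta_v(V[-1], alpha=0.3, lam=lam))
print([round(v, 3) for v in V])
```

The values rise toward $\lambda$ during the first 10 trials and decay back toward 0 during extinction, which is the qualitative shape the R plots should show.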
# Equation of time

The equation of time is the difference over the course of a year between time as read from a sundial and time as read from a clock, measured in an ideal situation (i.e. in a location at the centre of a time zone, and which does not use daylight saving time). The sundial can be ahead (fast) by as much as 16 min 33 s (around November 3) or fall behind by as much as 14 min 6 s (around February 12). It is caused by irregularity in the path of the Sun across the sky, due to a combination of the obliquity of the Earth's rotation axis and the eccentricity of its orbit. The equation of time is the east or west component of the analemma, a curve representing the angular offset of the Sun from its mean position on the celestial sphere as viewed from Earth. The equation of time was used historically to set clocks. Between the invention of accurate clocks in 1656 and the advent of commercial time distribution services around 1900, the common way to set clocks was by observing the passage of the Sun across the local meridian at noon. The moment the Sun passed overhead, the clock was set to noon, offset by the number of minutes given by the equation of time for that date. The equation of time values for each day of the year, compiled by astronomical observatories, were widely listed in almanacs and ephemerides. Naturally, other planets have an equation of time too. On Mars the difference between sundial time and clock time can be as much as 50 minutes, due to the considerably greater eccentricity of its orbit. ## Apparent time versus mean time The irregular daily movement of the Sun was known to the Babylonians, and Ptolemy devoted a whole chapter of the Almagest to its calculation (Book III, chapter 9). However, he did not consider the effect relevant for most calculations, as the correction was negligible for the slow-moving luminaries; he applied it only for the fastest-moving luminary, the Moon. Until the invention of the pendulum and the development of reliable clocks towards the end of the 17th century, the equation of time as defined by Ptolemy remained a curiosity, of importance only to astronomers. Only when mechanical clocks started to take over timekeeping from sundials, which had served humanity for centuries, did the difference between clock time and solar time become an issue. Apparent solar time (or true or real solar time) is the time indicated by the Sun on a sundial, while mean solar time is the average as indicated by clocks. The first accurate tables for the equation of time were published in 1665 by Christiaan Huygens. Following the practice of earlier astronomers, Huygens increased his values for the equation of time by a constant to make all values positive throughout the year. Another set of tables, published in 1672 by John Flamsteed, the first head of the new Greenwich Observatory, was the first to adopt the modern convention of using positive and negative values for the equation of time. Until 1833, the equation of time was mean minus apparent solar time in the British Nautical Almanac and Astronomical Ephemeris. Earlier, all times in the almanac were in apparent solar time because time aboard ship was determined by observing the Sun. In the unusual case that the mean solar time of an observation was needed, the extra step of adding the equation of time to apparent solar time was required. Since 1834, all times have been in mean solar time because by then the time aboard most ships was determined by marine chronometers. 
In the unusual case that the apparent solar time of an observation was needed, the extra step of adding the equation of time to mean solar time was needed, requiring all differences in the equation of time to have the opposite sign. As the daily movement of the Sun is one revolution per day, that is 360° every 24 hours or 1° every 4 minutes, and the Sun itself appears as a disc of about 0.5° in the sky, simple sundials can be read to a maximum accuracy of about one minute. Since the equation of time has a range of about 30 minutes, the difference between sundial time and clock time cannot be ignored. In addition to the equation of time, one also has to apply corrections due to one's distance from the local time zone meridian and summer time, if any. The tiny increase of the mean solar day itself due to the slowing down of the Earth's rotation, by about 2 ms per day per century, which currently accumulates up to about 1 second every year, is not taken into account in traditional definitions of the equation of time, as it is statistically insignificant at the accuracy level of sundials. ## Eccentricity of the Earth's orbit The Earth revolves around the Sun. As such it appears that the Sun moves in one year around the Earth. If the Earth orbited the Sun with a constant speed in a plane perpendicular to its axis, then the apparent Sun would culminate every day at exactly 12 o'clock, and be a perfect time keeper (except for its slowing rotation). But the orbit of the Earth is an ellipse and its speed varies between 30.287 and 29.291 km/s, according to Kepler's laws of planetary motion, and as such the Sun seems to move faster at perihelion (currently around 3 January) and slower at aphelion a half year later. At these extreme instances this effect increases (respectively, decreases) the real solar day by 7.9 seconds. This accumulates every day. The final result is that the eccentricity of the Earth's orbit contributes a sine wave variation with an amplitude of 7.66 minutes and a period of one year to the equation of time. The zero points are reached at perihelion (at the beginning of January) and aphelion (beginning of July) while the maximum values are at the beginnings of April (negative) and October (positive). ## Obliquity of the ecliptic However, even if the Earth's orbit were circular, the motion of the Sun along the celestial equator would still not be uniform. This is a consequence of the tilt of the Earth's rotation with respect to its orbit, or equivalently, the tilt of the ecliptic (the path of the sun against the celestial sphere) with respect to the celestial equator. The projection of this motion onto the celestial equator, along which "clock time" is measured, is a maximum at the solstices, when the yearly movement of the Sun is parallel to the equator and appears as a change in right ascension, and is a minimum at the equinoxes, when the Sun moves in a sloping direction and appears mainly as a change in declination, leaving less for the component in right ascension, which is the only component that affects the duration of the solar day. As a consequence of that, the daily shift of the shadow cast by the Sun in a sundial, due to obliquity, is smaller close to the equinoxes and greater close to the solstices. At the equinoxes, the Sun is seen slowing down by up to 20.3 seconds every day and at the solstices speeding up by the same amount. 
In the figure on the right, we can see the monthly variation of the apparent slope of the plane of the ecliptic at solar midday as seen from Earth. This variation is due to the apparent precession of the rotating Earth through the year, as seen from the Sun at solar midday. In terms of the equation of time, the inclination of the ecliptic results in the contribution of another sine wave variation, with an amplitude of 9.87 minutes and a period of half a year, to the equation of time. The zero points of this sine wave are reached at the equinoxes and solstices, while the maxima are at the beginning of February and August (negative) and the beginning of May and November (positive). ## Secular effects The two above-mentioned factors have different periods, amplitudes and phases, so their combined contribution is an irregular wave. At epoch 2000 these are the values:

| extremum | value | date |
|----------|-------|------|
| minimum | −14:15 | 11 February |
| zero | 00:00 | 15 April |
| maximum | +03:41 | 14 May |
| zero | 00:00 | 13 June |
| minimum | −06:30 | 26 July |
| zero | 00:00 | 1 September |
| maximum | +16:25 | 3 November |
| zero | 00:00 | 25 December |

E.T. = apparent − mean. Positive means: the Sun runs fast and culminates earlier, or the sundial is ahead of mean time. A slight yearly variation occurs due to the presence of leap years, resetting itself every 4 years. The exact shape of the equation of time curve and the associated analemma slowly change over the centuries due to secular variations in both eccentricity and obliquity. At this moment both are slowly decreasing, but in reality they vary up and down over a timescale of hundreds of thousands of years. When the eccentricity, now 0.0167, reaches 0.047, the eccentricity effect may in some circumstances overshadow the obliquity effect, leaving the equation of time curve with only one maximum and one minimum per year, as is the case on Mars. On shorter timescales (thousands of years) the shifts in the dates of equinox and perihelion will be more important. The former is caused by precession, and shifts the equinox backwards compared to the stars. But it can be ignored in the current discussion, as our Gregorian calendar is constructed in such a way as to keep the vernal equinox date at 21 March (at least at sufficient accuracy for our aim here). The shift of the perihelion is forwards, about 1.7 days every century. For example, in 1246 the perihelion occurred on 22 December, the day of the solstice. At that time the two contributing waves had common zero points, and the resulting equation of time curve was symmetrical. Before that time the February minimum was larger than the November maximum, and the May maximum larger than the July minimum. The secular change is evident when one compares a current graph of the equation of time (see below) with one from about 2000 years ago, for example, one constructed from the data of Ptolemy. ## Practical use If the gnomon (the shadow-casting object) is not an edge but a point (e.g., a hole in a plate), the shadow (or spot of light) will trace out a curve during the course of a day. If the shadow is cast on a plane surface, this curve will (usually) be a conic section, namely a hyperbola, since the circle of the Sun's motion together with the gnomon point define a cone. At the spring and fall equinoxes, the cone degenerates into a plane and the hyperbola into a line. With a different hyperbola for each day, hour marks can be put on each hyperbola which include any necessary corrections. 
Unfortunately, each hyperbola corresponds to two different days, one in each half of the year, and these two days will require different corrections. A convenient compromise is to draw the line for the "mean time" and add a curve showing the exact position of the shadow points at noon during the course of the year. This curve will take the form of a figure eight and is known as an "analemma". By comparing the analemma to the mean noon line, the amount of correction to be applied generally on that day can be determined. ## More details In general, the equation of time is equal to $\alpha - M + \psi$, where $\alpha$ is the Sun's right ascension, $M$ is the mean anomaly, and $\psi$ is the angle from the periapsis to the vernal equinox. Using spherical trigonometry, the right ascension is given by $\cos\alpha = \cos(\nu - \psi)/\cos\delta$, where $\nu$ is the true anomaly and $\delta$ is the Sun's declination. The declination in turn is given by $\sin\delta = \sin(\nu - \psi)\sin\varepsilon$, where $\varepsilon$ is the obliquity. In practice, it may be easier and faster to use an approximation to the curve rather than the exact formula. For Earth, the equation of time resembles the sum of two offset sine curves, with periods of one year and six months respectively. It can be approximated by

$$E = 9.87\sin(2B) - 7.53\cos(B) - 1.5\sin(B)$$

where $E$ is in minutes and $B = 360^\circ(N - 81)/364$ if sin and cos take arguments in degrees, or $B = 2\pi(N - 81)/364$ if they take arguments in radians. Here, $N$ is the so-called day number; i.e., $N=1$ for January 1, $N=2$ for January 2, and so on. The following is a graph of the current equation of time. Notice that the appearance of this graph can be directly deduced from the time evolution of the projection onto the celestial equator of the Earth's analemma loop trajectory. From one year to the next, the equation of time can vary by as much as 20 seconds, mainly due to leap years.
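As an illustration (not part of the original article), the approximation translates directly into a few lines of Python; the day-number convention follows the text above:

```python
import math

def equation_of_time(N):
    """Approximate equation of time in minutes for day number N (N=1 on Jan 1)."""
    B = 2 * math.pi * (N - 81) / 364.0
    return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

# Early November: the sundial runs ahead of the clock by roughly a quarter hour.
print(equation_of_time(307))  # ~ +16 minutes around 3 November
```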
# Impact of heavy-flavour production cross sections measured by the LHCb experiment on parton distribution functions at low x

Abstract: The impact of recent measurements of heavy-flavour production in deep inelastic $ep$ scattering and in $pp$ collisions on parton distribution functions is studied in a QCD analysis in the fixed-flavour number scheme at next-to-leading order. Differential cross sections of charm- and beauty-hadron production measured by LHCb are used together with inclusive and heavy-flavour production cross sections in deep inelastic scattering at HERA. The heavy-flavour data of the LHCb experiment impose additional constraints on the gluon and the sea-quark distributions at low partonic fractions $x$ of the proton momentum, down to $x \sim 5 \times 10^{-6}$. This kinematic range is currently not covered by other experimental data in perturbative QCD fits.

### Citation

O. Zenaiev, A. Geiser, K. Lipka, J. Blümlein, A. Cooper-Sarkar, et al. Impact of heavy-flavour production cross sections measured by the LHCb experiment on parton distribution functions at low x. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2015, 75, pp. 396. DOI: 10.1140/epjc/s10052-015-3618-z. HAL: in2p3-01132372.
# Find multiple negations in English text

I'm trying to find double or triple (or more) negations in sentences/paragraphs in some 2 MiB of Markdown documents. The documents in question can be found here: https://svn.apache.org/repos/asf/trafficserver/site/trunk/content/docs/trunk/ They are the documentation of a project I work with. The goal is to make it more accessible. Since the CMS that turns this Markdown (on commit) into HTML is written in Perl, it might be good if the solution was in Perl too, so it could be integrated into the build process.

- Not an easy one, unless you can parse English and can define exactly what a {double, triple} negation would look like in terms of parse trees or regexes. This comment would be more readable if there weren't these unnecessary negations, even though none of them are syntactically obvious (not easy → difficult, not optimistic → pessimistic, …). The Lingua::EN namespace on CPAN may have some modules that can be of help, but I'm not that optimistic. This one-liner may help with manual review: perl -nE"BEGIN{$/=''}print if/\bnot\b|n't\b/i" – amon Nov 17 '12 at 14:07
- ain't no easy way to do this ;) – ikegami Nov 17 '12 at 14:29
- To further complicate things, while "not optimistic → pessimistic" is probably always true, "I don't dislike" → "I like" is not always true. – ikegami Nov 17 '12 at 14:34
- Since natural language processing is much more complex than regular expressions, I would probably think in the direction of detecting the typical situations you are seeing in the texts you need to process. So, avoid generalizing the task too much: it is still almost impossible to solve in the general case nowadays. Probably, you could start with a simpler part of the task: split the text into sentences, since you want to see double|triple negations in one sentence, right? It's not always splitting by periods, especially in technical texts. And this is a mandatory part of the final solution. – Alex Nov 18 '12 at 6:50
- Hmm... it looks like MS Word can detect such situations to some extent: office.microsoft.com/en-us/word-help/… – Alex Nov 18 '12 at 10:24
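Building on amon's one-liner, here is a hedged Python sketch of the kind of reviewer aid the asker could port to Perl. It only catches explicit negation tokens (not lexical negations like "impossible", which, as the comments note, need real NLP), the sentence splitter is deliberately naive, and the function name and token list are my own choices:

```python
import re

# Explicit negation tokens only; lexical negations (e.g. "impossible") are missed.
NEGATIONS = re.compile(
    r"\b(?:not|no|never|none|nothing|nobody|nowhere|neither|nor)\b|n't\b", re.I)

def flag_multiple_negations(text, threshold=2):
    """Naive sentence split, then count negation tokens per sentence."""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        hits = NEGATIONS.findall(sentence)
        if len(hits) >= threshold:
            yield len(hits), sentence.strip()

sample = "It is not impossible. This can't never fail, right? All good here."
for count, sentence in flag_multiple_negations(sample):
    print(count, "->", sentence)  # flags the double negation in sentence two
```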
## I'm thinking of selling my scripts. Which one are you most excited for?

I'm thinking of selling my scripts in my store later this year; I just need to update them by fixing errors, improving the design, making them work with PHP 7, and adding new features. Which script are you most interested in seeing launched?

- Cartcake (simple shopping cart)
- Casey (frontend for the Stacey 2.3 CMS)
- Clancake (CMS script for game clans)
- Hostclaw (billing script for WHM)
- Metmi (imports MySQL/MariaDB .sql databases over 100 MB into an empty database)

Each script will cost $30.

## Why would my Internet speed increase when I'm connected through a UPS?

In the last few places I have lived over the years, I have noticed a strange phenomenon: when the coaxial cable for my Internet connection runs through the coaxial ports on my UPS and then to my modem, my speed-test results jump noticeably. My current plan is 60 Mbps down, but after going through the UPS I'm pulling down 70 Mbps. I've seen even bigger jumps when I was on a faster Internet plan at a former residence. I've wondered whether it's just an incorrect reading, but I would hope the popular speed-test websites have already accounted for that. Is it possible that my ISP gives me more bandwidth because it perceives congestion due to the UPS pass-through? I'm not complaining; I've just always been curious.

## If I'm playing a game where one of my characters died and then that character is revived, how would that be handled? What do I do?

## html – PHP 7: how can I see which error an input file is giving me?

I'm uploading an image from mobile to this file, upload_image.php, but it tells me that I have an error: `$fileError = $_FILES['imagen']['error'];` When I var_dump($fileError) I get the following:

array (size=5) 'name' => string '1675157615761576171651757165.jpg' 'type' => string '' 'tmp_name' => string '' 'error' => int 1 'size' => int 0

How can I find out which error it is giving me? It just tells me that I have an error, but which one?

## Why did Republican Senator Mitt Romney say he was 'sickened' by Trump's conduct revealed in the Mueller report?

Trump is not to blame for the Russian conspiracy, so someone else is. Romney, naturally, since he is happy to accumulate wealth: he not only boasts of how he takes from us but also buys and lives in expensive houses and laughs, while we all suffer and struggle trying to squeeze out a few thousand for good horses. I mean, what does a good stud cost? 20 or 30 thousand? What do Romney's houses and boats and other properties cost us all? Millions upon millions, without a doubt. Those Russians got that money from somewhere... and Romney surely has a lot, if anyone does.

## I'm looking for a free cloud service other than Dropbox where I can download files on Linux using wget

I am looking for a free cloud service, as an alternative to Dropbox, where I can download a shared file using wget on Linux. 
I tried to do the same with other free cloud services like Google Drive, Mega, Box, and pCloud, but unfortunately none of them seem to allow downloading a file via wget the way Dropbox does. Googling does not help at all, so I think I need the help of someone who has done this successfully. Unfortunately, the file I need to share is often larger than the 2 GB quota that Dropbox offers. Does anyone know of an alternative? Thank you very much for your help! Best regards, Fabio

## I'm an introvert. Is that a good thing?

Yes. Being introverted means that you do not rely on other people for validation or happiness. You can recharge while you are alone and can engage in your hobbies. Introverts are more likely to be good at things and to develop numerous hobbies because of the quality of the time they spend learning new things. You do not chase attention or likes on social networks. You probably do not pick up the phone every five seconds waiting for a text message. Introverts are patient people with incredible attention spans. They can do the work. You are hardworking, independent, and a great listener, as you mentioned. You are there for people in their time of need, and you do not surround yourself with superficial acquaintances simply to have them around; you surround yourself with true friends who appreciate you for who you are. Introverts are likely to be smart and successful because of their hard work and self-discipline. You pay attention to things and are a good problem solver. Do not listen to what other people have to say: being an introvert rocks, and it's not something you can change either. It's a personality type shared by many, even though extroverts are more common; I am not aware of the exact proportion. But introverts are not invisible. We exist! And we can be sociable, we can enjoy human company, and we're anything but boring! We just need to be close to those we know well and trust. And we do not like small talk. We are very interesting people with a wide range of knowledge.

## Magento 2: I'm getting placeholder images in the cron job. How do I get the correct images?

I'm getting placeholder images in the cron job, like this: /pub/static/version1554967977/crontab/_view/en_US/Magento_Catalog/images/product/placeholder/.jpg. This is the wrong image. How do I get the real product image in the cron job?

## I'm looking for a lawyer. Divorce. Division of property.

Hello everyone. I am separating from my wife and want to claim the legal share of the property. Please recommend a good lawyer to help me with this case.
#### Volume 24, issue 3 (2020)

# Isotopies of surfaces in $4$–manifolds via banded unlink diagrams

### Mark C Hughes, Seungwon Kim and Maggie Miller

Geometry & Topology 24 (2020) 1519–1569

##### Abstract

We study surfaces embedded in $4$–manifolds. We give a complete set of moves relating banded unlink diagrams of isotopic surfaces in an arbitrary $4$–manifold. This extends work of Swenton and Kearton–Kurlin in $S^4$. As an application, we show that bridge trisections of isotopic surfaces in a trisected $4$–manifold are related by a sequence of perturbations and deperturbations, affirmatively proving a conjecture of Meier and Zupan. We also exhibit several isotopies of unit surfaces in $\mathbb{CP}^2$ (i.e. spheres in the generating homology class), proving that many explicit unit surfaces are isotopic to the standard $\mathbb{CP}^1$. This strengthens some previously known results about the Gluck twist in $S^4$, related to Kirby problem 4.23.

##### Keywords

$4$–manifold, knot, surface, diagram

##### Mathematical Subject Classification

Primary: 57K45
Secondary: 57K40

##### Publication

Received: 20 February 2019
Revised: 29 March 2020
Accepted: 23 May 2020
Published: 30 September 2020

Proposed: András I Stipsicz
Seconded: Mladen Bestvina, Ciprian Manolescu

##### Authors

Mark C Hughes, Department of Mathematics, Brigham Young University, Provo, UT, United States (https://math.byu.edu/~hughes/)
Seungwon Kim, Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang, South Korea (https://sites.google.com/view/seungwonkim)
Maggie Miller, Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, United States (https://web.math.princeton.edu/~maggiem)
# 2017 AMC 12B Problems/Problem 8

## Problem 8

The ratio of the short side of a certain rectangle to the long side is equal to the ratio of the long side to the diagonal. What is the square of the ratio of the short side to the long side of this rectangle?

$\textbf{(A)}\ \frac{\sqrt{3}-1}{2}\qquad\textbf{(B)}\ \frac{1}{2}\qquad\textbf{(C)}\ \frac{\sqrt{5}-1}{2} \qquad\textbf{(D)}\ \frac{\sqrt{2}}{2} \qquad\textbf{(E)}\ \frac{\sqrt{6}-1}{2}$

## Solution 1: Cross Multiplying

Let $a$ be the short side of the rectangle, and $b$ be the long side of the rectangle. The diagonal, therefore, is $\sqrt{a^2 + b^2}$. We can get the equation $\frac{a}{b} = \frac{b}{\sqrt{a^2 + b^2}}$. Cross-multiplying, we get $a\sqrt{a^2 + b^2} = b^2$. Squaring both sides of the equation, we get $a^2 (a^2 + b^2) = b^4$, which simplifies to $a^4 + a^2b^2 - b^4 = 0$. Solving this as a quadratic in $a^2$ using the quadratic formula, we get $a^2 = \frac{-b^2 \pm \sqrt{5b^4}}{2}$, which gives us $\frac{a^2}{b^2} = \frac{-1 \pm \sqrt{5}}{2}$. Since the square of the ratio must be positive, the solution is $\boxed{\textbf{(C)} \frac{\sqrt{5}-1}{2}}$.
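As a quick sanity check (not part of the official solution): dividing $a^4 + a^2b^2 - b^4 = 0$ by $b^4$ gives $r^2 + r - 1 = 0$ for $r = a^2/b^2$, which SymPy confirms for choice (C):

```python
from sympy import sqrt, simplify

r = (sqrt(5) - 1) / 2          # candidate value of (a/b)^2 from choice (C)
print(simplify(r**2 + r - 1))  # prints 0, so (C) satisfies the quartic
```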
# Questions tagged [dual-simplex]

For questions about the dual simplex method.

### Warm-starting the simplex algorithm: how much can problems differ from each other?

I'm working on an implementation of the simplex algorithm. I want to solve problems in real time every 30 minutes. They could be interpreted as a classic transportation problem. I couldn't really say ...

### Specific use case of the two-phase simplex algorithm

The problem below aims to find the most efficient way to transport the fuel: A company Er must transport a type of fuel from its two refineries Ra and Rb to its two points of sale PV1 and PV2. The ...

### Settings for a faster solution of a MILP (Gurobi, Python)

Using Gurobi with Python. When solving a MILP, I notice that the incumbent is very early (25 sec) at the optimal point, but the best bound is so slow to fall (maximization problem) that it takes ages (...

### Derivations for two formulae for obtaining optimal dual variable values from the optimal primal tableau

We're being taught Industrial Engineering and Operations Research for the first time this semester. Referring to the book by Hamdy A. Taha, I noticed the mention of two formulae for swiftly obtaining ...

### How to access neighboring extreme points to an optimal extreme point of an LP?

Suppose that I have access to an optimal non-degenerate extreme point of an LP. I need to find some $\epsilon$-optimal extreme points. That is, a point $x$ where $c'x \le z^{*} + \epsilon$. One way ...

### Construct a direction of recession of the dual that gives growth to the dual function

Consider the primal problem $$\begin{array}{ll} \text{minimize} & c^\top x\\ \text{subject to} & Ax = b\\ & x \geq 0\end{array}$$ where $A \in \mathbb{R}^{m \times n}$ has rank $m$. Suppose ...

### Simplex implementations in professional solvers

Which non-textbook variants (primal/dual, revised) and techniques (e.g. steepest edge) do professional solvers like Xpress, CPLEX, CLP use to get the best out of the simplex algorithm? This ...

### When should I use dual simplex over primal simplex?

In Gurobi the user can change the method parameter in order to force Gurobi to use a particular method for solving MIPs. The user can, amongst others, choose ...
Lesson Objectives

• Demonstrate an understanding of Whole Number Exponents
• Learn how to use the Product Rule for Exponents
• Learn how to use the Power to Power Rule for Exponents
• Learn how to use the Power of a Product Rule for Exponents
• Learn how to use the Power of a Quotient Rule for Exponents

## How to Simplify Expressions using the Rules of Exponents

In our pre-algebra course, we learned how to use whole number exponents larger than 1 to show the repeated multiplication of the same number. When we work with exponents, we have a large number known as a base and a small raised number known as the exponent. The base represents the number being multiplied by itself in the repeated multiplication. A whole number exponent larger than 1 represents the number of factors of the base. Let's look at an example.

Example 1: Write each repeated multiplication using exponential form: 9 • 9 • 9 • 9

9 • 9 • 9 • 9 = $9^4$

We have 4 factors of 9. The 9 is the number being multiplied by itself in the repeated multiplication. This will be used as our larger number or our base. Since we have 4 factors of 9, our exponent or smaller number will be 4. We should also know how to reverse the process and write a number in exponent form as repeated multiplication. Let's look at an example.

Example 2: Write each as repeated multiplication: $18^3$

$18^3$ = 18 • 18 • 18

We have 18 raised to the 3rd power. Our base, 18, is the number being multiplied by itself. Our exponent, 3, is the number of factors of 18.

### Product Rule for Exponents

The product rule for exponents allows us to quickly simplify products of numbers or expressions in exponent form. The product rule for exponents states that when we multiply two numbers or expressions in exponent form with the same base, we keep the base the same and add exponents. Let's look at a few examples.

Example 3: Simplify each: $4^2 \cdot 4^3$

Notice how the base (4) is the same in each case. We must have the same base to use this rule. Our product rule for exponents tells us to keep the base the same and add exponents: $4^2 \cdot 4^3 = 4^{2 + 3} = 4^5$

How can we prove this is true? It's actually pretty simple when we show the multiplication process: $4^2 \cdot 4^3$ = 4 • 4 • 4 • 4 • 4 = $4^5$

Since $4^2$ has two factors of 4 and $4^3$ has three factors of 4, when they are multiplied together we end up with (2 + 3 = 5) or five factors of 4. Therefore, we can keep our base the same and just add the exponents. Let's look at another example.

Example 4: Simplify each: $x^9 \cdot x^{12}$

Keep the base (x) the same and add exponents (9 + 12 = 21): $x^9 \cdot x^{12} = x^{9 + 12} = x^{21}$

Example 5: Simplify each: $9^4 \cdot 9^{11}$

Keep the base (9) the same and add exponents (4 + 11 = 15): $9^4 \cdot 9^{11} = 9^{4 + 11} = 9^{15}$

### Power to Power Rule for Exponents

We will also come across power rules for exponents. The power to power rule is used when we have a power raised to another power. As an example, suppose we saw the following scenario: $(3^2)^3$

This tells us that $3^2$ (three squared) is raised to the 3rd power. In other words, we have: $(3^2)^3 = 3^2 \cdot 3^2 \cdot 3^2$

We know from our product rule for exponents that we can keep the base (3) the same and add the exponents (2 + 2 + 2 = 6): $3^2 \cdot 3^2 \cdot 3^2 = 3^6$

How could we get this result more quickly? We could simply have kept the base the same and multiplied our exponents: $(3^2)^3 = 3^{2 \cdot 3} = 3^6$

This is exactly what the power to power rule tells us to do. When we have a power raised to another power, the power to power rule states that we keep our base the same and we multiply the exponents. Let's look at some examples. 
Example 6: Simplify each: $(14^4)^{11}$

Keep the base (14) the same and multiply the exponents (4 • 11 = 44): $(14^4)^{11} = 14^{44}$

Example 7: Simplify each: $(x^{23})^5$

Keep the base (x) the same and multiply exponents (23 • 5 = 115): $(x^{23})^5 = x^{115}$

### Power of a Product Rule for Exponents

Next on our list of power rules comes the power of a product rule. This rule states that we can raise a product to a power by raising each factor to the power. Let's look at an example.

Example 8: Simplify each: $(3 \cdot 2)^4$

According to our rule, we can raise each factor (3 and 2) to the power of 4: $(3 \cdot 2)^4 = 3^4 \cdot 2^4$

To check that this is true, let's start by working inside of the parentheses in the original problem: $(3 \cdot 2)^4 = 6^4 = 1296$

Now let's look at the alternative: $3^4 \cdot 2^4 = 81 \cdot 16 = 1296$

It works the same either way. You might think this property is useless based on this example; simplifying inside the parentheses would have been faster here. In many cases, however, we will use this property with variables, when it is not possible to simplify inside of the parentheses. Let's look at another example.

Example 9: Simplify each: $(5xy)^9$

We will raise each factor to the power of 9: $(5xy)^9 = 5^9x^9y^9$

### Power of a Quotient Rule for Exponents

Similar to our last rule, we can raise a quotient to a power by raising the numerator (dividend) and denominator (divisor) to the power. Let's take a look at an example.

Example 10: Simplify each: $$\left(\frac{x}{3}\right)^7$$

We will raise the numerator (x) and the denominator (3) to the 7th power: $$\left(\frac{x}{3}\right)^7 = \frac{x^7}{3^7}$$
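All four rules are easy to spot-check numerically. Here is a small Python snippet mirroring the worked examples above (exact rational arithmetic is used for the quotient rule to avoid floating-point noise):

```python
from fractions import Fraction

# Product rule: same base, add exponents
assert 4**2 * 4**3 == 4**(2 + 3) == 4**5
# Power to power rule: keep the base, multiply exponents
assert (3**2)**3 == 3**(2 * 3) == 3**6
# Power of a product rule: raise each factor to the power
assert (3 * 2)**4 == 3**4 * 2**4 == 1296
# Power of a quotient rule: raise numerator and denominator
assert Fraction(5, 3)**7 == Fraction(5**7, 3**7)
print("all four exponent rules check out")
```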
# Find it. Algebra Level 2

What is the value of the given expression $\sqrt[3]{26-15\sqrt{3}} + \sqrt[3]{26+15\sqrt{3}} ?$
# Series Convergence

Please see attached file for full problem description.

1) Consider the series where . Show that and for .
2) Use the result of the previous problem to find .
3) The series converges. Find its sum.
4) Determine whether the series converges or diverges. Fully justify your answer.
5) Determine whether the series converges or diverges. Fully justify your answer.
6) Determine whether the series converges or diverges. Fully justify your answer.
7) The series converges. Find an upper bound for the sum. Fully justify your answer.
8) Determine all values of x for which the power series converges.
9) Show that if . Using this, find a power series for centered at 0.
10) If then . Find the Taylor Series for centered at .

https://brainmass.com/math/real-analysis/series-convergence-divergence-157954

#### Solution Preview

• Please provide VERY CLEAR AND DETAILED SOLUTIONS.

1) Consider the series where . Show that and for .

We have . Plugging in the value n=1, we get the first term of the series, i.e., . We have and . The k-th term is nothing but the difference between the sum up to k terms and the sum up to (k−1) terms. So . Hence we can write for .

2) Use the result of the previous problem to find .

Here . We can write as . We know the sum of the series as from the previous result. Hence =

3) The series converges. Find its ...
## Chemistry and Chemical Reactivity (9th Edition)

This problem requires observing the stoichiometric ratio between the ions and the solute.

a) $0.12\ M\ Ba^{2+}, 0.24\ M\ Cl^{-}$
b) $0.0125\ M\ Cu^{2+}, 0.0125\ M\ SO_4^{2-}$
c) $1.00\ M\ K^+, 0.500\ M\ Cr_2O_7^{2-}$
## Elementary Geometry for College Students (6th Edition)

Assume $x = 5$. Then, through algebra, we find $x^{2} = 25$. This contradicts the given statement that $x^{2} \ne 25$. Thus, the assumption $x = 5$ is false, and it follows that $x \ne 5$.
# The Relative Specific Type Ia Supernovae Rate From Three Years of ASAS-SN

Brown, JS, Stanek, KZ, Holoien, TW-S, Kochanek, CS, Shappee, BJ, Prieto, JL, Dong, S, Chen, P, Thompson, TA, Beacom, JF, Stritzinger, MD, Bersier, D and Brimacombe, J (2019) The Relative Specific Type Ia Supernovae Rate From Three Years of ASAS-SN. Monthly Notices of the Royal Astronomical Society, 484 (3). pp. 3785-3796. ISSN 0035-8711

Full text not available from this repository; please see the publisher or the open access link below.

Open Access URL: https://doi.org/10.1093/mnras/stz258 (Published version)

## Abstract

We analyze the 476 SN Ia host galaxies from the All-Sky Automated Survey for Supernovae (ASAS-SN) Bright Supernova Catalogs to determine the observed relative Type Ia supernova (SN) rates as a function of luminosity and host galaxy properties. We find that the luminosity distribution of the SNe Ia in our sample is reasonably well described by a Schechter function with a faint-end slope $\alpha \approx 1.5$ and a knee $M_{\star} \approx -18.0$. Our specific SN Ia rates are consistent with previous results but extend to far lower host galaxy masses. We find an overall rate that scales as $(M_{\star}/10^{10} M_{\odot})^{\alpha}$ with $\alpha \approx -0.5$. This shows that the specific SN Ia rate continues rising towards lower masses even in galaxies as small as $\log(M_{\star} / M_{\odot}) \lesssim 7.0$, where it is enhanced by a factor of $\sim10-20$ relative to host galaxies with stellar masses $\sim10^{10}M_{\odot}$. We find no strong dependence of the specific SN Ia rate on the star formation activity of the host galaxies, but additional observations are required to improve the constraints on the star formation rates.

Item Type: Article
Additional Information: This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society ©: 2019 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved.
Subjects: 0201 Astronomical and Space Sciences; Q Science > QB Astronomy; Q Science > QC Physics
Divisions: Astrophysics Research Institute
Publisher: Oxford University Press
Date Deposited: 27 Mar 2019 10:48
Last Modified: 27 Mar 2019 10:51
DOI: 10.1093/mnras/stz258
URI: https://researchonline.ljmu.ac.uk/id/eprint/10433
# Thread: Curl of Vector Field

1. ## Curl of Vector Field

Find the curl of the vector field. F(x, y, z) = xyz i - x^2 y k

∇×F = 0

No curl in this vector field. The book's answer is ∇×F = -x^2 i + 3xy j - xz k. Who is right and why?

2. ## Re: Curl of Vector Field

Originally Posted by USNAVY: Find the curl of the vector field. F(x, y, z) = xyz i - x^2 y k. ∇×F = 0. No curl in this vector field. The book's answer is ∇×F = -x^2 i + 3xy j - xz k. Who is right and why?

$\nabla\times F=\left| {\begin{array}{*{20}{c}} \mathbf{i}&\mathbf{j}&\mathbf{k} \\ {\partial_x}&{\partial_y}&{\partial_z} \\ {xyz}&0&{ - {x^2}y} \end{array}} \right| = - {x^2}\,\mathbf{i} + 3xy\,\mathbf{j} - xz\,\mathbf{k}$

It appears that the textbook is correct.

3. ## Re: Curl of Vector Field

It is very easy to make simple sign errors when calculating the determinant for curl F. I believe this to be my problem.
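For anyone who wants an independent check, SymPy's vector module computes the same result (a quick sketch, not part of the original thread):

```python
from sympy.vector import CoordSys3D, curl

C = CoordSys3D('C')
F = C.x*C.y*C.z*C.i - C.x**2*C.y*C.k
print(curl(F))
# (-C.x**2)*C.i + 3*C.x*C.y*C.j + (-C.x*C.z)*C.k, matching the book's answer
```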
Zassenhaus Lemma

Lemma

Let $G$ be a group. Let $H_1$ and $H_2$ be subgroups of $G$. Let:

$N_1 \lhd H_1$
$N_2 \lhd H_2$

where $\lhd$ denotes the relation of being a normal subgroup. Then:

$\dfrac {N_1 \left({H_1 \cap H_2}\right)} {N_1 \left({H_1 \cap N_2}\right)} \cong \dfrac {H_1 \cap H_2} {\left({H_1 \cap N_2}\right) \left({N_1 \cap H_2}\right)} \cong \dfrac {N_2 \left({H_1 \cap H_2}\right)} {N_2 \left({N_1 \cap H_2}\right)}$

where:

$N_1 \left({H_1 \cap H_2}\right)$ etc. denotes subset product
$\cong$ denotes group isomorphism.

Proof

Because of symmetry, only the first of the isomorphisms needs to be proved.

Proof of Normality

In order for the expressions to make sense, the expressions on the bottom of the fractions need to be normal subgroups. That is, we need to show that:

$(1): \quad N_1 \left({H_1 \cap N_2}\right) \lhd N_1 \left({H_1 \cap H_2}\right)$
$(2): \quad \left({H_1 \cap N_2}\right) \left({N_1 \cap H_2}\right) \lhd H_1 \cap H_2$

Proof of Isomorphism

Let:

$H = H_1 \cap H_2$
$N = N_1 \left({H_1 \cap N_2}\right)$

Then:

$\dfrac {N_1 \left({H_1 \cap N_2}\right) \left({H_1 \cap H_2}\right)} {N_1 \left({H_1 \cap N_2}\right)} = \dfrac {N H} {N} \cong \dfrac H {H \cap N}$ (Third Isomorphism Theorem) $= \dfrac {H_1 \cap H_2} {H_1 \cap H_2 \cap N_1 \left({H_1 \cap N_2}\right)}$

If $X, Y, Z$ are subgroups of a group $\left({G, \circ}\right)$, and $Y \subseteq X$, then:

$X \cap \left({Y \circ Z}\right) = Y \circ \left({X \cap Z}\right)$

Thus:

$N_1 \left({H_1 \cap N_2}\right) \left({H_1 \cap H_2}\right) = N_1 \left({H_1 \cap H_2}\right)$ by taking $X = H_1, Y = H_1 \cap N_2, Z = H_2$
$H_1 \cap H_2 \cap N_1 \left({H_1 \cap N_2}\right) = \left({H_1 \cap N_2}\right) \left({N_1 \cap H_2}\right)$ by taking $X = H_1 \cap H_2, Y = H_1 \cap N_2, Z = N_1$

Hence the result. $\blacksquare$

Also known as

This lemma is also known as: the butterfly lemma, so named because the Hasse diagram of the various subgroups involved can be drawn to resemble a butterfly; or the fourth isomorphism theorem, which name is not recommended because of the lack of any consistent naming convention for the Isomorphism Theorems in the literature.

Source of Name

This entry was named for Hans Julius Zassenhaus.
# How do you write the inverse variation equation of the following: y varies inversely with x, and y = -12 when x = 6?

Jul 3, 2016

The inverse variation equation is $y = - \frac{72}{x}$

#### Explanation:

When $y$ varies inversely with $x$, we have $y \propto \frac{1}{x}$, i.e. $y = \frac{1}{x} \times k$, where $k$ is a constant. In other words, multiplying both sides by $x$, we get $x \times y = k$. Here $y = - 12$ when $x = 6$, i.e. $6 \times \left(- 12\right) = k$ or $k = - 72$. Hence the inverse variation equation is $y = - \frac{72}{x}$
## Intermediate Algebra for College Students (7th Edition) $1$ Recall the acronym PEDMAS, which tells us the order of operations: P: Parenthesis and other grouping symbols (including fraction bars) E: Exponents D/M: Division and Multiplication (from left to right) A/S: Addition and Subtraction (from left to right) Use the rule above to obtain: $=\dfrac{16 + 9}{4-(-21)} \\=\dfrac{25}{4+21} \\=\dfrac{25}{25} \\=1$
# Homework Help: Easy thermodynamics problem 1. Jun 16, 2005 ### Physics_wiz This problem is supposedly easy but I just can't figure out how to do it. A heat pump is used to heat a house and maintain it at 24°C. On a winter day when the outdoor temperature is -5°C, the house is estimated to lose heat at a rate of 80,000 kJ/h. Determine the power required to operate this heat pump Only two equations I know are: COP = Qh/(Qh-Ql) = Qh/W where W is the net work done (what I'm looking for). In the book, they have an example just like it except the COP is given so all I gotta do is plug numbers in the equation and get the answer but I don't know how to do it when the temperatures are given instead of the COP. 2. Jun 16, 2005 ### Andrew Mason Does the question also say that the heat pump operates on an ideal Carnot cycle? If so: $$\Delta S = Q_H/T_H - Q_C/T_C = 0$$ so: $$Q_H/T_H = Q_C/T_C \rightarrow Q_H/Q_C = T_H/T_C$$ So $$COP = \frac{T_H}{T_H - T_C}$$ AM 3. Jun 16, 2005 ### Physics_wiz No, if it were a carnot I'd know how to do it but it doesn't say anything about that in the problem so I don't think I can just assume carnot cycle. 4. Jun 16, 2005 ### Physics_wiz I think I'm wasting too much time on this problem and I have a test tomorrow. Anyone know if there's a way to solve this problem if it were not a carnot cycle? Thanks. 5. Jun 16, 2005 ### Andrew Mason Not possible. It would depend on the details of the thermodynamic cycle used. Without knowing that, you cannot answer the question. I would suggest that you assume a Carnot cycle. AM 6. Jun 16, 2005 ### Physics_wiz Alright. So, assuming it's a carnot cycle, this is how it should be solved: 80,000 KJ/h = 2.22 KW COP = Th/(Th-Tc) = 297 k/(297 k-268 k) = 10.24 COP = Qh/W ---> W = Qh/COP W = 2.22 KW/10.24 = 2.17 KW which should be equal to the power needed to operate the pump. Did I do anything wrong? 7. Jun 16, 2005 ### Andrew Mason Just that Qh = 22.2 kW but your answer is right. AM 8. Jun 16, 2005 ### quark Yes, that is right except as Andrew explained. 9. Jun 16, 2005 ### Physics_wiz Aight thanks. You guys said that Qh isn't equal to 22.2 KW because that would be Qh/time. The units for Qh should be in KJ or J's I guess right? 10. Jun 16, 2005 ### quark No, Qh is indeed 22.2kW but you said 2.22kW 11. Jun 16, 2005 ### Physics_wiz oh...I should probably go to bed now because my internal energy is going up, info in my head is starting to vaporize, my brain is becoming superheated, volume not changing since my skull is rigid, pressure increasing, enthalpy going up, aaaand I need to get some rest to cool down and condense the info again so it's usable by the time of the test. :zzz:
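For reference, here is the thread's Carnot-cycle solution condensed into a few lines of Python (a sketch assuming, as discussed above, an ideal cycle):

```python
# Carnot heat pump: power needed to deliver 80,000 kJ/h at the given temperatures
Th, Tc = 24 + 273.15, -5 + 273.15   # house and outdoor temperatures in kelvins
Qh = 80000 / 3600.0                 # heat load, kJ/h -> kW (about 22.2 kW)
cop = Th / (Th - Tc)                # ~10.25 for an ideal (Carnot) cycle
print(Qh / cop)                     # ~2.17 kW of input power, as in the thread
```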
# Chemical equilibrium

## Exercise 12

Write the expression of the constant $K$ for each equilibrium:

$$2H_2O_2(g) \rightleftharpoons 2H_2O(l) + O_2(g)$$

$$ZnO(s) + CO(g) \rightleftharpoons Zn(s) + CO_2(g)$$

$$AgCl(s) + Br^-(aq) \rightleftharpoons AgBr(s) + Cl^-(aq)$$

$$CuSO_4\cdot5H_2O(s) \rightleftharpoons CuSO_4(s) + 5H_2O(g)$$
# Compute the minimum $a(n)>a(n-1)$ such that $a(1)+a(2)+\dots+a(n)$ is prime (OEIS A051935)

## Background

Consider the following sequence (A051935 in OEIS):

• Start with the term $2$.
• Find the lowest integer $n$ greater than $2$ such that $2+n$ is prime.
• Find the lowest integer $n'$ greater than $n$ such that $2 + n + n'$ is prime, etc.

A more formal definition:

$$a_n=\begin{cases}2 & \text{if }n=0 \\ \min\{x\in\Bbb{N}\mid x>a_{n-1} \text{ and }\left(x+\sum_{i=0}^{n-1}a_i\right) \text{ is prime}\} & \text{otherwise}\end{cases}$$

The first few terms of the sequence are (please refer to these as test cases):

2, 3, 6, 8, 10, 12, 18, 20, 22, 26, 30, 34, 36, 42, 44, 46, 50, 52, 60, 66, 72, 74, ...

Your task is to generate this sequence in any of the following ways:

• Output its terms indefinitely.
• Given $n$, output $a_{n}$ ($n^{\text{th}}$ term, $0$ or $1$ indexed).
• Given $n$, output $\{a_1, a_2, \dots, a_n\}$ (first $n$ terms).

You can compete in any programming language and can take input and provide output through any standard method, while taking note that these loopholes are forbidden by default. This is code-golf, so the shortest submission (in bytes) for every language wins.

• Tips to avoid while writing challenges: The prime numbers. You could've used something else other than primality. – Okx Aug 17 '18 at 17:08
• @Okx I had a couple of reasons in mind when I chose primality this time: 1) There are some clever algorithms that are specific to this very sequence, like the one Dennis implemented 2) There is already an OEIS entry for this – Mr. Xcoder Aug 17 '18 at 17:13

# Brachylog, 13 bytes

~l.<₁a₀ᵇ+ᵐṗᵐ∧

Try it online!

Output is the list of first n terms of the sequence.

?~l.<₁a₀ᵇ+ᵐṗᵐ∧    Full code (? at beginning is implicit)
?~l.              Output is a list whose length is the input
    <₁            Output is an increasing list
      a₀ᵇ+ᵐ       And the cumulative sum of the output
           ṗᵐ     Consists only of prime numbers
             ∧    No further constraints on output

Explanation for a₀ᵇ+ᵐ:

a₀ᵇ    Get the list of all prefixes of the list
       (returned in increasing order of length)
       For e.g. [2, 3, 6, 8] -> [[2], [2, 3], [2, 3, 6], [2, 3, 6, 8]]
+ᵐ     Sum each inner list -> [2, 5, 11, 19]

# Python 2, 63 62 bytes

n=f=a=b=1
while 1:
 f*=n;n+=1;a+=1
 if~f%n<a-b:print a;b=a;a=0

Try it online!

# Jelly, 11 9 bytes

0Ḥ_ÆnɗСI

This is a full program that takes n as an argument and prints the first n terms of the sequence.

Try it online!

### How it works

0Ḥ_ÆnɗСI  Main link. Argument: n

0          Set the return value to 0.
     С    Accumulating iterate. When acting on a dyadic link d and called with
           arguments x and y, the resulting quicklink executes "x, y = d(x, y), x"
           n times, returning all intermediate values of x.
           Initially, x = 0 and y = n.
    ɗ      Drei; combine the three links to the left into a dyadic chain.
Ḥ          Unhalve; double the left argument.
 _         Subtract the right argument.
  Æn       Compute the next prime.
           This computes the partial sums of the sequence a, starting with 0.
        I  Increments; compute the forward differences.

# 05AB1E v2, 10 bytes

2λλOD₁+ÅNα

Try it online!

This only works in the non-legacy version, the Elixir rewrite. Outputs an infinite stream of integers. There are some bugs with the prime test that have been fixed in the latest commits, but are not yet live on TIO. It does work locally, though. Here is a GIF of its execution on my machine, modified to output the first few terms rather than the whole stream.

### How it works

2λ     Defines a recursive infinite sequence with base case $2$. The λ structure
       is among 05AB1E's very cool new features. 
Briefly speaking, it takes a function $a(n)$, setting $a(0)$ to the integer argument given, in this case $2$.

λO     In this portion of code, λ's role is different. Already being inside a
       recursive environment, it instead generates $[a(0),a(1),\dots,a(n-1)]$,
       the list of all previous results. Then, O sums them up.
D₁+    Duplicate the sum for later use and add $a(n-1)$ to the second copy.
ÅN     Generates the lowest prime strictly greater than the above sum.
α      Finally, retrieve the absolute difference between the prime computed above
       and the first copy of the sum computed earlier (the sum of all previous
       iterations).

The stream is then implicitly printed to STDOUT indefinitely.

# Perl 6, 45 bytes

2,{first (*+@_.sum).is-prime,@_[*-1]^..*}...*

Try it online!

Returns a lazy list that generates the sequence with no end.

### Explanation:

This uses the Sequence operator ... which defines the sequence as:

2,                          # The first element is 2
{                           # The next element is:
  first                     # The first value that:
    (*+@_.sum).is-prime,    # When added to the sum is a prime
    @_[*-1]^..*             # And is larger than the previous element
}
...*                        # And continue the sequence indefinitely

# Ruby -rprime, 34 bytes

2.step{|i|$.+=p(i)if($.+i).prime?}

Try it online!

Outputs indefinitely.

# JavaScript (ES6), 63 bytes

Returns the $n^{th}$ term, 1-indexed.

n=>(k=0,s=1,g=d=>s%d?g(d-1):d<2?--n?g(s-=k-(k=s)):s-k:g(s++))()

Try it online!

# Pyth, 12 11 bytes

.f&P-;Z=-;Z

Try it online!

Saved 1 byte thanks to isaacg.

Generates the first n such numbers, using a 1-based index. .f finds the first k integers that satisfy a particular criterion, starting from zero. Here, the criterion is that the previous prime we calculated, ;, plus the current number, Z, is prime (P). If it is, we also update the last calculated prime using the short-circuiting behaviour of the logical and function (&). Unfortunately .f's default variable is Z, which costs a byte in the update. The trick that isaacg figured out was to store the negation of the last prime and test on that minus the current value. This is shorter in Pyth since the primality check is overloaded: on positive numbers it finds the prime factorisation, while on negative numbers it determines if the positive value of the number is prime.

This more or less translates into:

to_find = input()
last_prime = 0
current = 0
results = []
while to_find > 0:
    if is_prime( current + last_prime ):
        results.append( current )
        to_find -= 1
        last_prime += current
    current += 1
print results

• Replace _+ with - and + with - for -1 byte. – isaacg Aug 18 '18 at 1:08
• @isaacg That's quite clever! I'll edit that in. – FryAmTheEggman Aug 18 '18 at 3:51

# MATL, 21 bytes

O2hGq:"t0)yd0)+_Yqh]d

Try it online!

Output is the first n terms of the sequence.

### Explanation:

Constructs a list of primes (with an initial 0), and at the end returns the differences between successive primes in the list.

        % Implicit input, say n
O2h     % Push P = [0, 2] on the stack
Gq:"    % For loop: 1 to n-1
  t0)   %   Take the last element of P
        %   Stack: [[0, 2], [2]] (in first iteration)
  yd0)  %   Take the difference between the last two elements of P
        %   Stack: [[0, 2], [2], [2]]
  +     %   Add them
        %   Stack: [[0, 2], [4]]
  _Yq   %   Get the next prime higher than that sum
        %   Stack: [[0, 2], [5]]
  h     %   Concatenate that to the list P
        %   Stack: [[0, 2, 5]]
]       % End for loop
d       % Get the differences between successive elements of the final list P

# Haskell

(1#1)2 2
(p#n)s k|p`mod`n>0,n-s>k=k:(p#n)n(n-s)|w<-p*n=(w#(n+1))s k

Try it online!

(1#1)2 2 is a function that takes no input and outputs an infinite list. 
The primality test is from this answer and improved by Christian Sievers (-2 bytes). -5 bytes thanks to W W.

2#2
(p#s)n|n<1=p|w<-until(\m->mod(product[1..m-1])m>0)(+1)$s+p+1=(w-s)#w$n-1

Try it online!

• You can do without ^2. That will change the predicate from testing is prime to testing is prime or 4, which doesn't matter in this application. – Christian Sievers Aug 17 '18 at 21:29

# 05AB1E (legacy), 12 bytes

0U[XN+DpiN,U

Try it online!

Explanation

0U     # initialize X as 0
[      # start an infinite loop
XN+    # add X to N (the current iteration number)
Dpi    # if the sum is prime:
N,     #   print N
U      #   and store the sum in X

There are a couple of different 12-byte solutions possible. This particular one could have been 10 bytes if we had a usable variable initialized as 0 (instead of 1 and 2).

# Python 2, 119 bytes

f=lambda n,k=1,m=1:m%k*k>n or-~f(n,k+1,m*k*k)
def g(i):
 r=[2]
 for j in range(i):r+=[f(sum(r)+r[-1])-sum(r)]
 return r

Try it online!

Next Prime function f() taken from this answer. Function g() takes a non-negative integer i and returns a list of all items in the sequence up to that index.

• – Mr. Xcoder Aug 17 '18 at 19:28

# Python 2, 99 98 bytes

def f(n,s=2,v=2):
 k=s-~v
 while any(k%i<1for i in range(2,k)):k+=1
 return n and f(n-1,k,k-s)or v

Try it online!

1 byte thx to Mr. Xcoder.

• I know... I know... Me and my bitwise-tricks pedantry :) But you can save a byte with k=s-~v. – Mr. Xcoder Aug 17 '18 at 23:15
• @Mr. Xcoder: Your unholy bitwise sorcery will be the end of you yet! :) – Chas Brown Aug 17 '18 at 23:17

# Haskell, 101 99 97 bytes

The function l takes no arguments and returns an infinite list. Not as short as the more direct approach by @ovs (and I obviously stole some parts from their answer), but maybe still golfable? Thanks @H.PWiz for -2 bytes!

import Data.List
p m=mod(product[1..m-1])m>0
l=2:[until(p.(+sum a))(+1)$last a+1|a<-tail$inits l]

Try it online!

# Python 2, 82 80 bytes

s=p=2
i=input()
P=n=1
while i:
 P*=n;n+=1
 if P%n>0<n-s-p:p=n-s;s=n;i-=1
print p

Try it online!

This outputs the nth number of the sequence (0-based). By moving the print in the loop, this can be modified to output the first n items at the same bytecount: Try it online!

# C (gcc), 100 99 bytes

p(r,i,m,a,l){for(l=m=a=0;a-r;)m+=i=p(a++);while(r*!l)for(++i,l=a=2;a-m-i;)l=(m+i)%a++?l:0;r=r?i:2;}

Try it online!

# Japt, 17 bytes

Outputs the nth term, 0-indexed.

@_+T j}aXÄT±X}g2ì

Try it online!
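For readers who want a plain reference point for testing their golfed answers, here is an ungolfed Python sketch of the sequence (my own illustration, not one of the submissions; it assumes sympy is available for the primality test):

```python
from sympy import isprime

# Reference implementation of OEIS A051935: return the first n terms.
def a051935(n):
    terms, total = [2], 2
    while len(terms) < n:
        x = terms[-1] + 1
        while not isprime(total + x):   # smallest x > a(n-1) with a prime running sum
            x += 1
        terms.append(x)
        total += x
    return terms

print(a051935(10))   # [2, 3, 6, 8, 10, 12, 18, 20, 22, 26]
```

The printed prefix matches the test cases given in the challenge statement.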
{}
# Symbol meaning

1. Sep 14, 2014

Hi, I just started the chapter on work and kinetic energy in my physics book, and I'm uncertain about the meaning of two symbols. They are 'F sub "two vertical lines"' and 'F sub "an upside down T/perpendicular symbol"'. Does it mean cos∠ and sin∠, respectively?

Nevermind, I figured it out; they were components of F and meant parallel and perpendicular. I can't seem to delete the post, so a moderator can feel free to do so.

Last edited: Sep 14, 2014

2. Sep 14, 2014

### Schnitzel

I can tell you that the F|| means gravity that is parallel to the surface and F⊥ is the gravity perpendicular to the surface.

3. Sep 14, 2014

### Staff: Mentor

Two vertical lines, ||, usually means "parallel". $\perp$ usually means "perpendicular". Thus: $F_{||}$ would typically mean a parallel force or force component, and $F_\perp$ would typically mean a perpendicular force or force component.
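For concreteness, a minimal worked example (the angle $\theta$ between the force and the surface is an assumed label, not from the thread): a force of magnitude $F$ making angle $\theta$ with a surface decomposes as

$$F_{\parallel} = F\cos\theta, \qquad F_{\perp} = F\sin\theta, \qquad F_{\parallel}^{2} + F_{\perp}^{2} = F^{2}.$$

So the subscripts do not themselves mean $\cos\angle$ and $\sin\angle$; they name the two components, whose magnitudes happen to involve those factors.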
{}
# How often is an irreducible polynomial irreducible?

The question doesn't of course make sense as written in the title. Here is what I really mean: Given a global field $k$ and an irreducible polynomial $P \in k[x]$,

Is it true that $P$ is reducible at almost all places?

I would guess that Hensel's lemma plus an approximation theorem will give an affirmative answer.

- What do you mean that $P$ is reducible at 'one place'? – Beni Bogosel Apr 19 '12 at 9:40
- IIRC, for a number field, Chebotarev's density theorem implies that the asymptotic density of primes $p$ for which $P$ is irreducible modulo $p$ is $1/n$, where $n$ is the degree of $P$. I'm guessing this is relevant to the question asked.... – Hurkyl Apr 19 '12 at 9:46
- @Hurkyl: that is false. $\frac{1}{n}$ is the asymptotic density of primes $p$ for which $P$ splits. – Qiaochu Yuan Apr 20 '12 at 14:42
- That doesn't sound right. IIRC, the probabilities are the same as what you'd expect for picking a random polynomial modulo $p$. For cubic polynomials, you get $1/3$ irreducible, $1/6$ totally split, and $1/2$ that factor into a linear-quadratic. Maybe I'm assuming a Galois group of $S_n$ or something? – Hurkyl Apr 20 '12 at 15:48
- QiaochuYuan: you want not $1/n$ but $1/m$, where $m$ is the order of the Galois group of P (so $n \le m \le n!$). Hurkyl: what you say is correct for Galois group $S_3$ (since 1/6 of its elements are the identity, 1/3 are of order 3, and 1/2 of order 2). – David Loeffler Apr 21 '12 at 1:21

In fact the situation is even worse than Michalis' answer suggests: a globally irreducible polynomial can be (in fact usually is) locally reducible almost everywhere. For instance, consider the polynomial $x^4 - 8x^2 + 36$. This is the minimal polynomial of $\sqrt{5} + i$, so it is irreducible. But mod $p$ for any prime $p \notin \{2, 5\}$, at least one of $-1$, $5$ and $-5$ is a quadratic residue; hence the polynomial is not irreducible over $\mathbb{Q}_p$ for any $p > 5$.

- In fact here is the worst possible example: $x^4 - 5x^2 + 16$, whose splitting field is $\mathbb{Q}(\sqrt{13}, \sqrt{-3})$. This polynomial is reducible in $\mathbb{Q}_p$ for every $p$. I found this by using the construction in this lovely paper: ams.org/journals/proc/2005-133-11/S0002-9939-05-07855-X/… – David Loeffler Apr 20 '12 at 22:10
- Nice example, but in fact the OP was conjecturing this behaviour (or maybe he mistyped and meant "is it true that P is irreducible at almost all places"?). Here's another easy example: $X^4-X^2+1$ is the 12th cyclotomic polynomial and hence irreducible. It is easy to see that it is reducible for every prime! – Michalis Apr 22 '12 at 0:37

Take the irreducible polynomial $X^2+1$ over $\mathbb{Q}$. If $p\equiv 3\mod 4$ this polynomial is irreducible in $\mathbb{Q}_p$ by Hensel's lemma. If $p\equiv 1\mod 4$ it is reducible, so $X^2+1$ is reducible / irreducible for half of the places. In general you can use Chebotarev's theorem to determine for how many primes a polynomial will split.

- Chebotarev tells you that the frequency with which a polynomial remains irreducible mod a prime is equal to the frequency of elements of the Galois group with cycle type a full $n$-cycle. If the Galois group has no such elements (which cannot occur when $n = 2, 3$ but can occur for $n \ge 4$), as in David Loeffler's example... – Qiaochu Yuan Apr 20 '12 at 14:40
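One can sanity-check Loeffler's example numerically. A hedged Python sketch, assuming sympy's `factor_list` accepts a `modulus` keyword for factoring over GF(p):

```python
from sympy import symbols, factor_list, primerange

# Check that x^4 - 8x^2 + 36 (minimal polynomial of sqrt(5)+i) is reducible
# mod every prime 5 < p < 60, even though it is irreducible over Q.
x = symbols('x')
f = x**4 - 8*x**2 + 36
for p in primerange(7, 60):
    _, factors = factor_list(f, x, modulus=p)
    # Reducible means: more than one irreducible factor, or a repeated factor.
    assert len(factors) > 1 or factors[0][1] > 1, p
    print(p, [str(g) for g, _ in factors])
```

If the assertion ever fired, it would exhibit a prime where the polynomial stays irreducible; by the quadratic-residue argument in the answer, it never does.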
{}
# If x + y + z = 8 and xy + yz + zx = 20, find the value of $x^{3}+y^{3}+z^{3}-3xyz$

Question: If $x+y+z=8$ and $xy+yz+zx=20$, find the value of $x^{3}+y^{3}+z^{3}-3xyz$.

Solution: Given, x + y + z = 8 and xy + yz + zx = 20.

We know that
$$(x+y+z)^{2}=x^{2}+y^{2}+z^{2}+2(xy+yz+zx),$$
so
$$8^{2}=x^{2}+y^{2}+z^{2}+2(20), \qquad 64-40=x^{2}+y^{2}+z^{2}, \qquad x^{2}+y^{2}+z^{2}=24.$$

We also know the identity
$$x^{3}+y^{3}+z^{3}-3xyz=(x+y+z)\left(x^{2}+y^{2}+z^{2}-xy-yz-zx\right)=(x+y+z)\left[\left(x^{2}+y^{2}+z^{2}\right)-(xy+yz+zx)\right].$$

Here $x+y+z=8$, $xy+yz+zx=20$ and $x^{2}+y^{2}+z^{2}=24$, so
$$x^{3}+y^{3}+z^{3}-3xyz=8(24-20)=8\times 4=32.$$

Hence, the value of $x^{3}+y^{3}+z^{3}-3xyz$ is 32.
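The identity and the final value can be verified symbolically; a minimal Python sketch (my own check, assuming sympy is available):

```python
from sympy import symbols, expand

# Verify the factorization identity used in the solution.
x, y, z = symbols('x y z')
identity = (x**3 + y**3 + z**3 - 3*x*y*z
            - (x + y + z)*(x**2 + y**2 + z**2 - x*y - y*z - z*x))
assert expand(identity) == 0

# Recompute the answer from the given symmetric sums.
s1, s2 = 8, 20                  # x+y+z and xy+yz+zx
power_sum = s1**2 - 2*s2        # x^2+y^2+z^2 = 24
print(s1 * (power_sum - s2))    # 8 * (24 - 20) = 32
```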
{}
# Chapter 4 Quadratic Functions and Factoring - 4.3 Solve $x^2 + bx + c = 0$ - 4.2 Exercises - Mixed Review - Page 258: 79

-1, 13

#### Work Step by Step

Since this is an absolute value expression, the value inside the absolute value can be positive or negative. Thus, considering both possibilities: $$x-6 =7 \\ x=13 \\ OR \\ x-6 =-7 \\ x=-1$$
{}
## Regularity of extremal functions in weighted Bergman and Fock type spaces. (English) Zbl 1429.30050

This article deals with the regularity of solutions to extremal problems in certain weighted Bergman spaces in discs, as well as in Fock spaces. For $0 < R\le \infty$, let $\mathbb{D}_{R}$ be the open disc of radius $R$ centered at the origin and let $\nu(z)=\omega(|z|^{2})$, where $\omega$ is a positive, decreasing and non-constant function on $[0,R^{2}]$ that is analytic in a complex neighborhood of $[0,R^{2}]$. The space $A_{R}^{p}(\nu)=A^{p}_{R}(\omega(|z|^{2}))$ consists of all analytic functions on $\mathbb{D}_{R}$ such that
$$\|f\|_{A_{R}^{p}(\omega(|z|^{2}))}=\left( \int_{\mathbb{D}_{R}} |f(z)|^{p}\, \omega(|z|^{2})\,dA(z)\right)^{1/p}<+\infty,$$
where $1 < p <\infty$. If $k\in A_{R}^{p'}(\nu)$, where $\frac{1}{p}+\frac{1}{p'}=1$, then $f\mapsto \int_{\mathbb{D}_{R}}f(z)\overline{k(z)}\,\nu(z)\,dA(z)$ defines a linear functional $\Phi_{k}$ on $A^{p}_{R}(\nu)$ with norm denoted by $\|k\|^{*}$. One says that $f$ is the extremal function for the integral kernel $k$ if $\|f\|_{A^{p}_{R}(\nu)}=1$ and
$$\operatorname{Re}\Phi_{k}(f)=\sup_{\|g\|_{A^{p}_{R}(\nu)}=1}\left( \operatorname{Re}\int_{\mathbb{D}_{R}}g(z)\overline{k(z)}\,\nu(z)\,dA(z)\right).$$
There always exists a unique solution to this extremal problem. Writing
$$\begin{aligned} M_{p}(r,f)&=\left(\int_{0}^{2\pi} |f(r e^{i\theta})|^{p}\,d\theta\right)^{1/p},\quad r<R,\\ M_{p}(R,f)&=\lim_{r\to R^{-}}M_{p}(r,f), \\ D_{p}(r,f)&= D_{p}(r,f;\omega)=\left(-\int_{\mathbb{D}_{R}}|z|^{2} |f(z)|^{p}\,\omega'(|z|^{2})\,dA\right)^{1/p}\end{aligned}$$
and $\widehat{p}=\max(p-1,1)$, one can state the result obtained for the Bergman space:

Theorem 1. Let $1<p<\infty$, and let $0<R<\infty$. Let the function $\omega$ be analytic in a neighborhood of $[0,R^{2})$, and let $\omega$ be positive, non-increasing and non-constant on $[0,R^{2})$. Suppose that $f$ is the extremal function in $A^{p}_{R}(\omega(|z|^{2}))$ for the integral kernel $k$. Then
$$\frac{R^{2}}{2}\omega(R^{2})M_{p}^{p}(R,f)+D_{p}^{p}(R,f)\le \frac{2^{1/q}\widehat{p}}{\|k\|^{*}} \left[ \left( \frac{R^{2}}{2}\omega(R^{2})\right)^{1/q}M_{q}(R,k)+D_{q}(R,k)\right]^{q}.$$

In order to provide a result about the regularity of extremal functions in the Fock-type spaces $A_{\infty}^{p}(\nu)$, one needs to put some restrictions on the weight $\nu(z)=\omega(|z|^{2})$. The result is the following one:

Theorem 2. Let $1<p<\infty$. Let the function $\omega$ be analytic in a neighborhood of $[0,\infty)$, and let $\omega$ be positive, non-increasing and non-constant on $[0,\infty)$. Also, suppose that $\lim_{r\to\infty}r^{n}\omega(r^{2})= \lim_{r\to\infty}r^{n}\omega'(r^{2})=0$ for all integers $n$, and that the polynomials are dense in $A^{p}_{\infty}(\omega(|z|^{2}))$ and in $A^{p}_{\infty}\bigl(\omega(|z|^{2}) - |z|^{2}\omega'(|z|^{2})\bigr)$. Suppose that $f$ is the extremal function in $A_{\infty}^{p}(\omega(|z|^{2}))$ for the integral kernel $k$. Then:
$$D_{p}(\infty,f)\le \left[ \frac{\widehat{p}}{\|k\|^{*}} D_{q}(\infty,k)\right]^{1/(p-1)}.$$

Theorem 2 does not bound the quantity $\lim\limits_{r\to\infty}r^{2}\omega(r^{2})M_{p}^{p}(r,f)$, although a similar term is bounded in Theorem 1. It can be shown that $r^{3}\omega(r^{2}) M_{p}^{p}(r,f)\to 0$ as $r\to\infty$, as a consequence of $D_{p}(\infty,f)<\infty$ for certain functions $\omega$.
More precisely, if one writes
$$S(x,\lambda)=\int_{x}^{\infty}\frac{\lambda(x)}{\lambda(t)} \left(\frac{t}{x}\right)^{x\lambda'(x)/\lambda(x)}\,dt,$$
for $\lambda(x)$ some positive, increasing, smooth function defined for $x\ge R$, one gets:

Theorem 3. Suppose that $\liminf_{x\to\infty}S(x,-1/\omega'(r^{2}))>0$ and that there is some positive constant $C$ such that $-\omega'(r)\ge C\omega(r)$ for all sufficiently large $r$. If $D_{p}(\infty,f;\omega)<\infty$, then $\lim_{r\to\infty}r^{3}M_{p}^{p}(r,f)\omega(r^{2})=0$.

### MSC:

30H20 Bergman spaces and Fock spaces
30C75 Extremal problems for conformal and quasiconformal mappings, other methods

### Keywords:

extremal problem; regularity; Fock space; Bergman space
{}
# American Institute of Mathematical Sciences

July 2018, 17(4): 1573-1594. doi: 10.3934/cpaa.2018075

## On the Cauchy problem for the Zakharov-Rubenchik/Benney-Roskes system

1 Fak. Mathematik, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Wien, Austria
2 Wolfgang Pauli Institute c/o Fak. Math. Univ. Vienna, Oskar-Morgenstern-Platz 1, A-1090 Wien, Austria
3 Laboratoire de Mathématiques, UMR 8628, Université Paris-Saclay, Paris-Sud and CNRS, F-91405 Orsay, France

Received March 2017. Revised February 2018. Published April 2018.

We address various issues concerning the Cauchy problem for the Zakharov-Rubenchik system (known as the Benney-Roskes system in water waves theory), which models the interaction of short and long waves in many physical situations. Motivated by the transverse stability/instability of the one-dimensional solitary wave (line solitary), we study the Cauchy problem in the background of a line solitary wave.

Citation: Hung Luong, Norbert J. Mauser, Jean-Claude Saut. On the Cauchy problem for the Zakharov-Rubenchik/Benney-Roskes system. Communications on Pure & Applied Analysis, 2018, 17 (4): 1573-1594. doi: 10.3934/cpaa.2018075
{}
Triangular number AND sum of first m factorials

01-09-2018, 04:31 PM
Post: #1
Joe Horn, Senior Member, Posts: 1,829, Joined: Dec 2013

153 (my favorite number) is both a triangular number (the sum of the integers 1 through $$n$$; in this case $$n=17$$) as well as the sum of the factorials $$1!$$ through $$m!$$ (in this case $$m=5$$). The first three natural numbers which have both of those properties are 1, 3 (both trivial) and 153.

Find the next number in this sequence. For extra credit, find the mathematical relationship between $$n$$ and $$m$$ for all members of this sequence (which apparently is not yet in OEIS).

<0|ɸ|0> -Joe-
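A quick brute-force sketch in Python (my own, not from the thread) that reproduces 1, 3, 153 and searches further; it uses the fact that $S$ is triangular iff $8S+1$ is a perfect square:

```python
from math import factorial, isqrt  # isqrt needs Python 3.8+

# Search for S = 1! + 2! + ... + m! that is also triangular, i.e. S = n(n+1)/2.
s = 0
for m in range(1, 30):
    s += factorial(m)
    d = 8 * s + 1
    r = isqrt(d)
    if r * r == d:              # s is triangular iff 8s+1 is a perfect square
        n = (r - 1) // 2        # recover n from s = n(n+1)/2
        print(f"m={m}: sum={s} is triangular with n={n}")
# Prints m=1 (n=1), m=2 (n=2), m=5 (sum=153, n=17), matching the post.
```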
{}
# Algorithm to shoot at a target in a 3d game

For those of you remembering Descent Freespace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target.

I tried using the answer from https://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it.

I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz, which I then transformed to clip space and used to place a crosshair texture. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, after finding x in the XoZ plane and then in the XoY plane, the x is not the same, so something must be wrong.

float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed);
float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y);
float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y);

The first time targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y.

The final position after XoZ is

crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z);

and after XoY

crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y;

Is my approach of separating into 2 planes and calculating any good? Or is there a whole different approach for 3D?

• sqr() is square, not sqrt - avoiding a confusion.
• "Leading the target" may be the phrase you're looking for. – MichaelHouse Sep 11 '12 at 16:37

There is no need to break it down into 2 2d functions. That quadratic equation you are working with works fine in 3d as well. Here is pseudo code for either 2d or 3d. It implies a tower (tower defense) is shooting the projectile:

Vector totarget = target.position - tower.position;

float a = Vector.Dot(target.velocity, target.velocity) - (bullet.velocity * bullet.velocity);
float b = 2 * Vector.Dot(target.velocity, totarget);
float c = Vector.Dot(totarget, totarget);

float p = -b / (2 * a);
float q = (float)Math.Sqrt((b * b) - 4 * a * c) / (2 * a);

float t1 = p - q;
float t2 = p + q;
float t;
if (t1 > t2 && t2 > 0)
{
    t = t2;
}
else
{
    t = t1;
}

Vector aimSpot = target.position + target.velocity * t;
Vector bulletPath = aimSpot - tower.position;
float timeToImpact = bulletPath.Length() / bullet.speed; // speed must be in units per second

• You're a genius and saved my ass!! Damn I need a 15 reputation to upvote.... – Sebastian Bugiu Sep 13 '12 at 0:16
• @SebastianBugiu i did it for you. – AgentFire Nov 1 '12 at 10:37
• @SebastianBugiu Thanks, I was glad when I learned this concept and am glad it's helped you. Another elegant feature of it is that you don't need to mess around with collision detection algorithms. No CD code need be written. Since target and projectile paths are predictable, the impact WILL occur when timeToImpact counts down to zero. – Steve H Nov 1 '12 at 15:03

There is also a good blog post about the same subject: http://playtechs.blogspot.kr/2007/04/aiming-at-moving-target.html. It also contains more complex samples that include gravity.
The author has done more simplification, which results in more compact code:

double time_of_impact(double px, double py, double vx, double vy, double s)
{
    double a = s * s - (vx * vx + vy * vy);
    double b = px * vx + py * vy;
    double c = px * px + py * py;

    double d = b*b + a*c;

    double t = 0;
    if (d >= 0)
    {
        t = (b - sqrt(d)) / a;
        if (t < 0)
        {
            t = (b + sqrt(d)) / a;
            if (t < 0) t = 0;
        }
    }
    return t;
}

Update: the original author took into account only the bigger root. But when the smaller root is non-negative, it gives a better solution, since the time of impact is smaller. I have updated the code correspondingly.
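For completeness, here is a self-contained Python sketch of the same quadratic lead computation working directly in 3D (function and variable names are mine, not from either answer):

```python
import math

def intercept_point(shooter, target_pos, target_vel, bullet_speed):
    # Solve |target_pos + target_vel*t - shooter| = bullet_speed * t for t > 0.
    to_target = tuple(tp - sp for tp, sp in zip(target_pos, shooter))
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    a = dot(target_vel, target_vel) - bullet_speed ** 2
    b = 2 * dot(target_vel, to_target)
    c = dot(to_target, to_target)
    if abs(a) < 1e-9:                 # bullet exactly as fast as the target
        if b >= 0:
            return None               # the bullet can never close the gap
        t = -c / b                    # linear case: a*t^2 drops out
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None               # no real root: interception impossible
        q = math.sqrt(disc)
        positive = [t for t in ((-b - q) / (2 * a), (-b + q) / (2 * a)) if t > 0]
        if not positive:
            return None
        t = min(positive)             # earliest feasible impact time
    return tuple(p + v * t for p, v in zip(target_pos, target_vel))

# Example: shooter at origin, target at (10, 0, 0) moving along +y at speed 5,
# bullet speed 20 -> aim slightly ahead of the target on the y axis.
print(intercept_point((0, 0, 0), (10, 0, 0), (0, 5, 0), 20))
```

Picking the smallest positive root mirrors the "Update" above: the earlier impact time is always the better firing solution.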
{}
# Block matrix with headings

I created a matrix using easybmat and blkarray and I would like to label the columns above the matrix. This is the code I am using and what the matrix looks like at the moment:

\usepackage{easybmat}
\usepackage{amsmath}
\usepackage{multirow,bigdelim}
\usepackage{blkarray}

\begin{document}
$\mathbb{X} =
\begin{array}{c@{}c}
\left[
\begin{blockarray}{cccccc}
\boldsymbol{\beta} & x_1 & z_1 & \dots & z_{k-1}\\
\begin{BMAT}[3pt]{ccccc}{ccccccccc}
1 & x_{11} & 1 & \dots & 0 \\
1 & x_{21} & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \vdots & 1 & \dots & 0\\
\vdots & & & & \vdots \\
1 & \vdots & 0 & \dots & 0\\
1 & \vdots & 0 & \dots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{n1} & 0 & \dots & 0\\
\end{BMAT}
\end{blockarray}
\right] &
\begin{array}{l}
\\[-17mm]
\rdelim\}{4}{6mm}[ \hspace{2mm} Category \hspace{2mm} 1] \\
\\
\\[17mm]
\rdelim\}{4}{6mm}[\hspace{2mm} Category \hspace{2mm} k] \\
\\
\end{array}
\\[-1ex]
\end{array}$
\end{document}

I would like the first row in the current matrix to be the headings of the columns. Can anyone point out what I am doing wrong, please?

• Welcome to TeX.SX! Unlike in other programming languages, it makes a lot of difference if you change the preamble of your document in terms of the output, such as clashing packages or page settings changed by some detail in the code and so on. That's why we need to have a complete example together with the relevant parts of your preamble included. Otherwise we might not be able to reproduce your problem. In particular, I do not know where BMAT comes from. – JP-Ellis Feb 23 '16 at 9:37
• Is the empty space under x_1 z_1 ... z_{k-1} intentional? – JP-Ellis Feb 23 '16 at 9:52
• no, I am trying to place those headings on the left side (as headings to the columns of the matrices) – user120768 Feb 23 '16 at 9:53

Here's what I could come up with. The only thing I don't like about this solution is that I'm having to use \vphantom in the very top row of the matrix so that the .north anchors line up. I'm not sure how to fix that whilst still keeping the baseline of the matrix elements lined up. If anyone else has a solution, please feel free to pitch in.

This little annoyance has been fixed. The height of a node in TikZ can be overwritten with the text height option. This is much better than having to use \vphantom.

I'm not completely familiar with easybmat and blkarray, so instead, I opted to use TikZ's matrix library, which allows additional features to be drawn quite easily on a matrix.
Here's the code for what you want:

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tikz}
\usetikzlibrary{
  matrix,
  positioning,
  decorations,
  decorations.pathreplacing
}

\begin{document}
\begin{equation*}
  \mathbb{X} =
  \begin{tikzpicture}[baseline=(m.center)]
    \matrix (m) [
      matrix of math nodes,
      left delimiter={[},
      right delimiter={]},
      row 1/.style={nodes={text height=1ex}}
    ] {
      1      & x_{11} & 1      & \cdots & 0      \\
      1      & x_{21} & 1      & \cdots & 0      \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      1      & \vdots & 1      & \cdots & 0      \\
      \vdots &        &        &        & \vdots \\
      1      & \vdots & 0      & \cdots & 0      \\
      1      & \vdots & 0      & \cdots & 0      \\
      \vdots & \vdots & \vdots & \cdots & \vdots \\
      1      & x_{n1} & 0      & \cdots & 0      \\
    };
    \node [above=1ex of m-1-1] {$\boldsymbol{\beta}$};
    \node [above=1ex of m-1-2] {$x_{1}$};
    \node [above=1ex of m-1-3] {$z_{1}$};
    % \node [above=1ex of m-1-4] {$\cdots$};
    \node [above=1ex of m-1-5] {$z_{k-1}$};
    \draw [decoration={brace}, decorate]
      ([xshift=3ex]m-1-5.north east) -- ([xshift=3ex]m-4-5.south east)
      node [pos=0.5, right=1ex] {Category 1};
    \draw [decoration={brace}, decorate]
      ([xshift=3ex]m-6-5.north east) -- ([xshift=3ex]m-9-5.south east)
      node [pos=0.5, right=1ex] {Category $k$};
  \end{tikzpicture}
\end{equation*}
\end{document}

To illustrate specifically the issue with the top row, compare the following two images, which show the node boundaries.

With \vphantom:

Without \vphantom:

For completeness' sake, the original code had:

right delimiter={]},
] {
  1 & \vphantom{1}x_{11} & 1 & \vphantom{1}\cdots & 0 \\
  1 & x_{21} & 1 & \cdots & 0 \\

• nodes={anchor=center} – percusse Feb 23 '16 at 12:52
• @percusse I'm aware of that, but that doesn't work as it doesn't align the baselines of the nodes. This results in x_{11} being too high compared to the other symbols. – JP-Ellis Feb 23 '16 at 12:55
• You can add text depth to account for subscripts – percusse Feb 23 '16 at 12:55
• @percusse The issue actually was the text height, not the text depth; but thanks for pointing me in the right direction :) – JP-Ellis Feb 23 '16 at 13:11
• You will need text depth eventually later – percusse Feb 23 '16 at 13:46
{}
# Question on “Homotopy invariance”

I have this from Hatcher's book "Algebraic Topology", and I don't understand why

$\displaystyle \partial P(\sigma)=\sum_{j\leq i}(-1)^i(-1)^j F\circ (\sigma\times id)|[v_0,\dots,\widehat{v}_j,\dots,v_i,w_i,\dots,w_n]+\sum_{j\geq i}(-1)^i(-1)^{j+1} F\circ (\sigma\times id)|[v_0,\dots,v_i,w_i,\dots,\widehat{w}_j,\dots,w_n]$

Please help. Thank you.

• Have you tried to see why this might be the case for low values of $n$? – Dan Rust Oct 31 '13 at 17:04
• no, but my question is how to calculate $\partial P(\sigma)$ – Vrouvrou Oct 31 '13 at 17:52
• @DanielRust please, why the ${t-1}$ in $P\partial(\sigma)$? – Vrouvrou Aug 25 '14 at 15:22

He's just applying the definition and splitting it over two pieces. The first sum is the contribution from the v components (from $\Delta^n$), and the second is the contribution from the w components (from $I$).

• but why is there $(-1)^j$ when $j\leq i$ and $(-1)^{j+1}$ when $j\geq i$ ??? – Vrouvrou Nov 1 '13 at 7:52
• @Vrouvrou Carefully check the position that they're in. In particular, look at your definition of $P(\sigma)$ and note how there are two $i$ indices in a row. – zibadawa timmy Nov 4 '13 at 5:39
• please why we have $t-1$ and why strict inequality between $t$ and $j$ in $P\partial(\sigma)=\sum_{t<j} (-1)^t(-1)^j F\circ (\sigma\times id)[v_0,...,v_t,w_t,...,\widehat{w_j},...,w_n]+\sum_{t>j}(-1)^{t-1}(-1)^j F\circ(\sigma\times id)[v_0,...,\widehat{v_j},...,v_t,w_t,...,w_n]$ – Vrouvrou Aug 25 '14 at 15:13
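For reference, the sign bookkeeping the answer alludes to can be written out explicitly (a short worked step, not from the thread itself). Since $P(\sigma)=\sum_i (-1)^i F\circ(\sigma\times id)|[v_0,\dots,v_i,w_i,\dots,w_n]$, each simplex in the sum has vertex list $[v_0,\dots,v_i,w_i,\dots,w_n]$, in which $v_j$ sits in position $j$ while $w_j$ sits in position $(i+1)+(j-i)=j+1$. Applying the boundary map therefore gives

$$\partial[v_0,\dots,v_i,w_i,\dots,w_n]=\sum_{j\le i}(-1)^{j}[v_0,\dots,\widehat{v}_j,\dots,v_i,w_i,\dots,w_n]+\sum_{j\ge i}(-1)^{j+1}[v_0,\dots,v_i,w_i,\dots,\widehat{w}_j,\dots,w_n],$$

and multiplying through by the outer $(-1)^i$ from the definition of $P(\sigma)$ produces exactly the two sums displayed in the question.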
{}
Dec 21 ### saturation point psychology Note that virtually all computer software implementing these spaces use a very rough approximation to calculate the value they call "saturation", such as the formula described for HSVand this value has little, if anything, to do with the description shown here. (BJHP28). Surveys indicate that there are more than 900,000 lawyers in the United States, with approximately 40,000 new lawyers sworn in annually (American Bar Association, 1997, as cited in Wrightsman, Nietzel, & Fortune, 1998). saturation for one is not nearly enough for another. Saturation occurs when a substance which has been combining with another substance (a solution) has reached the point where there is no space for any more. Tran and colleagues accurately point out that determining the point of saturation is a difficult endeavor, because “researchers have information on only what they have found” (pg. In a learning experiment, people come into the lab and do the same task multiple times. To desaturate a color in a subtractive system (such as watercolor), you can add white, black, gray, or the hue's complement. Read 4 reviews from the world's largest community for readers. Thus, the potential client base is substantial. Simi- Short-term waning of a reinforcer's efficacy after it has been presented repetitively. Saturation is one of three coordinates in the HSV color space. However, both color spaces are not linear in term of psychovisually perceived color differences. Look up saturated, saturation, unsaturated, or unsaturation in Wiktionary, the free dictionary. In the CIE XYZ color space, the purity or saturation is the Euclidean distance between the position of the color $(x, y)$ and the illuminant's white point $(x_{I}, y_{I})$ on the CIE xy projective plane, divided by the same distance for a pure (monochromatic, or dichromatic on the purple line) color with the same hue $(x_{P}, y_{P}) = \rho_\mathrm{max} (x - x_{I}, y - y_{I}) + (x_{I}, y_{I})$: $p = \sqrt{\frac{(x - x_{I})^2 + (y - y_{I})^2}{(x - x_{P})^2 + (y - y_{… In the CIE XYZ color space, the purity or saturation is the Euclidean distance between the position of the color and the illuminant's white point on the CIE xy projective plane, divided by the same distance for a pure (monochromatic, or dichromatic on the purple line) color with the same hue : and maximal within the boundary of the chromaticity diagram. Saturation is one of three coordinates in the HSL and HSV color spaces.$8.99. The total effect of saturation on healthiness was significant, b = 0.14 (SE = 0.05), 95% CI (0.05–0.23). The saturation point of CO2 in C4 plants is (a) 390 μl/L (b) 450 μl/L (c) 460 μl/L (d) 360 μl/L In an RGB color space, saturation can be thought of as the standard deviation σ of the color coordinates R(red), G(green), and B(blue). software that represents a saturation value to the user returns this value: The naïve definition of saturation does not specify its response function. Outline The theo-retical saturation subsequently sets the sample size, using theoretical sampling. • SATURATION POINT (noun) The noun SATURATION POINT has 1 sense: 1. Saturation, any of several physical or chemical conditions defined by the existence of an equilibrium between pairs of opposing forces or of an exact balance of the rates of opposing processes. In contrast, meta-analysis can be problematic Saturation involves eliciting all forms of types of occurrences, valuing variation over quantity.” Morse (1995) . 
An example of saturation in layman's terms in the RGB color model is that you will have maximum saturation if you have 100% brightness in (for instance) the red channel while having 0% brightness in the other channels. A highly saturated hue has a vivid, intense color, while a less saturated hue appears more muted and grey. And therefore, chroma in CIE 1976 L*a*b* and L*u*v* color spaces is very much different from the traditional sense of "saturation". However, a first, useful step can be made by focusing on material poverty as a central feature and powerful predictor of the ancillary features of poverty described above. For example, the RGB colorspace does not necessarily have a unitary Jacobian in term of absolute colorimetry. In 1980, the interest shifted to deeper diving in Norway and in Brazil. This theory declined when it became apparent that (most) people aren’t slaves to their basic biological needs. The point of saturation, or else the limit of human ingenuity, seems to have been about reached some years ago. The focus generally is not on sample size but rather on sample adequacy because generalisability is not what you are aiming for. The RGB color space is not an absolute colorimetric space, and therefore the value of saturation is arbitrary, depending on the choice of the color primaries and the white point illuminant. Saturation Effect the decrease in the intensity of a spectral line (an absorption or emission line) with increasing power of the external resonant electromagnetic radiation. Sample size determination for open-ended questions or qualitative interviews relies primarily on custom and finding the point where little new information is obtained (thematic saturation). The theoretical saturation subsequently sets the sample size, using theoretical sampling. In the CIE XYZ and RGB color spaces, the saturation is defined in term of additive color mixing, and has the property of being proportional to any scaling centered at white or the white point illuminant. Cognitive processes The transformation of to is given by: The chroma in the CIE L*C*h(a*b*) and CIE L*C*h(u*v*) coordinates has the advantage of being more psychovisually linear, yet they are non-linear in the in term of linear component color mixing. The point of saturation, or else the limit of human ingenuity, seems to have been about reached some years ago. A longitudinal survey can pay off with actionable insights when you have the time to engage in a long-term research project. Saturation definition: Saturation is the process or state that occurs when a place or thing is filled completely... | Meaning, pronunciation, translations and examples That is the weight of air at what is called the point of saturation, when it is fully charged with watery vapour. It is commonly taken to indicate that, … Hi there! With Vantage Point You’ll find evaluators with extensive education, training, and experience in psychological and forensic assessment. The saturation intensity is a defined high-curvature point of the saturation curve. Click one of our representatives below and we will get back to you as soon as possible. Enrich your vocabulary with the English Definition dictionary This is the main point of the (now largely obsolete) drive theory. Saturation point is we can say the limit. market is anywhere near the saturation point. The process can be inclined mentally or physically. 
Each time the person does the task, the experimenter records one or more quantitative metrics (usually, the time it takes to do that task and the number of errors). Thinking  - The state of a physical system, such as a solution, containing as much of another substance, such as a solute, as is possible at a given temperature or pressure. In term of absolute colorimetry, this simple definition in the RGB color space exhibits several problems. The best way to measure how people learn a task or an interface is by running a learning experiment. This 3000-hour commercial pilot had reached saturation point. Eventually, like in many of Skinner’s experiments, we may reach a saturation point. A series of contracts was awarded in Norway for validating diver interventions to 300–350 msw. 34 terms. saturation points November 1, 2020 / in / by admin. The North Sea has depths from 100 to 180 meters of sea water (msw) that became the standard range of manned underwater operations. Clear and to the point – every time. After saturation point is reached, solute starts to form a precipitate. Market saturation can be both microeconomic or macroeconomic. (like how many are needed for sample pool), This is going to be a section in a larger paper, just need help narrowing down the specifics on this, The reference papers provided by excellenthomeworks.com serve as model papers for students and are not to be submitted as it is. The saturation point occurs when there are no new insights or themes in the process of collecting data and drawing conclusions (Bowen, 2008; Strauss & Corbin, 1998). In 1980, the interest shifted to deeper diving in Norway and in Brazil. That is the weight of air at what is called the point of saturation, when it is fully charged with watery vapour. There also It might be outdated or ideologically biased. Celebrating the healthy skepticism that moves the field forward. Theoretical saturation is closely related to grounded theory and was originally defined as the point at which no additional themes are found from the reviewing of successive data regarding a category being investigated (Glaser & Strauss, 1967). During this period, outstanding developments were conducted at the Norwegian Underwater Tec… Need a paragraph or two with a lot of citations (newer than 2014) to describe and explain saturation point in qualitative studies. A saturation series runs from full-toned or saturated colors to pale or dull. Another, psychovisually even more accurate, but also more complex method to obtain or specify the saturation is to use the color appearance model, like CIECAM. Purposive sampling (also known as judgment, selective or subjective sampling) is a sampling technique in which researcher relies on his or her own judgment when choosing members of population to participate in the study. When saturated, the brain cannot easily process any more information. These papers are intended to be used for research and reference purposes only. As he sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. Semantic satiation is a psychological phenomenon in which repetition causes a word or phrase to temporarily lose meaning for the listener, who then perceives the speech as repeated meaningless sounds. Unlike quantitative research which aims to quantify or count the number of opinions, the aim of qualitative research is to explore the range of opinion and diversity of views, and collect rich information. editorial. 
In grounded theory, saturation means that no additional data are being found whereby the sociologist can develop properties of the category; the criterion for judging when to stop sampling the different groups pertinent to a category is the category's theoretical saturation. Reaching a saturation point in thematic analysis is important to validity in qualitative studies, yet the process of achieving saturation is often left ambiguous, and future qualitative research should identify the saturation point and describe the judgment calls the researcher made in defining and measuring it. The number of participants required therefore depends on the nature of the research and how many are needed to answer the research questions.

Saturation also has an economic sense. From a micro perspective, market saturation is the point when a specific market is no longer providing new demand for an individual firm. (According to the latest Nielsen 360 report, over two-thirds of the U.S. population aged 13 and up now consider themselves gamers, and the saturation point has likely not even been reached yet.) More generally still, a saturation limit refers to some maximum value that a particular system under observation is able to handle: a sponge is said to be saturated when it cannot hold any more water. In one consumer study, a mediation model was tested with package color saturation (high vs. low) as the independent variable, freshness as the mediator and healthiness as the outcome variable.

Back in color science: hue, saturation, and brightness are aspects of color in the red, green, and blue (RGB) scheme. These terms are most often used in reference to the color of each pixel in a cathode ray tube (CRT) display. All possible colors can be specified according to hue, saturation, and brightness (also called brilliance), just as colors can be represented in terms of the R, G, and B components. With no saturation at all, the hue becomes a shade of grey; in the psychology of perception, saturation is the vividness of a color's hue, measuring the degree to which a color differs from a grey of the same darkness or lightness. The purest colour is achieved by using just one wavelength at a high intensity, such as in laser light, because the saturation of a color is determined by a combination of light intensity and how much it is distributed across the spectrum of different wavelengths; a saturation series runs from full-toned or saturated colors to pale or dull. Saturation is one of three coordinates in the HSL and HSV color spaces. Letting μ represent the brightness, defined as the mean of R, G, and B, one common definition is S = 1 − min(R, G, B)/μ. Note that virtually all computer software implementing these spaces uses a very rough approximation to calculate the value it calls "saturation", and that value has little, if anything, to do with the colorimetric description given earlier. Another, psychovisually even more accurate but also more complex, method to obtain or specify the saturation is to use a color appearance model, like CIECAM.
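To make these formulas concrete, here is a minimal illustrative sketch in C♯ (my own, not taken from any particular graphics library) computing both the μ-based measure just described and the rough max/min approximation that much software uses; channel values are assumed to be normalised to [0, 1].

using System;

class SaturationDemo
{
    // HSI-style saturation: 1 - min(R,G,B) / mean(R,G,B).
    // Assumes r, g, b are normalised to the range [0, 1].
    static double SaturationHsi(double r, double g, double b)
    {
        double mu = (r + g + b) / 3;   // brightness as the mean of the channels
        if (mu == 0) return 0;          // black has no meaningful saturation
        return 1 - Math.Min(r, Math.Min(g, b)) / mu;
    }

    // The rough HSV-style approximation used by much software: (max - min) / max.
    static double SaturationHsv(double r, double g, double b)
    {
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));
        if (max == 0) return 0;
        return (max - min) / max;
    }

    static void Main()
    {
        // Pure red: 100% brightness in the red channel, 0% elsewhere -> maximum saturation.
        Console.WriteLine(SaturationHsi(1, 0, 0));       // 1
        Console.WriteLine(SaturationHsv(1, 0, 0));       // 1
        // A washed-out pink is noticeably less saturated under both measures.
        Console.WriteLine(SaturationHsi(1, 0.8, 0.8));   // ~0.08
        Console.WriteLine(SaturationHsv(1, 0.8, 0.8));   // 0.2
    }
}

Note how the two measures disagree on the pink: this is exactly the sense in which the software approximation "has little to do" with the colorimetric definition.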
In color theory generally, saturation or purity refers to the intensity of a specific hue: saturation is the strength of the hue present in the color, ranging from grey to the original root color. In other words, the highest degree of saturation belongs to a given color when it is in the state of greatest purity. Saturation can also be used as an expression for the relative bandwidth of the visible output from a light source. In chemistry, a solute is dissolved in the solvent until the saturation point is reached; in plant physiology, the CO2 saturation point of C4 plants is commonly given as 360 μl/L. In education, time is required for saturation to decrease: a teacher must be aware of this state, recognize it in students, and be able to occupy them with less stressful activities when they reach this state, in order to allow for desaturation.

In statistics, estimation is, more formally, the application of a point estimator to the data. In qualitative research, purposive sampling (also known as judgment, selective or subjective sampling) is a sampling technique in which the researcher relies on his or her own judgment when choosing members of the population to participate in the study, and theoretical saturation signals the point in grounded theory studies at which theorizing the events under investigation is considered to have come to a sufficiently comprehensive end. In one reported study, the interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. Hence, the adequacy of the sampling, rather than its size, is what matters.
Saturation has attained widespread acceptance as a methodological principle in qualitative research, and is commonly taken to indicate that, on the basis of the data that have been collected or analysed hitherto, further data collection and/or analysis are unnecessary. Even so, recognizing the saturation point presents a challenge to qualitative researchers, especially in the absence of explicit guidelines for determining data or theoretical saturation. Saturation is mentioned in many qualitative research reports without any explanation of what it means and how it occurred, and researchers further argue that the stopping point for an inductive study is typically determined by the "judgement and experience of researchers". Case in point: ethnography is known for a great deal of data saturation because of the lengthy timelines to complete a study as well as the multitude of data collection methods used.

On a saturation curve, it should be emphasized that the saturation point is not a data point: it is a derived point, and there are alternative "characteristic" points that can be defined on such curves. There are also some systems which can be pushed beyond their saturation limit. (The spectroscopy definition given earlier is from The Great Soviet Encyclopedia (1979); it might be outdated or ideologically biased.)
In chemistry, a solution is at the saturation point when dissolved solute crystallizes from it at the same rate at which it dissolves; on a solubility curve there is only one such point for each temperature, shown as dots in fig. 1. In psychology, the naïve definition of satiation is the cessation of a desire or need by the satisfaction of that desire or need. And in qualitative research, saturation is judged to have been reached at the point at which no new themes are emerging, the researcher having sought all forms and types of occurrences, "valuing variation over quantity" (Morse, 1995).
{}
Mettl Profit and Loss Quiz 1

Question 1: Tarak Mehta sold a pair of T-shirts and jeans at 25% profit. The profit obtained on the T-shirt was 20%. Had he reduced both the cost price of the T-shirt and the selling price of the T-shirt by Rs. 200, his profit on the pair would have risen to 30%. Earlier, the cost price of the jeans was Rs. 80 less than half of the selling price of the T-shirt. What price should he mark on a pair of T-shirts and jeans to earn a profit of 50%?
Options: Rs 1680, Rs 1800, Rs 2420, Rs 2740

Question 2: Sohan bought an old Honda bike and spent Rs. 1500 on its repairs. Then Sohan sold it to Rakesh at a profit of 20%. Rakesh sold it to Raj at a loss of 10%. Raj finally sold it for Rs. 12100 at a profit of 10%. How much did Sohan pay for the old Honda bike?
Options: Rs 10815, Rs 10800, Rs 8685, Rs 8600

Question 3: Sourav Ganguly wants to buy a total of 100 sports equipment items using exactly a sum of Rs. 1000. He can buy a ball at Rs. 20 per unit, a wicket at Rs. 5 per unit and a bat at Rs. 1 per unit. If he has to buy at least one of each item and cannot buy any other type of equipment, then in how many distinct ways can he make his purchase?
Options: 5, 2, 3, 6

Question 4: In St. Peter's College, Agra, an exhibition was organized where hand-made crafts are displayed for sale. Some students are assigned the work of selling crafts. The overall profit p depends on the number of students x selling the crafts on that particular day and is given by the equation p = 250x − 5x². The school manager claims to have made a maximum profit. Find the number of students engaged in selling the crafts and the maximum profit made.
Options: 25 and Rs. 1800; 25 and Rs. 3125; 25 and Rs. 2900; None of the above

Question 5: In MG Road, Delhi, a PVR theatre has 300 seats. The price of each ticket, when the theatre is houseful, is Rs. 60. For every Rs. 1 increase in the price of the ticket, the number of tickets sold goes down by 2. What is the price of the ticket (in Rs.) for which the theatre owner would earn the maximum possible revenue?
Options: Rs 90, Rs 120, Rs 150, Rs 105

Question 6: Manu bought a jacket from Sarojini market at Rs. 1000 and marked up its price by P%. He then gave a discount of (0.4 × P)% and still got a profit percentage of (0.4 × P)%. What is the amount of the discount?
Options: Rs 100, Rs 200, Rs 300, Rs 400

Question 7: The cost prices of three sports items, Ball Jacket, Bat Socks, and Thigh Pad Shoes, are in the ratio 2 : 3 : 4 respectively. If these three items are sold such that a profit of 20% is registered on the Jacket, a profit of 25% is registered on the Socks, and a loss of 10% is incurred on the Shoes, then which of the following gives the overall percentage of profit/loss made in the three transactions put together?
Options: 8.33% profit, 10.33% loss, 11.25% profit, 15.40% profit

Question 8: Pranav went to the market and bought apricots, bananas, and guavas. He purchased at least 25 fruits of each variety and calculated that if the cost of each guava were Re. 1 more, and the cost of each banana were Rs. 4 more, then his total expenditure on the fruits would have gone up by Rs. 136. If he bought a total of 80 fruits, find the number of bananas he purchased.
Options: 28, 25, 30, 27

Question 9: Aman and Bhanu ran a business after investing some money together. At the end of the first year, out of a total profit of Rs. 1000, Aman gets Rs. 400, which is Rs. 25 more than what he would have got if he had invested Rs. 3000 less and Bhanu had invested Rs. 1000 less. Find Bhanu's share of the profit if Aman had invested Rs. 3000 more and Bhanu had invested Rs. 3000 less. (Assume the same profit in all cases.)
Options: 250, 350, 550, 650

Question 10: Ram went to a shop to buy some Cosco and plastic balls. Cosco balls cost Rs. 300 each while plastic balls cost Rs. 400 each. Ram spent a total of Rs. 3600 on the balls. If he had bought as many plastic balls as the number of Cosco balls he actually bought and vice versa, he would have saved an amount equal to half the cost of one ball of one of the two types. Find the total number of balls he actually bought.
Options: 15, 25, 20, 10
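The site's own explanations are only revealed after attempting each question, but a couple of the answers can be checked independently. The following is a small self-contained C♯ sketch (mine, not PrepInsta's) that brute-forces Question 3 and evaluates the parabola vertex from Question 4:

using System;

class QuizCheck
{
    static void Main()
    {
        // Question 3: balls at Rs.20, wickets at Rs.5, bats at Rs.1;
        // exactly 100 items for exactly Rs.1000, at least one of each.
        int ways = 0;
        for (int balls = 1; balls < 50; balls++)
        {
            for (int wickets = 1; wickets < 100 - balls; wickets++)
            {
                int bats = 100 - balls - wickets;   // always >= 1 inside this loop
                if (20 * balls + 5 * wickets + bats == 1000)
                {
                    Console.WriteLine($"{balls} balls, {wickets} wickets, {bats} bats");
                    ways++;
                }
            }
        }
        Console.WriteLine($"Distinct ways: {ways}");   // prints 3

        // Question 4: p = 250x - 5x^2 is maximised at x = -b/(2a) = 250/10 = 25.
        int x = 25;
        Console.WriteLine($"Students: {x}, max profit: Rs. {250 * x - 5 * x * x}");   // 25, 3125
    }
}

It finds exactly three purchases, (36, 54, 10), (40, 35, 25) and (44, 16, 40), confirming the answer 3 for Question 3; for Question 4 the vertex gives 25 students and a maximum profit of Rs. 3125.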
{}
There are two general approaches to representing solutions in the discontinuous Galerkin method: nodal and modal.

1. Modal: Solutions are represented by sums of time-dependent modal coefficients multiplied by a set of polynomials, e.g. $u(x,t) = \sum_{i=1}^N u_i(t) \phi_i(x)$ where the $\phi_i$ are usually orthogonal polynomials, e.g. Legendre. One advantage of this is that the orthogonal polynomials generate a diagonal mass matrix.

2. Nodal: Cells are comprised of multiple nodes on which the solution is defined. Reconstruction of the solution within the cell is then based on fitting an interpolating polynomial, e.g. $u(x,t) = \sum_{i=1}^N u_i(t) l_i(x)$ where $l_i$ is a Lagrange polynomial and $u_i(t) = u(x_i, t)$ is the solution value at node $x_i$. One advantage of this is that you can position your nodes at quadrature points and quickly evaluate integrals.

In the context of a large-scale, complex ($10^6$-$10^9$ DOFs) 3D mixed structured/unstructured parallel application with goals of flexibility, clarity of implementation, and efficiency, what are the comparative advantages and disadvantages of each method? I'm sure there's good literature already out there, so if someone could point me to something, that'd be great as well.

The tradeoffs below apply equally to DG and to spectral elements (or $p$-version finite elements).

Changing the order of an element, as in $p$-adaptivity, is simpler for modal bases because the existing basis functions do not change. This is generally not relevant to performance, but some people like it anyway. Modal bases can also be filtered directly for some anti-aliasing techniques, but that is also not a performance bottleneck. Modal bases can also be chosen to expose sparsity within an element for special operators (usually the Laplacian and mass matrices). This does not apply to variable-coefficient or non-affine elements, and the savings are not huge for the modest orders typically used in 3D.

Nodal bases simplify the definition of element continuity, simplify the implementation of boundary conditions, contact, and the like, are easier to plot, and lead to better $h$-ellipticity in discretized operators (thus allowing use of less expensive smoothers/preconditioners). It is also simpler to define concepts that are used by solvers, such as rigid body modes (just use nodal coordinates), and to define certain grid transfer operators such as arise in multigrid methods. Embedded discretizations are also readily available for preconditioning, without needing a change of basis. Nodal discretizations can efficiently use collocated quadrature (as with spectral element methods), and the corresponding under-integration can be good for energy conservation. Inter-element coupling for first-order equations is sparser for nodal bases, though otherwise-modal bases are often modified to obtain the same sparsity.

• Thanks, good points. Any insight into quadrature/integration, and the implementation of limiters for discontinuities in the two approaches? – Aurelius Nov 23 '13 at 14:16
• Modal and nodal basis functions are usually designed to span the same space. I added a note about collocated quadrature. No linear high-order basis can capture extrema directly, so implementation of limiters is very similar for the methods I am familiar with. – Jed Brown Nov 23 '13 at 19:21
• Thanks again, accepting this answer. One last subjective question: if you were starting a new general-purpose CFD project leveraging something like petsc, would you have a strong preference for nodal vs modal? – Aurelius Dec 3 '13 at 18:47
• I think nodal methods are almost always more practical.
The operations that are "more elegant" for modal bases are not bottlenecks. – Jed Brown Dec 3 '13 at 19:58

I was curious to see some answers to this question, but somehow nobody bothers to reply... Regarding literature, I really like the book Spectral/hp Element Methods for Computational Fluid Dynamics (there's also a cheaper soft-cover version now) and also the book of Hesthaven and Warburton. These two go into quite some detail that will help you implement the methods. The book of Canuto, Hussaini, Quarteroni and Zang is more theoretical; it also has a second volume, "Spectral Methods: Evolution to Complex Geometries and Applications to Fluid Dynamics". I don't work on DG methods and I'm not expert enough to judge the advantages of nodal vs. modal. The book of Karniadakis & Sherwin is more focused on methods with continuous modal expansions. In this type of method, you are obliged to reorder the modes in two neighbouring elements in such a fashion that the corresponding modes on the interface match, in order to preserve the continuity of the global expansion. In addition, imposing boundary conditions requires extra attention, since your modes are not associated with a specific location on the boundary. I hope someone familiar with this type of method will add more detail.

• Thanks, I'm bummed I haven't gotten any good answers here too! I have both the Karniadakis/Sherwin (modal & continuous) and Hesthaven/Warburton (nodal) books and I also recommend them. I'm comfortable with the implementations, it's just the pros/cons that aren't clear to me. – Aurelius Nov 21 '13 at 16:06
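To make the nodal/modal distinction above concrete, here is a small 1D illustration in C♯ (my own sketch, not drawn from any of the books mentioned): it evaluates the same polynomial solution once from modal Legendre coefficients via the three-term recurrence, and once from nodal values via Lagrange interpolation. The two agree at every point, since both bases span the same polynomial space; they differ in data layout and operator structure, not in approximation power.

using System;

class DgBases
{
    // Modal: u(x) = sum_i coeff[i] * P_i(x), using the Legendre recurrence
    // (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
    static double EvalModal(double[] coeff, double x)
    {
        double pPrev = 1.0, p = x;              // P_0 and P_1
        double u = coeff[0] * pPrev;
        if (coeff.Length > 1) u += coeff[1] * p;
        for (int n = 1; n < coeff.Length - 1; n++)
        {
            double pNext = ((2 * n + 1) * x * p - n * pPrev) / (n + 1);
            u += coeff[n + 1] * pNext;
            pPrev = p; p = pNext;
        }
        return u;
    }

    // Nodal: u(x) = sum_i values[i] * l_i(x), where l_i is the Lagrange
    // basis polynomial associated with nodes[i].
    static double EvalNodal(double[] nodes, double[] values, double x)
    {
        double u = 0;
        for (int i = 0; i < nodes.Length; i++)
        {
            double li = 1.0;
            for (int j = 0; j < nodes.Length; j++)
                if (j != i) li *= (x - nodes[j]) / (nodes[i] - nodes[j]);
            u += values[i] * li;
        }
        return u;
    }

    static void Main()
    {
        // u(x) = 1*P_0(x) + 2*P_1(x) + 3*P_2(x) on [-1, 1].
        double[] modal = { 1, 2, 3 };
        double[] nodes = { -1, 0, 1 };

        // Convert modal -> nodal by sampling at the nodes.
        double[] nodal = new double[nodes.Length];
        for (int i = 0; i < nodes.Length; i++) nodal[i] = EvalModal(modal, nodes[i]);

        // Both representations evaluate to the same value at any x.
        double x0 = 0.3;
        Console.WriteLine($"modal: {EvalModal(modal, x0)}, nodal: {EvalNodal(nodes, nodal, x0)}");
    }
}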
{}
Tag Info In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. In applied probability, a probability distribution can be specified in a number of different ways, often chosen for mathematical convenience: • by supplying a valid probability mass function or probability density function • by supplying a valid cumulative distribution function or survival function • by supplying a valid hazard function • by supplying a valid characteristic function • by supplying a rule for constructing a new random variable from other random variables whose joint probability distribution is known. A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.
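As a small illustration of two of the specification routes listed above (a probability mass function, and a rule for constructing a new random variable from other random variables whose joint distribution is known), here is a short C♯ sketch, with names of my own choosing, that builds the distribution of the sum of two independent fair dice by convolving their pmfs:

using System;
using System.Collections.Generic;

class SumOfDice
{
    static void Main()
    {
        // Specify a fair die by its probability mass function: P(X = k) = 1/6 for k = 1..6.
        var die = new Dictionary<int, double>();
        for (int k = 1; k <= 6; k++)
            die[k] = 1.0 / 6.0;

        // Construct a new random variable Z = X + Y from two independent copies:
        // P(Z = z) = sum over i + j = z of P(X = i) * P(Y = j)   (a discrete convolution).
        var sum = new Dictionary<int, double>();
        foreach (var a in die)
        {
            foreach (var b in die)
            {
                int z = a.Key + b.Key;
                double p = a.Value * b.Value;
                if (sum.ContainsKey(z)) sum[z] += p;
                else sum[z] = p;
            }
        }

        // The resulting univariate, discrete distribution peaks at 7 with probability 6/36.
        for (int z = 2; z <= 12; z++)
            Console.WriteLine($"P(Z = {z}) = {sum[z]:F4}");
    }
}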
{}
# Introduction to Harmonic Analysis on Reductive P-adic Groups. (MN-23): Based on lectures by Harish-Chandra at The Institute for Advanced Study, 1971-73

Allan J. Silberger
Pages: 380
https://www.jstor.org/stable/j.ctt130hkdj

1. Front Matter
2. FOREWORD (pp. i-i)
These notes represent the writer's attempt to organize and comprehend the mathematics communicated to him by Harish-Chandra, both in public lectures and private conversations, during the years 1971-1973. They offer the reader an ab initio introduction to the theory of harmonic analysis on reductive p-adic groups. Besides laying the foundations for a theory of induced representations by presenting Jacquet's theory, the Bruhat theory, the theory of the constant term, and the Maass-Selberg relations, these notes develop the theory of the Schwartz space on a p-adic group and the theory of the Eisenstein integral in complete detail. They also give the...
3. ACKNOWLEDGMENTS. (pp. i-i)
4. (pp. ii-iv)
5. Chapter 0. On the Structure of Reductive p-adic Groups. (pp. 1-13)
The theory to be developed in the five later chapters of these notes depends upon the structure theory for reductive groups with points in a p-adic field. Fortunately, this theory has been worked out in detail (cf. [2a], [2c] for the essentially algebraic aspects and [4c] for the directly related topological part). In this chapter we very briefly review only those facts from the structure theory which we shall need later. The reader may consult the references both for proofs and more details. If G is any group and H a subgroup, we write $Z_G(H)$ [$N_G(H)$] for the centralizer [normalizer] of...
6. Chapter 1. Generalities Concerning Totally Disconnected Groups and Their Representations. (pp. 14-77)
The intention of this chapter, violated in the two appendices to §1.2, is to present all the ideas needed in these notes which do not depend, either for their formulations or for their proofs, upon the theory of algebraic groups. A t.d. space or totally disconnected space is a Hausdorff space which has a countable basis consisting of open compact sets. It follows that a t.d. space is locally compact. Subordinate to any open covering of a totally disconnected space, there is a covering of the space by a countable union of disjoint open compact sets. Any closed...
7. Chapter 2. Jacquet's Theory, Bruhat's Theory, the Elementary Theory of the Constant Term. (pp. 78-115)
In this chapter and all remaining chapters of these notes we assume that G is the set of Ω-points of a connected reductive Ω-group G, Ω being a non-archimedean local field. For the properties of G which we shall need the reader should consult Chapter 0. The symbol Z will always denote the split component of G, i.e., the set of Ω-points of the largest Ω-split torus Z lying in the center of G. The chapter opens (§2.1) with a structure theorem for "infinitesimal" neighborhoods of the identity in G. In many applications this result replaces the differential operators and...
8. Chapter 3. Exponents and the Maass-Selberg Relations. (pp. 116-146)
The purpose of this chapter is to prepare what we need and to prove the Maass-Selberg relations (Theorem 3.5.3) in the case where the inducing representation is supercuspidal. A more general version of this theorem (Theorem 4.6.3), in which the inducing representation is only required to be square integrable, is proved in the next chapter. Let G be, as usual, the set of Ω-points of a reductive Ω-group and let Z be the split component of G.
For U a complex vector space consider $f \in \mathcal{A}(G : U)$. According to Corollary 1.10.2, f is Z-finite, so the subspace...
9. Chapter 4. The Schwartz Spaces. (pp. 147-220)
The Schwartz spaces, $\mathcal{C}(G)$ and $\mathcal{C}_*(G)$, will be introduced and studied in this chapter. We shall see that $\mathcal{C}(G)$ [$\mathcal{C}_*(G)$] is a complete locally convex Hausdorff space. The space $\mathcal{C}(G)$ is an algebra under convolution on G; for every $\chi \in \hat{Z}$ there is a space $\mathcal{C}_*(G, \chi) \subset \mathcal{C}_*(G)$ such that $\mathcal{C}_*(G, \chi)$ is an algebra under convolution on G/Z. There are inclusion mappings, continuous in the relevant topologies: $C_c^{\infty}(G) \hookrightarrow \mathcal{C}(G) \hookrightarrow L^2(G)$ [$C_c^{\infty}(G, \chi) \hookrightarrow \mathcal{C}_*(G, \chi) \hookrightarrow L^2(G, \chi)$]. The K-finite matrix coefficients of the discrete series of G lie in $\mathcal{C}_*(G)$, wave packets in $\mathcal{C}(G)$ (cf. §5.5.1). Indeed, it is a nontrivial problem to construct wave packets...
10. Chapter 5. The Eisenstein Integral and Applications. (pp. 221-361)
We fix, for the whole chapter, a maximal split torus $A_0$ of G and an $A_0$-good maximal compact subgroup K of G. Let A be an $A_0$-standard torus, $M = Z_G(A)$, and $\omega \in \mathcal{E}_2(M)$. If ω is unramified, then we have seen that the theory of induced representations provides us with a class $C_M^G(\omega) \in \mathcal{E}(G)$. Let (V, τ) be a smooth unitary double representation of K. Then the map $f \mapsto f_P = \sum_s f_{P,s}$ from $\mathcal{A}(C(\omega), \tau)$ to $\oplus_{s \in W(A)} \mathcal{A}(\omega^s, \tau_M)$ gives rise to the Maass-Selberg relations. The Eisenstein integral, $\psi \mapsto E(P : \psi)$ ($P \in \mathbb{P}(A)$), maps $\mathcal{A}(\omega, \tau_M)$ to $\mathcal{A}(C(\omega), \tau)$....
11. References. (pp. 362-364)
12. SELECTED TERMINOLOGY. (pp. 365-369)
13. SELECTED NOTATIONS. (pp. 370-371)
14. Back Matter (pp. 372-372)
{}
# `int_0^oo xe^(-x/3) dx` Determine whether the integral diverges or converges. Evaluate the integral if it converges.

We will use integration by parts: `int udv=uv-int vdu`

`int_0^infty xe^(-x/3)dx=|[u=x,dv=e^(-x/3)dx],[du=dx,v=-3e^(-x/3)]|=`

`-3xe^(-x/3)|_0^infty+3int_0^infty e^(-x/3)dx=`

`(-3xe^(-x/3)-9e^(-x/3))|_0^infty=`

`lim_(x to infty)[-3e^(-x/3)(x+3)]+3cdot0cdot e^0+9e^0`

To calculate the above limit we will use L'Hospital's rule: `lim_(x to c)(f(x))/(g(x))=lim_(x to c)(f'(x))/(g'(x))`

`lim_(x to infty)[-3e^(-x/3)(x+3)]=-3lim_(x to infty) (x+3)/e^(x/3)=`

Apply L'Hospital's rule, remembering that the derivative of `e^(x/3)` is `1/3 e^(x/3)`:

`-3lim_(x to infty)1/(1/3 e^(x/3))=-9lim_(x to infty)e^(-x/3)=0`

Let us now return to the integral: `0+0+9=9`

As we can see, the integral converges and it has a value of 9. The image below shows the graph of the function and the area under it representing the value of the integral. Looking at the image we can see that the graph approaches the `x`-axis (the function converges to zero) "very fast". This suggests that the integral should converge.
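As an independent sanity check (not part of the original solution), the improper integral can also be approximated numerically by integrating over `[0, T]` with the trapezoidal rule for increasing `T`. A rough C♯ sketch follows; because the integrand decays exponentially, its output approaches 9 quickly, matching the value derived above.

using System;

class ImproperIntegralCheck
{
    // Trapezoidal rule for f(x) = x e^(-x/3) on [0, upper] with n panels.
    static double Trapezoid(double upper, int n)
    {
        Func<double, double> f = x => x * Math.Exp(-x / 3.0);
        double h = upper / n;
        double sum = (f(0) + f(upper)) / 2.0;
        for (int i = 1; i < n; i++)
            sum += f(i * h);
        return sum * h;
    }

    static void Main()
    {
        // The tail e^(-x/3) decays quickly, so modest truncation points suffice.
        foreach (double upper in new[] { 10.0, 30.0, 60.0, 120.0 })
            Console.WriteLine($"[0, {upper}] ~ {Trapezoid(upper, 100000):F6}");
        // Output approaches 9, the exact value of the integral.
    }
}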
{}
## PixelBot Part 2: Devices need protocols, apparently

So there I was. I'd just got home, turned on my laptop, opened the Arduino IDE and Monodevelop, and then.... nothing. I knew I wanted my PixelBot to talk to the PixelHub I'd started writing, but I was confused as to how I could make it happen.

In this kind of situation, I realised that although I knew what I wanted them to do, I hadn't figured out the how. As it happens, when you're trying to get one (or more, in this case) different devices to talk to each other, there's something rather useful that helps them all to speak the same language: a protocol.

A protocol is a specification that defines the language that different devices use to talk to each other and exchange messages. Defining one before you start writing a networked program is probably a good idea - I find it particularly helpful to write a specification for the protocol that the program(s) I'm writing will use, especially if their function(s) is/are complicated.

To this end, I've ended up spending a considerable amount of time drawing up the PixelHub Protocol - a specification document that defines how my PixelHub server is going to talk to a swarm of PixelBots. It might seem strange at first, but I decided on a (mostly) binary protocol. Upon closer inspection though, (I hope) it makes a lot of sense. Since the Arduino is programmed in C++ and has a limited amount of memory, it doesn't have any of the standard string manipulation functions that you're used to in C♯. Since C++ is undoubtedly the harder of the 2 to write, I decided to make it easier to write the C++ rather than the C♯. Messages on the Arduino side come in as a byte[] array, so (in theory) it should be easy to pick out certain known parts of the array and cast them into various different fundamental types.

With the specification written, the next step in my PixelBot journey is to actually implement it, which I'll be posting about in the next entry in this series!

## Easier TCP Networking in C♯

I see all sorts of C♯ networking tutorials out there telling you that you have to use byte arrays and buffers and all sorts of other complicated things if you ever want to talk to another machine over the network. Frankly, it's all rather confusing. Thankfully though, it doesn't have to stay this way.
I've learnt a different way of doing TCP networking in C♯ at University (thanks Brian!), and I realised the other day I've never actually written a blog post about it (that I can remember, anyway!). If you know how to read and write files and understand some basic networking concepts (IP addresses, ports, what TCP and UDP are, etc.), you'll have no problems understanding this.

### Server

The easiest way to explain it is to demonstrate. Let's build a quick server / client program where the server says hello to the client. Here's the server code:

// Server.cs
using System;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Threading.Tasks;

public class Server
{
    public int Port { get; private set; }

    public Server(int inPort)
    {
        Port = inPort;
    }

    public async Task Start()
    {
        TcpListener server = new TcpListener(IPAddress.Any, Port);
        server.Start();

        while (true)
        {
            TcpClient nextClient = await server.AcceptTcpClientAsync();
            StreamReader incoming = new StreamReader(nextClient.GetStream());
            StreamWriter outgoing = new StreamWriter(nextClient.GetStream()) { AutoFlush = true };

            string name = (await incoming.ReadLineAsync()).Trim();
            await outgoing.WriteLineAsync($"Hello, {name}!");
            Console.WriteLine("Said hello to {0}", name);
            nextClient.Close();
        }
    }
}

// Use it like this in your Main() method:
Server server = new Server(6666);
server.Start().Wait();

Technically speaking, that asynchronous code ought to be running in a separate thread - I've omitted it to make it slightly simpler :-)

Let's break this down. The important bit is in the Start() method - the rest is just sugar around it to make it run if you want to copy and paste it. First, we create & start a TcpListener:

TcpListener server = new TcpListener(IPAddress.Any, Port);
server.Start();

Once done, we enter a loop, and wait for the next client:

TcpClient nextClient = await server.AcceptTcpClientAsync();

Now that we have a client to talk to, we attach a StreamReader and a StreamWriter with a special option set on it to allow us to talk to the remote client with ease. The option set on the StreamWriter is AutoFlush, and it basically tells it to flush its internal buffer every time we write to it - that way things we write to it always hit the TcpClient underneath. Depending on your setup the TcpClient does some internal buffering & optimisations anyway, so we don't need the second layer of buffering here:

StreamReader incoming = new StreamReader(nextClient.GetStream());
StreamWriter outgoing = new StreamWriter(nextClient.GetStream()) { AutoFlush = true };

With that, the rest should be fairly simple to understand:

string name = (await incoming.ReadLineAsync()).Trim();
await outgoing.WriteLineAsync($"Hello, {name}!");
Console.WriteLine("Said hello to {0}", name);
nextClient.Close();

First, we grab the first line that the client sends us, and trim off any whitespace that's lurking around. Then, we send back a friendly hello message to the client, before logging what we've done to the console and closing the connection.

### Client

Now that you've seen the server code, the client code should be fairly self explanatory. The important lines are highlighted:

using System;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Threading.Tasks;

public class Client
{
    public string Hostname { get; private set; }
    public int Port { get; private set; }

    public Client(string inHostname, int inPort)
    {
        Hostname = inHostname;
        Port = inPort;
    }

    public async Task<string> GetHello(string name)
    {
        TcpClient client = new TcpClient();
        client.Connect(Hostname, Port);

        StreamReader incoming = new StreamReader(client.GetStream());
        StreamWriter outgoing = new StreamWriter(client.GetStream()) { AutoFlush = true };

        await outgoing.WriteLineAsync(name);
        return (await incoming.ReadLineAsync()).Trim();
    }
}

// Use it like this in your Main() method:
Client client = new Client("localhost", 6666);
Console.WriteLine("The server said: {0}", client.GetHello("Bill").Result);

First, we create a new client and connect it to the server.
Next, we connect the StreamReader and StreamWriter instances to the TcpClient, and then we send the name to the server. Finally, we read the response the server sent us and return it. Easy!

Here's some example output:

Server:
./NetworkingDemo-Server.exe
Said hello to Bill

Client:
./NetworkingDemo-Client.exe
The server said: Hello, Bill!

The above code should work on Mac, Windows, and Linux. Granted, it's not the most efficient way of doing things, but it should be fine for most general purposes. Personally, I think the trade-off between performance and readability/ease of understanding of code is totally worth it. If you prefer, I've got an archive of the above code I wrote for this blog post - complete with binaries. You can find it here: NetworkingDemo.7z.

Sorry for the (very) late post! I fell rather ill on the day before I was going to write the next post, and haven't been well enough to write it until now! Hopefully more cool posts will be on their way soon :-)

(Above: A nice picture of Durham Cathedral. Taken by @euruicimages.)

How many times have you just finished adding a new feature to your latest and greatest program, and hit the "Run" button in Visual Studio or Monodevelop? Have you ever wondered what happens under the hood? Have you ever encountered a strange build error, and just googled the error message in the hope of finding a solution?

In this post, I'm going to take you on a journey to understand the build process for a typical C♯ program that happens every time you press Ctrl + Shift + B (or F8 in Monodevelop). I also hope to show you why I think it's important to understand what happens behind the scenes.

To start any journey, we need a map. I've created a simple diagram for our journey. We'll dive into the world of the project file further down. Lastly, we'll end our journey at the individual source code files that actually make up your project. Before that though, we need to make a quick stop in your sln file.

The sln file (or Solution File) is the place where everything starts. Normally, you only get one solution file for each project you create. It's the file that keeps track of the name of your project, its id, and where all the projects are that your solution references (they're usually in appropriately named subfolders). Note that it doesn't keep references to the other projects that you reference (that's done at the project file level). Solution files are sometimes automatically detected too (xbuild does this for sure). If you're in a command line or terminal, you can build an entire solution with just one command, without having to open Visual Studio or Monodevelop:

cd /path/to/awesome/project

# Windows
msbuild AwesomeSpaceProject.sln

# Linux / Mono
xbuild

With that out of the way, we can talk about project files. Project files define which source code files should be built, and how. You can even add custom triggers that run at any time before, during, or after the build process here (this is how I embedded commit hashes in a C♯ binary)! Though the syntax is quite simple, much of it is, sadly, un- or under-documented. Thankfully most of it is quite intuitive, and a little experimentation goes a long way:

<Project>
    ....
    <ItemGroup>
        <Reference Include="System" />
        <Reference Include="System.Drawing" />
        ....
    </ItemGroup>
    <ItemGroup>
        <Compile Include="Program.cs" />
        <Compile Include="GameObjects/Spaceship.cs" />
        <Compile Include="GameObjects/Enemy.cs" />
        ....
    </ItemGroup>
    <ItemGroup>
        <EmbeddedResource Include="Resources\Spritesheet.png" />
        <EmbeddedResource Include="Resources\CoolSpace.ttf" />
    </ItemGroup>
    ....
</Project>

The above is a (simplified) example of a project file. It might be called something like CoolSpace.csproj. It references a few core assemblies, specifies a few C♯ files for compiling, and embeds a resource or two. Of course, these files are generated for you automatically, but it's always helpful to know how it works, so that not only can you fix it more easily when it goes wrong, but you can also extend it to do extra things that you can't through the user interface, like using wildcards (careful! Too many wildcards can slow down your build) to specify a range of files that you want to embed without having to go around them all one by one.

Next, let's talk about references. References allow you to pull in code from elsewhere, like a core system library, another project (that's how you link 2 projects in (or even out of!) a solution together), a library of sorts (like NuGet), or another random DLL or EXE (yes, you can reference other executable files) file lying around. With references, you can spread your code around loads of files and pull in libraries from all over the galaxy, and have the build process follow all of your references around and tie everything up into a nice neat package for you.

The next piece of the puzzle is the builder - the software that actually builds your code. On Windows with Visual Studio, it's called msbuild, and is the original build tool, created by Microsoft, that sets the standard. On Linux, there's a different (but very similar) tool called xbuild. xbuild implements the standard that msbuild sets, allowing solutions written on Windows with Visual Studio to be compiled 9 times out of 10 on Linux (and probably Mac too, though I don't have one to check - let me know in the comments if you have one!) without any changes.

The final stone in the bridge, so to speak, is your code itself. The builder (whether it be xbuild or msbuild) compiles each included source file, and the results are then linked together into the final executable binary. That binary contains Common Intermediate Language (or CIL), which is executed by the .NET runtime on Windows (or mono on Linux / Mac). While CIL is a different topic for a separate time, it's still important, so I'm mentioning it here.

## Developing and Running C# Programs on Linux

Recently I was asked about running C# on Linux, and I remembered that I haven't actually written a blog post on it! This is that blog post I never wrote: A beginner's guide on how to develop, compile, and run C# programs on Linux. Here I assume a debian-based system (specifically Ubuntu 16.04), but it can be just as easily adapted to work with other flavours.

To start off, you'll need to add the mono repository to your system's apt repository list. Taken from the official mono website, here are the commands to do that:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt update

The more experienced may notice that although I'm installing on an Ubuntu 16.04 system, I'm still specifying wheezy when installing mono. This shouldn't make too much of a difference - currently mono only supports wheezy upon install.

Next, it's time to actually install mono itself. This is easy:

sudo apt install mono-mcs

The above ought to install the mono runtime and compiler.
If you experience issues down the line, simply install the mono-complete package instead. With this installed, you should now be able to launch compiled programs written in C♯ by double-clicking them in your file manager - without any modifications. C♯ programs compiled on Windows can be run on Linux, and vice versa (this is because C♯ is actually compiled into something called the Common Intermediate Language, or CIL). If you're on a server, then you can run your programs by prefixing the executable name with mono, like this:

mono program-of-awesomeness.exe

If your program crashes, however, the output is not very helpful at all. Thankfully though there's a way to remedy that. First, make sure your program is compiled in debug mode, and then add the --debug flag when running your program:

mono --debug another-awesome-program.exe

This is all very well, but what about compiling? That's relatively easy as well. mcs is the Linux version of csc on Windows, and behaves almost identically with some minor syntactical changes (read up about it by typing mcs --help or man mcs). For Visual Studio solutions, there's xbuild. xbuild is a new-ish build tool for Linux that is capable of compiling almost any Visual Studio solution file without any modifications (though there are some undocumented difficulties that you might run into as an advanced user). To use it, first install the mono-xbuild package (sudo apt install mono-xbuild), and then, in a terminal, cd into the directory that contains the solution you want to compile, and then type xbuild and hit enter. That's it!

All this work in the terminal is cool for running C♯ programs on GUI-less boxes and servers, but it's no way to develop a larger application. Thankfully, there's a solution to that too! Monodevelop is the best C♯ IDE out there at the moment - it's like Visual Studio for Linux. It's easy to install, too - simply install the monodevelop package (sudo apt install monodevelop).

(Above: Monodevelop running on my Linux laptop. The project open here is my sprite packing tool.)

The package in the official repositories should be good enough for general use (though it's probably out of date). For the latest version with all the latest features though, you'll have to compile it from source. Sadly this is not a trivial process. To do it you need to be comfortable with the terminal and know your way reasonably well around a Linux system. If you still want to go ahead anyway, start by downloading the latest release and follow the instructions. You'll probably find it keeps complaining about things not existing - usually a quick apt search {thing} reveals which package you need to install in order to get it to work. If you have trouble, post a comment below and I'll try to help you out.

Even without compiling monodevelop from source, it's still a pretty good IDE. It lets you create Visual-Studio-compatible solution files, and compile your code on the fly at the touch of a button.

## Picking the right interface for multicast communications

At the recent hardware meetup, I was faced with an interesting problem: I was trying to communicate with my Wemos over multicast UDP in order to get it to automatically discover my PixelHub Server, but the multicast pings were going out over the wrong interface - I had both an ethernet cable plugged in and a WiFi hotspot running on my integrated wireless card. The solution to this is not as simple as you might think - you have to not only pick the right interface, but also the right version of the IP protocol.
You also have to have some way of picking the correct interface in the first place. Let's say you have a big beefy PC with a wireless card and 2 ethernet ports that are (for some magical reason) all in use at the same time, and you want to communicate with another device over your wireless card and not either of your ethernet ports. I developed this on my Linux laptop, but it should work just fine on other OSes.

To start, it's probably a good idea to list all of our network interfaces:

using System.Net.NetworkInformation;

// ...

NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces();
foreach (NetworkInterface nic in nics)
{
    Console.WriteLine("Id: {0} - Description: {1}", nic.Id, nic.Description);
}

This (on my machine at least!) outputs something like this:

eth0 - eth0
lo - lo
wlan0 - wlan0

Your machine will probably output something different. Next, since you can't normally address this list of network interfaces directly by name, we need to write a method to do it for us:

public static NetworkInterface GetNetworkIndexByName4(string targetInterfaceName)
{
    NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces();
    foreach (NetworkInterface nic in nics)
    {
        if (nic.Id == targetInterfaceName)
            return nic;
    }
    throw new Exception($"Error: Can't find network interface with the name {targetInterfaceName}.");
}

Pretty simple, right? We're not out of the woods yet though - next we need to tell our UdpClient to talk on a specific network interface. Speaking of which, let's set up that UdpClient so that we can use it to do stuff with multicast:

using System.Net;

// ...

UdpClient client = new UdpClient(5050);
client.JoinMulticastGroup(IPAddress.Parse("239.62.148.30"));

With that out of the way, we can now deal with telling the UdpClient which network interface it should be talking on. This is actually quite tricky, since the UdpClient doesn't take a NetworkInterface directly. Let's define another helper method:

public static int GetIPv4Index(this NetworkInterface nic)
{
    IPInterfaceProperties ipProps = nic.GetIPProperties();
    IPv4InterfaceProperties ip4Props = ipProps.GetIPv4Properties();
    return ip4Props.Index;
}

The above extension method gets the index of the IPv4 interface of the network interface. Since at the moment we are in the middle of a (frustratingly slow) transition from IPv4 to IPv6, each network interface must have both an IPv4 interface, for talking to other IPv4 hosts, and an IPv6 interface for talking to IPv6 hosts. In this example I'm using IPv4, since the Wemos I want to talk to doesn't support IPv6 :-(

Now that we have a way to get the index of a network interface, we need to translate it into something that the UdpClient understands:

int interfaceIndex = (int)IPAddress.HostToNetworkOrder(NetTools.GetNetworkIndexByName4(NetworkInterfaceName).GetIPv4Index());

That's complicated! Thankfully, we don't need to pick it apart completely - it just works :-)

Now that we have the interface index in the right format, all we have to do is tell the UdpClient about it. Again, this is also slightly overcomplicated:

client.Client.SetSocketOption(
    SocketOptionLevel.IP,
    SocketOptionName.MulticastInterface,
    interfaceIndex
);

Make sure that you put this call before you join the multicast group. With that done, your UdpClient should finally be talking on the right interface! Whew! That was quite the rabbit hole (I sent my regards to the rabbit :P). If you have any issues with getting it to work, I'm happy to help - just post a comment down below.
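For reference, here is a rough end-to-end consolidation of the above (my own sketch, assuming the two helper methods live in a static NetTools class, and using a hypothetical interface name and multicast group - adjust both for your network):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class MulticastPing
{
    static void Main()
    {
        string networkInterfaceName = "wlan0";   // hypothetical - pick yours from the list above
        IPAddress groupAddress = IPAddress.Parse("239.62.148.30");
        int port = 5050;

        UdpClient client = new UdpClient(port);

        // Tell the socket which interface to use for multicast traffic...
        int interfaceIndex = (int)IPAddress.HostToNetworkOrder(
            NetTools.GetNetworkIndexByName4(networkInterfaceName).GetIPv4Index()
        );
        client.Client.SetSocketOption(
            SocketOptionLevel.IP,
            SocketOptionName.MulticastInterface,
            interfaceIndex
        );

        // ...and only then join the multicast group.
        client.JoinMulticastGroup(groupAddress);

        // Send a test datagram to everyone in the group.
        byte[] payload = Encoding.UTF8.GetBytes("ping");
        client.Send(payload, payload.Length, new IPEndPoint(groupAddress, port));
    }
}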
### Sources

• Stackoverflow answers: 1, 2
• Openclipart images: 1, 2

## Chaikin Curves in C#: An alternative curve generation algorithm

A little while ago I was curious to know if there were any other ways to generate a smooth curve other than with a Bezier Curve. Turns out the answer is yes, and it comes in the form of a Chaikin Curve, which was invented in 1974 by a lecturer in America by the name of George Chaikin. A few days (and a lot of debugging) later, I found myself with a Chaikin curve generator written in pure C♯ (I seem to have this fascination with implementing algorithms :P), so I thought I'd share it here. Before I do though, I should briefly explain how Chaikin's algorithm actually works. It's actually quite simple. If you have a list of control points, and you were to draw a line through them all, you'd get this:

The magic of the algorithm happens when you interpolate between your control points. If you build a new list of points that contains points that are ¼ and ¾ along each of the lines between the current control points and draw a line through them instead, then the line suddenly gets a lot smoother. This process can be repeated multiple times to further refine the curve, as is evidenced in the animation above. This page is very helpful in understanding the algorithm if you're having trouble getting your head around it.

My implementation makes use of the PointF class in the System.Drawing namespace, and also has the ability to generate an SVG version of any generated curve, so that it can be inspected and debugged. You can find my implementation here: Chaikin Generator - comments and improvements are welcome! Instructions on how to use it are available in the README, and the class is fully documented with Intellisense comments, so it should feel fairly intuitive to use. I've tried to use patterns that are present in the rest of the .NET framework too, so you can probably even guess how to use it correctly. Additionally, I'm going to try to put it up as a NuGet package, but currently I can't get NuGet to pack it on Linux (when I do, you can expect a tutorial on here!)
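Since the whole algorithm boils down to that ¼/¾ interpolation rule, here is a minimal sketch of a single refinement pass. This illustrates the rule itself and is not the code from the Chaikin Generator repository; the class and method names are made up:

using System.Collections.Generic;
using System.Drawing;

public static class ChaikinSketch
{
    // Performs one Chaikin refinement pass over an open polyline.
    public static List<PointF> Refine(IList<PointF> points)
    {
        List<PointF> refined = new List<PointF>();
        for (int i = 0; i < points.Count - 1; i++)
        {
            PointF a = points[i], b = points[i + 1];
            // Keep the points ¼ and ¾ of the way along each segment.
            refined.Add(new PointF(0.75f * a.X + 0.25f * b.X, 0.75f * a.Y + 0.25f * b.Y));
            refined.Add(new PointF(0.25f * a.X + 0.75f * b.X, 0.25f * a.Y + 0.75f * b.Y));
        }
        return refined;
    }
}

Calling Refine repeatedly (say, 3 to 5 times) on the output of the previous pass produces successively smoother polylines, which is exactly the refinement process described above.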
## The lost post: Embedding commit hashes in C♯ binaries

I seem to remember that I've started to write this post no less than 3 times, and I've managed to lose the source each and every time. If you're reading this it means that I've managed to complete it this time :D

Imagine you've got a confused client on the phone, asking why the newest feature X in your program doesn't work. You ask them whether they have the latest version of your program installed... and they say that they don't know. Version numbers and changelogs to the rescue! Except.... that last release was rather rushed and you forgot to finish updating the changelog. This is just one scenario in which embedding the latest commit hash into your program is useful. You could embed the short hash (the first 7 characters) into the version string, for example v3.6.1-375ae31. Then you could compare it against the revision history to see exactly what your codebase looked like when your client's version was built. For a while now I've wanted to do just this, and now I've finally figured out how to do it cross-platform. This post documents how I went about doing it, and how you can do it too. The basic principle of the idea is to run a command that will output the latest commit hash to a file before the build starts, and then embed that file into the resulting binary. To achieve this, we need to go about it in 2 parts.

Firstly, we need to fiddle with the project file to add an optional pre-build event. Open the project file (MyProject.csproj) in your favourite text editor (but preferably not your favourite IDE such as Visual Studio or MonoDevelop) and add this to the bottom, just before the closing </Project>:

<Target Name="BeforeBuild" BeforeTargets="Build">
    <Exec Command="git rev-parse HEAD &gt;git-hash.txt" WorkingDirectory="$(ProjectDir)" IgnoreExitCode="true" />
</Target>

If you don't use Git, then change git rev-parse HEAD &gt;git-hash.txt in the above to the equivalent command for your version control system. For SVN, this stackoverflow question looks like it'll do the job for Windows - for Linux you should go here. Once done, the next step is to add the generated file as an embedded resource. We can't do this with the GUI easily here since the file in question hasn't been generated yet! Add the following to the bottom of the csproj file, again just before the </Project>:

<ItemGroup>
    <EmbeddedResource Include="git-hash.txt" />
</ItemGroup>

Remember to change the git-hash.txt to whatever you changed it to above. Next, save it and reopen the solution in your IDE. The final step is to actually utilise the commit hash (or revision number in SVN) in your program. Since it's just an embedded file, you can simply find it with a bit of reflection, and read it in with a StreamReader. I've written a good tutorial on how to do that over on my Embedding files in C♯ binaries post. Make sure that your program is prepared to handle junk instead of a commit hash - you can't predict the contents of the embedded file if Git (or SVN) isn't installed on the machine used to build your project. If you want to require that the commit hash (or revision number) is actually present, just remove the IgnoreExitCode="true" from the first snippet above.
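In the same spirit as that tutorial, here's a minimal sketch of the reflection-and-StreamReader step. The class name is an illustrative assumption, not code from the original post, and it assumes the resource was embedded as git-hash.txt:

using System.IO;
using System.Linq;
using System.Reflection;

public static class BuildInfo
{
    // Reads the embedded git-hash.txt back out of the compiled binary.
    public static string GetCommitHash()
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        // Embedded resource names are prefixed with the project's default
        // namespace, so search for the suffix rather than hard-coding the full name.
        string resourceName = asm.GetManifestResourceNames()
            .FirstOrDefault(name => name.EndsWith("git-hash.txt"));
        if (resourceName == null)
            return "unknown"; // handle the junk / missing-Git case gracefully
        using (Stream stream = asm.GetManifestResourceStream(resourceName))
        using (StreamReader reader = new StreamReader(stream))
            return reader.ReadToEnd().Trim();
    }
}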
## Use C♯ 6.0 today in Visual Studio 2013

(Banner from hdw.eweb4.com)

In case you haven't heard, C♯ 6.0 is here now and it's awesome (here's a cheat sheet from programmingwithmosh.com showing the most notable new features). Unfortunately, you must be using either Visual Studio 2015 or above or MonoDevelop in order to take advantage of it.... until now: Microsoft have released their C♯ 6.0 compiler Roslyn as a NuGet package. If you don't know what a NuGet package is, NuGet is a modular system that allows you to pull in and use various different libraries and tools automatically. There's a central registry over at nuget.org, which people (like you!) can upload their packages to and other people can download them from. This looks like a good tutorial for Windows users. MonoDevelop users need to install this addin, but it should be installed already. All you have to do is install the Microsoft.Net.Compilers NuGet package in order to use C♯ 6.0 in Visual Studio 2013. That's it! Unfortunately, this breaks the build process on platforms other than Windows, as the Microsoft.Net.Compilers package is Windows only. The solution is fairly simple however.

Once you've installed the above NuGet package, open your ".csproj" file in your favourite plain text editor (such as Notepad or gedit), and find the line that looks like this (it should be near the bottom):

<Import Project="..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props" Condition="Exists('..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props')" />

And add AND '$(OS)' == 'Windows_NT' to the end of the Condition attribute like this:

<Import Project="..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props" Condition="Exists('..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props') AND '$(OS)' == 'Windows_NT'" />

The above adds a condition that prevents the compiler in the NuGet package you installed from being used on platforms other than Windows. This doesn't mean that you can't use C♯ 6.0 on other platforms - Mono (the Linux C♯ compiler) already supports C♯ 6.0 natively, so it doesn't need to be replaced. It's just the C♯ compiler bundled with Visual Studio 2013 and below that's no good.

## Set properties faster in C♯

Recently I rediscovered that C♯ lets you set multiple properties when you create an instance of an object. I've been finding it useful, so I thought that I'd share it here. Consider this code:

StreamWriter outgoingData = new StreamWriter("rockets.txt");
outgoingData.AutoFlush = true;
outgoingData.Encoding = Encoding.UTF8;

In the above I create a new StreamWriter and attach it to the file rockets.txt. I also turn on AutoFlush, and set the encoding to UTF8. The code above is starting to look a little messy, so let's rewrite it:

StreamWriter outgoingData = new StreamWriter("rockets.txt")
{
    AutoFlush = true,
    Encoding = Encoding.UTF8
};

This technique can come in particularly handy when you need to set a lot of properties on an object upon its creation. You can also do it with other types, too:

string[] animals = new string[] { "cat", "mouse", "elephant" };
List<int> primes = new List<int>() { 2, 3, 5, 7, 11, 13, 17, 19 };

## Set and forget async tasks

(Banner image from here by GDJ)

Recently I've been using asynchronous C# quite a bit, and I've run into the problem of 'setting and forgetting' an asynchronous task more than once. You might want to do this when handling requests in some sort of server, for example. I looked into it and came up with a few snippets of code I thought someone else might find useful, so I'm posting them here. Without further delay, here's the first snippet:

/// <summary>
/// Call this method to allow a given task to complete in the background.
/// Errors will be handled correctly.
/// Useful in fire-and-forget scenarios, like a TCP server for example.
/// From http://stackoverflow.com/a/22864616/1460422
/// </summary>
/// <param name="acceptableExceptions">Acceptable exceptions. Exceptions specified here won't cause a crash.</param>
public static async void ForgetTask(this Task task, params Type[] acceptableExceptions)
{
    try
    {
        await task.ConfigureAwait(false);
    }
    catch (Exception ex)
    {
        // TODO: consider whether derived types are also acceptable.
        if (!acceptableExceptions.Contains(ex.GetType()))
            throw;
    }
}

All asynchronous methods in C♯ return some form of Task - and these Tasks can be reconfigured and manipulated to make them run in the background on the thread pool, as in the above. The above also handles exceptions correctly so that your asynchronous methods won't just silently fail. Talking about exceptions, if you await an asynchronous method, it's highly likely that if they do throw an exception it'll be an AggregateException. This is not helpful.
It doesn't tell us anything about the actual exception that was thrown in the first place! It gets annoying manually inspecting the InnerExceptions property of the AggregateException very quickly. Thankfully, I've found a solution to that too:

try
{
    await DoAsyncWork();
}
catch (AggregateException agError)
{
    agError.Handle((error) => {
        ExceptionDispatchInfo.Capture(error).Throw();
        throw error;
    });
}
catch
{
    Console.Error.WriteLine("Something went very wrong O.o");
    throw;
}

I can't remember where I found the ExceptionDispatchInfo bit (if it was your idea, please let me know so I can give you appropriate credit!), but the rest I wrote myself. ExceptionDispatchInfo lives in the System.Runtime.ExceptionServices namespace, so you'll need a using for that. It essentially unwraps the AggregateException and rethrows each exception in turn, whilst preserving the original stack trace. That way you can track down the issue that threw the exception in the first place.
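To tie the two snippets together, here's a hypothetical end-to-end usage example. DoAsyncWork, the ForgetTask name, and the tolerated exception type are all assumptions for illustration (and it assumes the fire-and-forget helper above has been placed in a static class, as extension methods require):

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class FireAndForgetDemo
{
    static async Task DoAsyncWork()
    {
        await Task.Delay(100);
        Console.WriteLine("Background work done");
    }

    static void Main()
    {
        // Fire and forget: a SocketException is swallowed silently,
        // while any other exception type still crashes the process.
        DoAsyncWork().ForgetTask(typeof(SocketException));
        Console.ReadLine(); // keep the process alive long enough to see the output
    }
}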
{}
### What is the Detrended Price Oscillator (DPO)?

A detrended price oscillator is an oscillator that strips out price trends in an effort to estimate the length of price cycles from peak to peak or trough to trough. Unlike other oscillators, such as the stochastic or moving average convergence divergence (MACD), the DPO is not a momentum indicator. It highlights peaks and troughs in price, which are used to estimate buy and sell points in line with the historical cycle.

### Key Takeaways

• The DPO is used for measuring the distance between peaks and troughs in the price/indicator.
• If troughs have historically been about two months apart, that may help a trader make future decisions, as they can locate the most recent trough and determine that the next one may occur in about two months.
• Traders can use the estimated future peaks as selling opportunities or the estimated future troughs as buying opportunities.
• The indicator is typically set to look back over 20 to 30 periods.

### The Formula for the Detrended Price Oscillator (DPO) is

\begin{aligned} &DPO = \text{Price from } \frac{X}{2}+1 \text{ periods ago} - X\text{-period } SMA\\ &\textbf{where:}\\ &X = \text{Number of periods used for the look-back period}\\ &SMA = \text{Simple Moving Average}\\ \end{aligned}

### How to Calculate the Detrended Price Oscillator (DPO)

1. Determine a look-back period, such as 20 periods.
2. Find the closing price from X/2 + 1 periods ago. If using 20 periods, this is the price from 11 periods ago.
3. Calculate the SMA for the last X periods. In this case, 20.
4. Subtract the SMA value (step 3) from the closing price X/2 + 1 periods ago (step 2) to get the DPO value.

### What Does the Detrended Price Oscillator (DPO) Tell You?

The detrended price oscillator seeks to help a trader identify an asset's price cycle. It does this by comparing an SMA to a historical price that is near the middle of the look-back period. By looking at historical peaks and troughs on the indicator, which aligned with peaks and troughs in price, traders will typically draw vertical lines at these junctures and then count how much time elapsed between them.

If bottoms are two months apart, that helps assess when the next buying opportunity may come. This is done by isolating the most recent trough in the indicator/price and then projecting the next bottom two months out from there. If peaks are generally 1.5 months apart, a trader could find the most recent peak and then project that the next peak will occur 1.5 months later. This projected peak/time frame can be used as an opportunity to potentially sell a position before the price retreats. To further aid with trade timing, the distance between a trough and peak could be used to estimate the length of a long trade, or the distance between a peak and a trough to estimate the length of a short trade.

When the price from X/2 + 1 periods ago is above the SMA, the indicator is positive. When the price from X/2 + 1 periods ago is below the SMA, the indicator is negative. The detrended price oscillator does not go all the way to the latest price. This is because the DPO measures the price X/2 + 1 periods ago relative to the SMA, so the indicator only extends up to X/2 + 1 periods ago. This is OK, though, because the indicator is meant to highlight historical peaks and troughs. The indicator is displaced into the past, and therefore isn't a useful real-time gauge for trend direction. By definition, the indicator is not to be used for assessing trends.
Therefore, determining which trades to take is up to the trader. During an overall uptrend, the cycle bottoms will likely present good buying opportunities, and the peaks good selling opportunities. ### Example of How to Use the Detrended Price Oscillator (DPO) In the example below, International Business Machines (IBM) is bottoming approximately every 1.5 to two months. Upon noticing the cycle, look for buy signals that align with this timeframe. Peaks in price are occurring every one to 1.5 months; look for sell/shorting signals that align with this cycle. ### Difference Between the Detrended Price Oscillator (DPO) and Commodity Channel Index (CCI) Both of these indicators attempt to capture cycles in price moves, although they do it in very different ways. The DPO is primarily used to estimate the time it takes for an asset to move from peak to peak or trough to trough (or peak to trough, or vice versa). The commodity channel index (CCI) is usually bound between +100 and -100, but a breakout from those levels indicates something important is going on, such as a new major trend is beginning. Therefore the CCI is more focused on when a major cycle could be starting or ending, and not the time between the cycles. ### Limitations of Using the Detrended Price Oscillator (DPO) The DPO doesn't provide trade signals on its own, but rather is an additional tool to aid in trade timing. It does this by looking at when the price peaked and bottomed in the past. While this information may provide a reference point or baseline for future expectations, there is no guarantee the historical cycle length will repeat in the future. Cycles could get longer or shorter in the future. The indicator also doesn't factor in the trend. It is up to the trader to determine which direction to trade. If an asset's price is in free fall, it may not be worth buying even at cycle bottoms since the price could keep falling soon anyway. Not all the peaks and troughs on the DPO will move to the same level. Therefore, it is also important to look at price to mark the important peaks and troughs on the indicator. Sometimes the indicator may not drop much, or move up much, yet the reversal from that level still could be a significant one for the price.
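To make the four calculation steps above concrete, here is a minimal sketch in C♯. The price series and parameter choices are made up for illustration, and real charting packages may differ on exactly how "periods ago" is counted:

using System;
using System.Linq;

class DpoSketch
{
    // DPO[i] = close[i - (X/2 + 1)] - (X-period SMA ending at bar i).
    static double[] Dpo(double[] close, int x)
    {
        int shift = x / 2 + 1;                    // 11 when x = 20
        double[] dpo = new double[close.Length];
        for (int i = x - 1; i < close.Length; i++)
        {
            double sma = close.Skip(i - x + 1).Take(x).Average();
            dpo[i] = close[i - shift] - sma;
        }
        return dpo;
    }

    static void Main()
    {
        // Made-up cyclical closing prices, just to exercise the arithmetic.
        double[] close = Enumerable.Range(0, 60)
            .Select(t => 100 + 10 * Math.Sin(t / 5.0))
            .ToArray();
        double[] dpo = Dpo(close, 20);
        Console.WriteLine(string.Join(", ",
            dpo.Skip(19).Take(5).Select(v => v.ToString("F2"))));
    }
}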
{}
# Prove that $\det(A) \geq 0$.

Let $$A$$ be a $$4\times 4$$ skew-symmetric real matrix. Prove that $$\det(A) \geq 0$$.

I know $$A = \begin{bmatrix}0&a&b&c\\-a&0&d&e\\-b&-d&0&f\\-c&-e&-f&0\end{bmatrix}$$

By calculating the alternating sum of the products of the top row's entries and their minors, I was able to deduce that the determinant is $$\det(A) = a^2f^2+2acdf-2abef+b^2e^2-2bcde+c^2d^2$$ However I'm not sure how to prove that this is nonnegative for any reals $$a,b,c,d,e,f$$.

Notice that $$a$$ is paired with $$f$$, $$b$$ with $$e$$, and $$c$$ with $$d$$. With this in mind, let $$X=af, Y= be,$$ and $$Z = cd.$$ Then we want to show $$X^2+2XZ-2XY+Y^2-2YZ+Z^2$$ is nonnegative for $$X,Y,Z\in\mathbb{R}$$. We have that $$Y^2-2YZ +Z^2 = (Y-Z)^2$$ and $$2XZ-2XY = -2X(Y-Z)$$. Hence $$X^2+2XZ-2XY+Y^2-2YZ+Z^2 = (X-(Y-Z))^2\geq 0$$.

Hints: Show/use the following: the eigenvalues of $$A$$ are either $$0$$ or purely imaginary, nonreal roots of real polynomials come in conjugate pairs, and the determinant is the product of the eigenvalues.

• Why are the eigenvalues $0$ or purely imaginary though? Also, why is the matrix diagonalizable? Can you use the diagonalization theorem to show this? – user733113 Dec 31 '19 at 3:28
• The fact the eigenvalues are purely imaginary is an easy exercise I'd encourage you to do (or look it up). This argument doesn't need diagonalizability, but for what it's worth, it is because a skew-symmetric matrix is normal so the Spectral Theorem applies. – J.G. Dec 31 '19 at 4:19
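For a broader perspective (a standard fact, not from the original thread): the determinant of any even-dimensional skew-symmetric matrix is the square of a polynomial in its entries called the Pfaffian. For the $4\times 4$ matrix above, $$\operatorname{Pf}(A) = af - be + cd, \qquad \det(A) = \operatorname{Pf}(A)^2 = (af - be + cd)^2 \geq 0,$$ in agreement with the factorization found in the first answer.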
{}
# Examples: Algebraic Topology

## Standard Spaces and Modifications

\begin{align*} {\mathbb{D}}^n = \mathbb{B}^n &\coloneqq\left\{{ \mathbf{x} \in {\mathbf{R}}^{n} {~\mathrel{\Big\vert}~}{\left\lVert {\mathbf{x}} \right\rVert} \leq 1}\right\} \\ {\mathbb{S}}^n &\coloneqq\left\{{ \mathbf{x} \in {\mathbf{R}}^{n+1} {~\mathrel{\Big\vert}~}{\left\lVert {\mathbf{x}} \right\rVert} = 1}\right\} = {{\partial}}{\mathbb{D}}^{n+1} \\ .\end{align*}

Note: I'll immediately drop the blackboard notation, this is just to emphasize that they're "canonical" objects.

The sphere can be constructed in several equivalent ways:

• $$S^n \cong D^n / {{\partial}}D^n$$: collapsing the boundary of a disc is homeomorphic to a sphere.
• $$S^n \cong D^n \coprod_{{{\partial}}D^n} D^n$$: gluing two discs along their boundary.

Note the subtle differences in dimension: $$S^n$$ is a manifold of dimension $$n$$ embedded in a space of dimension $$n+1$$.

Low Dimensional Discs/Balls vs Spheres

Real projective space $${\mathbf{RP}}^n$$ is constructed in one of several equivalent ways:

• $$S^n/\sim$$ where $$\mathbf{x} \sim -\mathbf{x}$$, i.e. antipodal points are identified.
• The space of lines in $${\mathbf{R}}^{n+1}$$.

One can also define $${\mathbf{RP}}^ \infty \coloneqq\directlim_{n} {\mathbf{RP}}^n$$. Fits into a fiber bundle of the form $${\mathbf{Z}}/2{\mathbf{Z}} \to S^n \to {\mathbf{RP}}^n$$.

Complex projective space $${\mathbf{CP}}^n$$ is defined in similar ways:

• Taking the unit sphere in $${\mathbf{C}}^{n+1}$$ and identifying $$\mathbf{z} \sim \lambda\mathbf{z}$$ for every unit scalar $$\lambda \in {\mathbf{C}}$$.
• The space of lines in $${\mathbf{C}}^{n+1}$$.

Can similarly define $${\mathbf{CP}}^ \infty \coloneqq\directlim_n {\mathbf{CP}}^n$$. Fits into a fiber bundle of the form $$S^1 \to S^{2n+1} \to {\mathbf{CP}}^n$$.

The $$n{\hbox{-}}$$torus, defined as \begin{align*} T^n \coloneqq\prod_{j=1}^n S^1 = S^1 \times S^1 \times \cdots .\end{align*}

The real Grassmannian, $${\operatorname{Gr}}(n, k)_{/{\mathbf{R}}}$$, i.e. the set of $$k$$ dimensional subspaces of $${\mathbf{R}}^n$$. One can similarly define $${\operatorname{Gr}}(n, k)_{{\mathbf{C}}}$$ for complex subspaces. Note that $${\mathbf{RP}}^n = {\operatorname{Gr}}(n+1, 1)_{{\mathbf{R}}}$$ and $${\mathbf{CP}}^n = {\operatorname{Gr}}(n+1, 1)_{/{\mathbf{C}}}$$.

The Stiefel manifold $$V_{n}(k)_{{\mathbf{R}}}$$, the space of orthonormal $$k{\hbox{-}}$$frames in $${\mathbf{R}}^n$$.

Lie Groups:

• The general linear group, $$\operatorname{GL}_{n}({\mathbf{R}})$$
• The special linear group $$SL_{n}({\mathbf{R}})$$
• The orthogonal group, $$O_{n}({\mathbf{R}})$$
• The special orthogonal group, $$SO_{n}({\mathbf{R}})$$
• The unitary group, $$U_{n}({\mathbf{C}})$$
• The special unitary group, $$SU_{n}({\mathbf{C}})$$
• The symplectic group $$Sp(2n)$$

Some other spaces that show up, but don't usually have great algebraic topological properties:

• Affine $$n$$-space over a field, $${\mathbf{A}}^n(k)$$, with affine group $$k^n \rtimes \operatorname{GL}_{n}(k)$$
• The projective space $${\mathbf{P}}^n(k)$$
• The projective linear group over a ring $$R$$, $$PGL_{n}(R)$$
• The projective special linear group over a ring $$R$$, $$PSL_{n}(R)$$
• The modular groups $$PSL_{n}({\mathbf{Z}})$$
• Specifically $$PSL_{2}({\mathbf{Z}})$$

$$K(G, n)$$ is an Eilenberg-MacLane space, the homotopy-unique space satisfying \begin{align*} \pi_{k}(K(G, n)) = \begin{cases} G & k=n, \\ 0 & \text{else} \end{cases} \end{align*}

Some known examples:

• $$K({\mathbf{Z}}, 1) = S^1$$
• $$K({\mathbf{Z}}, 2) = {\mathbf{CP}}^\infty$$
• $$K({\mathbf{Z}}/2{\mathbf{Z}}, 1) = {\mathbf{RP}}^\infty$$

$$M(G, n)$$ is a Moore space, the homotopy-unique space satisfying \begin{align*} H_{k}(M(G, n); G) = \begin{cases} G & k=n, \\ 0 & k\neq n. \end{cases} \end{align*}
Some known examples:

• $$M({\mathbf{Z}}, n) = S^n$$
• $$M({\mathbf{Z}}/2{\mathbf{Z}}, 1) = {\mathbf{RP}}^2$$
• $$M({\mathbf{Z}}/p{\mathbf{Z}}, n)$$ is made by attaching $$e^{n+1}$$ to $$S^n$$ via a degree $$p$$ map.

Some other useful facts:

• $${\mathcal{M}}\simeq S^1$$ where $${\mathcal{M}}$$ is the Mobius band.
• $${\mathbf{CP}}^n = {\mathbf{C}}^n \coprod {\mathbf{CP}}^{n-1} = \coprod_{i=0}^n {\mathbf{C}}^i$$
• $${\mathbf{CP}}^n = S^{2n+1} / S^1$$
• $$S^n / S^k \simeq S^n \vee \Sigma S^k$$.

In low dimensions, there are some "accidental" homeomorphisms:

• $${\mathbf{RP}}^1 \cong S^1$$
• $${\mathbf{CP}}^1 \cong S^2$$
• $${\operatorname{SO}}(3) \cong {\mathbf{RP}}^3$$

## Modifying Known Spaces

Write $$D(k, X)$$ for the space $$X$$ with $$k\in {\mathbb{N}}$$ distinct points deleted, i.e. the punctured space $$X - \left\{{x_{1}, x_{2}, \ldots x_{k}}\right\}$$ where each $$x_{i} \in X$$.

The "generalized uniform bouquet"? $$\mathcal{B}^n(m) = \bigvee_{i=1}^n S^m$$. There's no standard name for this, but it's an interesting enough object to consider!

Possible modifications to a space $$X$$:

• Remove a line segment
• Remove an entire line/axis
• Remove a hole
• Quotient by a group action (e.g. antipodal map, or rotation)
• Remove a knot
• Take complement in ambient space

# Low Dimensional Homology Examples

\begin{align*} \begin{array}{cccccccccc} S^1 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}, &0, &0, &0, &0\rightarrow & ]\\ {\mathcal{M}}&= &[&{\mathbf{Z}}, &{\mathbf{Z}}, &0, &0, &0, &0\rightarrow & ]\\ {\mathbf{RP}}^1 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}, &0, &0, &0, &0\rightarrow & ]\\ {\mathbf{RP}}^2 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}_{2}, &0, &0, &0, &0\rightarrow & ]\\ {\mathbf{RP}}^3 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}_{2}, &0, &{\mathbf{Z}}, &0, &0\rightarrow & ]\\ {\mathbf{RP}}^4 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}_{2}, &0, &{\mathbf{Z}}_{2}, &0, &0\rightarrow & ]\\ S^2 &= &[&{\mathbf{Z}}, &0, &{\mathbf{Z}}, &0, &0, &0\rightarrow & ]\\ {\mathbb{T}}^2 &= &[&{\mathbf{Z}}, &{\mathbf{Z}}^2, &{\mathbf{Z}}, &0, &0, &0\rightarrow & ]\\ {\mathbb{K}}&= &[&{\mathbf{Z}}, &{\mathbf{Z}}\oplus {\mathbf{Z}}_{2}, &0, &0, &0, &0\rightarrow & ]\\ {\mathbf{CP}}^1 &= &[&{\mathbf{Z}}, &0, &{\mathbf{Z}}, &0, &0, &0\rightarrow & ]\\ {\mathbf{CP}}^2 &= &[&{\mathbf{Z}}, &0, &{\mathbf{Z}}, &0, &{\mathbf{Z}}, &0\rightarrow & ]\\ \end{array} .\end{align*}

# Table of Homotopy and Homology Structures

The following is a giant list of known homology/homotopy.

| $$X$$ | $$\pi_*(X)$$ | $$H_*(X)$$ | CW Structure | $$H^*(X)$$ |
| --- | --- | --- | --- | --- |
| $${\mathbf{R}}^1$$ | $$0$$ | $$0$$ | $${\mathbf{Z}}\cdot 1 + {\mathbf{Z}}\cdot x$$ | $$0$$ |
| $${\mathbf{R}}^n$$ | $$0$$ | $$0$$ | $$({\mathbf{Z}}\cdot 1 + {\mathbf{Z}}\cdot x)^n$$ | $$0$$ |
| $$D(k, {\mathbf{R}}^n)$$ | $$\pi_*\bigvee^k S^1$$ | $$\bigoplus_{k} H_* M({\mathbf{Z}}, 1)$$ | $$1 + kx$$ | ? |
| $$B^n$$ | $$\pi_*({\mathbf{R}}^n)$$ | $$H_*({\mathbf{R}}^n)$$ | $$1 + x^n + x^{n+1}$$ | $$0$$ |
| $$S^n$$ | $$[0 \ldots , {\mathbf{Z}}, ? \ldots]$$ | $$H_*M({\mathbf{Z}}, n)$$ | $$1 + x^n$$ or $$\sum_{i=0}^n 2x^i$$ | $${\mathbf{Z}}[{}_{n}x]/(x^2)$$ |
| $$D(k, S^n)$$ | $$\pi_*\bigvee^{k-1}S^1$$ | $$\bigoplus_{k-1}H_*M({\mathbf{Z}}, 1)$$ | $$1 + (k-1)x^1$$ | ? |
| $$T^2$$ | $$\pi_*S^1 \times \pi_* S^1$$ | $$(H_* M({\mathbf{Z}}, 1))^2 \times H_* M({\mathbf{Z}}, 2)$$ | $$1 + 2x + x^2$$ | $$\Lambda({}_{1}x_{1}, {}_{1}x_{2})$$ |
| $$T^n$$ | $$\prod^n \pi_* S^1$$ | $$\prod_{i=1}^n (H_* M({\mathbf{Z}}, i))^{n\choose i}$$ | $$(1 + x)^n$$ | $$\Lambda({}_{1}x_{1}, {}_{1}x_{2}, \ldots {}_{1}x_{n})$$ |
| $$D(k, T^n)$$ | $$[0, 0, 0, 0, \ldots]$$? | $$[0, 0, 0, 0, \ldots]$$? | $$1 + x$$ | ? |
| $$S^1 \vee S^1$$ | $$\pi_*S^1 \ast \pi_* S^1$$ | $$(H_*M({\mathbf{Z}}, 1))^2$$ | $$1 + 2x$$ | ? |
| $$\bigvee^n S^1$$ | $$\ast^n \pi_* S^1$$ | $$\prod H_* M({\mathbf{Z}}, 1)$$ | $$1 + x$$ | ? |
| $${\mathbf{RP}}^1$$ | $$\pi_* S^1$$ | $$H_* M({\mathbf{Z}}, 1)$$ | $$1 + x$$ | $${}_{0}{\mathbf{Z}}\times {}_{1}{\mathbf{Z}}$$ |
| $${\mathbf{RP}}^2$$ | $$\pi_*K({\mathbf{Z}}/2{\mathbf{Z}}, 1)+ \pi_* S^2$$ | $$H_*M({\mathbf{Z}}/2{\mathbf{Z}}, 1)$$ | $$1 + x + x^2$$ | $${}_{0}{\mathbf{Z}}\times {}_{2}{\mathbf{Z}}/2{\mathbf{Z}}$$ |
| $${\mathbf{RP}}^3$$ | $$\pi_*K({\mathbf{Z}}/2{\mathbf{Z}}, 1)+ \pi_* S^3$$ | $$H_*M({\mathbf{Z}}/2{\mathbf{Z}}, 1) + H_*M({\mathbf{Z}}, 3)$$ | $$1 + x + x^2 + x^3$$ | $${}_{0}{\mathbf{Z}}\times {}_{2}{\mathbf{Z}}/2{\mathbf{Z}}\times {}_{3}{\mathbf{Z}}$$ |
| $${\mathbf{RP}}^4$$ | $$\pi_*K({\mathbf{Z}}/2{\mathbf{Z}}, 1)+ \pi_* S^4$$ | $$H_*M({\mathbf{Z}}/2{\mathbf{Z}}, 1) + H_*M({\mathbf{Z}}/2{\mathbf{Z}}, 3)$$ | $$1 + x + x^2 + x^3 + x^4$$ | $${}_{0}{\mathbf{Z}}\times ({}_{2}{\mathbf{Z}}/2{\mathbf{Z}})^2$$ |
| $${\mathbf{RP}}^n, n \geq 4$$ even | $$\pi_*K({\mathbf{Z}}/2{\mathbf{Z}}, 1)+ \pi_*S^n$$ | $$\prod_{\text{odd}~i < n} H_*M({\mathbf{Z}}/2{\mathbf{Z}}, i)$$ | $$\sum_{i=1}^n x^i$$ | $${}_{0}{\mathbf{Z}}\times \prod_{i=1}^{n/2}{}_{2}{\mathbf{Z}}/2{\mathbf{Z}}$$ |
| $${\mathbf{RP}}^n, n \geq 4$$ odd | $$\pi_*K({\mathbf{Z}}/2{\mathbf{Z}}, 1)+ \pi_*S^n$$ | $$\prod_{\text{odd}~ i \leq n-2} H_*M({\mathbf{Z}}/2{\mathbf{Z}}, i) \times H_* S^n$$ | $$\sum_{i=1}^n x^i$$ | $$H^*({\mathbf{RP}}^{n-1}) \times {}_{n}{\mathbf{Z}}$$ |
| $${\mathbf{CP}}^1$$ | $$\pi_*K({\mathbf{Z}}, 2) + \pi_* S^3$$ | $$H_* S^2$$ | $$x^0 + x^2$$ | $${\mathbf{Z}}[{}_{2}x]/({}_2x^{2})$$ |
| $${\mathbf{CP}}^2$$ | $$\pi_*K({\mathbf{Z}}, 2) + \pi_* S^5$$ | $$H_*S^2 \times H_* S^4$$ | $$x^0 + x^2 + x^4$$ | $${\mathbf{Z}}[{}_{2}x]/({}_2x^{3})$$ |
| $${\mathbf{CP}}^n, n \geq 2$$ | $$\pi_*K({\mathbf{Z}}, 2) + \pi_*S^{2n+1}$$ | $$\prod_{i=1}^n H_* S^{2i}$$ | $$\sum_{i=1}^n x^{2i}$$ | $${\mathbf{Z}}[{}_{2}x]/({}_2x^{n+1})$$ |
| Mobius Band | $$\pi_* S^1$$ | $$H_* S^1$$ | $$1 + x$$ | ? |
| Klein Bottle | $$K({\mathbf{Z}}\rtimes_{-1} {\mathbf{Z}}, 1)$$ | $$H_*S^1 \times H_* {\mathbf{RP}}^\infty$$ | $$1 + 2x + x^2$$ | ? |

• $${\mathbf{R}}^n$$ is a contractible space, and so $$[S^m, {\mathbf{R}}^n] = 0$$ for all $$n, m$$, which makes its homotopy groups all zero.
• $$D(k, {\mathbf{R}}^n) = {\mathbf{R}}^n - \left\{{x_{1} \ldots x_{k}}\right\} \simeq\bigvee_{i=1}^k S^1$$ by a deformation retract.
• $$S^n \cong B^n / {\partial}B^n$$ and employs an attaching map \begin{align*} \phi: (D^n, {\partial}D^n) &\to S^n \\ (D^n, {\partial}D^n) &\mapsto (e^n, e^0) .\end{align*}
• $$B^n \simeq{\mathbf{R}}^n$$ by normalizing vectors.
• Use the inclusion $$S^n \hookrightarrow B^{n+1}$$ as the attaching map.
• $${\mathbf{CP}}^1 \cong S^2$$.
• $${\mathbf{RP}}^1 \cong S^1$$.
• Use the fact that $$\pi_{k}$$ commutes with products, $$\pi_{k}\left( \prod_i X_i \right) = \prod_i \pi_{k}(X_i)$$, together with the universal cover $${\mathbf{R}}^1 \twoheadrightarrow S^1$$, to yield the cover $${\mathbf{R}}^n \twoheadrightarrow T^n$$.
• Take the universal double cover $$S^n \twoheadrightarrow^{\times 2} {\mathbf{RP}}^n$$ to get equality in $$\pi_{i\geq 2}$$.
• Use $${\mathbf{CP}}^n = S^{2n+1} / S^1$$.
• Alternatively, the fundamental group is $${\mathbf{Z}}\ast{\mathbf{Z}}/ bab^{-1}a$$. Use the fact that $$\tilde K = {\mathbf{R}}^2$$.
• $$M \simeq S^1$$ by deformation-retracting onto the center circle.
• $$D(1, S^n) \cong {\mathbf{R}}^n$$ and thus $$D(k, S^n) \cong D(k-1, {\mathbf{R}}^n) \cong \bigvee^{k-1} S^1$$
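As a quick sanity check of the table entry for $${\mathbf{RP}}^2$$ (a standard computation, added here for reference): its CW structure has one cell in each dimension $$0, 1, 2$$, so the cellular chain complex is \begin{align*} 0 \to {\mathbf{Z}} \xrightarrow{\times 2} {\mathbf{Z}} \xrightarrow{0} {\mathbf{Z}} \to 0 ,\end{align*} giving $$H_0({\mathbf{RP}}^2) = {\mathbf{Z}}$$, $$H_1({\mathbf{RP}}^2) = {\mathbf{Z}}_{2}$$, and $$H_2({\mathbf{RP}}^2) = 0$$, exactly as listed above.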
{}
# Why is g[#1, #2] & different from g[##] & for a function with two variables in this example? [duplicate] If we only consider functions with two variables, I expect g[#1, #2] & is the same as g[##] &. But in the following example, they are not the same. Why? expr = f[x, y] + D[f[x, y], x] (* -> f[x, y] + Derivative[1, 0][f][x, y] *) expr /. f -> (g[#1, #2] &) (* -> g[x, y] + Derivative[1, 0][g][x, y] *) expr /. f -> (g[##] &) (* -> g[x, y] *)
{}
# PROBLEM LINK:

Author: Dmytro Berezin
Tester: Shang Jingbo and Gerald Agapov
Editorialist: Devendra Agarwal

# DIFFICULTY:

SIMPLE

# PREREQUISITES:

Basic Dynamic Programming

# PROBLEM:

Frogs are located on the X axis, and a frog can pass its message to another frog only if the distance between them is less than or equal to K. You need to answer whether two given frogs can communicate (possibly relaying the message through other frogs).

Assumption: frogs are cooperative in nature and will transfer the message without editing it :D

# Quick Explanation

Find, for each frog, the maximum distance its message can reach. Two frogs can communicate only if their maximum distances of communication are the same.

Reason: you can easily prove this by contradiction. Let us assume that there are two frogs 1 and 2 which can communicate but whose maximum distances are different. You can easily derive a contradiction; the proof is left as an exercise for the reader.

# Explanation

The only challenge left is to calculate the maximum distance each frog's message can reach. The first point to note is that an optimal strategy for each frog (say f) is to send the message to its nearest frog (say f1), after which it is the responsibility of the nearest frog to carry it further. One-line proof: any frog reachable from f in the direction of f1 is also reachable from f1. Another point to note is that the frog on the extreme positive side of the X axis (i.e. maximum A[i], say A[j]) can communicate up to A[j] + K. Using these observations, one can use a simple DP to calculate the maximum distance. But how? Sort the A[i]'s, but do not lose the index of each frog while sorting. Let the sorted array be Frog[]. Now if Frog[i] can communicate with Frog[i+1], then Frog[i] can reach as much distance as Frog[i+1] can.

Pseudo Code

Pre-Compute ( A , K ):
    sort(A, A+N); // sorted in decreasing order of x
    Max_Distance[A[0].ind] = A[0].x + K;
    for(int i=1; i<N; i++)
        if((A[i-1].x - A[i].x) <= K)
            Max_Distance[A[i].ind] = Max_Distance[A[i-1].ind];
        else
            Max_Distance[A[i].ind] = A[i].x + K;

Answer ( x , y ):
    if ( Max_Distance[x] == Max_Distance[y] ): return "Yes"
    else: return "No"

Complexity: O(N*log(N)) - N*log(N) for sorting N integers.

# AUTHOR'S and TESTER'S SOLUTIONS:

Author's solution to be uploaded soon.
Tester's solution

This question is marked "community wiki". asked 14 Jul '14, 15:01

• I know I solved it with more complexity added, but I have used a segment tree. http://www.codechef.com/viewsolution/4217571 (14 Jul '14, 15:16)
• I did the problem using the union find algorithm (14 Jul '14, 15:16)
• In the editorial it is stated that any two frogs which have the same maximum distance can communicate with each other. I am not able to clearly understand this statement. For example, let us take this sample test case: 5 1 1 0 1 5 6 8 2 4. The second frog has a max distance of 1, and the fourth frog also has a maximum distance of 1, but they can't communicate with each other... can someone clarify?? (14 Jul '14, 18:17)
• @prem_93 : The maximum distance of communication of frog 2 is 1+1(=k) = 2, whereas frog 4 has a maximum distance of 6+1 = 7, so they cannot communicate. (14 Jul '14, 18:44)
• @devuy11 ...can you please take the pain of explaining how the max distance is calculated??
(14 Jul '14, 19:30)
• @prem_93 : There is no pain in explaining :D. Let's first sort the frogs by their X-coordinate. Look closely at the last frog: he can communicate till Last_Frog_Position + K. Now look at the second-last frog. If the message sent by the second-last frog can reach the last frog, then we can assume that the last frog will communicate the message of the second-last frog without editing it :D; otherwise the second-last frog can send it up to second_last_frog position + K. The point to observe is that in the calculation of the maximum distance of the second-last frog, only the last frog is needed. I hope you can extend it from here. (14 Jul '14, 19:42)
• thanks, that helped... I actually misunderstood the maximum distance of a specific frog as the maximum distance its message can reach... for example, in the test case which I have given, the fourth frog can transmit its message over a maximum distance of 1 unit, but the max distance here in the editorial specifies the maximum coordinate... right... also my solution with complexity O(nlogn) is giving TLE... how should I correct it?? prob link: http://www.codechef.com/viewsolution/4319695 (14 Jul '14, 19:57)
• @prem_93 : right :). Your code seems to be a tough one for me to understand :(. (14 Jul '14, 20:13)
• My algorithm is as follows: 1) used merge sort to sort the x co-ordinates. 2) after sorting, I found the points at which the communication path is disconnected, i.e. the x coordinates for which the distance between them and the next coordinate (in the sorted order) is greater than k. I stored these values in a new array called disconnect. 3) now, given x and y (numbers of the frogs), I calculate their x coordinates and check if there is any value "v" in the disconnect array satisfying the condition pos(x)<=v<pos(y)... if yes... then I output "No", and "Yes" otherwise. (14 Jul '14, 20:36)
• Please explain in a more readable manner. The English is sort of broken and I cannot understand the solution properly :( (14 Jul '14, 22:16)
• 1) I used merge sort to sort the x coordinates. 2) then I took the new sorted array and traversed through each element, finding the elements for which this condition is satisfied: arr[i+1]-arr[i]>k. I stored these in a new array (let it be the p array). 3) now in each query I take in the values of x and y (frog nos), and find their respective positions (x coordinates)... let them be pos(x) and pos(y)... now I go through the "p" array and check if there is any value in it that satisfies this condition: pos(x)<=value<pos(y). If there is such a value the answer is "NO", and "YES" otherwise. (15 Jul '14, 11:19)
• For example: INPUT: 5 1 1 0 1 5 6 8 2 4. 1) here after sorting the array is: 0 1 5 6 8. 2) in this case my "p" array would contain the elements 1 and 6, because (5-1>k) and (8-6>k). 3) now in the query I have x=2, y=4... pos(2)=1 and pos(4)=6. Now there is an element (1) in my p array which satisfies the condition 1<=1<6 and hence my answer would be "NO". (15 Jul '14, 11:24)
• Hi, how do you prove by contradiction that both frogs can only communicate if they can reach the same maximal distance? (18 Jul '14, 02:55)
• @kuruma - suppose the frog at x coordinate = 1 can send a message to maximum x=10; this means all frogs between x=1 and x=10 can send a message to x=10. And this means the frog at x=1 cannot send a message to a frog at x>10 (because the max distance he can send a message is 10). And suppose the frog at x=15 can send a message to maximum x=20 and the frog at x=10 is able to send a message to x=15,
so the maximum distance the frog at x=10 can send a message is x=20, and the maximum distance the frog at x=15 can send a message is x=20. That means the frogs at x=10 and x=15 can communicate (because all frogs between x=10 and x=20 can communicate). (18 Jul '14, 03:56)
• PROOF BY CONTRADICTION. STATEMENT: Two frogs which have the same maximal distance cannot communicate with each other. Let the common maximal distance be Y. Let position(frog '1') be a and pos(frog '2') be b, with Y >= b >= a. Since the 1st frog can communicate till "Y", it should also mean that it can communicate with all the frogs whose positions are in the interval [a, Y]. Since "b" is a position which satisfies this condition, it can communicate with the 2nd frog - but this contradicts our statement, and hence we have proved it false. Thus this proves that 2 frogs with the same max distance can communicate. (18 Jul '14, 10:08)
• @kuruma you can also prove the statement "Two frogs which have different maximal distances can communicate with each other" to be false in a similar way. And from these 2 proofs you can conclude that "Two frogs can communicate with each other only if their maximal distances are the same." (18 Jul '14, 10:14)
• @ShangJingbo and @GeraldAgapov can you please provide the test case for which my code gave TLE?? http://www.codechef.com/viewsolution/4327814 (30 Jul '14, 15:31)
• max distance = position of frog + k (10 Jan '15, 16:05)

38 Answers:

Alternatively, make another array containing indices, and sort them using the first array. Then assign some common number to those frogs who can communicate. (In the sample test case, the indices become 0 1 3 2 4 and we have to check if v[1]-v[0]<=k; if yes then assign some common value to them.) Link to code: http://ideone.com/nGyLo9 answered 14 Jul '14, 16:56

I solved the problem in two ways: using Segment Trees, and using the concept of connected components as pointed out by @brobear1995. But I haven't thought of a DP solution... I always miss out on DP.. :( answered 14 Jul '14, 17:10

Methodology used:
1. Keep the input array.
2. Create a duplicate array of it.
3. Sort the duplicate array.
4. Use the input index to get the element from the original array, then use binary search to find the index of that element in the duplicate sorted array.
5. Iterate from the starting index to the last index and check for <=K while iterating (a[i+1]-a[i]<=K).
6. If it reaches the end, output Yes, else No.
7. Giving WA.

#include <iostream>
#include <algorithm>
#define ll long long
#define fore(i,x) for(int i=0;i<x;i++)
using namespace std;
ll inp[100005], dupinp[100005];
int N, K, P, A, B;
int bs(int low, int high, ll e) {
    if (low > high) return -1;
    int mid = (low + high) / 2;
    if (e == dupinp[mid]) return mid;
    if (e < dupinp[mid]) return bs(low, mid - 1, e);
    return bs(mid + 1, high, e);
}
int main() {
    cin >> N >> K >> P;
    fore(i, N) { cin >> inp[i]; dupinp[i] = inp[i]; }
    sort(dupinp, dupinp + N);
    while (P--) {
        cin >> A >> B;
        if (A == B) { cout << "Yes" << endl; continue; }
        int x = bs(0, N - 1, inp[A - 1]);
        int y = bs(0, N - 1, inp[B - 1]);
        bool flag = true;
        for (int i = x; i < y; i++)
            if (dupinp[i + 1] - dupinp[i] > K) { flag = false; break; }
        if (!flag) cout << "No" << endl;
        else cout << "Yes" << endl;
    }
    return 0;
}

I copied the original array A into another array B and sorted it in increasing order, while maintaining the index. Then I copied the elements which fail the condition mentioned:

for(i=0;i<n-1;i++) if(B[i+1]-B[i]>k) vec.push_back(B[i]);

Then I used lower_bound to check the position of both the elements (A[x-1], A[y-1]) in the vector vec. If abs(posx-posy)>=1, the answer will be no, else it will be yes. It worked. http://www.codechef.com/viewsolution/4191094 answered 15 Jul '14, 11:03

Hi, can you please tell me for which test case I am getting WA.
http://www.codechef.com/viewsolution/4277673 Thanks, rajesh answered 14 Jul '14, 16:44

I did the same thing but am still getting TLE. I sorted them using quick sort and then went through from the min coordinate to the max coordinate, noting down every starting point after a breaking point. Say the coordinates are 0 8 5 3 12 and the distance is 3: then I stored 0 and 12 in a new array. Then I take input from the user, get the coordinates of their positions, and check whether they belong to the same interval or not; if yes then the answer is yes, else no. Link to solution Please tell me what is wrong in my code. answered 14 Jul '14, 17:31

Sir, I have used this code and am getting the correct answer for all the test cases which I have tried.... I am still getting WA... please help, where is the error in the code?? http://www.codechef.com/viewsolution/4254933 answered 14 Jul '14, 17:33

I solved it using a segment tree. answered 14 Jul '14, 17:55

Sir, I have used my code for most of the test cases and am getting the desired output..... still I am getting WA.... Kindly tell the respective cases for which my code was wrong.... Thank you http://www.codechef.com/viewsolution/4297541 answered 14 Jul '14, 21:21

• You are assuming that a[q-1] < a[m-1], which may not be true. Even if you are able to debug your code, you will get a TLE, as the worst case complexity of your algorithm is O(N*P) where N <= 10^5 and P <= 10^5. (15 Jul '14, 16:30) anuj95

Hello All, I used the following method:
1. Treat all positions of frogs as nodes in a graph.
2. Create an undirected edge between nodes whose distance is less than K.
3. Use breadth-first search to find whether the target node is reachable or not.
Getting WA. Please, can anyone tell me what's wrong with the logic? I'd be really grateful. Thanks!!! answered 14 Jul '14, 21:47

• How did you store the graph?? Using an adjacency matrix or a list? (17 Jul '14, 00:33) saroj
• I used an adjacency list to store the graph (10 Aug '14, 14:54)
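For readers who prefer working code over pseudocode, here is a compact sketch of the editorial's precompute-and-label idea in C♯. The positions, K, and the query are made up for illustration:

using System;
using System.Linq;

class FrogCommunication
{
    static void Main()
    {
        long[] a = { 0, 3, 8, 5, 12 };  // made-up frog positions
        long k = 3;
        int n = a.Length;

        // Editorial idea: process frogs in decreasing order of position;
        // frogs in the same chain (consecutive gaps <= K) share the same
        // maximum reachable distance.
        int[] order = Enumerable.Range(0, n).OrderByDescending(i => a[i]).ToArray();
        long[] maxDist = new long[n];
        maxDist[order[0]] = a[order[0]] + k;
        for (int j = 1; j < n; j++)
        {
            int prev = order[j - 1], cur = order[j];
            maxDist[cur] = (a[prev] - a[cur] <= k) ? maxDist[prev] : a[cur] + k;
        }

        // Query: frogs x and y (0-based) can communicate iff their labels match.
        int x = 0, y = 2;
        Console.WriteLine(maxDist[x] == maxDist[y] ? "Yes" : "No");
    }
}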
{}
Hardcover | $62.00 Short | £42.95 | ISBN: 9780262083591 | 344 pp. | 6 x 9 in | 1 illus. | March 2007
Paperback | $25.00 Short | £17.95 | ISBN: 9780262582728 | 344 pp. | 6 x 9 in | 1 illus. | March 2007

# Alternative Pathways in Science and Industry

Activism, Innovation, and the Environment in an Era of Globalization

## Overview

In Alternative Pathways in Science and Industry, David Hess examines how social movements and other forms of activism affect innovation in science, technology, and industry. Synthesizing and extending work in social studies of science and technology, social movements, and globalization, Hess explores the interaction of grassroots environmental action and mainstream industry and offers a conceptual framework for understanding it.

Hess proposes a theory of scientific and technological change that considers the roles of both industry and grassroots consumers in setting the research agenda in science and technology and he identifies alternative pathways by which social movements can influence scientific and technological innovation. He analyzes four of these pathways: industrial opposition movements organized against targeted technologies (as in the campaign against nuclear energy); technology- and product-oriented movements, which press for alternatives (as does the organic food movement); localism, which promotes local ownership (as in "buy local" campaigns); and access pathways, which support a more equitable distribution of resources. Within each pathway, Hess examines reforms in five areas: agriculture, energy, waste and manufacturing, infrastructure, and finance. Hess's theoretical argument and the empirical evidence he presents demonstrate the complex pattern of incorporation (of grassroots innovations) and transformation (of alternative ownership structures and alternative products) that has characterized the relationship of industry and activism. Hess's analysis of alternative pathways to change suggests how economic organizations could shift to a more just and sustainable course.

## About the Author

David J. Hess is Professor of Sociology at Vanderbilt University. He is the author of Alternative Pathways in Science and Industry: Activism, Innovation, and the Environment in an Era of Globalization (MIT Press, 2007) and Localist Movements in a Global Economy: Sustainability, Justice, and Urban Development in the United States (MIT Press, 2009), and many other books.

## Endorsements

"This book couldn't be timelier. With the steamroller of globalization proceeding apace, this deeply researched and wide-ranging work provides thoughtful analyses of a diverse set of attempts to develop sustainable and socially just alternatives to our corporate-dominated, environmentally destructive, and inhumane global economy. Conceptually nuanced and politically vital, Alternative Pathways in Science and Industry is a book that scholars and activists alike will want to read."
Daniel Lee Kleinman, Professor of Rural Sociology and Science and Technology Studies, University of Wisconsin - Madison, author of Impure Cultures: University Biology and the World of Commerce

"This book is valuable in contributing to several literatures: social movements, globalization, science and technology studies, and civil society. It is unique as an STS treatment of sustainability.
The book is theoretically rich and nuanced, written in very accessible language." Phil Brown, Department of Sociology and Center for Environmental Studies, Brown University "This book provides a hopeful and much-needed conceptual framework for understanding how civil society can positively influence science and technology in our era of globalization. Through the interpretive lenses of sociology and history, Hess has studied food cooperatives, alternative-energy producers, community-based recyclers, and other progressive organizations in an effort to identify more just and sustainable pathways of 'opposition and compromise.' His achievement is significant and will certainly influence not only scholars, but activists and professionals in any field who are concerned with the coevolution of society and the environment." Steven A. Moore, Bartlett Cocke Professor of Architecture and Planning, University of Texas ## Awards Winner, 2009 Robert K. Merton Book Award given by the American Sociological Association Section on Science, Knowledge, and Technology.
{}
# Advent of Code: Day 2

Source

Part 1: A gift requires enough wrapping paper to cover the surface plus an additional amount equal to the area of the smallest side. Calculate the total wrapping paper needed for a list of dimensions of the form 2x3x4.

import sys

total_area = 0
for line in sys.stdin:
    l, w, h = list(sorted(map(int, line.strip().split('x'))))
    area = 3 * l * w + 2 * w * h + 2 * h * l
    total_area += area
print(total_area)

The only real trick here is the use of list(sorted(...)). This will guarantee that l and w are the smallest dimensions and thus represent the extra area to add.

Part 2: Given the same input, calculate the amount of ribbon needed. You need the larger of either the shortest distance around the outside or the smallest perimeter of any one face. In addition, you need an additional amount equal to the volume in cubic feet.

total_ribbon = 0
for line in sys.stdin:
    l, w, h = list(sorted(map(int, line.strip().split('x'))))
    total_ribbon += max(
        2 * (l + w),  # smallest distance around sides
        4 * l,        # smallest perimeter
    )
    total_ribbon += l * w * h
print(total_ribbon)

This one was a little stranger since the original description was unclear whether you needed the larger or smaller of the first two measurements, but it was easy enough to calculate both. Turns out, they meant the larger of those two.
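For a concrete check of both snippets, take the 2x3x4 box from the problem statement: after sorting, l=2, w=3, h=4, so the paper comes to 3·2·3 + 2·3·4 + 2·4·2 = 18 + 24 + 16 = 58 square feet (52 for the surface plus 6 of slack), and the ribbon comes to max(2·(2+3), 4·2) + 2·3·4 = 10 + 24 = 34 feet.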
{}
# How hard is a known prefix hash preimage attack on SHA-2?

Suppose the attacker knows $X, Z$ (but not $Y$) such that $H(X || Y) = Z$.

If bit-length(Y) < 60 then a brute force attack is possible. What if bit-length(Z) = 256 (such as in SHA-256), bit-length(X) = 128, or bit-length(Y) = 256?

Are there any published papers with results/experiments on SHA-256/512 in full or reduced rounds? Are there specific known attack techniques?

• I'm pretty confident there is nothing out there allowing a preimage attack of this kind on SHA-256 with full rounds. – fgrieu Feb 23 '13 at 16:46
• Regarding the brute force attack, the bitcoin network could brute-force more than 60 bits quite fast (today - the question is 4 years old): Assuming $5 \cdot 10^{18}$ hashes per second, it would take the bitcoin network $\approx 0.23$ sec for a full search over $60$ bits. – tylo Jun 19 '17 at 9:01
{}
Muffins, math, and the lies we tell about both

I made my favorite pumpkin muffins this morning, for the first time in quite a while, but I made them differently today. The recipe calls for one cup of pumpkin from a can. A can of pumpkin contains about a cup and a half, and what are you gonna do with an extra half cup of pumpkin? So I always put in the whole can. Today, however, I was using pumpkin from the freezer. Last fall, I turned a couple of pie pumpkins into pumpkin puree and froze it for exactly this purpose. The bag of pumpkin puree I was working with contained two cups. So I made two batches, using the prescribed one cup per batch. They are good, but they are not quite as good as the ones that have a cup and a half of pumpkin.

If you spend any time baking, you will surely run across claims that baking is different from cooking. Baking requires more precision and following of directions than other types of cooking, you'll be told. Similarly, if you spend any time learning math, you will surely run across claims that learning math is different from other intellectual activities. Learning math requires precision and following of steps, you'll be told. These are lies. My deliciously moist pumpkin muffins prove that this is so.

This is what I love so dearly about Eugenia Cheng's book How to Bake Pi. She writes that in both math and baking, decisions have consequences. Sometimes these consequences are undesirable, such as bread that doesn't rise or arithmetic that is inconsistent. In math as in baking, you need to follow instructions carefully in order to achieve the known result. If I want muffins that are exactly like the ones in the original recipe, I need to use one cup of pumpkin. But here's the secret we don't let you in on: If I use the whole can of pumpkin, I will still get pumpkin muffins. They will be different ones, but they will still be pumpkin muffins.

The difference between baking and math is that you nearly always see the natural consequences of the decisions you make, while we structure most people's experiences with math in ways that hide those consequences. If you leave out the sugar, your muffins will not be delicious. They may have structural problems as well. You notice these consequences and you tend to try to figure out what went wrong. If you claim that $(x+1)^2=x^2+1$, the only consequence is that someone else tells you that you are wrong—whether teacher, tutor, or back of the book.

In Cheng's book, she describes treating math exactly like baking. She pushes her students to consider the natural consequences of their claims. If $(x+1)^2=x^2+1$, then when $x=-1$, $1=2$. If $1=2$, then there are going to be lots of troubles later on. Go read her book. Then make some pumpkin muffins. And please don't listen to Chris Kimball; the man is a total killjoy.

7 responses to "Muffins, math, and the lies we tell about both"

1. Aaron Bieniek If x = –1, then 0 = 2 is the problem – yeah?
• Christopher Heh. Good catch Aaron. In a world where 0=2, pretty much anything's possible though, right?
• Actually, the world where 0 = 2 (modulo 2, characteristic 2) is pretty cool, but it is a decidedly different "flavor."
2. My personal favorite for natural consequences for (x + a)^2 = x^2 + a^2 is to give students a right triangle with (for example) side lengths 6 and x and hypotenuse length x + 2 and ask them to solve for x. Then I sit back and enjoy.
This is especially fun because I’ve already conditioned them to question whether I’m giving them an impossible problem, so they’re really happy to catch me out. Ho ho ho. • I don’t understand your comment, Julierwright. 6-8-10 seems like a standard right-triangle solution answer to me. How are the students “catching you out”? • Guess I should have spelled it out a little more. If they make the standard mistake, they set up the equation 6^2 + x^2 = x^2 + 2^2. Then when they try to solve it and get 36=4, they think it’s an impossible problem. So I either share somebody else’s 6/8/10 answer and have them see it works and ask them to investigate what went wrong, or ask them to describe what they did and break it down in detail. 3. Tiffany obrien I am so happy you wrote about this book. I have been in love with “How to bake pi ” for some time and gave it to all my fellow math teachers for Christmas. I love the way she connects math and baking and love the way she puts math into ways that others can access it. I also feel like she and I would be best friends if I knew her. :-). Thanks for your great blog and noticing always!
{}
# What is the answer to this infamous “Common Core” question? The following question (number 15 of this test) has become infamous as a poor "Common Core" question. What is the correct answer? Juanita wants to give bags of stickers to her friends. She wants to give the same number of stickers to each friend. She's not sure if she needs 4 bags or 6 bags of stickers. How many stickers could she buy so there are no stickers left over? • This is perhaps a bit old by now, but reading online discussions reveals a fair bit of confusion, so I thought it was worth it to make this question here. – Eric M. Schmidt Dec 19 '17 at 5:52 • It's a terribly written question, but that's hardly exclusive to common core. It is a risk in writing any word problem. – Thomas Andrews Dec 19 '17 at 6:04 • Comments are not for extended discussion; this conversation has been moved to chat. – Daniel Fischer Dec 19 '17 at 20:02 • Better phrasing (assuming multiple choice): Juanita doesn't know whether she has 4 friends or 6 friends. She wants to buy them some stickers and give the same number of stickers to each friend. How many stickers could she buy? – Reinstate Monica Dec 20 '17 at 21:55 • @Solomonoff'sSecret Or for a unique answer without multiple choice: Juanita doesn't know whether she has 4 friends or 6 friends. She wants to buy them some stickers and give the same positive number of stickers to each friend. What is the smallest number of stickers she could buy? – Solomon Ucko Apr 3 '19 at 11:37 This "infamous" question is so poorly worded, but any mathematical answer that can be given would reinforce the perception that "the question is OK, see, it has a valid answer!" The question is not OK. Word problems should not be about mulling over "what did the author want to say?". Even the "trick" questions are normally about spotting some well-defined linguistic (or mathematical) trick, rather than based on complete lack of clarity. Plus, note it was not meant to be a trick question, but some standard question meant to assess how well students of certain age comprehend divisibility and common multipliers. I can personally think about a dozen different ways to write a word problem which would boil down to the same mathematical problem and will be better worded. As for this question, I hope if it was on some exam, that it did not affect anyone's passing or failure. I can relate to a poor methodical soul who got stuck for two hours on that question, trying to get their head around it and failing to comprehend it (and also wasting precious time for the following questions) not realising that it is not their fault. • @Dunk As mentioned in the comments on the question, it reads like there's information missing in the question. I've actually taken tests in highschool where the teacher forgot something and had to announce it at the beginning of the test, so it really isn't that far-fetched. – Izkata Dec 19 '17 at 19:32 • What is missing is: we don't know anything about the "bags of stickers" and how they are related to either the stickers or to the friends. Exercise: try rewriting the question without mentioning bags, and not only you will succeed easily, but the resulting question will be completely clear. It's anyone's guess why the author didn't do that. – user491874 Dec 19 '17 at 19:45 • @user8734617 The friends are irrelevant, because Juanita has already figured out that she needs either 4 or 6 bags in order to distribute the stickers. 
For all we know, she might only have two or three friends who will each get two bags - it doesn't matter, because we're only concerned with filling four or six bags. – Iszi Dec 19 '17 at 22:42 • @Joshua 0 is also multiple of 12, and it's clear that no matter how she distributes the stickers in the bags, if she buys 0 she'll have 0 left over. Hence: 0 > 120 (wrt to the partial ordering by divisibility). – Kimball Dec 20 '17 at 3:35 • @Iszi, that only works if you've already realized that Juanita is buying individual stickers and then putting them in bags and giving one bag to each friend. The way the question is written implies that she is buying bags of stickers and that the correct answer is either "4 bags" or "6 bags" and you're supposed to somehow figure out which. – Harry Johnston Dec 20 '17 at 4:06 The difficulty arises from the confusing wording and strange premise. Juanita is buying "stickers", not "bags of stickers". Each friend is to receive a single bag of stickers, with the number of stickers in each bag the same. For reasons unexplained, Juanita does not know exactly how many friends she is giving stickers to, but she does know that it is either $4$ or $6$. Thus, the question is how many stickers to buy so that she will be able to divide them evenly among the friends, whether there turn out to be $4$ or $6$ of them. The answer, then, is any multiple of both $4$ and $6$. Equivalently, the number of stickers can be any multiple of $12$, which is the least common multiple of $4$ and $6$. • I would not have been able to answer that question. It seemed like there was missing info, and I did not know how to interpret the "does not know if she needs 4 or 6 bags" info. – Michael Dec 19 '17 at 8:40 • How can we assume that she either has 4 or 6 friends? – stanri Dec 19 '17 at 9:59 • "She is not sure if she needs 4 or 6 bags of stickers" to me means that either 4 or 6 are the acceptable answers, and nothing else. Who ever wrote that question should be tarred and feathered. Its furthermore absolutely unimaginable that she would not know how many friends she has. if its either 4 or 6, what if she has 6, but one doesn't show? Then she has stickers left over. But if left overs are allowed, the question is nonsensical, because 6 bags would also work for 4 friends if leftovers are acceptable. This does not test maths at all, it tests wether you can read minds... – Polygnome Dec 19 '17 at 11:27 • Maybe this is better? Juanita invited her friends to a party. Three children said they were definitely coming, but Bob and Sue said they might not be able to come. Since Bob and Sue are siblings, either they both come or they both do not. So (counting Juanita herself) there will either be 4 attendees or 6, but Juanita is not sure which. She wants to cut the cake into pieces so that, independent of whether 4 or 6 people are present, each person could get an equal number of pieces. How many pieces could she cut the cake into? – Steven Gubkin Dec 19 '17 at 16:34 • @StevenGubkin Much better but the English might be too hard for some age appropriate children. I am not referring to ESL but the subtleties of the word independent in this context or the word attendees. That stipulation does make writing these much harder. – kaine Dec 19 '17 at 18:31 Juanita buys zero bags. Then she gives zero to each friend. After that, there are zero stickers left over. (Note that, consistent with other answers, zero is a multiple of twelve.) 
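For reference, the arithmetic behind that reading is a one-line least-common-multiple computation: $$\operatorname{lcm}(4,6)=\frac{4\cdot 6}{\gcd(4,6)}=\frac{24}{2}=12,$$ so the valid purchases are exactly the multiples of $12$.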
Beyond that, I keep wondering how many bags of stickers Juanita intends to keep for herself after supplying her friends with some. It may well be that there are no circumstances under which there would be any "left over". Short Version The correct answer is "any multiple of 12", but they really probably just want "12". Long Version I agree that the question is poorly-worded, but after some reflection upon existing commentary and answers, I also agree it is possible to work out the proper mathematical problem and its possible solutions without requiring many leaps of logic or unfounded assumptions. However, I also believe that the actual answer to the question is not the answer that test reviewers would consider correct. First, about how to solve the problem: The first thing a student needs to do is eliminate what they don't need from the problem. This is a common assessment objective of word problems - finding whether the student can determine what is or is not relevant to the question. The scenario talks a lot about bags, stickers, and friends. We know that we need to fill bags with stickers, and the bags are being given to friends. We don't know how many friends Juanita has and, at first glance, this would seem to be part of the problem. However, the number of friends is irrelevant because Juanita has already narrowed the quantity of bags to two possibilities - 4 or 6. Now the student can determine the mathematical problem, and its possible solutions. We have either 4 or 6 bags to fill, and we want to buy a number of stickers that divides evenly among the bags no matter which count turns out to be true. The easiest way to do this is to multiply the two numbers, which gives 24. It is important to note here that 24 is actually a valid answer to the question that has been presented to the student. Now the student could conceivably extrapolate the ideal result, and determine that solution. Considering that Juanita is buying these stickers, she probably doesn't want to pay more than necessary. And, if they understand the concept of the Least Common Multiple, the student could realize that there might be lower quantities of stickers that could satisfy Juanita's needs. At this point, they would then do the appropriate work and arrive at the answer of 12 - which is probably the answer the instructors want to see on the standardized test. The question as written leaves open the possibility of an infinite number of correct answers being considered "wrong". That last part is where this question, in my opinion, horribly fails to serve its purpose. There's an important difference between this (from the original question): How many stickers could she buy so there are no stickers left over? And this (change in italics): What's the fewest number of stickers she could buy so there are no stickers left over? The former leaves all multiples of 12 as possible valid answers. In fact, the most correct answer would be for the student to write (as it's a free-form answer field anyway) "Any multiple of 12". However, the latter form narrows the field of possibilities down to only the answer that costs the least amount of money. For that question, the only correct answer (assuming the stickers are individually priced, and there are no buy-one-get-one-free sales on) is 12. A Better Question Personally, I'd probably suggest a rewrite similar to this: Juanita is going to the store to buy some stickers for her friends. To distribute them equally, she's going to put the stickers into bags.
However, she left her bags at home and can't remember whether she has 4 or 6 bags to fill. She wants to be able to give the same number of stickers to each friend. How many stickers should she buy, to avoid spending any more money than she needs to while still equally dividing the stickers? That puts the question in simpler terms, and links it to a practical real-life need (saving money), while keeping all of the elements from the original question. • You've made a mistake here: you've silently replaced the constraint "She wants to give the same number of stickers to each friend" with "we want to buy an amount of stickers that divides evenly among the bags". That sleight-of-hand allows you to argue that "the number of friends is irrelevant" (because suddenly all you care about is bags), but nothing in the problem statement entitles the student to replace the one constraint with the other. – ruakh Dec 20 '17 at 2:27 • The info "each friend gets one bag" is missing in the question. Students might find out that they need some link between friends and bags, and learn that they should just assume something that looks reasonable. Later in life, they start a sticker-buying company, getting Juanita as a customer. Juanita is still not clear on what she wants, so the student buys 12 stickers. Juanita becomes angry, because she 'obviously' knows that she has four friends; she was doubting the number of bags because two of her friends think it is easier to carry two bags. ... – user193810 Dec 20 '17 at 8:07 • ... So Juanita 'obviously' wanted to have 8 stickers to give to 4 friends, she just was not sure if she wanted them in 2+2+2+2 or 2+2+1+1+1+1. So the business fails, all because the student was taught in school to just assume something if there is not enough info, instead of asking for clarification. – user193810 Dec 20 '17 at 8:10 • @Pakk Yes, so when you can get clarification, you do. And when you don't, you presume bags=friends, determine if the risk of error is worth the cost of double checking, and usually just get on with it. – Yakk Dec 20 '17 at 18:34 • @Yakk: With questions as bad as in the common core test, students will not learn that there is a risk associated with such assumptions. The worst thing is that it is so easy to make better questions, as Iszi also demonstrated... – user193810 Dec 20 '17 at 20:45 The question is terribly worded. Here is another solution which is consistent with the wording: The only way Juanita can be unsure whether she needs 4 or 6 bags is if she has exactly 1 or 2 friends; any other number of friends does not divide evenly into both 4 and 6 bags, and she would know in those cases that at least one choice was inappropriate. The question clearly states that Juanita has more than one friend, so she must have two. To ensure that no stickers are left over, she should buy the smaller of the two allowed choices, i.e. four bags.
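Whatever one thinks of the wording, the intended arithmetic is easy to check mechanically. Here is a minimal Python sketch (my own illustration, not part of the original test or thread): the purchase amounts that divide evenly whether there are 4 or 6 bags are exactly the common multiples of 4 and 6.

```python
from math import gcd

def lcm(a, b):
    # least common multiple via the gcd identity
    return a * b // gcd(a, b)

print(lcm(4, 6))  # 12
# every valid purchase up to 60 stickers is a multiple of 12
print([n for n in range(1, 61) if n % 4 == 0 and n % 6 == 0])  # [12, 24, 36, 48, 60]
```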
## Classical Local Systems I lied to you a little. I may not get into the arithmetic stuff quite yet. I’m going to talk about some “classical” things in modern language. In the things I’ve been reading lately, these ideas seem to be implicit in everything said. I can’t find this explained thoroughly anywhere. Eventually I want to understand how monodromy relates to bad reduction in the ${p}$-adic setting. So we’ll start today with the different viewpoints of a local system in the classical sense that are constantly switched between without ever being explained. You may need to briefly recall the old posts on connections. The goal for the day is to relate the three equivalent notions of a local system, a vector bundle plus flat connection on it, and a representation of the fundamental group. There may be some inaccuracies in this post, because I can’t really find this written anywhere and I don’t fully understand it (that’s why I’m making this post!). Since I said we’d work in the “classical” setting, let’s just suppose we have a nice smooth variety over the complex numbers, ${X}$. In this sense, we can actually think about it as a smooth manifold, or complex analytic space. If you want, you can have the picture of a Riemann surface in your head, since the next post will reduce us to that situation. Suppose we have a vector bundle on ${X}$, say ${E}$, together with a connection ${\nabla : E\rightarrow E\otimes \Omega^1}$. We’ll fix a basepoint ${p\in X}$ that will always secretly be lurking in the background. Let’s try to relate this connection to a representation of the fundamental group. Well, if we look at some old posts we’ll recall that a choice of connection is exactly the same data as telling you “parallel transport”. So what this means is that if I have some path on ${X}$ it tells me how a vector in the fiber of the vector bundle moves from the starting point to the ending point. Remember that we fixed some basepoint ${p}$ already. So if I take some loop based at ${p}$, say ${\sigma}$, then a vector ${V\in E_p}$ can be transported around that loop to give me another vector ${\sigma(V)\in E_p}$. If my vector bundle is rank ${n}$, then ${E_p}$ is just an ${n}$-dimensional vector space and I’ve now told you an action of the loop space based at ${p}$ on this vector space. Visualization of a vector being transported around a loop on a torus (yes, I’m horrible at graphics, and I couldn’t even figure out how to label the other vector at p as $\sigma (V)$): This doesn’t quite give me a representation of the fundamental group (based at ${p}$), since we can’t pass to the quotient, i.e. the transport of the vector around a loop that is homotopic to ${0}$ might be non-trivial. We are saved if we started with a flat connection. It can be checked that the flatness assumption gives a trivial action around nullhomotopic loops. Thus the parallel transport only depends on homotopy classes of loops, and we get a group homomorphism ${\pi_1(X, p)\rightarrow GL(E_p)}$. Modulo a few details, the above process can essentially be reversed, and hence given a representation you can produce a unique pair ${(E,\nabla)}$, a vector bundle plus flat connection associated to it. This relates the latter two ideas I started with. The one that gave me the most trouble was how local systems fit into the picture. A local system is just a locally constant sheaf of ${n}$-dimensional vector spaces.
At first it didn’t seem likely that the data of a local system should be equivalent to these other two things, since the sheaf is locally constant. This seems like no data at all to work with, rather than an entire vector bundle plus flat connection. Here is why algebraically there is good motivation to believe this. Recall that one can think of a connection as essentially a generalization of a derivative. It is just something that satisfies the Leibniz rule on sections. Recall that we call a section, ${s}$, horizontal for the connection if ${\nabla (s)=0}$. But if this is the derivative, this just means that the section should be constant. In this analogy, we see that if we pick a vector bundle plus flat connection, we can form a local system, namely the horizontal sections (which are the locally constant functions). If you want an exercise to see that the analogy is actually a special case, take the vector bundle to be the globally trivial line bundle ${\mathcal{O}_X}$ and the connection to be the honest exterior derivative ${d:\mathcal{O}_X\rightarrow \Omega^1}$. The process can be reversed again, and given any locally constant sheaf of vector spaces, you can cook up a vector bundle and flat connection whose horizontal sections are precisely the sections of the sheaf. Thus our three seemingly different notions are actually all equivalent. I should point out that part of my oversight on the local system side was thinking that a locally constant sheaf somehow doesn’t contain much information. Recall that it is still a sheaf, so we can be associating lots of information on large open sets and we still have restriction homomorphisms giving data as well. Next time we’ll talk about some classical theorems in differential equation theory that are most easily proved and stated in this framework. ## Irreducible Character Basis I’d just like to expand a little on the topic of the irreducible characters being a basis for the class functions of a group $cf(G)$ from two times ago. Let’s put an inner product on $cf(G)$. Suppose $\alpha, \beta \in cf(G)$. Then define $\displaystyle \langle \alpha, \beta \rangle =\frac{1}{|G|}\sum_{g\in G} \alpha(g)\overline{\beta(g)}$. The proof of the day is that the irreducible characters actually form an orthonormal basis of $cf(G)$ with respect to this inner product. Let $e_i=\sum_{g\in G} a_{ig}g$. Then we have that $a_{ig}=\frac{n_i\chi_i(g^{-1})}{|G|}$ (although just a straightforward calculation, it is not all that short, so we’ll skip it for now). Thus $e_j=\frac{1}{|G|}\sum n_j\chi_j(g^{-1})g$. So now examine $\displaystyle \frac{\chi_i(e_j)}{n_j}=\frac{1}{|G|}\sum \chi_j(g^{-1})\chi_i(g)=\frac{1}{|G|}\sum \chi_i(g)\overline{\chi_j(g)}=\langle \chi_i, \chi_j \rangle$, where we note that since $\chi_j$ is a character, $\chi_j(g^{-1})=\overline{\chi_j(g)}$. Thus we have that $\langle \chi_i, \chi_j \rangle = \delta_{ij}$. This fact can be used to get some neat results about the character table of a group, and as consequences of those we get new ways to prove lots of familiar things, like $|G|=\sum n_i^2$ where the $n_i$ are the degrees of the characters. You also get a new proof of Burnside’s Lemma. I’m not very interested in any of these things, though. I may move on to induced representations and induced characters. I may think of something entirely new to start in on. I haven’t decided yet. ## Class sums Let’s define a new concept that seems to be really important in algebraic number theory, that will help us peek inside some of the things we’ve been seeing.
Let $C_j$ be a conjugacy class in a finite group. Then we call $z_j=\sum_{g\in C_j}g$ a class sum (for pretty obvious reasons, it is the sum of all the elements in a conjugacy class). Lemma: The number of conjugacy classes in a finite group G is the dimension of the center of the group ring. Or if we let r denote the number of conjugacy classes, then $r=\dim_k(Z(kG))$. We prove this by showing that the class sums form a basis. First, given a class sum, we show that $z_j\in Z(kG)$. Well, let $h\in G$, then $hz_j h^{-1}=z_j\Rightarrow hz_j=z_j h$, since conjugation just permutes elements of the conjugacy class, thus they live in the right place. They are also linearly independent, since the elements of the sums $z_j$ and $z_k$ are disjoint (they are orbits which partition the group) if $j\neq k$. Now all we need is that they span. Let $u=\sum a_gg\in Z(kG)$. Then for any $h\in G$, we have that $huh^{-1}=u$, so by comparing coefficients, $a_{hgh^{-1}}=a_g$ for all $g\in G$. This gives that all the coefficients on elements in the same conjugacy class are the same, i.e. we can factor out that coefficient and have the class sum left over. Thus $u$ is a linear combination of the class sums, and hence they span. As a corollary we get that the number of simple components of $kG$ is the same as the number of conjugacy classes of $G$. This is because $Z(M_{n_i}(k))$ is the subspace of scalar matrices. So if there are m simple components, we get 1 dimension for each of these by our decomposition in Artin-Wedderburn, and so $r=\dim_k(Z(kG))=m$. Another consequence is that the number of irreducible k-representations of a finite group is equal to the number of its conjugacy classes. The proof is just to note that the number of simple $kG$-modules is precisely the number of simple components of $kG$, which correspond bijectively with the irreducible k-representations, and now I refer to the paragraph above. Now we can compute $\mathbb{C}S_3$ in a different way and confirm our answer from before. We know that it is 6 dimensional, since the dimension is the order of the group. We also know that there are three conjugacy classes, so there are three simple components, so the dimensions of these must be 1, 1, and 4. Thus $\mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C})$. If we want another quick one, let $Q_8$ be the quaternion group of order 8. Then try to figure out why $\mathbb{C}Q_8\cong \mathbb{C}^4\times M_2(\mathbb{C})$. So I think I’m sort of done with Artin-Wedderburn and its consequences for now. Maybe I’ll move on to some character theory as Akhil brought up in the last post… ## A-W Consequences I said I’d do the uniqueness part of Artin-Wedderburn, but I’ve decided not to prove it. Here is the statement: Every left semisimple ring R is a direct product $R\cong M_{n_1}(\Delta_1)\times\cdots \times M_{n_m}(\Delta_m)$ where $\Delta_i$ are division rings (so far the same as before), and the numbers m, $n_i$, and the division rings $\Delta_i$ are uniquely determined by R. The statement here is important since if we can figure one of those pieces of information out by some means, then we’ve completely figured it out, but I think the proof is rather unenlightening since it is just fiddling with simple components. Let’s use this to write down the structure of kG where G is finite and k algebraically closed with characteristic not dividing $|G|$. This result is due to Molien: $kG\cong M_{n_1}(k)\times\cdots \times M_{n_m}(k)$.
By Maschke we know that kG is semisimple, and by Artin-Wedderburn we then get that $kG\cong\prod M_{n_i}(\Delta_i)$. In fact, the proof of Artin-Wedderburn even tells us that $\Delta_i=End_{kG}(L_i)$ where $L_i$ is a minimal left ideal of $kG$. Thus, given some minimal left ideal L, it suffices to show that $\Delta=End_{kG}(L)\cong k$. Note that $\Delta$ is a subspace of $kG$ as a vector space over k. Thus it is finite dimensional. Now we have both L and $\Delta$ as finite dimensional vector spaces (over k). Let $a\in k$; then this element acts on L by $u\mapsto au$. But $au=ua$, so $k\subset Z(\Delta)$. Choose $d\in \Delta$, then adjoin it to k: $k(d)$. Since this is commutative, and a subdivision ring, it is a field. i.e. $k(d)/k$ as a field extension is finite, and hence algebraic, so d is algebraic over k. But we assumed k algebraically closed, so $d\in k$. Thus $\Delta=k$, and we are done. As a Corollary to this, we get that under the same hypotheses, $|G|=n_1^2+\cdots + n_m^2$. This is just counting dimensions under the isomorphism above, since $\dim_k(kG)=|G|$ and $\dim_k(M_{n_i})=n_i^2$. Note also that we can always take one of the $n_i$ to be 1, since we always have the trivial representation. Let’s end today with an example to see how nice this is. Without needing to peek inside or know anything about representations of $S_3$, we know that $\mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C})$, since the only way to write 6 as the sum of squares is 1+1+1+1+1+1, or 1+1+4, and the first one gives $\mathbb{C}^6$ which is abelian, which can’t happen since $S_3$ is non-abelian. Thus it must be the second one. ## Maschke and Schur As usual, the order of presenting this material and the level of generality are proving to be difficult decisions. For my purposes, I don’t want to do things as generally as they can be done. But on the other hand, most of the proofs are no harder in the general case, so it seems pointless to avoid generality. First we prove Maschke’s Theorem. Note that there are lots of related statements and versions of what I’m going to write. This says that if G is a finite group and k is a field whose characteristic does not divide the order of the group, then kG is a left semisimple ring. Proof: We’ll do this using the “averaging operator.” Let’s use the version of semisimple that every left ideal is a direct summand. Let I be a left ideal of kG. Then since kG can be regarded as a vector space over k, I is a subspace. So there is a subspace, V, such that $kG=I\oplus V$. We are done if it turns out that V is a left ideal. Let $\pi : kG\to I$ be the projection. (Since any $u$ decomposes uniquely as $u=i + v$, define $\pi(u)=i$.) Now it is equivalent to show that $\pi$ is a kG-map, since then it would be a retract and hence I would be a direct summand. Unfortunately, it need not be a kG-map. So we’ll force it to be one by averaging. Let $D:kG\to kG$ by $D(u)=\frac{1}{|G|}\sum_{x\in G}x\pi(x^{-1}u)$. By our characteristic condition, $|G|\neq 0$ in k. Claim: $Im(D)\subset I$. Let $u\in kG$ and $x\in G$; then $\pi(x^{-1}u)\in I$ by definition, and since I is a left ideal, $x\pi(x^{-1}u)\in I$. So since I is an ideal, the sum is in I, which shows the claim. Claim: $D(b)=b$ for all $b\in I$. This is just computation: $x\pi(x^{-1}b)=xx^{-1}b=b$, so $D(b)=\frac{1}{|G|}(|G|b)=b$. Claim: D is a kG-map. i.e. we want to prove that $D(gu)=gD(u)$ for $g\in G$ and $u\in kG$.
Here the averaging pays off: $\displaystyle gD(u)=\frac{1}{|G|}\sum_{x\in G}gx\pi(x^{-1}u)$ $\displaystyle = \frac{1}{|G|}\sum_{x\in G} gx\pi(x^{-1}g^{-1}gu)$ $\displaystyle = \frac{1}{|G|}\sum_{y=gx\in G} y\pi(y^{-1}gu)$ $= D(gu)$. Thus we have proved Maschke’s Theorem. The other tool we’ll need next time is that of Schur’s Lemma: Let M and N be simple left R-modules. Then every non-zero R-map $f:M\to N$ is an iso. And $End_R(M)$ is a division ring. Proof: $\ker f\neq M$ since it is a non-zero map. And so $\ker f=\{0\}$ since it is a submodule, so we have an injection. Likewise, $\operatorname{im} f$ is a submodule, and hence must be all of N, so we have surjection and hence an iso. The other part of the lemma is just noting that since every map in $End_R(M)$ that is non-zero is an iso, it has an inverse. Next time I’ll talk about how some of these things relate to representations. ## Representation Theory III Let’s set up some notation first. Recall that if $\phi: G\to GL(V)$ is a representation, then it makes V into a kG-module. Let’s denote this module by $V^\phi$. Now we want to prove that given two representations into GL(V), we have $V^\phi \cong V^\sigma$ if and only if there is an invertible linear transformation $T: V \to V$ such that $T\circ\phi(x)=\sigma(x)\circ T$ for every $x\in G$. The proof of this is basically unwinding definitions: Let $T: V^\phi \to V^\sigma$ be a kG-module isomorphism. Then T is in particular a vector space iso, and for free we get $T(xv)=xT(v)$ for $x\in G$ and $v\in V$. Now note that the multiplication in $V^\phi$ is $xv=\phi(x)(v)$ and in $V^\sigma$ it is $xv=\sigma(x)(v)$. So $T(xv)=xT(v)\Rightarrow T(\phi(x)(v))=\sigma(x)(T(v))$. Which is what we needed to show. The converse is even easier. Just check that the T is a kG-module iso by checking it preserves scalar multiplication. This should look really familiar (especially if you are picking a basis and thinking in terms of matrices). We’ll say that T intertwines $\phi$ and $\sigma$. Essentially this is the same notion as similar matrices. Now we will define some more concepts. Let $\phi: G\to GL(V)$ be a representation. If $W\subset V$ is a subspace, then it is “$\phi$-invariant” if $\phi(x)(W)\subset W$ for all $x\in G$. If the only $\phi$-invariant subspaces are 0 and V, then we say $\phi$ is irreducible. Let’s look at what happens if $\phi$ is reducible. Let W be a proper non-trivial $\phi$-invariant subspace. Then we can take a basis for W and extend it to a basis for V such that the matrix $\phi(x)=\left(\begin{matrix} A(x) & C(x) \\ 0 & B(x) \end{matrix}\right)$ and $A(x)$ and $B(x)$ are matrix representations of G (the degrees being dim W and dim(V/W) respectively). In fact, given a representation on V, $\phi$, and a representation on W, $\psi$, we have a representation on $V\oplus W$, $\phi \oplus \psi$, given in the obvious way: $(\phi \oplus \psi)(x) : (v, w)\mapsto (\phi(x)v, \psi(x)w)$. The matrix representation in the basis $\{(v_i, 0)\}\cup \{(0, w_j)\}$ is just $\left(\begin{matrix}\phi(x) & 0 \\ 0 & \psi(x)\end{matrix}\right)$ (hence is reducible since it has both $V\oplus 0$ and $0\oplus W$ as invariant subspaces). I’m going to continue with representation theory, but I’ll start titling more appropriately now that the basics have sort of been laid out. ## Representation Theory I I know everyone and their brother does a series of posts on basic representation theory, and I said I would try to avoid very overtly repeat posts of other math blogs, but I can’t help it.
I don’t know representation theory very well at all, and I feel my time has come to wrestle with the beast. This is one of the main points of this blog, so may as well try. Goal: Carefully build as slowly as I can all the way to the Artin-Wedderburn Theorem. Beginning: What is a representation? Well, let G be a finite group and V any finite dimensional vector space over $\mathbb{C}$. Then a representation of G is just a homomorphism $\phi: G\to GL(V)$. So we can already see some nice uses of this. If we choose a basis, then we get a matrix representation (every element is sent to some invertible matrix, and the group operation is preserved). It would be even nicer if this were an embedding, so that we could actually think of the elements of our group as matrices. We say a 1-1 representation is faithful. We have an arsenal of examples already, but probably don’t even realize it. The trivial representation is just sending every group element to the identity transformation. What I like to call the “almost trivial representation” (term my own, so don’t use this somewhere and expect people to know what you are talking about), is to embed G in $S_n$ for some large n, which we know is possible. Then under this embedding, a group element is either even or odd. If it is even, send it to the identity transformation. If it is odd, send it to the negative identity transformation. Probably a better way to say this is: a representation of $S_n$ is $\phi(x)=\operatorname{sgn}(x)1_V$. Let’s define the group ring (aka the group algebra). Let k be a field and G a finite group. Then $kG$ is the set $\{\sum a_g g : a_g\in k \ and \ g\in G\}$. In a sense, we have formed a vector space over k with basis G. Our addition is $\sum a_g g+\sum b_g g=\sum (a_g+b_g) g$. Our multiplication requires slightly more effort: $(\sum_g a_g g)(\sum_h b_h h)=\sum_{g,h} a_g b_h\, gh = \sum_x (\sum_{gh=x}a_g b_h) x$. This structure will be of great importance soon, but I don’t want to throw too much out there at once. Remember, we’re going to go slowly. But if you want to think ahead, the first thing of tomorrow’s post will be that a representation equips the vector space V with the structure of a $\mathbb{C}G$-module. And there was nothing special about $\mathbb{C}$ there. A k-representation equips V with the structure of a $kG$-module.
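Since the group ring multiplication will carry a lot of weight later, here is a small computational illustration (my own sketch, not from the original posts): elements of $kG$ stored as dictionaries mapping group elements to coefficients, with $G = S_3$ given by permutation tuples.

```python
from itertools import permutations

def compose(g, h):
    # composition of permutations written as tuples: (g∘h)(i) = g[h[i]]
    return tuple(g[i] for i in h)

def ring_mult(a, b):
    # the convolution product: the coefficient of x is the sum over gh = x of a_g * b_h
    out = {}
    for g, ag in a.items():
        for h, bh in b.items():
            x = compose(g, h)
            out[x] = out.get(x, 0) + ag * bh
    return out

S3 = list(permutations(range(3)))
e, s = (0, 1, 2), (1, 0, 2)  # identity and a transposition, so s*s = e
print(ring_mult({e: 2, s: 1}, {e: 3, s: 1}))
# {(0, 1, 2): 7, (1, 0, 2): 5}, i.e. (2e + s)(3e + s) = 7e + 5s
```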
# what do they mean here? #### allegansveritatem ##### Full Member Here is the problem: I am stuck at c). The only thing I could think of doing is adding a 5 to the 1/2x-3 and then putting a minus sign in front of the function for reflecting it. Here is what this looks like in the calculator window: Is this what they want for c)? I know that y=2 should be a straight horizontal line but...that doesn't seem to have any purchase here. #### Jomo ##### Elite Member If you put red paint on the line (1/2)x-3 and then folded the paper along the line y=2, the result would be that you have a 2nd line on the paper. Do you see that? It is the same line as in your graph. You have one point on this new line. Do you see what it is? Now what is the slope of this line compared to the slope of the line (1/2)x-3, which is 1/2? Actually you do NOT have the graph of (1/2)x-3. So draw the given line and draw the 2nd line which is reflected across the line y=2. #### tkhunny ##### Moderator Staff member Some can be done quite mechanically. I'll show y = 3x - 4. Reflect across the x-axis: -y = 3x - 4. Reflect across the y-axis: y = 3(-x) - 4. Reflect across y = x: x = 3y - 4. Think on what those small changes mean to the graph. Think on what could be done to create the others, reflecting about y = 2 or x = 3. It may be interesting to find any point that does NOT move at all with the transformation. Fun example. y = 3. What happens when it is reflected across the y-axis? #### Dr.Peterson ##### Elite Member I am stuck at c). The only thing I could think of doing is adding a 5 to the 1/2x-3 and then putting a minus sign in front of the function for reflecting it. In (c), you are asked to reflect the line y = (1/2)x - 3 about the line y = 2. Have you been taught how to reflect a point, or a function, about a line other than the axis? I suspect so; then you will want to look back at what you were taught, and do it. I'll assume here that you have to invent it for yourself. One way to do so is to think about what it means to reflect a point about the horizontal line y = b. Given any point (x, y), you want to find a new point (x, y') that is the same distance below y = b as the given point was above y = b. That is, y' - b = -(y - b). Do you see that? Then what does y' have to be, in terms of y and b? Then try applying that to your equation. What others have said will help you check that the line you get really is what you want. #### Harry_the_cat ##### Senior Member Put aside your GC for a minute. With paper and pen draw the graph of $$\displaystyle y=\frac{1}{2}x-3$$ ie y-int = -3 and x-int = 6. Now dot in the line of reflection $$\displaystyle y=2$$. Now imagine folding the graph along the dotted line, bringing the bottom part up onto the top part. Draw in the reflected line. The slope of the original line is 1/2. What is the slope of the reflected line? The y-int of the original line is -3. What is the y-int of the reflected line? So now you have the slope and the y-int, what is the equation of the line? #### allegansveritatem ##### Full Member If you put red paint on the line (1/2)x-3 and then folded the paper along the line y=2, the result would be that you have a 2nd line on the paper. Do you see that? It is the same line as in your graph. You have one point on this new line. Do you see what it is? Now what is the slope of this line compared to the slope of the line (1/2)x-3, which is 1/2? Actually you do NOT have the graph of (1/2)x-3.
So draw the given line and draw the 2nd line which is reflected across the line y=2. Well, one thing I think I can answer: the slope would be the negative of the slope of the present line, y=1/2x-3. Thanks for posting. I will get back to this. Last edited: #### allegansveritatem ##### Full Member Some can be done quite mechanically. I'll show y = 3x - 4. Reflect across the x-axis: -y = 3x - 4. Reflect across the y-axis: y = 3(-x) - 4. Reflect across y = x: x = 3y - 4. Think on what those small changes mean to the graph. Think on what could be done to create the others, reflecting about y = 2 or x = 3. It may be interesting to find any point that does NOT move at all with the transformation. Fun example. y = 3. What happens when it is reflected across the y-axis? I will have to think about your questions when my noodle comes back to me--sometime tomorrow I hope. But I think I am beginning to see it. Thanks #### allegansveritatem ##### Full Member In (c), you are asked to reflect the line y = (1/2)x - 3 about the line y = 2. Have you been taught how to reflect a point, or a function, about a line other than the axis? I suspect so; then you will want to look back at what you were taught, and do it. I'll assume here that you have to invent it for yourself. One way to do so is to think about what it means to reflect a point about the horizontal line y = b. Given any point (x, y), you want to find a new point (x, y') that is the same distance below y = b as the given point was above y = b. That is, y' - b = -(y - b). Do you see that? Then what does y' have to be, in terms of y and b? Then try applying that to your equation. What others have said will help you check that the line you get really is what you want. I really don't think I recall having been introduced to this type of procedure before. I know how to reflect across the axes but not otherwise...but I am getting a glimmer of what would be entailed. Somehow the two lines will intersect on the line to be reflected over...So, if I can find the point of intersection I can draw the second graph through it...Am I right? I will have to study your post tomorrow. No head for it now. Very late. Thanks #### allegansveritatem ##### Full Member Put aside your GC for a minute. With paper and pen draw the graph of $$\displaystyle y=\frac{1}{2}x-3$$ ie y-int = -3 and x-int = 6. Now dot in the line of reflection $$\displaystyle y=2$$. Now imagine folding the graph along the dotted line, bringing the bottom part up onto the top part. Draw in the reflected line. The slope of the original line is 1/2. What is the slope of the reflected line? The y-int of the original line is -3. What is the y-int of the reflected line? So now you have the slope and the y-int, what is the equation of the line? I don't think I know what y-int means...maybe I do but if so what I know is not thus designated. I will look it up. Thanks for posting #### Dr.Peterson ##### Elite Member I really don't think I recall having been introduced to this type of procedure before. I know how to reflect across the axes but not otherwise...but I am getting a glimmer of what would be entailed. Somehow the two lines will intersect on the line to be reflected over...So, if I can find the point of intersection I can draw the second graph through it...Am I right? I will have to study your post tomorrow. No head for it now. Very late. Thanks That would be one way to do it in this case; it wouldn't be sufficient if, say, the graph you were reflecting didn't intersect the line you are reflecting over.
But there are other ways besides my suggestion. In the end, there is a fairly simple formula you can use. #### allegansveritatem ##### Full Member I worked on this again today. I am posting here the program that I followed (I used it for d) and e) as well). In short, what I did was graph the line given, find the point at which it intersects with y=2, find another point that is on the other side of y=2 that corresponds with (6,0) but shifted up 4, and then use those two points to find the slope of the reflected graph (I knew this already but...) and then use the point-slope form to find the equation of the reflected line. Is this the most streamlined strategy? Probably not, but it is what I hit upon. #### Harry_the_cat ##### Senior Member Yes that's correct. Well done. BTW, y-int means y-intercept, ie where the graph cuts the y-axis. If the equation of a line is given in y = mx + c form, then c is the y-int and m is the gradient. What you've done is correct and is a valid method, but if you want a more "stream-lined" way, reread my post #5. #### Dr.Peterson ##### Elite Member Now I can tell you my method as suggested in post #4. Given a point (x,y) and line y=b, the reflected point (x,y') will satisfy y' - b = -(y - b), as far above b as y is below. Simplifying this, you get y' = 2b - y. This amounts to reflecting across the x-axis (-y) and then shifting up 2b (+ 2b). So you want to do that to your line y = (1/2)x - 3: reflect it across the axis to get y = -(1/2)x + 3, and then shift up by 4 (twice 2) to get y = -(1/2)x + 7. Or, just plug it into my formula: y = 2*2 - [(1/2)x - 3] = 4 - (1/2)x + 3 = -(1/2)x + 7. I don't know what method you are expected to use, not having been taught anything about it. #### Harry_the_cat ##### Senior Member I agree that there are various methods to solve this sort of problem. I don't think you need to be taught specifically how to do this. If you have knowledge of: (1) straight lines and their equations (3) intercepts and/or intersection points (4) the concept of reflection, then you should aim to be able to work it out for yourself. #### allegansveritatem ##### Full Member Yes that's correct. Well done. BTW, y-int means y-intercept, ie where the graph cuts the y-axis. If the equation of a line is given in y = mx + c form, then c is the y-int and m is the gradient. What you've done is correct and is a valid method, but if you want a more "stream-lined" way, reread my post #5. Well, when you used the abbreviation "int" I didn't associate it with the word intercept, probably because my calculator uses "int" to stand for, among other things, the greatest integer function. I did reread the post and I think it influenced to some extent how I proceeded here. #### allegansveritatem ##### Full Member Now I can tell you my method as suggested in post #4. Given a point (x,y) and line y=b, the reflected point (x,y') will satisfy y' - b = -(y - b), as far above b as y is below. Simplifying this, you get y' = 2b - y. This amounts to reflecting across the x-axis (-y) and then shifting up 2b (+ 2b). So you want to do that to your line y = (1/2)x - 3: reflect it across the axis to get y = -(1/2)x + 3, and then shift up by 4 (twice 2) to get y = -(1/2)x + 7. Or, just plug it into my formula: y = 2*2 - [(1/2)x - 3] = 4 - (1/2)x + 3 = -(1/2)x + 7. I don't know what method you are expected to use, not having been taught anything about it.
That is a good way and I think I was on the point of sniffing it out today but didn't quite get clear in my mind how to set it into a formula--I knew there was a formula floating around somewhere but couldn't grab it. #### allegansveritatem ##### Full Member I agree that there are various methods to solve this sort of problem. I don't think you need to be taught specifically how to do this. If you have knowledge of: (1) straight lines and their equations
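For anyone who wants to verify the y' = 2b - y rule numerically, here is a minimal Python sketch (my own illustration, not from the thread):

```python
def reflect_across_horizontal(m, c, b):
    """Reflect the line y = m*x + c across the horizontal line y = b.

    Substituting y = m*x + c into y' = 2*b - y gives y' = -m*x + (2*b - c).
    """
    return -m, 2 * b - c

m2, c2 = reflect_across_horizontal(0.5, -3, 2)
print(m2, c2)  # -0.5 7, i.e. y = -(1/2)x + 7, matching the thread

# spot check with the x-intercept (6, 0) of the original line:
x = 6
y_orig = 0.5 * x - 3   # 0.0
y_refl = m2 * x + c2   # 4.0
assert y_orig - 2 == -(y_refl - 2)  # the two points are equidistant from y = 2
```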
When I try to fetch a directory with get "Path To\Directory\", I get the following error: NT_STATUS_FILE_IS_A_DIRECTORY opening remote file Path To\Directory (Using smbclient v3.6.23. The server is a computer running Windows 7 Home Edition.) • smbclient uses the same type of semantics that clients like FTP and HTTP do, where each get or put targets one file. You can write scripts to perform retrievals by directory, or you can use the mget/mput commands to specify a mask or wildcard to retrieve multiple files, as shown in my answer. It may be that smbclient isn't quite the right tool for your purposes. – Frank Thomas Dec 25 '14 at 4:23 Per the smbclient manpage, you need to use the mget command, with a mask, recursion turned on and prompting turned off. Then cd to the directory you want to get recursively: smbclient '\\server\share' recurse ON prompt OFF cd 'path\to\remote\dir' mget * Or, all on one line: smbclient '\\server\share' -N -c 'prompt OFF; recurse ON; cd "path\to\directory\"; lcd "~/path/to/download/to/"; mget *' If you need to authenticate to the server, drop -N and use the password setting on the connect command. http://technotize.blogspot.com/2011/12/copy-folder-with-ubuntu-smb-client.html • Also, I think you've got your quotes a bit confused in the one-liner. My smbclient only seems to like dealing with directories in "double quotes". – c24w Jun 7 '17 at 14:32 • Just copied and replaced folders but it didn't work - ends with trailing > – Wax Cage Aug 7 '17 at 21:40 • For people who really want to copy without problems, follow this article: indradjy.wordpress.com/2010/04/14/… (helped me) – Wax Cage Aug 7 '17 at 21:46 • Is it possible to skip already existing files? This question was asked here and I need its answer, too: unix.stackexchange.com/questions/551104/… – mgutt Apr 21 at 23:10 • @mgutt, consider using rsync on top of SMB, once you have the smbshare mounted. SMB has no semantics for uni/bi-directional sync by itself, but tools like rsync (Linux) and robocopy (Windows) can provide very robust sync functionality on an existing mount point. Also note that SMB does introduce some idiosyncrasies related to file locking, so you may need to shoot for eventual concurrence, rather than single point in time concurrency. – Frank Thomas Apr 22 at 3:02 You could also use the tar command for smbclient: smbclient -Tc allfiles.tar /path/to/directory This will create a tar archive allfiles.tar in the current directory where the smbclient command is executed. Afterwards you can unpack the files again with tar xf allfiles.tar. Use the -D option to set the directory: smbclient -D "\" -c ls smbclient -D "\Path\To\Directory" -c ls smbclient -D "\Path\To\Directory" -c "get target /tmp/target"
# Map 2 At what scale is the map drawn if a distance of 8.2 km corresponds to a segment 5 cm long on the map? M 1: 164000 ### Step-by-step explanation: $M=\frac{8.2\cdot 1000\cdot 100}{5}=164000$, so the scale is $1:164000$. ## Related math problems and questions: • Scale of the map Determine the map's scale if the actual distance of 120 km is represented by a segment 6 cm long. • Scale of the map The distance between two cities is actually 30 km, and on the map it is 6 cm. What is the scale of the map? • Two villages Two villages are 11 km and 500 m apart. On the map, their distance is represented by a line 5 cm long. Find the scale of the map. • Scale of the map Determine the scale of the map if the actual distance between A and B is 720 km and the distance on the map is 20 cm. • Map On a tourist map at a scale of 1:50000, the distance between two points along a straight road is 3.7 cm. How long does it take to travel this distance on a bike at 30 km/h? Express the time in minutes. • What scale At what scale is the map drawn if it shows a 15 km route from the station to the ruins with a line 30 cm long? A) 1:2000 B) 1:5000 C) 1:20,000 D) 1:50,000 • Scale 3 Miriam's room is 3.2 meters wide. It is drawn as a line segment 6.4 cm long on the floor plan. At what scale is the plan of the room? • Map scale The rectangular plot has an area of 3 cm2 on a map with a scale of 1:10000. What area does this plot have on a map with a scale of 1:5000? • The plan The plan of the housing estate exists in three scales: 1:5000, 1:10000, 1:15000. The distance between two points on the plan with a scale of 1:10000 is 12 cm. What is this distance on the other two plans? • Two villages On a map with a scale of 1:40000, two villages actually 16 km apart are drawn. What is their distance on the map? • The distance The distance between two towns on a given map is 3 ½ cm. If ½ cm represents 6 km, what is the distance between the two towns? • Two monuments The distance between two historical monuments on a map with a scale of 1:500000 is 48 mm. Find the actual distance between these monuments in km. • Line segment A 4 cm long line segment is enlarged in the ratio of 5/2. How many centimeters will the new line segment measure? • Points on line segment Points P & Q belong to segment AB. If AB=a, AP = 2PQ = 2QB, find the distance between point A and the midpoint of the segment QB. • Line segment Cut a line segment of 15 cm into two line segments so that their lengths are in ratio 2:1. What length will each have? • Map scale On a plan at a scale of 1:150, a garden has width 22 cm and length 35 cm. What is the real area of the garden? • Divide in ratio Divide line segment AB, 12 cm long, in a ratio of 5:3. How long are the individual parts?
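The scale computation at the top of this page is also easy to check in code; here is a tiny Python sketch (my own illustration):

```python
def map_scale(real_km, map_cm):
    # express the real distance in centimetres, then divide by the map length
    return real_km * 1000 * 100 / map_cm

print(map_scale(8.2, 5))  # 164000.0, i.e. a scale of 1:164000
```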
## FastSLAM with Unknown Data Association The biggest limitation of the FastSLAM algorithm described thus far is the assumption that the data associations $n^{t}$ are known. In practice, this is rarely the case. This section extends the FastSLAM algorithm to domains in which the mapping between observations and landmarks is not known [57]. The classical solution to the data association problem in SLAM is to choose $n_{t}$ such that it maximizes the likelihood of the sensor measurement $z_{t}$ given all available data [18]. $$\hat{n}_{t}=\underset{n_{t}}{\operatorname{argmax}}\; p\left(z_{t} \mid n_{t}, \hat{n}^{t-1}, s^{t}, z^{t-1}, u^{t}\right)$$ The term $p\left(z_{t} \mid n_{t}, \hat{n}^{t-1}, s^{t}, z^{t-1}, u^{t}\right)$ is referred to as a likelihood, and this approach is an example of a maximum likelihood (ML) estimator. ML data association is also called "nearest neighbor" data association, interpreting the negative log likelihood as a distance function. For Gaussians, the negative log likelihood is Mahalanobis distance, and the estimator selects data associations by minimizing this Mahalanobis distance. In the EKF-based SLAM approaches described in Chapter 2, a single data association is chosen for the entire filter. As a result, these algorithms tend to be brittle to failures in data association. A single data association error can induce significant errors in the map, which in turn cause new data association errors, often with fatal consequences. A better understanding of how uncertainty in the SLAM posterior generates data association ambiguity will demonstrate how simple data association heuristics often fail. ## Data Association Uncertainty Two factors contribute to uncertainty in the SLAM posterior: measurement noise and motion noise. As measurement noise increases, the distributions of possible observations of every landmark become more uncertain. If measurement noise is sufficiently high, the distributions of observations from nearby landmarks will begin to overlap substantially. This overlap leads to ambiguity in the identity of the landmarks. We will refer to data association ambiguity caused by measurement noise as measurement ambiguity. An example of measurement ambiguity is shown in Figure 3.7. The two ellipses depict the range of probable observations from two different landmarks. The observation, shown as a black circle, plausibly could have come from either landmark. Attributing an observation to the wrong landmark due to measurement ambiguity will increase the error of the map and robot pose, but its impact will be relatively minor. Since the observation could have been generated by either landmark with high probability, the effect of the observation on the landmark positions and the robot pose will be small. The covariance of one landmark will be slightly overestimated, while the covariance of the second will be slightly underestimated.
If multiple observations are incorporated per control, a data association mistake due to measurement ambiguity of one observation will have relatively little impact on the data association decisions for the other observations. Ambiguity in data association caused by motion noise can have much more severe consequences on estimation accuracy. Higher motion noise will lead to higher pose uncertainty after incorporating a control. If this pose uncertainty is high enough, assuming different robot poses in this distribution will imply drastically different ML data association hypotheses for the subsequent observations. This motion ambiguity, shown in Figure $3.8$, is easily induced if there is significant rotational error in the robot's motion. Moreover, if multiple observations are incorporated per control, the pose of the robot will correlate the data association decisions of all of the observations. If the SLAM algorithm chooses the wrong data association for a single observation due to motion ambiguity, the rest of the data associations also will be wrong with high probability. ## Per-Particle Data Association Unlike most EKF-based SLAM algorithms, FastSLAM takes a multi-hypothesis approach to the data association problem. Each particle represents a different hypothesized path of the robot, so data association decisions can be made on a per-particle basis. Particles that pick the correct data association will receive high weights because they explain the observations well. Particles that pick wrong associations will receive low weights and be removed in a future resampling step. Per-particle data association has several important advantages over standard ML data association. First, it factors robot pose uncertainty out of the data association problem. Since motion ambiguity is the more severe form of data association ambiguity, conditioning the data association decisions on hypothesized robot paths seems like a logical choice. Given the scenario in Figure 3.8, some of the particles would draw new robot poses consistent with the data association hypothesis on the left, while others would draw poses consistent with the data association hypothesis on the right. Doing data association on a per-particle basis also makes the data association problem easier. In the EKF, the uncertainty of a landmark position is due to both uncertainty in the pose of the robot and measurement error. In FastSLAM, uncertainty of the robot pose is represented by the entire particle set. The landmark filters in a single particle are not affected by motion noise because they are conditioned on a specific robot path. This is especially useful if the robot has noisy motion and an accurate sensor. Another consequence of per-particle data association is implicit, delayed decision making. At any given time, some fraction of the particles will receive plausible, yet wrong, data associations. In the future, the robot may receive a new observation that clearly refutes these previous assignments. At this point, the particles with wrong data associations will receive low weight and likely be removed from the filter. As a result of this process, the effect of a wrong data association decision made in the past can be removed from the filter. Moreover, no heuristics are needed in order to remove incorrect old associations from the filter. This is done in a statistically valid manner, simply as a consequence of the resampling step.
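To make the per-particle ML step concrete, here is a rough Python sketch (my own illustration, not code from the text): within one particle, each landmark carries a Gaussian over the predicted measurement, and the maximum-likelihood ("nearest neighbor") association minimizes the Mahalanobis distance.

```python
import numpy as np

def ml_data_association(z, landmarks):
    """Pick the most likely landmark for observation z within one particle.

    landmarks: list of (z_hat, S) pairs, where z_hat is the predicted
    measurement for that landmark and S is its innovation covariance.
    Returns (index, squared Mahalanobis distance).
    """
    best_n, best_d2 = None, np.inf
    for n, (z_hat, S) in enumerate(landmarks):
        innovation = z - z_hat
        d2 = innovation @ np.linalg.inv(S) @ innovation  # squared Mahalanobis distance
        # The Gaussian negative log likelihood is 0.5*d2 + 0.5*log(det(2*pi*S));
        # a full implementation would include the determinant term and a
        # threshold for deciding that z comes from a brand-new landmark.
        if d2 < best_d2:
            best_n, best_d2 = n, d2
    return best_n, best_d2
```

Each particle calls this against its own landmark set, so particles with different hypothesized paths can, and often do, pick different associations for the same observation.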
Mathematics Department # Operator Algebras and Dynamics Seminar ## Spring 2022 All talks are from 3:45-4:45 p.m. in the Seminar room, unless otherwise specified. • Apr 11 • Low Complexity Subshifts admitting Mixing Measures Darren Creutz Time: 03:45 PM Word complexity is a fine-grained quantification of determinism for subshifts: closed, shift-invariant subsets of $A^{\mathbb{Z}}$ for some finite alphabet $A$. A natural question is the extent to which measure-theoretic mixing, a very strong form of dynamical randomness, implies complexity in the symbolic sense. Despite initially conjecturing that mixing subshifts have superpolynomial complexity, Ferenczi in 1996 established that the staircase transformation, which is mixing, has quadratic complexity. However, he could only prove that mixing implies a superlinear lower bound. We introduce a class of modified staircases with complexity much lower than quadratic: for any $f : \mathbb{N} \to \mathbb{N}$ with $f(n)/n$ increasing and $\sum 1/f(n) < \infty$, we prove there exists an extremely elevated staircase transformation with word complexity $p$ satisfying $p(q) = o(f(q))$. This is joint work with Ronnie Pavlov and Shaun Rodock. • Mar 28 • Amenability of C*-dynamical systems Alex Bearden University of Texas at Tyler Time: 03:45 PM The notion of amenability of a locally compact group was generalized to certain operator algebraic settings by Anantharaman-Delaroche in the late 70s. We will review the history and triumphs of amenability in W*- and C*-dynamical systems, and then describe recent progress in the understanding of the various competing notions of amenability from the work of Buss/Echterhoff/Willett, Ozawa/Suzuki, and our joint work with Jason Crann. • Mar 07 • A quantization of coarse structures and uniform Roe algebras David Sherman University of Virginia Time: 03:45 PM A coarse structure is a way of talking about "large-scale" properties. It is encoded in a family of relations that often, but not always, come from a metric. A coarse structure naturally gives rise to Hilbert space operators that in turn generate a so-called uniform Roe algebra. In ongoing work with Bruno Braga and Joe Eisner, we use ideas of Weaver to construct "quantum" coarse structures and uniform Roe algebras in which the underlying set is replaced with an arbitrary represented von Neumann algebra. The general theory immediately applies to quantum metrics (suitably defined), but it is much richer. We explain another source of examples based on measure instead of metric, leading to a large and easy-to-understand class of new C*-algebras. I will present the big picture: where uniform Roe algebras come from, how Weaver's framework facilitates our definitions. I will focus on a few illustrative examples and will not assume any familiarity with coarse structures or von Neumann algebras.
# Get a random event out of a set of mutually exclusive events Get a random event from a set of collectively exhaustive and mutually exclusive events, with this set being represented by a list of tuples (name, probability). ### Reminder: • Collectively exhaustive events means at least one of them must occur. • Mutually exclusive events means at most 1 of them occurs. • Collectively exhaustive and mutually exclusive means their probabilities sum up to 1 and only one of them occurs. ### Code: Events definition:

```python
events = [
    ('Win', 0.5),
    ('Loss', 0.3),
    ('Draw', 0.2)
]
```

Getting the random event:

```python
def get_random_event(events):
    random_number = uniform(0., 1.)
    lower_bound = 0
    for event, prob in events:
        if lower_bound <= random_number <= lower_bound + prob:
            return event
        else:
            lower_bound += prob
```

### Illustration: 0 -- -- Win -- -- 0.5 -- Loss -- 0.8 - Draw - 1 Generate a random number in [0,1] and check where it falls. • This only works because of the ordering of the data in events, which isn't very obvious. Do you have a check that ensures that it's ordered by prob? – Ben Jan 1 '18 at 22:46 • Order does not matter at all here. – Adel Redjimi Jan 1 '18 at 23:43 Some things: 1. The collectively exhaustive events are not going to change or be updated. So use a tuple instead of a list:

```python
events = (  # <<- Note: ( not [
    ('Win', 0.5),
    ...
)
```

2. You don't show your import of random. But I'd prefer that you not import uniform by name, and rather spell it out inside your function. You're only calling it once, so it's easier on the reader for you to simply say: random_number = random.uniform(...) than to have them search backwards for where uniform occurs. Also, of course, in this context random.uniform actually reads better than just uniform. 3. You have an input parameter events. You use event, prob in your loop. But event is the singular of events, so when you say return event it seems like you are returning a member of the events iterable. I'd choose a different name, like outcome, making it clear that you're returning a component of the inner tuple. 4. You compare lower_bound <= random_number <= lower_bound + prob but you don't have to. Since your random number is pinned to 0., you could just use a comparison against the high value and get the same result. 5. Structurally, the lower_bound += prob shouldn't be in the else branch. Yes, it only executes when the if is not executed, but the if should not be part of the flow for that statement. I'd rather see:

```python
if ...:
    return ...
lower_bound += prob
```

This kind of thing comes back to bite you when you're refactoring. Or, worse yet, when someone else is refactoring. (Imagine you changed the return to a print, or something. Suddenly the code stops working.) 6. The Task description would (almost) make a good function docblock. Note: s/list/iterable/:

```python
from typing import Iterable, Tuple

Name = str
Prob = float

def get_random_event(events: Iterable[Tuple[Name, Prob]]) -> Name:
    """Get a random event from a set of collectively exhaustive and
    mutually exclusive events, with this set being represented by an
    iterable of tuples (name, probability).
    """
```
zip is there for you:

```python
import random

def get_random_event(events):
    event, = random.choices(*zip(*events))
    return event
```

* @Graipher I thought it was pretty obvious from the docs, but added it in the answer anyway. – 301_Moved_Permanently Jan 2 '18 at 14:01
* Yes, but I had to click the link first :). Did not know about this new function until now, this is a nice addition to the random module. – Graipher Jan 2 '18 at 14:03
* Can I import this function, somehow, to Python 2.7? – Adel Redjimi Jan 2 '18 at 16:34
* @AdelRedjimi you might be able to use the implementation from the standard library – 301_Moved_Permanently Jan 2 '18 at 20:04
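A follow-up to the Python 2.7 question in the comments: random.choices internally accumulates the weights and bisects them, so a rough backport (an illustrative sketch, not code from the original thread) looks like this:

```python
import bisect
import random

def get_random_event(events):
    # Works on Python 2.7 and 3.x alike.
    names = [name for name, _ in events]
    cum_weights = []
    total = 0.0
    for _, prob in events:
        total += prob
        cum_weights.append(total)
    x = random.uniform(0.0, total)
    # bisect finds the first bucket whose cumulative weight exceeds x;
    # min() guards the (measure-zero) case x == total.
    return names[min(bisect.bisect(cum_weights, x), len(names) - 1)]
```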
# The Kernel Samepage Merging Process

KSM, simply put, is a service daemon which scans memory pages to find duplicate pages and merges them, thereby reducing memory usage. The code used as an example in this post can be found under /mm/ksm.c in the kernel source.

Before continuing, it is important to keep in mind that:

* KSM uses a red-black tree for both the stable and unstable trees - lookups cost $O(\log n)$ per tree, since the height of a red-black tree can never exceed $2\log_2(n+1)$, with n being the number of nodes.
* KSM only scans anonymous pages; file-backed pages and HugePages are not scanned and cannot be merged by KSM. This differs from Transparent Huge Pages (THP): as of Red Hat Enterprise Linux 6.1, KSM will break up THP into small pages if shareable 4K pages are found, and only if the system is running out of memory.
* Merged pages are read-only, as they are CoW (copy-on-write) protected.
* A userspace application can register candidate regions for merging through the madvise() system call. We will not tackle the KSM API details in this post.
* Because of the CoW nature, a write to a merged page by an application raises a page fault, which in turn triggers the break_cow() routine, which issues a private copy of the merged page to the writing application.

With this in mind, here is a summarized view of how KSM works, step by step.

KSM scans pages and decides whether a page could be considered for merging; these pages are referred to as "candidate" pages. To state it quickly, a candidate page which does not exist in the stable tree is added as a node to the unstable tree, but we will get to this later in the post.

To determine whether a page has changed, KSM relies on a 32-bit checksum, which is stored for the page content and re-evaluated on the next scan. In other words, KSM finds page X and computes a checksum for it; on the next scan, if the checksum of page X did not change, then it is considered a candidate page.

For each candidate page, KSM starts a memcmp_pages() comparison against the stable tree, which contains the merged pages. This process is as follows (understanding it requires a grasp of how a binary search tree works in general, and more specifically how a red-black tree works): the stable tree is walked left if the candidate page compares less than the page in the stable tree, right if the candidate page compares greater than the stable page; if both pages are identical, they are simply merged and the candidate page is freed. The stable tree search function is referenced at http://lxr.free-electrons.com/source/mm/ksm.c#L985

Now, if the candidate page was not found in the stable tree, its checksum is re-computed to determine whether the data has changed since the last scan. If it has changed, it is ignored; if not, the searching process continues in the unstable tree, in the same fashion as the search in the stable tree. The recursion unstable_tree_search_insert() can be seen at http://lxr.free-electrons.com/source/mm/ksm.c#L1078.

While searching the unstable tree, KSM will create a new node in this binary tree if the candidate page is unique; if it is not unique, i.e., the unstable tree already contains an identical candidate page, it will be merged with the existing node and moved to the stable tree.

Once the KSM scan is done, the unstable tree is destroyed and recreated on the next iteration.

I hope that was informative.

Cheers, Ali
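P.S. A quick way to see KSM in action from userspace: the sketch below is my illustrative example (assuming a Linux kernel built with CONFIG_KSM and Python 3.8+, whose mmap module exposes madvise and the MADV_MERGEABLE constant); it registers an anonymous region as mergeable, i.e., the madvise(MADV_MERGEABLE) call mentioned above.

```python
import mmap

PAGE = mmap.PAGESIZE

# Create an anonymous mapping and fill it with identical pages,
# which makes it an ideal KSM merge candidate.
buf = mmap.mmap(-1, 64 * PAGE)
buf.write(b"A" * len(buf))

# Equivalent to madvise(addr, len, MADV_MERGEABLE) in C: ask ksmd to
# scan this region for duplicate pages and merge them.
buf.madvise(mmap.MADV_MERGEABLE)

# KSM must also be enabled system-wide:
#   echo 1 > /sys/kernel/mm/ksm/run
# Merge statistics then appear in /sys/kernel/mm/ksm/pages_shared etc.
```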
# What is a Rational Number

Rational numbers are numbers that can be written as whole numbers, fractions, or certain decimals. The rational numbers are a subset of the real numbers and can be written as a ratio of two integers in the form $\frac{p}{q}$, where 'p' and 'q' are integers and 'q' is non-zero. The set of rational numbers is denoted by a capital Q. Rational numbers are contrasted with irrational numbers such as $\pi$, and the square roots and logarithms of most integers.

Here are some examples of rational numbers, each of the form $\frac{p}{q}$ with q non-zero:

$\frac{1}{1}$ = 1

$\frac{22}{100}$ = 0.22

$\frac{3}{1000}$ = 0.003

What happens if q = 0? An expression such as $\frac{90}{0}$ is not a rational number: division by zero is undefined, which is exactly why the definition requires q ≠ 0.

Note: If two integers 'a' and 'b' are written in the form a/b, this kind of expression is called a rational expression and the number a/b is called a rational number, e.g., 5/7 or 3/5.

Now we discuss an important property of rational numbers: their decimal expansions either terminate or eventually repeat. A fraction p/q in lowest terms terminates exactly when q has no prime factors other than 2 and 5; otherwise the expansion repeats forever with a fixed period.

Example 1: Find the decimal value of 1/2 and show that it is terminating.

Solution: When we calculate the value of 1/2, it produces 0.5, a finite decimal. Since the denominator 2 has no prime factors other than 2 and 5, the expansion terminates.

Example 2: Show that 3/4 is a rational number and that its decimal value is terminating.

Solution: 3/4 is a ratio of two integers with a non-zero denominator, so it is a rational number. Its decimal value is 0.75, a finite decimal; again, the denominator 4 = 2² has no prime factors other than 2 and 5, so the expansion terminates.

Example 3: Show that 5/7 is rational but not terminating.

Solution: When we calculate the decimal value of 5/7, it produces 0.714285714285…, where the block "714285" repeats forever. The denominator 7 has a prime factor other than 2 and 5, so the expansion never terminates; it does, however, repeat, which is the hallmark of a rational number. (An irrational number such as $\pi$ neither terminates nor repeats.)

So we can say that the decimal expansion of every rational number either terminates or repeats.
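As an editorial aside, the terminating test above can be automated; the following small Python sketch decides whether p/q terminates by stripping the factors of 2 and 5 from the reduced denominator:

```python
from math import gcd

def terminates(p, q):
    """Return True if p/q has a terminating decimal expansion."""
    q //= gcd(p, q)          # reduce the fraction to lowest terms
    for f in (2, 5):         # remove all factors of 2 and 5
        while q % f == 0:
            q //= f
    return q == 1            # anything left means the decimal repeats

print(terminates(1, 2))   # True  -> 0.5
print(terminates(3, 4))   # True  -> 0.75
print(terminates(5, 7))   # False -> 0.714285714285...
```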
# Hot answers tagged white-dwarf

38 The answer to your question is both yes and no, depending on the circumstances. Two white dwarfs colliding would likely yield a Type Ia supernova, assuming the combined mass exceeded the Chandrasekhar limit ($\sim1.4$ solar masses). The unstable object resulting from the collision could not be supported by electron degeneracy pressure; when the temperature ...

20 I don't think there is an accepted definition of a "black dwarf" - it is not a term used in the scientific literature. A popular definition that appears to circulate on the internet is that it is a white dwarf that has cooled down to the extent that it no longer emits any radiation in the visible part of the spectrum. But this is an unworkable theoretical ...

17 Will Sirius B start accreting? Yes, it is doing so now. Sirius A will have a wind and some of that wind will be captured by the white dwarf. The effectiveness of wind capture is a strong function of relative wind speed. An analytic approximation to the accretion rate, known as Bondi-Hoyle accretion, goes as the inverse cube of the relative speed. In its ...

12 I think what you need is here on Wikipedia. In the section "Radiation and cooling," it says "The rate of cooling has been estimated ... After initially taking approximately 1.5 billion years to cool to a surface temperature of 7140 K, cooling approximately 500 more K ... takes around 0.3 billion years, but the next two steps of around 500 K ... take first 0....

11 Short answer: The Sun will lose about half of its mass on the way to becoming a white dwarf. Most of this mass loss will occur in the last few million years of its life, during the Asymptotic Giant Branch (AGB) phase. At the same time the orbital radius of the Earth around the Sun will grow by a factor of two (as will those of the outer planets). Unfortunately for ...

11 The answer is: to a neutron star - possibly; to a black hole, no. The process whereby a neutron star is formed is known as an accretion induced collapse and is being seriously debated, especially in the case of white dwarfs that are born at the upper end of the "natural mass range" for white dwarfs and then accrete more mass as part of a binary system. An ...

10 Nobody really knows how type Ia supernovae detonate (or deflagrate) - there are a number of possibilities. The "vanilla" possibility is not what you state in your question, it is that the white dwarf accretes sufficient mass that it approaches the Chandrasekhar limit and becomes dense enough in its core to commence carbon burning. However, the emerging ...

10 Straightforwardly no. For a start there are almost no free protons inside a white dwarf. They are all safely locked away in carbon and oxygen nuclei (which are bosonic). There are a few protons near the surface, but not in sufficient numbers to be degenerate. Let us assume though that you were able to build a hydrogen white dwarf that had ...

10 The density of white dwarfs is not hypothetical, it can be measured. The short answer is that the density is so high that a stable star can only be supported by electron degeneracy. Sirius B is an example. The radius can be estimated by combining the luminosity of the white dwarf with its temperature estimated from spectroscopy. The mass can then be ...
9 I think the most important part of any answer is that, as Rob Jeffries said, "black dwarfs" aren't really a thing in the astronomical literature, and I suspect that's the reason that you get different answers about how long it takes to become one. Different people come up with different thresholds for becoming one. I would argue that 3000 K is too hot to ...

8 The one from 2014 is still the record holder I believe - in the sense that it is reasonably convincing that the unseen companion of the pulsar PSR J2222-0137 is consistent with being a white dwarf with a surface temperature below 3000 K. It is worth considering why such objects might be difficult to find. (1) It is only the highest mass white dwarfs that have ...

7 Proton degeneracy is not important, because its effect is much smaller -- much like nuclear particles in theory also are dictated by gravity, but the electromagnetic and nuclear forces are dominating, since they are much stronger. Proton degeneracy is weaker than electron degeneracy due to the far greater mass of the proton compared to the electron. The ...

7 More massive stars have a more massive core and produce more massive white dwarfs. The relationship between the initial mass of the main sequence star and the final mass of the white dwarf is monotonic, but not linear. The Sun is expected to produce a white dwarf with a mass of around $0.5 M_\odot$, whilst an $8M_\odot$ star is expected to produce a carbon/...

7 The answer is of order 1 million years to cool from a standard end-of-He-burning temperature of just over $10^8$ K to the top end of the white dwarf temperature range you give in your question. The details would depend exactly on the mass and composition of the white dwarf, and there are also some theoretical uncertainties in neutrino cooling rates. The ...

6 This question has two parts: Surface Temperatures. A very useful diagram which shows surface temperatures, and also gives you the temperature of any star you can observe, is the Hertzsprung-Russell Diagram, this one from le.ac.uk. As you can see, the yellow of our own sun places it in the 4.5 kK to 6 kK range, as noted in the question. This temperature ...

6 White dwarfs are objects the size of the Earth, but with a mass more similar to the Sun's. Typical internal densities are $10^{9}$ to $10^{11}$ kg/m$^{3}$. White dwarfs are born as the contracting core of asymptotic giant branch stars that do not quite get hot enough to initiate carbon fusion. They have initial central temperatures of $\sim 10^{8}$ K, that ...

6 The distance between Sirius A and B is between 8 and 31.5 AU, and even when Sirius A becomes a red giant it will still be above 6 AU. Such a distance is too large and does not allow Sirius B to accrete significant mass; almost all mass lost by Sirius A as a red giant and later an AGB star will escape into space. Sirius B may become a recurrent nova due to some ...

6 I'll address the three sub-questions individually, as a way of fully answering the title question. While there are indeed three planets orbiting PSR B1257+12, note that it's a pulsar, the compact remnant of an energetic event involving the system's progenitor. However, that event would likely have destroyed any planets that originally orbited the star, ...

6 WD 1856b is not more massive than the star it orbits. The radius of WD 1856b is much larger than its star's because its star is a white dwarf; but WD 1856b is much less massive. That gives the star a diameter a bit larger than Earth's while the planet's size is about that of Jupiter.
The star, WD 1856+534, is about 1/2 the mass of our Sun, or about 500 Jupiter ...

5 The smallest, precisely measured mass for a neutron star is now $1.174 \pm 0.004 M_{\odot}$ - Martinez et al. (2015). The theoretical lower limit is more like $0.1M_{\odot}$, but there are no obvious formation channels to produce such an object. See https://physics.stackexchange.com/questions/143166/what-is-the-theoretical-lower-mass-limit-for-a-...

5 No, the two limits are not the same - there is some range of masses that both white dwarfs (WDs) and neutron stars (NSs) can have. The Chandrasekhar mass limit suggests that WDs cannot be more massive than about $1.4\,M_\odot$. However, this is true for non-rotating WDs. Rapidly rotating WDs might be as massive as $2\,M_\odot$. Accretion in a binary ...

5 Intro for the uninformed: A standard candle is an important concept in astronomy, helping to map out distances in the Universe. Since the observed flux $F$ of a light source decreases with distance $r$ by a known factor ($r^2$), if we know its intrinsic luminosity $L$, we can calculate the distance. For large distances, where bright sources are needed, we ...

5 The material at the surface of a white dwarf is not degenerate. The "visible" surface is defined as where the optical depth exceeds some threshold, and this will occur at a low enough density that even at a few hundred kelvin, the ratio of the Fermi energy to the thermal energy is too low for significant degeneracy. In addition, at these temperatures, the ...

5 Fun question. Without doubt it would be very violent and spectacular, but not much is known about stellar collisions and only a few have ever been observed. Most stellar collisions happen due to tight orbits where the stars spiral in towards each other, or perhaps 3-or-more-body chaotic star systems with unstable orbits that lead to an impact. Space ...

5 Certainly a red-dwarf star can have enough energy for a planet around it to be in the goldilocks zone. There are some difficulties with red-dwarf stars and Earth-like planets. The planet would need to be very close to the star and, as a result, tidally locked. The orbital period would be quite short, so there would be no seasons, and one side of the ...

5 The "classic" Chandrasekhar mass is given by $$M_{\rm Ch} = 1.445 \left(\frac{\mu_e}{2}\right)^{-2}\ M_{\odot},$$ where $\mu_e$ is the number of mass units per electron ($\mu_e = 2$ for ionised carbon, oxygen or helium; $\mu_e = 1$ for hydrogen; $\mu_e = 56/26$ for iron-56). This assumes a white dwarf star of uniform, ionised composition and an equation of ...

5 This is a classic question in physical eschatology, seeing what happens if we extrapolate current understanding of astrophysics forward. The classic papers are (Dyson 1979) and (Adams & Laughlin 1997). Obviously, over very long timescales white dwarfs cool down, crystallize, and become "black dwarfs". This is fairly well-established from ...
## Introduction

Recent appreciation for the importance of microbes in human health and disease has prompted the generation of many metagenomic HTS (high throughput sequencing) datasets1. The increase in available HTS data from human tissues also represents an enormous resource, because many of these datasets include reads from tissue-resident microbes, which have been shown to play important roles in human disease, including tumorigenesis and the tumor response to therapy2,3,4,5,6,7,8.

The increase in available metagenomic HTS datasets prompted the development of many taxonomic classification and abundance estimation methods. A recent benchmarking study9 involving a dataset established by the Critical Assessment of Metagenome Interpretation (CAMI) challenge and the International Microbiome and Multiomics Standards Alliance (IMMSA) provides a comprehensive review of these methods. The study covers 20 taxonomic classifiers, including both alignment-based approaches (such as GATK PathSeq, blastn and MetaPhlAn210,11,12) and alignment-free approaches (such as Kraken, CLARK, KrakenUniq, Centrifuge, and Bracken13,14,15,16,17). Below, we provide an overview of the general approaches employed by metagenomic classification methods.

Early approaches for analyzing metagenomic sequencing data were alignment-based and used a reference database. Reads were primarily searched in GenBank18 through blastn11 or custom-built aligners such as GATK PathSeq10. Unfortunately, the growth of HTS data and reference databases has made read search and alignment using blastn or GATK PathSeq computationally infeasible on the largest datasets. For example, a recent study showing that microbial reads from tumors sequenced by The Cancer Genome Atlas (TCGA) can be used to build a classifier for cancer type19 used the alignment-free approach Kraken13 due to the large number of samples analyzed. Even though Kraken and other alignment-free tools are faster than the alignment-based tools20, these alignment-free tools are not as accurate. For example, another recent paper that used microbial reads from single cell RNA-seq (scRNA-seq) datasets to distinguish cell-type-specific intracellular microbes from extracellular and contaminating microbes21 had to use GATK PathSeq, because the relatively small number of microbial reads per cell was inadequate for available alignment-free methods to give accurate results. The distinct approaches taken by these two studies exemplify the tradeoffs inherent in the above methodologies.

Alignment-based methods can be sped up substantially by aligning reads to a compressed reference database or to a reference collection of sequences from marker genes, which are usually clade-specific, single-copy genes22,23. Since marker-gene based methods identify and use only a handful of marker genes on each genome, much of the data goes unused, making taxonomic quantification less accurate. Species with low abundance within the sample may be difficult to identify through marker gene methods, because the data may contain few reads originating from the marker genes. Alignment-free methods typically rely on exact string matching16,24, or k-mer (substrings of length k) "matches", to obtain a taxonomic assignment for every read. These methods either assign a read to the lowest taxonomic rank possible (determined by the specificity of the read's substrings, or k-mers)13,25,26,27, or to a pre-determined taxonomic level, i.e., genus, species, or strain14,28.
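To make the k-mer matching idea concrete, here is a deliberately simplified sketch (an editorial illustration, not the actual implementation of any of the tools cited above) that indexes every k-mer of a reference collection and assigns a read only when all of its k-mer hits point to a single genome:

```python
from collections import defaultdict

def build_kmer_index(genomes, k):
    """Map each k-mer to the set of genome IDs containing it."""
    index = defaultdict(set)
    for gid, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(gid)
    return index

def classify_read(read, index, k):
    """Assign a read to a genome when its k-mer hits are unambiguous."""
    hits = set()
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in index:
            hits |= index[kmer]
    if len(hits) == 1:
        return hits.pop()   # unambiguous assignment
    return None             # no hits, or hits in multiple genomes
```

Real tools refine this in many ways (e.g., resolving multi-genome hits to the lowest common ancestor rather than discarding them), but the exact-matching core is the same.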
Unlike marker-gene based methods, k-mer based applications can use all the input reads29. The large memory footprint needed to maintain the entire k-mer profile of each genome, for large values of k, can be reduced through hashing or subsampling the k-mers30,31,32,33. In addition to methods based on exact k-mer matches, it is also possible to assign metagenomic reads to bacterial genomes by employing sequence-specific features (e.g., short k-mer distribution or GC content)34,35,36,37,38, although methods that employ this approach are typically not very accurate at species-level or strain-level assignment. As a result, these methods are typically insufficient for strain-level applications39, e.g., to identify mixed infections caused by multiple strains of a bacterial species40,41,42, to distinguish pathogenic strains from non-pathogenic strains43, or to track food-borne pathogens44.

Most of the methods described above and covered in the aforementioned benchmarking study9 analyze each read without consideration of how the reads are sampled. Provided that the sequence data to be analyzed are genomic DNA, the distribution of HTS reads from a given species or strain should be roughly uniform. This principle is used in several methods for isoform abundance estimation45,46,47 and is effective even though the distribution of reads across an isoform may not be uniform in practice. In the context of metagenomic abundance estimation, however, the uniform coverage principle is under-utilized. One exception is the network flow based approach, utilized, for example, by ref. 48, which does take uniform coverage into account; however, it is relatively slow due to the hardness of the underlying algorithmic problem. Another method that utilizes the near uniformity across k-mers within a genome is ref. 15, which runs faster but is also less accurate.

In addition to the metagenomic species identification and quantification methods summarized above, there are also tools to determine the likely presence of a long genomic sequence (e.g., the complete or partial genome of a bacterial species) in a given metagenomic sample49,50,51,52,53. Even though these tools solve an entirely different problem, methodologically they are similar to k-mer based metagenomic identification and quantification tools such as refs. 13,14, in the sense that they build a succinct index on the database, which is composed of the metagenomic read collection, and they query this index without explicit alignment. However, because of their design parameters, these tools cannot perform abundance estimation.

In this paper, we describe CAMMiQ (Combinatorial Algorithms for Metagenomic Microbial Quantification), a computational approach to maintain and manage a collection of m (bacterial) genomes $\mathcal{S}=\{s_1,\ldots,s_m\}$, each assembled into one or more strings/contigs, representing a species, a particular strain of a species, or any other taxonomic rank. CAMMiQ constructs a data structure which can answer queries of the following form: given a set $\mathcal{Q}$ of HTS reads obtained from a mixture of genomes or transcriptomes, each from $\mathcal{S}$, identify the genomes in $\mathcal{Q}$, and, in case the reads are genomic, compute their relative abundances. Our data structure is very efficient in terms of its empirical querying time and is shown to be very accurate on simulations for which the ground truth answers are known.
The distinctive feature of our data structure is its utilization of substrings that are present in at most c genomes (c > 1) in $\mathcal{S}$; in this paper, we focus on c = 2, and call the resulting strings doubly-unique substrings. CAMMiQ is thus different from available methods, which set c = 1 to compare genomes via their shortest unique substrings54,55, or perform metagenomic analysis by employing k-mers unique to each genome13,14,15. By considering substrings that are present in c = 2 (or possibly more) genomes, CAMMiQ utilizes a higher proportion of reads and can accurately identify genomes at the subspecies/strain level. The choice of c = 2 is sufficiently powerful for the datasets we considered; however, our approach can be generalized to any fixed value of c ≥ 2.

Another distinctive feature of our data structure is its use of variable-length substrings rather than fixed-length k-mers. Because any extension of a shortest unique substring is also unique, CAMMiQ only maintains the shortest of these overlapping unique substrings to maximize utility. By being flexible about substring length, CAMMiQ potentially has a larger selection of substrings from which to choose; because it utilizes the shortest unique substrings, it maximizes possible coverage.

To assign each read in $\mathcal{Q}$ that includes an almost-unique substring (i.e., a string present in at most c genomes) to a genome, our data structure solves an integer linear program (ILP) that simultaneously infers which genomes are present in $\mathcal{Q}$ and, if the reads are genomic, the relative abundances of the identified genomes. Specifically, the objective of the ILP is to identify a set of genomes in which the coverage of the almost-unique substrings in each genome is (approximately) uniform.

Our final contribution is a set of conditions sufficient to identify and quantify genomes in a query correctly, through the use of unique substrings/k-mers, provided the reads are error-free. Although this is a purely theoretical result, to the best of our knowledge it has not been applied to metagenomic data analysis, and it is valid for CAMMiQ in the case c = 1 as well as for other unique substring based methods such as CLARK and KrakenUniq. Setting c = 2 for CAMMiQ is advised for cases where these conditions are not met.

On the experimental side, we show that CAMMiQ is not only much faster but also more accurate than the mapping-based GATK PathSeq, which, as mentioned earlier, was used on scRNA-seq data obtained from monocyte-derived dendritic cells (moDCs) infected with distinct Salmonella strains21, where accuracy was the top priority. The application to single-cell data is important because, in studies of the human microbiome, it is of interest to know which cells are infected with which microbial strains, especially to distinguish between benign commensals and pathogenic variants of bacteria such as E. coli. Using current sequencing technologies, single-cell nucleotide data are primarily RNA-seq rather than DNA-seq, which is why we focus on an RNA-seq case study. Returning to the established problem of analyzing bulk DNA-seq data, we demonstrate the comparative advantage of CAMMiQ against the top performing alignment-based and alignment-free metagenomic classification methods according to the above-mentioned benchmarking study9, on the very same (CAMI and IMMSA) dataset.
We additionally show that CAMMiQ is uniquely capable of handling particularly challenging microbial strains we derived from the NCBI RefSeq database.

## Results

Below, we first give a brief overview of the CAMMiQ algorithm. Then we describe the index datasets, the simulated and real query sets, as well as the alternative computational methods we used to benchmark CAMMiQ's performance. We next demonstrate CAMMiQ's comparative accuracy against alternative metagenomic analysis methods on the two species-level datasets we have: the first is the CAMI and IMMSA benchmark (i.e., species-level-all) index dataset and the second is the species-level-bacteria index dataset. For these two datasets, we provide not only accuracy figures for the tools benchmarked but also the computational resources they use. Additionally, we demonstrate the maximum potential advantage that could be offered by CAMMiQ through its use of doubly-unique, variable-length substrings on our species-level-bacteria index dataset. We then demonstrate CAMMiQ's performance on our strain-level index dataset. Finally, we demonstrate CAMMiQ's performance on real metatranscriptomic query sets through its use of our subspecies-level index dataset. The results of CAMMiQ in this setup were compared against those of the GATK PathSeq tool10, which was utilized by the original study on this dataset21, as well as the blastn method11, which possibly offers the most accurate (albeit slow) approach for the relevant purpose.

### Overview of CAMMiQ indexing and querying procedure

As with a typical metagenomic classification or profiling tool, CAMMiQ involves two steps, namely index construction and querying. In the index construction step, CAMMiQ is given a set $\mathcal{S}=\{s_i\}_{i=1}^{m}$ of m genomes or contigs, each labeled with an ID representing the taxonomy of that genome. We call $\mathcal{S}$ an index dataset below. By the end of this step, CAMMiQ returns the collection of sparsified shortest unique substrings and shortest doubly-unique substrings on each genome si in $\mathcal{S}$ in a compressed binary format, along with other meta information involving the input index dataset, which jointly compose its index on the dataset. CAMMiQ reuses its index in the query step. In the query step, CAMMiQ is given a collection of reads $\mathcal{Q}=\{r_j\}_{j=1}^{n}$ of varying length, and efficiently identifies a set of genomes $\mathcal{A}=\{s_1,\cdots,s_a\}\subset\mathcal{S}$ and their respective abundances $p_1,\cdots,p_a$ that "best explain" $\mathcal{Q}$. We call $\mathcal{Q}$ a query or query set below.
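To convey the flavor of the optimization involved in the query types defined in the next paragraph, the toy model below formulates the "smallest subset of genomes explaining all observed substrings" question as a minimum hitting set ILP. This is an editorial sketch using the open-source PuLP modeling library with made-up data; CAMMiQ's actual formulation (solved with CPLEX/Gurobi, as noted later) is considerably more involved:

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

# Toy instance: each observed substring maps to the (<= 2) genomes containing it.
hits = {
    "sub1": {"gA"},          # unique substring
    "sub2": {"gA", "gB"},    # doubly-unique substring
    "sub3": {"gB", "gC"},
}
genomes = set().union(*hits.values())

prob = LpProblem("min_hitting_set", LpMinimize)
x = {g: LpVariable(f"x_{g}", cat=LpBinary) for g in genomes}  # 1 if genome selected

prob += lpSum(x.values())                    # objective: fewest genomes
for sub, gs in hits.items():                 # every substring must be explained
    prob += lpSum(x[g] for g in gs) >= 1

prob.solve()
print([g for g in genomes if x[g].value() == 1])   # e.g., ['gA', 'gB']
```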
Depending on the specific application, a user can select to return (i) $\mathcal{A}_1\subseteq\mathcal{S}$, the set of genomes such that each includes at least one shortest unique substring that also occurs in some read rj in the query $\mathcal{Q}$; (ii) $\mathcal{A}_2\subseteq\mathcal{S}$, the smallest subset of genomes in $\mathcal{S}$ which include all shortest unique and doubly-unique substrings that also occur in some read $r_j\in\mathcal{Q}$; or (iii) $\mathcal{A}_3\subseteq\mathcal{S}$, the smallest subset of $\mathcal{S}$ which again includes all shortest unique and doubly-unique substrings that also occur in some read $r_j\in\mathcal{Q}$, with the additional constraint that the "coverage" of these substrings in each genome $s_i\in\mathcal{A}_3$ is roughly uniform. In the last case, CAMMiQ also computes the relative abundance of each genome si in $\mathcal{A}_3$. For all three query types, CAMMiQ first identifies, for each read rj, all unique and doubly-unique substrings it includes; it then assigns rj to the one or two genomes from which these substrings possibly originate. To compute $\mathcal{A}_1$, CAMMiQ simply returns the collection of genomes receiving at least one read assignment. To compute $\mathcal{A}_2$, CAMMiQ solves a hitting set problem through an ILP, where genomes form the universe of items, and indexed strings that appear in query reads form the sets of items to be hit. To compute $\mathcal{A}_3$, CAMMiQ solves, again through an ILP, the combinatorial optimization problem that asks to minimize the variance among the number of reads assigned to each indexed substring of each genome. The solution indicates the set of genomes in $\mathcal{A}_3$ along with their respective abundances.

### Datasets

To evaluate the overall performance of CAMMiQ, we performed four sets of experiments, each with a distinct index dataset (all based on NCBI's RefSeq database56) and a distinct collection of queries.

1. (i) The first, species-level-all dataset is the most comprehensive index dataset, which includes one complete genome from each bacterial, viral and archaeal species in NCBI's RefSeq database, resulting in a total of m = 16,418 genomes. This dataset was established for the CAMI and IMMSA repository used in recent benchmarking studies of metagenomics classification and profiling tools9,57. There are 16 query sets from this repository used in these two studies, 8 from CAMI and 8 from IMMSA. Notably, both the CAMI and the IMMSA query sets include genomes that are not present in the species-level-all index dataset. In fact, the CAMI query sets include only a small proportion of genomes from the index dataset - the majority of the reads in these queries represent unknown species or simulated strains "evolved" from known species that are not in the species-level-all index dataset. See Supplementary Notes 5.4.1 and 5.4.2 for a detailed description of these queries. We used these query sets to demonstrate the comparative performance of CAMMiQ against the best performing methods according to ref. 9, namely Kraken2 (ref. 58), KrakenUniq15, CLARK14, Centrifuge16, and Bracken17; please see Supplementary Note 6 for the specific parameters and setup used for each of these tools.
Since the genomes in the query sets may not all be included in the index dataset, we employed query type $\mathcal{A}_2$ to evaluate CAMMiQ's performance against the aforementioned tools.

2. (ii) We compiled our next, species-level-bacteria index dataset to evaluate the species-level performance of CAMMiQ, this time across one representative complete genome from each of the m = 4122 bacterial species in (an earlier version of) NCBI's RefSeq. This index dataset enabled us to measure the performance of CAMMiQ's type $\mathcal{A}_3$ queries against the tools mentioned above plus MetaPhlAn212, a marker-gene based profiling tool. We simulated 14 query sets for this experiment with varying levels of "difficulty" across the genomes. These include 10 challenging (marked Least) and 4 easier queries (marked Random). See Supplementary Note 5.4.3 and Supplementary Fig. 2 for a detailed description of these queries.

3. (iii) Our next, strain-level index dataset is smaller: it includes the complete set of m = 614 human gut related bacterial strains from ref. 59, for the purpose of evaluating CAMMiQ's strain-level performance. We again employed type $\mathcal{A}_3$ queries of CAMMiQ to compare it against the above-mentioned tools. We simulated 4 queries for this index dataset with varying levels of "difficulty". See Supplementary Note 5.5 for details.

4. (iv) We finally evaluated CAMMiQ on a dataset from another study60, which involved metatranscriptomic reads from 262 single human immune cells (monocyte-derived dendritic cells, moDCs) deliberately infected with two distinct strains of the intracellular bacterium Salmonella enterica, and 80 uninfected cells used as negative controls. A recent study21 applied the GATK PathSeq tool10 to these metatranscriptomic read sets to validate the presence of the Salmonella genus in each cell. To demonstrate CAMMiQ's ability to distinguish cells infected with specific strains of Salmonella in a much shorter time than GATK PathSeq, we applied its query types $\mathcal{A}_1$ and $\mathcal{A}_2$ to these metatranscriptomic read sets. Since these are not genomic reads, our query type $\mathcal{A}_3$ could not be used. The index dataset we used for these queries is at the subspecies level; it consists of m = 3395 complete bacterial genomes, where each species is represented by a handful of strains. This index dataset was generated to reduce the sampling bias observed in the RefSeq database, which, e.g., includes more than 300 strains from the genus Salmonella. CAMMiQ's accuracy was compared mainly against PathSeq (a mapping-based, thus relatively slow, method) for this experiment, since PathSeq was the preferred method of the original study due to its high levels of accuracy. Further details on the real query sets can be found in Supplementary Note 5.6.

A summary of the datasets used in our experiments can be found in Table 1. Additional details on the four index datasets can be found in Supplementary Notes 5.1-5.3. As will be demonstrated, CAMMiQ's performance on these query sets is superior to all alternatives in almost all scenarios we tested.

### Precision and recall in read classification across all species level queries

We tested CAMMiQ's species-level performance on both the CAMI and IMMSA (i.e., species-level-all) and the species-level-bacteria datasets, and compared it against the best performing alternatives according to ref. 9.
Results based on CAMI and IMMSA are summarized in Table 2; results based on the species-level-bacteria dataset are summarized in Table 3. Perhaps the most widely-used performance measures to benchmark metagenomic classifiers are the proportion of reads correctly assigned to a genome among (i) the set of reads assigned to some genome, i.e., precision, and (ii) the full set of reads in the query, i.e., recall14. In Table 2, panel A, as well as Table 3, panel A, we report the selected tools' precision in read classification. Then, in Table 2, panel B and Table 3, panel B, we report these tools' recall in read classification.

Note that the above tables do not report the read classification precision and recall values for MetaPhlAn2. This is partially due to MetaPhlAn2's use of an index based on a very different (predetermined) and much smaller database of marker genes. As a consequence, MetaPhlAn2 assigns very few reads to the marker genes in its database and thus appears to have very low recall (and possibly higher precision). This would not accurately reflect MetaPhlAn2's performance since, unlike the other tools we benchmarked, MetaPhlAn2 does not aim to assign as many reads as possible to genomes correctly, but rather aims to identify the distinct genomes in a metagenomic sample; see Supplementary Note 6 for details. Additionally, note that for our bookkeeping purposes, any read assigned to a taxonomic level strictly higher than the species level by Kraken2, KrakenUniq, and Centrifuge is considered to be not assigned. This likely increases their reported precision but may decrease their recall.

In all our species-level tests, we used CAMMiQ's default parameter settings of $L_{\min}=26$ and $L_{\max}=50$ to compare it against Kraken2, KrakenUniq, Bracken, CLARK, and Centrifuge, all using a k-mer length of 26; see Supplementary Note 6 for details on parameter settings. Results based on alternative parameter settings can also be found in Supplementary Note 8 and, in particular, Supplementary Table 6. In all of these experiments, we used the same collection of genomes for establishing the index for each of the five tools (with the exception of MetaPhlAn2, which uses its own predetermined index): the results in Table 3 are based on our species-level-bacteria index dataset and the results in Table 2 are based on our species-level-all index dataset.

Compared with the species-level-bacteria queries, which are composed of highly similar genomes, the CAMI and IMMSA queries are, in principle, less challenging, since reads that did not map to a unique genome were excluded from these queries at the time they were compiled57. Even though the RefSeq database has been significantly updated since these queries were compiled, almost all reads in these queries still map to a unique genome. Having said that, reads in these queries may originate from genomes outside of the species-level-all index dataset - including plasmids from these species that have not been indexed. It is entirely possible that such reads include one or more unique or doubly-unique substring(s) indexed by CAMMiQ, and are thus assigned to the wrong genome. As can be seen in Table 2, panel A, CAMMiQ offered the best precision in read classification for all IMMSA queries; interestingly, the precision values for Centrifuge were much lower than the alternatives'. CAMMiQ was arguably the best in recall in read classification on the IMMSA queries as well, as can be seen in Table 2, panel B.
However, reads that originate from genomes outside of the index database were likely not utilized by CAMMiQ, reducing its comparative advantage against KrakenUniq and CLARK, which may still assign such reads to a genome; this would increase their recall, while possibly reducing their precision.

As can be seen in Table 3 and Supplementary Table 5, CAMMiQ achieved the best recall and F1 score (see Supplementary Note 7 for a definition), and the second best precision, for the 11 species-level-bacteria query sets (the three queries with uneven coverage were excluded). Its precision and recall were particularly impressive for the first 7 challenging queries (labeled with the prefix "Least"), where CAMMiQ was an order of magnitude better than the alternatives in terms of both measures. On these queries, tools other than CAMMiQ assigned only a small proportion of reads to genomes at the species level. This is because none of them employ doubly-unique substrings to differentiate species from the same genus in the index dataset. The only exception is Centrifuge, which achieved the best classification precision and the second best recall: for example, on the 3 hypothetically error-free queries (labels ending with -1 in Table 3), Centrifuge (in addition to CAMMiQ and CLARK) achieved 100% precision. However, Centrifuge's classification performance deteriorated when genomes in queries were likely not present in the corresponding index dataset (Table 2, panels A and B). Note that, in principle, Kraken2, KrakenUniq and Centrifuge could assign reads to the correct taxonomy higher than the species level. However, as mentioned above, only reads that were assigned to the correct species were considered to be true positives for this benchmark.

### Precision and recall in genome identification on IMMSA and CAMI queries

On the CAMI and IMMSA queries, CAMMiQ correctly identified more genomes than the alternative tools with the same abundance cutoff of 0.01% (we consider a genome to have been identified by a tool only if the tool reports its abundance to be ≥0.01% of the total abundance of all genomes), resulting in superior recall values for genome identification (Table 2, panel C; note that recall in genome identification represents the fraction of correctly identified genomes among all genomes in a query set). This is primarily due to CAMMiQ's use of doubly-unique substrings in its query type $\mathcal{A}_2$. Compared to its recall performance, CAMMiQ achieved even better precision figures relative to the alternatives (Table 2, panel D; note that precision in genome identification represents the fraction of correctly identified genomes among the set of genomes identified by a given tool), due to fewer false positive identifications. The fact that CAMMiQ performs best particularly with respect to precision indicates that genomes not present in the index dataset have the least impact on CAMMiQ in comparison to the other tools. Note that by postprocessing the output of Kraken2, Bracken manages to improve on the number of identified genomes, and achieves figures comparable to CAMMiQ's. However, it does not reduce the large number of false positive genome identifications produced by Kraken2 when unknown genomes, or genomes outside the index dataset, are present in the query sample.
### Genome identification and quantification performance on species-level-bacteria queries

Next, we evaluated the number of genomes correctly identified by each tool (for MetaPhlAn2, the genus corresponding to each genome), as well as the L1 and L2 distances between the true abundance profile and the predicted abundance profile, on the 14 queries involving our species-level-bacteria dataset, including the 3 queries with GC bias. As can be seen in Table 3, panels C-E, and Supplementary Table 5, CAMMiQ clearly offered the best performance in both identification and quantification. It correctly identified all genomes present in each one of the 14 queries and was not impacted by uneven read coverage or by the unindexed genome we added to the query Random-20-lognormal-a.g. Importantly, CAMMiQ consistently returned very few false positive genomes for the most challenging queries, and at most one false positive genome for the remaining 4 queries.

Compared to CAMMiQ, the other tools reported larger numbers of false negatives in these 14 queries (again, we consider a genome to be a "negative" if its reported abundance level is ≤0.01% of the total abundance of all genomes), in particular in the 10 challenging queries (labeled with the prefix "Least") with minimal unique substrings (i.e., L-mers). Among them, CLARK and Centrifuge offered the best false negative performance, especially on error-free queries. As can be expected, MetaPhlAn2 had the worst performance with respect to false negatives, very likely due to the incompleteness of its marker gene list (we used the latest set of marker genes, mpa_v20_m200, in MetaPhlAn2). This also led to relatively larger L1/L2 distances than the other tools, even for the remaining 4 (easier) queries. Kraken2 and KrakenUniq were also prone to having false negatives, though fewer than MetaPhlAn2. Bracken, in general, could correctly identify a few more genomes than Kraken2, and this improvement in its identification performance also leads to better quantification results (see below).

CAMMiQ performs even better with respect to the number of false positive genomes, as demonstrated by its F1 score distribution (see Supplementary Table 5). The alternative tools all returned a large number of false positives on the species-level-bacteria queries, especially the first 10 challenging queries, even though all reads in these queries were sampled from (some genome in) the index dataset (see Table 3, panel C). Among them, Centrifuge and Bracken usually performed better on the 10 challenging queries with fewer 'unique' genomes, while KrakenUniq and CLARK performed better on the remaining 4 (easier) queries. Kraken2 showed the worst performance with respect to false positives: it output more than a third of the genomes from the index dataset even for the three error-free queries. In many of the datasets, these false positives were eliminated by Bracken's postprocessing of Kraken2's output; unfortunately, in other query datasets, e.g., Random-20-uniform and Random-100-uniform, Bracken introduced additional false positives. MetaPhlAn2 identified only a limited number of genomes (and few true positive genomes) in all queries in general, so it had a performance comparable to CAMMiQ's with respect to false positives. However, its F1 scores were not as good as CAMMiQ's (see Supplementary Table 5). Note that CAMMiQ not only correctly identified all genomes, but also predicted their abundances reasonably close to the true values.
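For reference, the L1 and L2 quantification errors reported here are, presumably, the standard vector distances between the true and predicted abundance profiles; a minimal sketch of that computation (an editorial illustration, not the paper's evaluation code):

```python
def l1_l2_error(truth, pred):
    """L1 and L2 distances between two abundance profiles (dicts: id -> fraction)."""
    ids = set(truth) | set(pred)
    diffs = [truth.get(i, 0.0) - pred.get(i, 0.0) for i in ids]
    l1 = sum(abs(d) for d in diffs)
    l2 = sum(d * d for d in diffs) ** 0.5
    return l1, l2
```

Under this convention, a tool that finds the right genomes but reports near-zero abundances for them accumulates an L1 error close to 1, which matches the behavior described below for Kraken2 and KrakenUniq.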
As can be seen in Table 3, CAMMiQ outperformed all other tools on both L1 and L2 errors, typically offering a factor of 3-4x improvement over the second best alternative. Interestingly, even when the coverage across each genome was non-uniform, CAMMiQ's $\mathcal{A}_3$ type of query was only mildly impacted. As noted earlier, on the 10 challenging queries (especially those with sequencing errors), all alternative tools except MetaPhlAn2 output hundreds of false positive genomes. As a consequence, their predictions for the abundances of the true positive genomes were smaller than the true abundance values. This is particularly the case for Kraken2 and KrakenUniq: even though they identified the majority of the true positive genomes correctly, their reported abundance values were all close to 0; this results in L1 distances very close to 1.

### Evaluation of computational resources on species level queries

We compared the running time and memory usage of CAMMiQ, Kraken2/Bracken, KrakenUniq, CLARK, MetaPhlAn2, and Centrifuge in building the index and responding to the queries; see Table 4. As can be seen, CAMMiQ performs better in running time than all alternatives - including those tools that aim to index all unique k-mers (KrakenUniq and CLARK) or all substrings (Centrifuge) - with respect to both query time and index construction time. The only exception is Kraken2 (MetaPhlAn2 uses a pre-built index and so cannot be compared against the others with respect to index construction time); however, Kraken2's overall accuracy is worse than the others' across the species-level queries. Since MetaPhlAn2 uses a pre-built index (see Supplementary Note 6), it avoids the expensive index construction process. This, however, results in many false negatives (see the subsections "Genome identification and quantification performance on species-level-bacteria queries" and "Performance of CAMMiQ at the strain level"). CAMMiQ also supports pre-built indices. Compared to the other tools and methods, the sizes of these pre-built indices are much smaller (Table 4, panel B), due to the sparsification of unique and doubly-unique substrings, allowing convenient transfer and fast downloading. Note that we do not report the time for loading the index into memory for any of the tools, since this is performed only once. All of our experiments were run on a Linux server equipped with 40 Intel Xeon E7-8891 2.80 GHz processors, with 2.5 TB of physical memory and 30 TB of disk space. The ILP solver used by CAMMiQ in the initial implementation is IBM ILOG CPLEX 12.9.0. We have also ported the code to use the ILP solver Gurobi 9.1.0.

### Assessing the use of variable-length and doubly-unique substrings in species-level-bacteria queries

Due to its unique algorithmic features, CAMMiQ outperforms the available alternatives on the CAMI, IMMSA and species-level-bacteria query sets. A key question is: what is the maximum potential improvement in performance one can expect through the use of (i) variable-length substrings as opposed to fixed-length k-mers, and (ii) doubly-unique substrings in addition to unique substrings? Here, we evaluate both of these algorithmic features in the context of the species-level-bacteria dataset we constructed (see Table 1).
For that, we compare the proportion of L-mers (for read length L = 100) from each genome si in our species-level-bacteria index dataset that are unique or doubly-unique (and thus are utilized by CAMMiQ) with the proportion of L-mers that include a unique k-mer (and thus can be utilized by CLARK and others) for k = 30. Figure 1a summarizes our findings: on the horizontal axis, the genomes are sorted with respect to the proportion of unique and doubly-unique L-mers they have; the vertical axis depicts this proportion (from 0.0 to 1.0). The figure shows, for each genome on the horizontal axis, the proportion of unique L-mers, doubly-unique L-mers, the combination of unique and doubly-unique L-mers (all utilized by CAMMiQ), as well as the L-mers that include a unique k-mer (utilized by, e.g., CLARK). As can be seen, roughly three quarters of all genomes in this dataset are easily distinguishable, since a large fraction of their L-mers include a unique k-mer. However, about a quarter of the genomes in this dataset can benefit from the consideration of doubly-unique substrings, especially when their abundances are low. In particular, 66 of these 4122 genomes/species have extremely low proportions (each ≤1%) of unique 100-mers. At the extreme, the species Francisella sp. MA06-7296 does not have a single unique 100-mer, and the species Rhizobium sp. N6212 does not have any 100-mer that includes a unique 30-mer (in fact, any unique substring of length $\le L_{\max}=50$). These two species cannot be identified by, e.g., CLARK in any microbial mixture, regardless of their abundance values.

Figure 1b depicts, for L = 100, the inverse proportion of doubly-unique L-mers in comparison to unique L-mers among the 50 genomes that have the lowest proportion of unique L-mers. The inverse proportion of unique or doubly-unique L-mers for a genome corresponds to the number of reads that need to be sampled from that genome so that, on average, the sample includes one read that would be assigned to the correct genome. In the absence of read errors, this guarantees correct identification of the corresponding genome in the query. Note that, in half of these 50 genomes, almost all L-mers are doubly-unique. This implies that any query involving one or more of these genomes could only be resolved by CAMMiQ and no other tool.

We further assessed whether the usage of unique and doubly-unique substrings can lead to robust genome identification and quantification performance in practice, by evaluating the distribution of these substrings across the genome. In principle, the more evenly these substrings are distributed across a genome, the less likely it is that CAMMiQ's quantification performance is impacted by queries composed of genomes with small alterations relative to the corresponding index genomes. As can be seen in Fig. 1c, d, unique and doubly-unique substrings span the entire genome for most of the species in our species-level-bacteria index dataset, and are not significantly biased towards any region functionally annotated by NCBI (i.e., gene, CDS, ncRNA, rRNA, tRNA, tmRNA or plasmid). Even when the numbers of unique or doubly-unique substrings in a genome are relatively small (for example, the last 3 genomes in Fig. 1d), they are still well distributed, helping CAMMiQ with that genome's identification as well as quantification.
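For readers who wish to reproduce this kind of tally on their own data, here is a rough editorial sketch (simplified to fixed-length k-mers standing in for the variable-length L-mer analysis above) that classifies each genome's k-mers as unique or doubly-unique:

```python
from collections import Counter, defaultdict

def kmer_multiplicity(genomes, k):
    """For each k-mer, count the number of distinct genomes containing it."""
    owners = defaultdict(set)
    for gid, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            owners[seq[i:i + k]].add(gid)
    return {kmer: len(gids) for kmer, gids in owners.items()}

def uniqueness_profile(genomes, k):
    """Per genome: fraction of its k-mers that are unique / doubly-unique."""
    mult = kmer_multiplicity(genomes, k)
    profile = {}
    for gid, seq in genomes.items():
        counts = Counter(mult[seq[i:i + k]] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        profile[gid] = (counts[1] / total, counts[2] / total)
    return profile
```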
We would like to note here that even though some genomes have very few unique substrings, implying that they would be difficult to identify through the use of alternative methods, CAMMiQ can identify and quantify them accurately because of their (well distributed) doubly-unique substrings. Consider, for example, the last genome in Fig. 1d, Rhizobium sp. N1341, in which the only unique substrings are located on the plasmids. Since there are sufficiently many doubly-unique substrings on the chromosome, this species can still be identified and quantified by CAMMiQ, through the $\mathcal{A}_2$ or $\mathcal{A}_3$ type of query.

### Performance of CAMMiQ at the strain level

In the next experiment, we evaluated CAMMiQ's performance (with default parameters) on queries composed from our strain-level dataset, which consists of 614 human gut related genomes of bacterial strains from 409 species59, as described in Supplementary Note 5.2. As can be seen in Table 5, CAMMiQ managed to identify and accurately quantify all strains in the queries HumanGut-random-100-1 and HumanGut-random-100-2, and >96% of the strains in the other two queries, with almost no false positives. The other tools benchmarked against CAMMiQ led to either more false negative genomes (KrakenUniq, CLARK, MetaPhlAn2) or more false positive identifications (Kraken2, Centrifuge). Furthermore, their quantification performance (Table 5, panel B) is worse than CAMMiQ's.

### Performance of CAMMiQ on real single-cell metatranscriptomic queries

Our final set of experiments involves "real" metatranscriptomic reads from human monocyte-derived dendritic cells (moDCs)60. Because CAMMiQ's most powerful type $\mathcal{A}_3$ query is not suitable for RNA-seq data (due to the high variance in read coverage), we employed $\mathcal{A}_1$ and $\mathcal{A}_2$ queries. We remind the reader that $\mathcal{A}_1$ only uses unique substrings in query reads and returns the genomes in the index for which there is at least one such substring. On the other hand, $\mathcal{A}_2$ computes the smallest set of genomes in the index that include all unique or doubly-unique substrings across the query reads. Each query was composed of all high quality, non-human scRNA-seq reads from the corresponding single cell60. To guarantee this, we filtered out all scRNA-seq reads which (i) possibly originate from the human genome, or (ii) have low sequence quality and "complexity", or (iii) map to 16S or 23S ribosomal RNAs on the two Salmonella genomes (to avoid incorrect assignment of reads due to "barcode hopping"). Following the original study60, we categorized each cell into one of 5 groups: infected cells that were confirmed to contain the (1) STM-LT2 or (2) STM-D23580 strain of intracellular Salmonella; bystander cells that were exposed to the (3) STM-LT2 or (4) STM-D23580 strain, but confirmed to not contain intracellular Salmonella; and (5) cells that were mock-infected and sequenced as controls. For each query, we compared the number of reads CAMMiQ assigned uniquely to the STM-LT2 or STM-D23580 genomes against those aligned and assigned either by the GATK PathSeq tool10 or blastn11 (see Supplementary Note 9). Figure 2 summarizes our results on this dataset. In Fig. 2a, we demonstrate that, compared to the GATK PathSeq approach, CAMMiQ's $\mathcal{A}_1$ type queries were more sensitive with respect to read assignment.
On average, CAMMiQ identified (roughly) an order of magnitude more unique STM-LT2 or STM-D23580 reads in each cell, demonstrating its potential to better identify intracellular organisms at the subspecies or strain level. Note that CAMMiQ’s performance is comparable to or slightly better than that of blastn. However, CAMMiQ is several orders of magnitude faster than blastn or GATK PathSeq. (CAMMiQ only took a total of 65.3s for computing $$\mathcal{A}_1$$ type queries and an additional 2.5s for computing $$\mathcal{A}_2$$ type queries on the entire query set, outperforming GATK PathSeq, which required 29628.1s, and blastn, which is typically even slower.) The abundances of Salmonella reported by each of the three tools (measured by unique read counts) were substantially higher in the infected cells compared to the mock-infected controls. More importantly, cells known to be infected with or exposed to a particular strain indeed include significantly more reads from that strain. Interestingly, CAMMiQ as well as blastn reported that cells infected with or exposed to a particular strain also contain reads unique to the other strain. This is possibly due to sequencing errors or incorrect cell assignments for these reads. In Fig. 2b, we compare CAMMiQ’s $$\mathcal{A}_2$$ type queries with its $$\mathcal{A}_1$$ type queries (as well as GATK PathSeq and blastn) with respect to the number of cells they correctly identify to include the STM-LT2 or STM-D23580 strains. For that, we vary the minimum number of reads that need to be identified by each tool to report a given strain, and for each such value we indicate how many cells are reported to include the STM-LT2 strain (on the vertical axis) vs the STM-D23580 strain (on the horizontal axis). With the exception of the third subpanel, a method with a plot closer to the diagonal is less sensitive. As can be seen, CAMMiQ’s $$\mathcal{A}_2$$ type queries are more sensitive than not only its $$\mathcal{A}_1$$ type queries but also GATK PathSeq and blastn. However, they also introduce some potential false positive calls (e.g., in the third subpanel, corresponding to the controls). This could be due to additional reads utilized by $$\mathcal{A}_2$$ queries being impacted by read errors or incorrect assignments of these reads to cells.

## Discussion

We have introduced CAMMiQ, a new computational tool to identify microbes in an HTS sample and to estimate the abundance of each species or strain. CAMMiQ is based on a principled approach that starts by formally defining the following algorithmic problem, which has not been fully addressed by any available method. Given a set $$\mathcal{S}$$ of distinct genomic sequences of any taxonomic rank, build a data structure so as to identify and quantify genomes in any query composed of a mixture of reads from a subset of $$\mathcal{S}$$. CAMMiQ is particularly designed to handle genomes that lack unique features; for that, it reduces the aforementioned identification and quantification problems to a combinatorial optimization problem that assigns substrings with limited ambiguity (i.e., doubly-unique substrings) to genomes so that, in its most general $$\mathcal{A}_3$$ type query, each genome is “uniformly covered”. Uniform coverage is a simplifying assumption we employ in our theoretical analysis, since which genomes are represented in a query is not known in advance.
In practice, the coverage for genomic sequences might be biased by GC content61,62. We do not employ this assumption in the CAMMiQ implementation for $$\mathcal{A}_1$$ and $$\mathcal{A}_2$$ type queries, which are more suitable for transcriptomic sequences. Our experiments on the Salmonella scRNA-seq dataset indeed show that CAMMiQ delivers good results on scRNA-seq queries even though the reads are skewed by variable expression and the selection biases of single-cell technology. Because each such substring has limited ambiguity, the resulting combinatorial optimization problem can be efficiently solved through the existing integer program solvers IBM CPLEX and Gurobi. One potential limitation of CAMMiQ is that it relies on a database of reference genomes. In the context of medical microbiology this is a reasonable assumption, since virtually all clinically-relevant microbes detected in new patients are known and have a similar genome sequenced and available in RefSeq. The reliance on a reference database is more problematic in the context of studying environmental samples, in which new and rare taxa might be found by methods that do not rely on reference genomes. Our results on the CAMI benchmark data set provide reassurance that CAMMiQ performs well even when many genomes and plasmids are absent from the reference database. Another potential limitation is that the memory required by CAMMiQ index construction is relatively high. However, CAMMiQ supports pre-built indices on commonly used databases for metagenomic studies, e.g., (the latest version of) the RefSeq bacteria, viruses and archaea database. Compared to the other tools and methods, the sizes of these pre-built indices are much smaller, due to the sparsification of unique and doubly-unique substrings, allowing convenient transfer and fast downloading. The pre-built CAMMiQ indices for all index datasets are available via the GitHub link provided in the Code Availability statement. In addition, as shown for the experiments summarized in Table 4, the memory requirements for CAMMiQ queries are comparable to those of other widely used packages and within the capabilities of currently available computers. Provided that the doubly-unique substrings of a given genome are not all shared with one other genome, the use of doubly-unique substrings increases CAMMiQ’s ability to identify and quantify this genome within a query. In case the dataset to be indexed involves several genomes with high levels of similarity, CAMMiQ’s data structure and its combinatorial optimization formulation could be generalized to include “triply” or “quadruply” unique substrings, but this is not yet implemented. In summary, using principled methods from combinatorial optimization and string algorithms, CAMMiQ delivers better sensitivity and specificity than widely-used existing methods on practical genome classification and quantification tasks.

## Methods

The input to CAMMiQ is a set of m genomes $$\mathcal{S}=\{s_i\}_{i=1}^{m}$$, possibly but not necessarily all from the same taxonomic level (each genome here may be associated with a genus, species, subspecies, or strain), to be indexed. Although we describe CAMMiQ for the case where each $$s_i \in \mathcal{S}$$ is a single string, we do not assume that the genomes are fully assembled into a single contig.
The string representing a genome could simply be a concatenation of all contigs from genome $$s_i$$ and their reverse complements, with a special symbol $$\$_i$$ between consecutive contigs. We call $$\mathcal{S}$$ the input database or, synonymously, the index dataset, and we call $$i \in \{1, \ldots, m\}$$ the genome ID of string $$s_i$$. A query or query set for CAMMiQ contains a set of reads $$\mathcal{Q}=\{r_j\}_{j=1}^{n}$$ representing a metagenomic mixture. For simplicity, we describe CAMMiQ for reads of homogeneous length L; however, our data structure can handle reads of varying length. Given $$\mathcal{Q}$$, the goal of CAMMiQ is to identify a set of genomes $$\mathcal{A}=\{s_1,\ldots,s_a\}\subset \mathcal{S}$$ and their respective abundances $$p_1,\ldots,p_a$$ that “best explain” $$\mathcal{Q}$$. This is achieved by assigning (selected) reads $$r_j$$ to genomes $$s_i$$ such that the implied coverage of each genome $$s_i \in \mathcal{A}$$ is (roughly) uniform across $$s_i$$, with $$p_i$$ as the mean. CAMMiQ’s index data structure involves the collection of shortest unique substrings and shortest doubly-unique substrings on each genome $$s_i$$ in $$\mathcal{S}$$. We call a substring of $$s_i$$ unique if it does not occur on any other genome $$s_j \ne s_i$$ in $$\mathcal{S}$$; a shortest unique substring is a unique substring that does not include another unique substring. Similarly, we call a substring of $$s_i$$ doubly-unique if it occurs on exactly one other genome $$s_j \ne s_i \in \mathcal{S}$$; a shortest doubly-unique substring is a doubly-unique substring that does not include another doubly-unique substring. See Supplementary Note 1 for a formal definition of the uniqueness of a substring and Supplementary Fig. 1 for a graphical illustration. CAMMiQ does not maintain the entire collection of shortest unique and doubly-unique substrings of genomes in $$\mathcal{S}$$; instead, its index contains only a sparsified set of shortest unique and doubly-unique substrings of each $$s_i \in \mathcal{S}$$, so that no unique or doubly-unique substring is in close proximity (i.e., within a read length) of another in $$s_i$$. See Section CAMMiQ Index and Supplementary Note 2 for how exactly CAMMiQ sparsifies the collection of shortest unique and doubly-unique substrings. With the (sparsified) collection of shortest unique and doubly-unique substrings, CAMMiQ is sufficiently powerful to answer the following three types of queries. The simplest type of query only involves unique substrings: given a query set $$\mathcal{Q}$$, it asks for the set of genomes $$\mathcal{A}_1 \subseteq \mathcal{S}$$ such that each includes at least one (shortest) unique substring that also occurs in some read $$r_j$$ in the query $$\mathcal{Q}$$. The second, more general query type involves both unique and doubly-unique substrings. It asks to compute $$\mathcal{A}_2 \subseteq \mathcal{S}$$, the smallest subset of genomes in $$\mathcal{S}$$ which includes all (shortest) unique and doubly-unique substrings that also occur in some read $$r_j \in \mathcal{Q}$$.
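Before turning to the third query type, here is a toy illustration of the uniqueness notions these queries rely on. This is our brute-force sketch, not CAMMiQ's actual algorithm: the paper's construction is linear-time and suffix-based (Supplementary Note 1) and finds shortest substrings of variable length, whereas this version only enumerates fixed-length substrings on hypothetical sequences and ignores reverse complements.

```
from collections import defaultdict

def classify_kmers(genomes, k):
    # Map each k-mer to the set of genome IDs it occurs in.
    occurs_in = defaultdict(set)
    for gid, s in genomes.items():
        for p in range(len(s) - k + 1):
            occurs_in[s[p:p + k]].add(gid)
    # Unique: occurs in exactly one genome; doubly-unique: in exactly two.
    unique = {w for w, ids in occurs_in.items() if len(ids) == 1}
    doubly = {w for w, ids in occurs_in.items() if len(ids) == 2}
    return unique, doubly

# Toy "genomes" (made-up sequences, for illustration only).
genomes = {1: "ACGTACGGTTAC", 2: "ACGTTCGGTTAA", 3: "TTGTACGATCGG"}
unique, doubly = classify_kmers(genomes, k=4)
print(len(unique), "unique and", len(doubly), "doubly-unique 4-mers")
```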
Finally, the third and most general type of query asks to compute the smallest subset $$\mathcal{A}_3$$ of $$\mathcal{S}$$ which again includes all (shortest) unique and doubly-unique substrings that also occur in some read $$r_j \in \mathcal{Q}$$, with the additional constraint that the “coverage” of these substrings in each genome $$s_i \in \mathcal{A}_3$$ is roughly uniform. In addition to the set of genomes $$\mathcal{A}_3$$, the query also asks to compute the relative abundance of each genome $$s_i$$ in $$\mathcal{A}_3$$. CAMMiQ, with its ability to efficiently answer all three queries described above, has several advantages over existing methods that rely on fixed-length unique substrings (i.e., unique k-mers). (i) Notice that the shorter a unique substring is, the more likely it is to be sampled (i.e., present in a read sampled from the relevant genome). This is because a substring of length $$L' < L$$ is included in $$L-L'+1$$ potential reads of length L that could be sampled from a genome. Unfortunately, the shorter a substring is, the less likely it is to be unique or doubly-unique. A method that uses fixed-length k-mers thus needs to compromise between the number of unique substrings and the likelihood of sampling each. CAMMiQ gets around this limitation by utilizing unique substrings of any length. CAMMiQ features a lower bound $$L_{\min}$$ and an upper bound $$L_{\max}$$ on the lengths of unique and doubly-unique substrings, as explained below. (ii) Unique substrings are relatively rare, at least for certain genomes and taxa, but substrings that appear in many genomes provide very limited information about the composition of a query $$\mathcal{Q}$$. By involving doubly-unique substrings in a query $$\mathcal{Q}$$, the subset of genomes that can be identified through a query of type $$\mathcal{A}_2$$ would be larger and more accurate than that identified through a query of type $$\mathcal{A}_1$$, especially in the extreme case where $$\mathcal{Q}$$ includes highly similar genomes that do not include any unique substring. (iii) Finally, by introducing the “uniform coverage” constraint, CAMMiQ’s $$\mathcal{A}_3$$ type of query can more accurately identify the genome(s) from which a doubly-unique substring originates. This is because a query of type $$\mathcal{A}_2$$ may result in significant differences in coverage between unique and doubly-unique substrings of a given genome. As mentioned above, CAMMiQ builds an index on the sparsified sets of shortest unique and doubly-unique substrings to compute efficiently the sets $$\mathcal{A}_1$$, $$\mathcal{A}_2$$ and $$\mathcal{A}_3$$. For all three query types, CAMMiQ first identifies, for each read $$r_j$$, all unique and doubly-unique substrings it includes; it then assigns $$r_j$$ to the one or two genomes from which these substrings can originate. To compute $$\mathcal{A}_1$$, CAMMiQ can simply return the collection of genomes receiving at least one read assignment. To compute $$\mathcal{A}_2$$, CAMMiQ needs to solve instances of the NP-hard set cover problem, or more precisely its dual, the hitting set problem, where genomes form the universe of items, and indexed substrings that appear in query reads form the sets of items to be hit.
Even though this is a restricted version of the hitting set problem in which each set to be hit contains at most two items, it is still NP-hard, since it is equivalent to the vertex cover problem. To compute $$\mathcal{A}_3$$, CAMMiQ solves a combinatorial optimization problem that asks to minimize the variance among the number of reads assigned to each indexed substring of each genome; the solution indicates the set of genomes in $$\mathcal{A}_3$$ along with their respective abundances. Details on the composition as well as the construction process for CAMMiQ’s index are discussed in Section CAMMiQ Index, as well as Supplementary Notes 1 and 2. The two stages in query processing of CAMMiQ are discussed in Subsections Query processing stage 1: Preprocessing the Reads and Query processing stage 2: ILP formulation. The first stage assigns reads to specific genomes, which is sufficient for computing sets $$\mathcal{A}_1$$ and $$\mathcal{A}_2$$. See Section Query processing stage 1: Preprocessing the Reads and Supplementary Note 3 for the criteria we use to assign a read to a genome, based on the indexed substrings that the read includes. The second stage introduces the combinatorial optimization formulation to compute $$\mathcal{A}_3$$ as a response to the most general query type. See Section Query processing stage 2: ILP formulation for details.

### CAMMiQ Index

To respond to all three types of queries described above, CAMMiQ identifies all unique and doubly-unique substrings of the genomes in $$\mathcal{S}$$ and organizes them in a simple but efficient data structure. Specifically, CAMMiQ computes the complete set of shortest unique substrings, $$\mathcal{U}=\cup_{i=1}^{m}\mathcal{U}_i$$, and the set of shortest doubly-unique substrings, $$\mathcal{D}=\cup_{i=1}^{m}\mathcal{D}_i$$, where $$\mathcal{U}_i$$ and $$\mathcal{D}_i$$ respectively denote the complete set of shortest unique and doubly-unique substrings from genome $$s_i$$, whose lengths are within the range $$[L_{\min}, L_{\max}]$$ (with $$L_{\max} \le L$$). See Supplementary Note 1 for a linear-time algorithm to build both $$\mathcal{U}$$ and $$\mathcal{D}$$. CAMMiQ then sparsifies $$\mathcal{U}$$ and $$\mathcal{D}$$ by selecting only one representative substring among those that are in close proximity in each genome, and discarding the rest; this sparsification step is described in detail below and in Supplementary Note 2. Finally, it builds a collection of tries (trees where the root node represents a substring of length $$L_{\min}$$ and every other internal node represents a single character) to compactly represent and efficiently search for substrings in $$\mathcal{U}$$ and $$\mathcal{D}$$.

#### Determining $$L_{\max}$$ and $$L_{\min}$$

In general, as the value of $$L_{\max}$$ increases, so do the numbers of unique and doubly-unique substrings to be considered by CAMMiQ, potentially increasing its sensitivity. However, query type $$\mathcal{A}_3$$ relies on the read coverage for each unique and doubly-unique substring of each genome; the higher the coverage, the better.
The read coverage for a unique substring of length $$L - L/\Delta$$, for some constant $$\Delta > 1$$, would roughly be $$1/\Delta$$-th of the read coverage (of a single nucleotide) of the respective genome. The best tradeoff between these two objectives, i.e., substring length ($$\sim 1 - 1/\Delta$$) and coverage ($$\sim 1/\Delta$$), can be achieved by maximizing their product, i.e., $$(1 - 1/\Delta)/\Delta$$, which is maximized at $$\Delta = 2$$. This suggests choosing $$L_{\max} = L/2$$. A shortest unique substring u, by definition, differs from (at least) one other substring $$u'$$ by just one nucleotide. The shorter u gets, the more likely it is that a read error impacting $$u'$$ would modify it to u, leading to false positives. We have experimentally observed that unique substrings of length ≤25 could lead to false positives that impact the performance of CAMMiQ; as a consequence, we set the default value of $$L_{\min}$$ to 26.

#### Sparsifying unique substrings

Let $$\mathcal{U}_i$$ be the collection of all unique substrings on genome $$s_i$$. To reduce the index size, CAMMiQ aims to compute a subset $$\mathcal{U}_i'$$ of $$\mathcal{U}_i$$, consisting of the minimum number of shortest unique substrings such that every unique substring of length L (i.e., unique L-mer) on $$s_i$$ includes one substring from $$\mathcal{U}_i'$$. Independently, CAMMiQ also aims to compute a subset $$\mathcal{D}_i'$$ of $$\mathcal{D}_i$$, consisting of the minimum number of shortest doubly-unique substrings such that every doubly-unique substring of length L (i.e., doubly-unique L-mer) on $$s_i$$ includes one substring from $$\mathcal{D}_i'$$. This is all done by greedily maintaining only the rightmost shortest unique or doubly-unique substring in a sliding window of length L on a genome in $$\mathcal{S}$$. In the remainder of the paper, we denote the number of unique substrings in subset $$\mathcal{U}_i'$$ by $$nu_i$$ ($$=|\mathcal{U}_i'|$$) and the number of doubly-unique substrings in subset $$\mathcal{D}_i'$$ by $$nd_i$$ ($$=|\mathcal{D}_i'|$$); we denote the number of unique L-mers on $$s_i$$ by $$nu_i^L$$ and, respectively, the number of doubly-unique L-mers on $$s_i$$ by $$nd_i^L$$. As we prove in Supplementary Note 2, the greedy strategy we employ can indeed obtain the minimum number of shortest unique substrings to cover each unique L-mer, provided that each substring in $$\mathcal{U}_i$$ occurs only once in $$s_i$$.

#### Index organization

We demonstrate the index structure and query processing for the set of unique substrings $$\mathcal{U}$$; the processing for doubly-unique substrings is essentially identical to that for unique substrings. Let $$h=\min_{u_i\in\mathcal{U}}|u_i|$$ be the minimum length of all shortest unique substrings (h is automatically set to $$L_{\min}$$ if the minimum length constraint is imposed). CAMMiQ maintains a hash table that maps a distinct h-mer w to a bucket containing all unique substrings $$u_i$$ that have w as a prefix. Within each bucket, the remaining suffixes of all unique substrings $$u_i$$, i.e., $$u_i[h+1:|u_i|]$$, are maintained in a trie (rooted at $$u_i[1:h]$$) so that (i) each internal node represents a single character; and (ii) each leaf represents the corresponding genome ID.
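The bucket-plus-trie organization just described can be sketched as follows. This is our illustrative Python, not CAMMiQ's actual implementation; it relies on the fact that no shortest unique substring is a prefix of another (it would otherwise contain it), so a genome ID on an internal node is unambiguous.

```
class TrieNode:
    def __init__(self):
        self.children = {}     # one outgoing edge per character
        self.genome_id = None  # set on the node ending an indexed substring

def build_index(substrings, h):
    """substrings: iterable of (substring, genome_id) pairs; h: prefix length."""
    buckets = {}  # hash table: h-mer prefix -> trie over the remaining suffixes
    for s, gid in substrings:
        node = buckets.setdefault(s[:h], TrieNode())
        for ch in s[h:]:
            node = node.children.setdefault(ch, TrieNode())
        node.genome_id = gid
    return buckets

def lookup(buckets, read, pos, h):
    """Try to extend the h-mer at read[pos:pos+h] into a full indexed substring."""
    node = buckets.get(read[pos:pos + h])
    i = pos + h
    while node is not None:
        if node.genome_id is not None:
            return node.genome_id          # matched an indexed substring
        if i == len(read):
            return None                    # ran out of read before a full match
        node = node.children.get(read[i])  # extend the match by one character
        i += 1
    return None
```

A real implementation would also look up the reverse complement and compute all h-mer fingerprints of a read with a rolling hash, which is exactly the matching procedure described next.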
For each read $$r_j$$ in the query, CAMMiQ considers each substring of length h and its reverse complement, and computes its hash value in time linear in L through Karp–Rabin fingerprinting63. If the substring has a match in the hash table, then CAMMiQ tries to extend the match until a matching unique substring is found, or until an extension by one character leads to no match. See Fig. 3 for a schematic of the index structure. See Subsection Query processing stage 1: Preprocessing the Reads below for the use of unique and doubly-unique substrings identified for each read to answer the query.

### Query processing stage 1: preprocessing the reads

Given the index structure on the sparsified set of shortest unique and doubly-unique substrings of genomes in $$\mathcal{S}$$, we handle each query $$\mathcal{Q}$$ in two stages. The first stage counts the number of reads that include each unique and doubly-unique substring, with the following provision. We call two or more (unique or doubly-unique) substrings in a read “conflict-free” if there is at least one genome that includes all of these substrings. See Supplementary Note 3 for a detailed discussion of conflicting substrings; conflicts arise either from sequencing errors or from the query including genomes that are not in the database, and thus should be avoided. Reads whose unique or doubly-unique substrings are conflict-free contribute to the counting process; all other reads are discarded. We denote by $$c(u_i)$$ the counter for the conflict-free reads that include the unique substring $$u_i$$, and by $$c(d_i)$$ that for the doubly-unique substring $$d_i$$. These counters are sufficient to compute the set $$\mathcal{A}_1$$ as well as $$\mathcal{A}_3$$, the answer to our most general query type. For computing $$\mathcal{A}_2$$, CAMMiQ additionally maintains a counter $$d(s_k, s_{k'})$$ for each pair of genomes $$s_k, s_{k'}$$, indicating the number of reads in $$\mathcal{Q}$$ that can originate from both $$s_k$$ and $$s_{k'}$$ (i.e., the case (e-iii) in the procedure described in Supplementary Note 3). The first stage thus produces two count vectors $$\mathbf{c}_i^u=(c(u_{i,1}),\ldots,c(u_{i,nu_i}))$$ and $$\mathbf{c}_i^d=(c(d_{i,1}),\ldots,c(d_{i,nd_i}))$$ that indicate the number of (conflict-free) reads that include each unique and doubly-unique substring on each genome $$s_i$$. Using these vectors, CAMMiQ answers the first type of query by computing $$\mathcal{A}_1=\{s_i : \sum_{l=1}^{nu_i} c(u_{i,l}) > 0\}$$. Additionally, through the use of the counters $$d(s_k, s_{k'})$$, CAMMiQ answers the second type of query by computing the smallest subset $$\mathcal{A}_2 = \arg\min_{\mathcal{A}' \subset \mathcal{S}} |\mathcal{A}'|$$ such that (i) $$s_i \in \mathcal{A}'$$ if $$\sum_{l=1}^{nu_i} c(u_{i,l}) > 0$$, and (ii) whenever $$d(s_k, s_{k'}) > 0$$, there exists $$s_i \in \mathcal{A}'$$ with either i = k or $$i = k'$$. This is basically the solution to the hitting set problem we mentioned earlier, whose formulation as an integer linear program (ILP) is well known64.
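As an illustration, this hitting-set ILP can be written down directly in an off-the-shelf modeling layer. The sketch below uses the open-source PuLP library as a stand-in for the CPLEX/Gurobi solvers mentioned in the Discussion; the instance data are made up.

```
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def solve_A2(genome_ids, forced, pair_counts):
    """genome_ids: all genomes; forced: genomes with a positive unique count;
    pair_counts: {(k, k2): number of reads assignable to both k and k2}."""
    x = {g: LpVariable(f"x_{g}", cat=LpBinary) for g in genome_ids}
    prob = LpProblem("hitting_set_A2", LpMinimize)
    prob += lpSum(x.values())              # minimize |A'|
    for g in forced:
        prob += x[g] == 1                  # condition (i): unique evidence
    for (k, k2), cnt in pair_counts.items():
        if cnt > 0:
            prob += x[k] + x[k2] >= 1      # condition (ii): hit every pair
    prob.solve()
    return sorted(g for g in genome_ids if x[g].value() == 1)

# Toy instance: genome 1 has unique hits; reads are shared by (2,3) and (3,4).
print(solve_A2([1, 2, 3, 4], forced=[1], pair_counts={(2, 3): 5, (3, 4): 2}))
# -> [1, 3]: picking genome 3 "hits" both pairs with a single extra genome.
```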
The genomes returned in $$\mathcal{A}_1$$ are ranked in decreasing order of the aggregated counter values on unique substrings (i.e., $$|\mathbf{c}_i^u|$$); and the genomes returned in $$\mathcal{A}_2$$ are ranked by the aggregated counter values on unique substrings plus the counter values on doubly-unique substrings (i.e., $$|\mathbf{c}_i^u| + |\mathbf{c}_i^d|$$). From this point on, our main focus will be how CAMMiQ answers the third type of query by computing $$\mathcal{A}_3$$ through an ILP formulation, described below.

### Query processing stage 2: ILP formulation

In its second stage, CAMMiQ computes the list of genomes in the query as well as their abundances through an ILP. Let $$\delta_i \in \{0,1\}$$ be the indicator for the absence or presence of the genome $$s_i$$ in $$\mathcal{Q}$$. The ILP formulation assigns a value to each $$\delta_i$$ and also computes for each $$s_i$$ its abundance $$p_i$$, upper bounded by $$p_{\max}$$, a user-defined maximum abundance with a default setting of 100, which is introduced to avoid potential anomalies due to sequence contamination.

$$\mathbf{Minimize}\quad \sum_{i}\left(\frac{1}{nu_i}\sum_{l=1}^{nu_i}|c(u_{i,l})-e(u_{i,l})| + \frac{1}{nd_i}\sum_{l=1}^{nd_i}|c(d_{i,l})-e(d_{i,l})|\right)$$

$$\mathbf{s.t.}\quad e(u_{i,l})=(L-|u_{i,l}|+1)\cdot p_i\cdot \frac{1}{L}\cdot (1-\hat{\mathrm{err}})^{|u_{i,l}|}\quad \forall i, l\ \mathrm{s.t.}\ 1\le l\le nu_i$$ (1)

$$e(d_{i,l})=(L-|d_{i,l}|+1)\cdot (p_i+p_j)\cdot \frac{1}{L}\cdot (1-\hat{\mathrm{err}})^{|d_{i,l}|}\quad \forall i, l\ \mathrm{s.t.}\ 1\le l\le nd_i$$ (2)

$$p_i\le \delta_i\cdot p_{\max}\quad \forall i$$ (3)

$$\delta_i=0\quad \forall i\ \mathrm{s.t.}\ s_i\in M(\mathcal{Q})$$ (4)

$$p_i\ge \delta_i\cdot \min\left\{L\sum_{l=1}^{nu_i}c(u_{i,l})\cdot \frac{1}{nu_i^L},\ L\sum_{l=1}^{nd_i}c(d_{i,l})\cdot \frac{1}{nd_i^L}\right\}\cdot (1-\epsilon)\quad \forall i\ \mathrm{s.t.}\ s_i\notin M(\mathcal{Q})$$ (5)

$$\sum_{i}|s_i|\cdot p_i\le n\cdot L$$ (6)

The objective of the ILP is to minimize the sum of absolute differences between the expected and the actual number of reads covering each unique or doubly-unique substring. Since each genome may have different numbers of unique and doubly-unique substrings, the sums of differences are normalized w.r.t. $$nu_i$$ or $$nd_i$$. Constraint (1) defines the expected number of reads covering a particular unique substring $$u_{i,l}$$, given abundance $$p_i$$ of the corresponding genome $$s_i$$. Similarly, constraint (2) defines the expected number of reads covering a particular doubly-unique substring $$d_{i,l}$$; in this constraint, $$p_i$$ and $$p_j$$ denote the respective abundances of the two genomes $$s_i$$ and $$s_j$$ that include (the doubly-unique substring) $$d_{i,l}$$. Specifically, the expected coverage of $$u_{i,l}$$ is $$\frac{L-|u_{i,l}|+1}{L}\cdot p_i$$ and the expected coverage of $$d_{i,l}$$ is $$\frac{L-|d_{i,l}|+1}{L}\cdot (p_i+p_j)$$, provided that the coverage is uniform across a given genome and there are no read errors.
To account for read errors, we normalize these coverage estimates respectively by $$(1-\hat{\mathrm{err}})^{|u_{i,l}|}$$ and $$(1-\hat{\mathrm{err}})^{|d_{i,l}|}$$; these values represent the probability that a substring $$u_{i,l}$$ or $$d_{i,l}$$ would be error-free within a read that has been subject to uniform i.i.d. substitution errors. Here $$\hat{\mathrm{err}}$$ denotes the estimated substitution error rate per nucleotide, and $$|w|$$ denotes the length of a substring w. The CAMMiQ formulation also allows updates to the expected coverage according to any given unique or doubly-unique substring’s sequence composition (e.g., GC content) to address sequencing biases. Constraint (3) ensures that the abundance $$p_i$$ of a genome is 0 if $$\delta_i = 0$$. Constraint (4) ensures that the solution to the above ILP excludes those genomes whose counters for unique and doubly-unique substrings add up to a value below a threshold, so as to reduce the size of the solution space. More specifically, given a threshold value α (α is introduced to avoid potential false positives due to read errors and genomes that are not in the database; its default value is 0.0001), the constraint excludes those genomes $$s_i$$ that are in the set of genomes $$M(\mathcal{Q})$$ whose counters for unique substrings add up to a value below $$\alpha \cdot nu_i^L$$ and whose counters for doubly-unique substrings add up to a value less than $$\alpha \cdot nd_i^L$$. Formally, $$M(\mathcal{Q})=\{s_i \in \mathcal{S} \,|\, \sum_{l=1}^{nu_i} c(u_{i,l}) < \alpha \cdot nu_i^L\} \cap \{s_i \in \mathcal{S} \,|\, \sum_{l=1}^{nd_i} c(d_{i,l}) < \alpha \cdot nd_i^L\}$$. Constraint (5) enforces a lower bound on the coverage of each genome $$s_i$$ in the solution to the above ILP (namely, with $$\delta_i = 1$$), which must match the coverage ($$L\sum_{l=1}^{nu_i}c(u_{i,l})\cdot \frac{1}{nu_i^L}$$ and $$L\sum_{l=1}^{nd_i}c(d_{i,l})\cdot \frac{1}{nd_i^L}$$) resulting from the number of reads in $$\mathcal{Q}$$ that include a unique and a doubly-unique substring, respectively; i.e., it must be at least $$(1-\epsilon)$$ times the smaller of the two, for a user-defined ϵ. Constraint (6) enforces an upper bound on the coverage of each genome $$s_i$$ in the solution to the above ILP, by requiring that the sum, over all $$s_i$$, of the number of reads produced on $$s_i$$ based on $$p_i$$ not exceed the total number of reads n. Collectively, the last two constraints ensure that the abundance $$p_i$$ computed from the ILP matches what is given by $$\mathcal{Q}$$ (i.e., the coverage based on read counts). As written above, the formulation does not strictly conform to the rules for ILPs because of the use of the absolute value function. We use a standard technique to replace the absolute values in the objective by introducing a new variable $$\gamma(u_{i,l}) \ge \max\{c(u_{i,l})-e(u_{i,l}),\ e(u_{i,l})-c(u_{i,l})\}$$.

### When to use unique substrings—the error-free case

We now provide a set of sufficient conditions that guarantee the approximation performance that can be obtained, with high probability, in metagenomic identification and quantification by the use of unique substrings only. These conditions apply to CAMMiQ when c = 1, as well as to CLARK, KrakenUniq, and other similar approaches. In case these conditions are not met, it is advisable to use CAMMiQ with c ≥ 2.
Suppose that we are given a query $$\mathcal{Q}$$ composed of n error-free reads of length L, sampled independently and uniformly at random from a collection of genomes $$\mathcal{A}=\{s_1,\ldots,s_a\}$$ according to their abundances $$p_1,\ldots,p_a$$. More specifically, suppose that our goal is to answer query $$\mathcal{Q}$$ by computing $$\mathcal{A}_1$$, along with an estimate of the abundance value $$p_i$$ for each $$s_i \in \mathcal{A}_1$$, calculated as the weighted number of reads assigned to $$s_i$$ according to the procedure described in Section Query processing stage 1: Preprocessing the Reads. Then, the $$L_1$$ distance between the true abundance values and this estimate will not exceed a value determined by n (the number of reads), a, and $$q_{\min}$$, the minimum normalized proportion of unique L-mers among these genomes. For a given failure probability ζ and an upper bound ϵ on the $$L_1$$ distance, this translates into sufficient conditions on the values of n, a and $$q_{\min}$$ to ensure acceptable performance by the computational method in use.

### Theorem 1

Let $$\mathcal{Q}=\{r_1,\ldots,r_n\}$$ be a set of n error-free reads of length L, each sampled independently and uniformly at random from all positions on a genome $$s_i \in \mathcal{A}=\{s_1,\ldots,s_a\}$$, where $$s_1,\ldots,s_a$$ are drawn according to their abundances $$p_1,\ldots,p_a > 0$$. Let $$p_i'=\frac{p_i \cdot n_i^L}{\sum_{i'=1}^{a} p_{i'} \cdot n_{i'}^L}$$ be the corresponding “unnormalized” abundance for $$p_i$$, $$i = 1,\ldots,a$$, where $$n_i^L$$ denotes the total number of L-mers on $$s_i$$. Let $$q_1,\ldots,q_a > 0$$ be the proportions of unique L-mers on $$s_1,\ldots,s_a$$ respectively; $$p_{\min}=\min\{p_i\}_{i=1}^{a}$$; $$q_{\min}=\min\{q_i\}_{i=1}^{a}$$. Then,

• (i) With probability at least 1 − ζ, each $$s_i$$ can be identified through querying $$\mathcal{Q}$$ if $$n \ge \frac{2(a+1)+\ln(1/\zeta)}{(p_{\min} q_{\min})^2}$$.

• (ii) With probability at least 1 − ζ, the $$L_1$$ distance between the predicted abundances $$\hat{p}_1,\ldots,\hat{p}_a$$, obtained by setting $$\hat{p}_i=\frac{c_i/q_i}{n}$$, and the true (unnormalized) abundances $$p_1',\ldots,p_a'$$ is at most ϵ if $$n \ge \frac{2(a+1)+\ln(1/\zeta)}{(\epsilon q_{\min})^2}$$.

• (iii) Given n such reads in a query, with probability at least 1 − ζ, the $$L_1$$ distance between the predicted abundances $$\hat{p}_1,\ldots,\hat{p}_a$$, obtained by setting $$\hat{p}_i=\frac{c_i/q_i}{n}$$, and the true (unnormalized) abundances $$p_1',\ldots,p_a'$$ is bounded by $$\sqrt{\frac{2[\ln(1/\zeta)+(a+1)]}{n q_{\min}^2}}$$.

Here, $$c_i$$ denotes the number of reads assigned to $$s_i$$. See Supplementary Note 4 for a proof.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
{}
## superscript

superscript — A superscript (as in x², the mathematical notation for x multiplied by itself).

superscript ::=

## Description

A `superscript` identifies text that is to be displayed as a superscript when rendered.

### Processing expectations

Formatted inline. Superscripts are usually printed in a smaller font and shifted up with respect to the baseline.

### Children

The following elements occur in `superscript`:

## See Also

Related elements: `subscript`.

## Examples

```
<article xmlns='http://docbook.org/ns/docbook'>
<title>Example superscript</title>

<para>The equation e<superscript>πi</superscript> + 1 = 0 ties together
five of the most important mathematical constants.
</para>

</article>
```

The equation eπi + 1 = 0 ties together five of the most important mathematical constants.
{}
## 15.15

$\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$

Gabriela Carrillo 1B
Posts: 53
Joined: Fri Sep 29, 2017 7:04 am

### 15.15

Is the concentration of CH3Br raised to the power of 1.2 in the rate law for the reaction? What does 1.2 imply?

Johann Park 2B
Posts: 51
Joined: Thu Jul 27, 2017 3:01 am
Been upvoted: 1 time

### Re: 15.15

If the concentration increases by a factor of 1.2, it means that the rate of the reaction also increases by a factor of 1.2: 1.2·Rate = k[1.2·CH3Br]^x[OH-]^y

melissa carey 1f
Posts: 53
Joined: Fri Sep 29, 2017 7:06 am

### Re: 15.15

Since the rate increase is linear in the increase of reactant concentration, it implies it's first order.

Chloe1K
Posts: 23
Joined: Fri Sep 29, 2017 7:05 am

### Re: 15.15

I think the book just gave a random increase to the concentration to show that if the rate increases by the same factor, it is a first order reaction.

Janine Chan 2K
Posts: 71
Joined: Fri Sep 29, 2017 7:04 am

### Re: 15.15

Doesn't ln[A] vs. time have to be linear to assume it's first order? Or in this case are we just saying that multiplying by a factor of 1.2 will increase the rate by a factor of 1.2, which aligns with the first-order differential rate law rate = k[A]?
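To spell out the scaling argument in the replies above (assuming a rate law of the form rate = k[CH3Br]^x[OH-]^y, with [OH-] held constant between the two trials):

$$\frac{\text{rate}_2}{\text{rate}_1} = \left(\frac{1.2\,[\mathrm{CH_3Br}]}{[\mathrm{CH_3Br}]}\right)^{x} = 1.2^{x} = 1.2 \implies x = 1$$

so the reaction is first order in CH3Br, consistent with a linear ln[R] vs. t plot.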
{}
# What are the coordinates of the vertex of y = x^2 - 4x - 5? Apr 28, 2015 $y = {x}^{2} - 4 x - 5$ is a parabola with a slope (for arbitrary values of $x$) given by the expression $\frac{\mathrm{dy}}{\mathrm{dx}} = 2 x - 4$ This slope is equal to $0$ at the vertex of the parabola $2 x - 4 = 0$ $\rightarrow x = 2$ When $x = 2$ the equation for the parabola gives us $y = {\left(2\right)}^{2} - 4 \cdot \left(2\right) - 5$ $= - 9$ So the coordinates of the vertex of the parabola are $\left(2 , - 9\right)$
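A quick non-calculus check of the same result, by completing the square:

$y = x^2 - 4x - 5 = (x-2)^2 - 4 - 5 = (x-2)^2 - 9$

The squared term is smallest at $x = 2$, where $y = -9$, confirming the vertex $(2, -9)$.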
{}
Given below is the cost schedule of a product produced by a firm. The market price per unit of the product at all levels of output is Rs 12. Using the marginal cost and marginal revenue approach, find out the level of equilibrium output. Give reasons for your answer:

| Output (Units) | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Average Cost (Rs) | 12 | 11 | 10 | 10 | 10.4 | 11 |

| Output | AR (Rs.) | TC (Rs.) | MR (Rs.) | MC (Rs.) |
|---|---|---|---|---|
| 1 | 20 | 22 | 20 | 22 |
| 2 | 20 | 42 | 20 | 20 |
| 3 | 20 | 60 | 20 | 22 |
| 4 | 20 | 76 | 20 | 26 |
| 5 | 20 | 96 | 20 | 20 (Equilibrium) |
| 6 | 20 | 120 | 20 | 36 |

At the 5th unit of output, the producer will be in equilibrium because at this unit, MR is equal to MC and the MC curve cuts MR from below.
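As a cross-check using the TC column of the solution table (taking those figures at face value), marginal cost at the 5th unit is

$$MC_5 = TC_5 - TC_4 = 96 - 76 = 20 = MR$$

and the MC shown for the 6th unit (36) exceeds MR (20), so the MC curve indeed cuts MR from below at an output of 5 units.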
{}
6-61.

Write an equation for a fourth-degree polynomial function that has a triple root at $x = 2$, a root at $x = −7$, and passes through the point $\left(−1, −9\right)$.

$p\left(x\right)=a\left(x-2\right)^3\left(x-\left(-7\right)\right)$

$p\left(-1\right)=-9$
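Carrying the hint through (a sketch of the remaining arithmetic):

$$p(-1) = a(-1-2)^3(-1+7) = a(-27)(6) = -162a = -9 \implies a = \tfrac{1}{18}$$

so one such polynomial is $p(x) = \tfrac{1}{18}(x-2)^3(x+7)$.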
{}
# Homework Help: Familiar? But confirmed correct one

1. Nov 13, 2004

### CartoonKid

4. A 5.00 g bullet moving with an initial speed of 400 m/s is fired into and passes through a 1.00 kg block (see figure: the bullet enters at 400 m/s, exits at speed v, and the block moves 5.00 cm). The block, initially at rest on a frictionless, horizontal surface, is connected to a spring with force constant 900 N/m. If the block moves 5.00 cm to the right after impact, find:
(a) the speed at which the bullet emerges from the block, [Ans: 100 m/s]
(b) the mechanical energy converted into internal energy in the collision. [Ans: 374 J]

This question is almost similar to the one asked by someone here. The difference is that a spring has been added to the block. I have lost the solution given by my lecturer. I tried to do it with conservation of energy but it failed. I forgot how to solve this question. Can somebody please help me?

Last edited by a moderator: May 1, 2017

2. Nov 14, 2004

### ehild

The problem can be solved by assuming that the bullet stays inside the block for a very short time, so the spring cannot deform and exert force on the block during the impact. In this case, the momentum is conserved. Let m be the mass of the bullet, M that of the block, v0 the speed of the bullet at the beginning, vf the speed of the bullet after impact and V the speed of the block after impact.

$$mv_0=mv_f+MV \rightarrow v_f=v_0-(M/m)V$$

You can apply conservation of energy for the motion of the block after the impact:

$$\frac{1}{2}MV^2=\frac{1}{2}kx^2 \rightarrow V=\sqrt{k/M}\cdot x$$

So

$$v_f=v_0-(M/m)V=v_0-(M/m)\sqrt{k/M}\cdot x$$

ehild

Last edited by a moderator: May 1, 2017

3. Nov 14, 2004

### CartoonKid

Thank you very much. This is the solution that was in my lost notes. I have a question here. What makes us make such an assumption? I mean, looking at the question for the first time, how do we know that we have to make such an assumption? I really dislike questions which require assumptions; however, it's inevitable. Fortunately my tutor said our final exam wouldn't involve questions which need assumptions. Anyway, I would like to know. Thanks.

4. Nov 14, 2004

### ehild

The problem would be undetermined without this assumption. The momentum would not be conserved during the interaction of the block and bullet, as there is an external force exerted by the spring. Energy is not conserved, either, as this force of interaction certainly depends on the relative speed between bullet and block when the bullet travels inside the block. Newton's second law would yield one equation for the bullet and one for the block, but to solve them we would need the force of interaction. You can find out from the text what you can suppose. There is no indication about the size of the block. Also, there is nothing about the time it stays inside. Imagine that the bullet loses most of its momentum at entering the block. The block starts to move, the spring starts to get compressed and decelerates the block, with the bullet still in. It is completely uncertain whether it gets out or not without knowing how long the block is. Usually problems concerning the collision of two bodies are solved in this way. We do not worry about the external forces during the interaction, assuming it happens in a very short time, and we happily apply the law of conservation of momentum. Only after we get the new velocities do we consider the effect of external forces. It is the same with problems about exploding rockets, asking where the pieces would fall. We use conservation of momentum as if gravity would not exist.
But it does, even during the explosion, not only afterwards. So this is only an approximation, but we never can solve a real problem without simplifying assumptions.

ehild

5. Nov 14, 2004

### arildno

"So this is only an approximation, but we never can solve a real problem without simplifying assumptions."

Not only is it an approximation, but usually a damned good one as well.. (Not to mention how simplifying it is..)

6. May 21, 2010

### roam

And can anyone please explain how to solve part (b)? How do we find the mechanical energy converted into internal energy in the collision? Do we need to subtract the final kinetic energy from the initial kinetic energy?

7. May 21, 2010

### ehild

Yes. The mechanical energy that was converted to heat is the difference between the initial KE of the bullet and the sum of the KE of both the bullet and the block just after the bullet emerged from the block.

ehild
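For the record, plugging the numbers from the thread into that recipe (with V = √(k/M)·x the block speed and vf the bullet's exit speed found above):

$$V=\sqrt{900/1.00}\times 0.05=1.5\ \mathrm{m/s},\qquad v_f=400-(1.00/0.005)\times 1.5=100\ \mathrm{m/s}$$

$$\Delta E=\tfrac{1}{2}mv_0^2-\left(\tfrac{1}{2}mv_f^2+\tfrac{1}{2}MV^2\right)=400\ \mathrm{J}-(25+1.125)\ \mathrm{J}\approx 374\ \mathrm{J}$$

which matches the stated answer of 374 J.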
{}
I don't really know how to set this problem up.

A cylindrical container of an incompressible liquid with density ρ rotates with constant angular speed ω about its axis of symmetry, which we take to be the y-axis.

a.) Show that the pressure at a given height within the fluid increases in the radial direction (outward from the axis of rotation) according to $\partial p/\partial r = \rho \omega^2 r$.

b) Integrate this partial differential equation to find the pressure as a function of distance from the axis of rotation along a horizontal line at y = 0.

c.) Combine the results of part b with eq 14.5 ($p = p_0 + \rho g h$) to show that the surface of the rotating liquid has a parabolic shape, that is, the height of the liquid is given by $h(r) = \omega^2 r^2 / 2g$.
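A sketch of the standard route (assuming the usual textbook setup: Newton's second law applied to a small fluid element in uniform circular motion, where the net inward force comes from the pressure difference across the element):

$$\frac{\partial p}{\partial r} = \rho \omega^2 r \;\Longrightarrow\; p(r) = p_0 + \tfrac{1}{2}\rho \omega^2 r^2 \quad \text{(integrating along } y = 0\text{)}$$

Setting this excess pressure equal to the hydrostatic pressure $\rho g h(r)$ of the liquid column above then gives $h(r) = \omega^2 r^2 / 2g$, a parabola.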
{}
International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive. Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-06-17

15:17 [Pub][ePrint]

In this paper, we report that we have solved the shortest vector problem (SVP) over a 128-dimensional lattice, which is currently the highest dimension of the SVP that has ever been solved. The security of lattice-based cryptography is based on the hardness of solving the SVP in lattices. In 2010 Micciancio et al. proposed a Gauss Sieve algorithm for heuristically solving the SVP using a list $L$ of Gauss-reduced vectors. Milde et al. proposed a parallel implementation method for the Gauss Sieve algorithm. However, the efficiency of more than 10 threads in their implementation decreases due to a large number of non-Gauss-reduced vectors appearing in the distributed list of each thread. In this paper, we propose a more practical parallelized Gauss Sieve algorithm. Our algorithm deploys an additional Gauss-reduced list $V$ of sample vectors assigned to each thread, and all vectors in list $L$ remain Gauss-reduced by mutually reducing them using all sample vectors in $V$. Therefore, our algorithm enables the Gauss Sieve algorithm to run without excessive overhead even in a large-scale parallel computation of more than 1,000 threads. Moreover, for speed-up, we use the bi-directional rotation structure of an ideal lattice, which makes the generation of additional vectors in the list possible with almost no additional overhead. Finally, we have succeeded in solving the SVP over a 128-dimensional ideal lattice generated by the cyclotomic polynomial $x^{128}+1$ using about 30,000 CPU hours.

15:17 [Pub][ePrint]

We investigate alternative suspicion functions for bias-based traitor tracing schemes, and present a practical construction of a simple decoder that attains capacity in the limit of large coalition size $c$. We derive optimal suspicion functions in both the Restricted-Digit Model and the Combined-Digit Model. These functions depend on information that is usually not available to the tracer -- the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts. We study several combinations of coalition attack strategy versus suspicion function optimized against some attack (another attack or the same). In many of these combinations the usual codelength scaling $\ell \propto c^2$ changes to a lower power of $c$, e.g. $c^{3/2}$. We find that the interleaving strategy is an especially powerful attack. The suspicion function tailored against interleaving is the key ingredient of the capacity-achieving construction.

2013-06-15

01:51 [Job][New]

Hochschule Furtwangen University, Germany: Full-time Ph.D. Position

The Chair for Security in Distributed Systems, computer science, Hochschule Furtwangen, Germany, offers a full-time PhD/Postdoc position. The position involves research in the area of IT-Security/applied cryptography within the BMBF project UNIKOPS - Universell konfigurierbare Sicherheitslösung für Cyber-Physikalische heterogene Systeme. The successful candidate is expected to contribute to research in IT-Security and applied cryptography for CPS. The position is available immediately and is fully funded. The salary scale for the position is TV-L E13. The gross income depends on the candidate's experience level. At the lowest level it corresponds to approx.
40,000 EUR per year. Contracts are initially offered for two years. An extension is possible. He or she is given the possibility to carry out a Ph.D. The successful candidate should have a Master's degree in Computer Science, Mathematics, Information Security, or a related field. Knowledge in cryptography is an asset. The deadline for applications is July 31, 2013. However, late applications will be considered until the position is filled.

http://www.hs-furtwangen.de/studierende/fakultaeten/informatik/forschung/universell-konfigurierbare-sicherheitsloesung-fuer-cyber-physikalische-heterogene-systeme-unikops/601-dirkwesthoff.html

2013-06-12

20:02 [PhD][Update]

19:45 [Job][New]

The Deutsche Telekom Chair of Mobile Business & Multilateral Security at Goethe University Frankfurt offers a position of a Scientific Assistant (m/f, E13 TV-G-U). To strengthen our team we are looking for a committed, creative and flexible PhD candidate (male/female) with advanced professional knowledge in Information Technology and interest in the current developments in business informatics. We are looking for people with advanced knowledge and special skills in at least three of the following areas:

- Network and System Security
- Privacy-Enhancing Technologies and data protection
- Identity Management
- Mobile Platforms, Smartcards and Trusted Computing
- Mobile Application Development (e.g. in Android, etc.)
- Cryptography
- Programming languages and experience in software projects
- Administration skills on different platforms (e.g. UNIX, Linux, Windows)
- Web technologies and development
- Project management

The position is available immediately and has a fixed term of 3 years with an extension option.

Deadline for applications: 1st of July 2013

Please see our job advertisement for the full details on our career site at: http://www.m-chair.net/wps/wse/home/rannenberg/career/

15:26 [Job][New]

15:17 [Pub][ePrint]

The design and analysis of lightweight block ciphers has been a very active research area over the last couple of years, with many innovative proposals trying to optimize different performance figures. However, since these block ciphers are dedicated to low-cost embedded devices, their implementation is also a typical target for side-channel adversaries. As preventing such attacks with countermeasures usually implies significant performance overheads, a natural open problem is to propose new algorithms for which physical security is considered as an optimization criterion, hence allowing better performances again. We tackle this problem by studying how much we can tweak standard block ciphers such as the AES Rijndael in order to allow efficient masking (that is one of the most frequently considered solutions to improve security against side-channel attacks). For this purpose, we first investigate alternative S-boxes and round structures. We show that both approaches can be used separately in order to limit the total number of non-linear operations in the block cipher, hence allowing more efficient masking. We then combine these ideas into a concrete instance of a block cipher called Zorro. We further provide a detailed security analysis of this new cipher taking its design specificities into account, leading us to exploit innovative techniques borrowed from hash function cryptanalysis (that are sometimes of independent interest). Eventually, we conclude the paper by evaluating the efficiency of masked Zorro implementations in an 8-bit microcontroller, and exhibit their interesting performance figures.
15:17 [Pub][ePrint]

Leakage-resilient cryptography aims at formally proving the security of cryptographic implementations against large classes of side-channel adversaries. One important challenge for such an approach to be relevant is to adequately connect the formal models used in the proofs with the practice of side-channel attacks. It raises the fundamental problem of finding reasonable restrictions of the leakage functions that can be empirically verified by evaluation laboratories. In this paper, we first argue that the previous "bounded leakage" requirements used in leakage-resilient cryptography are hard to fulfill by hardware engineers. We then introduce a new, more realistic and empirically verifiable assumption of simulatable leakage, under which security proofs in the standard model can be obtained. We finally illustrate our claims by analyzing the physical security of an efficient pseudorandom generator (for which security could only be proven under a random oracle based assumption so far). These positive results come at the cost of (algorithm-level) specialization, as our new assumption is specifically defined for block ciphers. Nevertheless, since block ciphers are the main building block of many leakage-resilient cryptographic primitives, our results also open the way towards more realistic constructions and proofs for other pseudorandom objects.

15:17 [Pub][ePrint]

15:17 [Pub][ePrint]

Gentry's "bootstrapping" technique (STOC 2009) constructs a fully homomorphic encryption (FHE) scheme from a "somewhat homomorphic" one that is powerful enough to evaluate its own decryption function. To date, it remains the only known way of obtaining unbounded FHE. Unfortunately, bootstrapping is computationally very expensive, despite the great deal of effort that has been spent on improving its efficiency. The current state of the art, due to Gentry, Halevi, and Smart (PKC 2012), is able to bootstrap "packed" ciphertexts (which encrypt up to a linear number of bits) in time only quasilinear $\tilde{O}(\lambda) = \lambda \cdot \log^{O(1)} \lambda$ in the security parameter. While this performance is asymptotically optimal up to logarithmic factors, the practical import is less clear: the procedure composes multiple layers of expensive and complex operations, to the point where it appears very difficult to implement, and its concrete runtime appears worse than those of prior methods (all of which have quadratic or larger asymptotic runtimes). In this work we give simple, practical, and entirely algebraic algorithms for bootstrapping in quasilinear time, for both "packed" and "non-packed" ciphertexts. Our methods are easy to implement (especially in the non-packed case), and we believe that they will be substantially more efficient in practice than all prior realizations of bootstrapping. One of our main techniques is a substantial enhancement of the "ring-switching" procedure of Gentry et al. (SCN 2012), which we extend to support switching between two rings where neither is a subring of the other. Using this procedure, we give a natural method for homomorphically evaluating a broad class of structured linear transformations, including one that lets us evaluate the decryption function efficiently.
15:17 [Pub][ePrint]

For a number of elliptic curve-based cryptographic protocols, it is useful and sometimes necessary to be able to encode a message (a bit string) as a point on an elliptic curve in such a way that the message can be efficiently and uniquely recovered from the point. This is for example the case if one wants to instantiate CPA-secure ElGamal encryption directly in the group of points of an elliptic curve. More practically relevant settings include Lindell's UC commitment scheme (EUROCRYPT 2011) or structure-preserving primitives. It turns out that constructing such an encoding function is not easy in general, especially if one wishes to encode points whose length is large relative to the size of the curve. There is a probabilistic, "folklore" method for doing so, but it only provably works for messages of length less than half the size of the curve. In this paper, we investigate several approaches to injective encoding to elliptic curves, and in particular, we propose a new, essentially optimal geometric construction for a large class of curves, including Edwards curves; the resulting algorithm is also quite efficient, requiring only one exponentiation in the base field and simple arithmetic operations (however, the curves for which the map can be constructed have a point of order two, which may be a limiting factor for possible applications). The new approach is based on the existence of a covering curve of genus 2 for which a bijective encoding is known.
{}
Type setting is an art, and as with most arts, it is extremely subjective. However, I will try to venture a few of my subjective opinions here as absolute truth (just because a bit of controversy is fun; feel free to disagree). Today we will look a bit at how interword spacing changes the feeling of a text, why margin protrusion makes things more even (and what margin protrusion is), as well as what font expansion is and why that helps as well. In particular, we will see how these things mean the difference between badly typeset text and nicely typeset text.

While most people I know who read this blog use LaTeX predominantly, most of us make acquaintance with the mainstream text tools, OpenOffice and Word. They create equally shoddy text (I guess it is their prerogative given that they are text processors rather than type setting programs, but it is still depressing). Still, looking at the output from one of these programs may serve as an (extremely good) motivating factor for considering LaTeX. The image below illustrates a standard text in OpenOffice, without hyphenation (with hyphenation the result did not change radically).

Looking at the interword spaces on line three, we see that they're radically different from the spaces in the last line, for instance. This gives a very uneven reading and is not particularly nice looking at any rate. If we furthermore look down along the right margin, we see that, even though we have selected a justified text formatting, the visual appearance of the text against the margin is not even. For instance, the ‘y’ and comma seem to be further to the left than the ‘d’ and ‘r’. Furthermore, the fourth last line seems to be shorter than the others when you glance down along the right margin. This detracts from the aesthetic qualities of the text as well.

We are not going to be able to crank anything much more useful out of OpenOffice, so let us switch to our favourite typesetting language, LaTeX, and see how it looks there. This gives us what we can see in the following figure (margins adjusted to fit better with the OpenOffice defaults – normally margins are larger in LaTeX for better aesthetic value).

This still gives us a somewhat jagged appearance in the right margin. What we want to do is to let some characters, in particular punctuation, fall into the margin, causing more uniform lines to appear (visually, not mathematically). This is called character protrusion, or margin kerning. Pdftex 1.40 and newer supports two levels of character protrusion: level 1, which does not take line breaking into account, and level 2, which does. The details of this are easier to see if we are writing a two-column page. Below we see level 1 in the left column and level 2 in the right column. The interword spacing is clearly superior at level 2. In particular, this is illustrated in line 4, where there's a much larger space in the left column after the full stop. Looking at line 15 in the right column, we do, however, see that spacing is not entirely even, as there is a larger space after the full stop, at the text starting ‘At first I could not believe my’. These ‘artifacts’ happen when a proper line break cannot be made (breaking ‘eyes’ would be unnatural), and thus the line is padded with a bit of extra space, here after a full stop, since we are more forgiving about extra space where there is already some form of pause.
Some of the lines are spaced too tightly in my opinion in the right column above, example the line reading ‘my very feelings changed to repulsion and ter-’ is pushed too tightly together, particular the change from ‘feelings’ to ‘changed’ seem too close. As a way to improve the spacing we can turn on font expansion. What this feature does is to use a more narrow version of a font when there is little space, and a wider version of a font when there is ‘too much’ space. Since this change is, for the most part, automatically generated from the original font, great caution should be applied as it may distort the original font shapes, however, with the defaults, all this happens to a maximum of a three per cent change from the original shape, making it almost invisible to the naked eye. The results below are, however, fairly compelling. The left column is the same as the right column above, and the right column is with font expansion turned on at full level. This is miles from the look of OpenOffice that we saw at the beginning of this post. Indeed, returning to default margin size in full width, we can see the text typeset below with full character protrusion and font expansion. These things are, of course, just one among many of microtypographic finnesses that can be controlled in later versions of pdftex. What we have not looked at here is, of course, how you can use these things. Fortunately there is a nice package that wraps up a lot of these issues: microtype. Using it is exceptionally easy: ```\usepackage{microtype} ``` This will enable both character protrusion and font expansion. For more advanced manipulative changes, you can refer to microtype's documentation. (There is even a way to make it look like it was created in Word or OpenOffice!)
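If you want to steer the two features individually, the keys below are all documented microtype package options; the specific numbers are illustrative choices of mine, not recommendations from this post:

```
% A minimal sketch of a preamble with explicit microtype settings.
\documentclass{article}
\usepackage[
  protrusion=true, % character protrusion (margin kerning)
  expansion=true,  % font expansion
  stretch=20,      % allow at most 2 per cent font stretching
  shrink=20        % allow at most 2 per cent font shrinking
]{microtype}
\begin{document}
A paragraph of text long enough to wrap will now be typeset with
margin kerning and font expansion enabled.
\end{document}
```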
{}
# Of the following, which is most nearly equal to √10?

Of the following, which is most nearly equal to √10?

a) 3.3
b) 3.4
c) 3.5
d) 3.1
e) 3.2

[Reveal] Spoiler: OA

Math Expert (Bunuel): The correct answer is 3.2 (E), not D.

Manager: Just make quick trials. I would try 3.3 first: 3.3 × 3.3 = 990 + 99 (and put two decimals) = 10.89. Then 3.2 × 3.2 = 960 + 64 (and put two decimals) = 10.24, which is near 10. Then 3.1 × 3.1 = 930 + 31 (and put two decimals) = 9.61. Solution: 3.2.

Senior Manager: Take the square of 3.1, i.e. 3.1 × 3.1 = 9.61. The next number has to be closer to 10. You can check by taking the square of 3.3 and then zero in on 3.2.

Current Student: Bunuel wrote: "The correct answer is 3.2 (E), not D." Fixed.

Intern: The square root of 10 can be written as the square root of 5 times the square root of 2. The square root of 5 is 2.23 and the square root of 2 is 1.41; multiply these together and the answer you get is closest to 3.2.

Optimus Prep Instructor: Let us simply take the squares of all options. 3.1 × 3.1 = 9.61 and 3.2 × 3.2 = 10.24. Since the square has already crossed 10 (i.e. the square of √10), we need not go further and only need to decide between 3.1 and 3.2. Now, 10 − 9.61 = 0.39 and 10.24 − 10 = 0.24. Since 0.24 is the lesser of the two, 3.2 is closer. Hence (E).
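Outside the thread, a quick sanity check of the squaring strategy (my addition, not part of the original posts):

```python
import math

candidates = [3.3, 3.4, 3.5, 3.1, 3.2]
# The candidate whose square is closest to 10 is the one closest to sqrt(10).
best = min(candidates, key=lambda c: abs(c * c - 10))
print(best, math.sqrt(10))  # 3.2 3.1622776601683795
```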
{}
# fa.functional analysis – Eigenvalues of operator

In the question here the author asks for the eigenvalues of an operator $$A = \begin{pmatrix} x & -\partial_x \\ \partial_x & -x \end{pmatrix}.$$ Here I would like to ask if one can extend this idea to the operator $$A = \begin{pmatrix} x & -\partial_x + c \\ \partial_x + c & -x \end{pmatrix},$$ where $$c$$ is a real constant. It seems to me that this is a non-trivial change in the operator.
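The question is left open in this excerpt. As a purely exploratory sketch of my own (the grid size, domain, boundary treatment, and the value of c are all illustrative choices, and a naive central difference stands in for the derivative), one can discretize the block operator and inspect its spectrum numerically:

```python
import numpy as np

# Discretize A = [[x, -d/dx + c], [d/dx + c, -x]] on a uniform grid.
N = 400                          # number of grid points (illustrative)
x = np.linspace(-10.0, 10.0, N)
h = x[1] - x[0]

# Central-difference first derivative with implicit zero boundaries.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
X = np.diag(x)
I = np.eye(N)
c = 1.0                          # illustrative value of the real constant

A = np.block([[X, -D + c * I],
              [D + c * I, -X]])

# The discretized operator is real but not symmetric; use a general solver.
eigenvalues = np.linalg.eigvals(A)
print(np.sort_complex(eigenvalues)[:8])  # a few eigenvalues for inspection
```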
{}
# Tschirnhausen Cubic Catacaustic

A semicubical parabola is a curve defined parametrically as

$x = t^2$

$y = at^3$
{}
## Chemistry: The Central Science (14th Edition), Chapter 18, Problem 18.37 (ISBN 9780134414232)

Problem 18.37: What is the molarity of $$\mathrm{Na}^{+}$$ in a solution of NaCl whose salinity is 5.6 if the solution has a density of 1.03 g/mL?

Step-by-Step Solution: Step 1 of 5) As in the lead–acid battery, the solid reaction products adhere to the electrodes, which permits the electrode reactions to be reversed during charging. A single nicad voltaic cell has a voltage of 1.30 V. Nicad battery packs typically contain three or more cells in series to produce the higher voltages needed by most electronic devices. Although nickel–cadmium batteries have a number of attractive characteristics, the use of cadmium as the anode introduces significant limitations. Because cadmium is toxic, these batteries must be recycled. The toxicity of cadmium has led to a decline in their popularity from a peak annual production level of approximately 1.5 billion batteries in the early 2000s. Cadmium also has a relatively high density, which increases battery weight, an undesirable characteristic for use in portable devices and electric vehicles. These shortcomings have fueled the development of the nickel–metal hydride (NiMH) battery. The cathode reaction is the same as that for nickel–cadmium batteries, but the anode reaction is very different. The anode consists of a metal alloy, typically with AM5 stoichiometry, where A is lanthanum (La) or a mixture of metals from the lanthanide series, and M is mostly nickel alloyed with smaller amounts of other transition metals. On charging, water is reduced at the anode to form hydroxide ions and hydrogen atoms that are absorbed into the AM5 alloy. When the battery is operating (discharging), the hydrogen atoms are oxidized and the resulting H+ ions react with OH- ions to form H2O.
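The page's solution text digresses into batteries and never answers the stated problem, so here is a hedged worked calculation of my own, assuming the usual oceanographic convention that a salinity of 5.6 means 5.6 g of dissolved salt per kg of solution and treating all of it as NaCl (molar mass about 58.44 g/mol):

```python
# Molarity of Na+ from salinity and density (illustrative; see assumptions above).
salinity = 5.6            # g NaCl per kg of solution (assumed convention)
density = 1.03            # g of solution per mL
molar_mass_nacl = 58.44   # g/mol

grams_solution_per_L = density * 1000.0                       # 1030 g/L
grams_nacl_per_L = salinity * grams_solution_per_L / 1000.0   # ~5.77 g/L
molarity_nacl = grams_nacl_per_L / molar_mass_nacl
print(round(molarity_nacl, 3))  # ~0.099 M, and [Na+] = [NaCl] since NaCl fully dissociates
```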
{}
Delta Sigma Theta! The bright gleam of thy vision has lighted the world. This is a tribute to the 22 dynamic Founders of Delta Sigma Theta Sorority, Incorporated, founded January 13, 1913 at Howard University. The Delta's flower, the African violet, has a double meaning, one of which symbolizes the bond that the Deltas had with the Omegas, whose colors are none other than purple and gold: violets must be individually selected, for you cannot grab a handful without damaging some of them, and they cannot be roughly picked in a hurry.

Song fragments: "We pledge thee loyalty. A bond to our youth. Devoted to truth. Warms our hearts her bond to praise. Delta lights the flame and ever. Our Own! We rejoice in thee! With loyal hearts we gather to renew our vows of love; in the forceful bond of devoted trust, as our womanhood moves on. Strengthen in us, old and younger, as our ranks increase and grow; Delta's ideals, ever stronger, glorious in triumphant power."

Delta Chi (ΔΧ) is an international Greek letter collegiate social fraternity formed on October 13, 1890, at Cornell University, initially as a professional fraternity for law students. Building Delta Chi into a true national fraternity began during the spring of 1891: on April 14, 1891, John Francis Tucker, of New York University, went to Ithaca, earned the confidence and regard of the Cornell Chapter, and was initiated into Delta Chi that night. On April 30, 1922, Delta Chi became a general membership social fraternity, eliminating the requirement for men to be studying law and opening membership to all areas of study. The Delta Chi Fraternity International Headquarters: 3845 N Meridian St, Indianapolis, IN 46208; P: 463.207.7200. Chartered in 1972, the Embry-Riddle Chapter of the Delta Chi Fraternity is an indescribable bond of brotherhood with over 700 alumni and undergraduates. Kevin Costner and Ashton Kutcher share a special bond: they are Delta Chi brothers (Aug 22, 2016). One forum member adds: my chapter really only sings "The Bond Song"; "Delta Chi is not a weekend or once-a-year affair but a lifelong opportunity and privilege" (Albert Sullard Barnes).

Songs of the Delta Kappa Epsilon Fraternity: Issued at the Theta Chi Chapter, by Delta Kappa Epsilon, Union College (Schenectady, N.Y.), publication date 1863, publisher J. Munsell. A chapter roll fragment, reconstructed as a table (institution, chapter, founded, active):

| Institution | Chapter | Founded | Active |
| --- | --- | --- | --- |
| (truncated in source) | Beta Chi | 1868 | No |
| Cornell University | Delta Chi | 1870 | Colony |
| University of Chicago | Delta Delta | 1870 | Yes |
| Syracuse University | Phi Gamma | 1871 | Yes |
| Columbia University | Gamma Beta | 1874 | No |
| University of California Berkeley | Theta Zeta | 1876 | Yes, but operating unofficially outside of the university's IFC constraints |
| Trinity College | Alpha Chi | 1879 | No |

Phi Delta Theta (ΦΔΘ), commonly known as Phi Delt, is an international secret and social fraternity founded at Miami University in 1848 and headquartered in Oxford, Ohio. Phi Delta Theta, along with Beta Theta Pi and Sigma Chi, form the Miami Triad. The Phi Delta Theta (Indiana Delta) chapter at Franklin College hosted our annual Homecoming meeting and tailgate this past Saturday; brothers from many decades and divergent backgrounds participated in a wonderful meeting and tailgate, sharing thoughts, offering guidance, and lending support to our chapter and the path forward.

Phi Delta Chi, founded in 1883 at the University of Michigan, is one such organization; I joined PDC in 2011 and have enjoyed getting to know other pharmacists and students around the world through one of the original social networks. Kappa Delta Chi was founded in 1987 by four women at Texas Tech who shared a dream of creating an organization that united Hispanic women in order to promote leadership and instill good values; since then, Kappa Delta Chi has expanded across the country and grown in both the number of chapters and the diverse backgrounds of its people. Delta Chi Lambda provides its sisters with a lifetime of sisterhood and loyalty that will endure for years after she leaves the University; our sisterhood comprises a number of unique bonds shared between the different roles in our sorority, such as big sisters, pledge sisters (the class in which a sister rushes, pledges, and crosses), and pledge moms. The Purpose of Mu Phi Epsilon is the recognition of scholarship and musicianship and the development of a bond of friendship among its members; we strive to advance music in our area, promote musicianship, scholarship, and music education, and emphasize the importance of serving others through music. As the current President of the Delta Chi chapter of Phi Sigma Sigma at Shippensburg University, I am honored to welcome you to our website. My name is Manisha Kapoor and I have been a part of this organization for one year; I am so grateful to hold this position and to have the opportunity to lead my chapter throughout 2021.

Lyrics. Stephen Lynch, "Mixer at Delta Chi" (from The Craig Machine): "It's college time again, September's almost here. ... Hangin' with freshmen girls, frat party kegs of beer. We take some oxycontin, Dave Matthews gettin' high. I see a girl I'm wantin', Mixer at Delta Chi. At night, I dream and wonder of a girl I met one time. A coed fair, a smile so rare, it haunts me all the while. Let me t-t-tell you 'bout some friends I know / They're kinda crazy but you'll dig the show / They can party 'till the break of dawn / at Delta Chi you can't go wrong." "The Sweetheart of Sigma Chi", words by Byron D. Stokes, Albion 1913: "When the world goes wrong, as it's bound to do / And you've broken Dan Cupid's bow / And you long for the girl you used to love, the maid of the long ago / Why light your pipe, bid sorrow avaunt, / Blow the smoke from your altar of dreams." (See also "Animal House" by Stephen Bishop.)

The carbon-hydrogen bond (C-H bond) is a bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent bond, meaning that carbon shares its outer valence electrons with up to four hydrogens; this completes both of their outer shells, making them stable. Relating dipole moments to $$\Delta \chi$$ and bond lengths: in table 8.7.2 we look at the diatomic molecules between H and the halogens. We note that as you go down the periodic table the bond length increases, which by itself would result in a large dipole moment, but the dipole moment goes down because $$\Delta \chi$$ decreases to a greater extent. When the ligand atomic orbitals have $$\pi$$ symmetry (i.e. form a $$\pi$$ bond) ... Exercise: what is the value of Δχ (delta chi) for the C-H bond of alkanes? (Hint: table 1.14 in the McMurry textbook.) Exercise: the value of the ionization constant, Ka, for hypochlorous acid, HOCl, is 3.1 × 10^-8; (a) calculate the hydronium ion concentration of a 0.05 molar solution of HOCl, and (b) calculate the concentration of hydronium ion in a solution prepared by mixing equal volumes of 0.05 molar HOCl and 0.02 molar NaOCl.

To find the delta of a convertible bond you can apply the basic definition of the derivative, $$\lim_{h\to 0}\frac{P(S_0+h)-P(S_0-h)}{2h}.$$ However, because a lot of convertibles are callable and putable, in practice one evaluates the symmetric difference $$\frac{P(S_0+h)-P(S_0-h)}{2h}$$ at a small finite step, for instance $$h=0.0001$$.
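The Δχ exercise above stops before its answer. For completeness, a hedged computation of my own (using standard Pauling electronegativities, not the elided expert answer):

$$\Delta\chi_{\mathrm{C\text{-}H}} = |\chi_{\mathrm{C}} - \chi_{\mathrm{H}}| \approx |2.55 - 2.20| = 0.35,$$

small enough that the C-H bond is conventionally treated as nonpolar covalent.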
{}
{}
# Tag Info

### Digital Distortion effect algorithm

Thanks to the plot in Olli Niemitalo's answer I got convinced that the formula given in the book has a sign error. The non-linearity used for fuzz or distortion is always some type of smoothed ...

### Precise 5th and 7th harmonics of a sampled sine wave

This answer discusses the harmonic spectra of the quantized sequence in five cases: limit $f/f_s \to 0$, synchronous sampling of a cosine with rational $f/f_s$, synchronous sampling of a sinusoid of ...

### Digital modelling of circuits with diode (i.e. guitar distortion)

One possible tool is Wave Digital Filter analysis, which is a type of physical modeling that represents signals as travelling waves. It can also be extended to non-linear elements such as diodes. ...

### Can someone explain waveshaping to me?

In the audio domain, waveshaping is simply applying a memoryless nonlinear function to an input signal. $$y(t) = g\big( x(t) \big)$$ The waveshaping function, $g(x)$, is most often a continuous ...

### Looking for practical quantitative comparison metrics for scaled, delayed and warped signals (accepted answer)

I'm answering the question the way I understood it: how can one find a similarity measure which isn't sensitive to scaling and shifting. An approach could be borrowed from the Computer Vision world ...

### Redistributing Color in a RGB Image According to a Gaussian Distribution (accepted answer)

After you equalize the histogram you can think of your data as a stream of variables ${X}_{i}$ where $X \sim U \left[ 0, 1 \right]$. Now all you need is to transform samples of Uniform Random ...

### Identify the Type of Image Distortion (On Lena Image) (accepted answer)

Blur: if you want to reverse blur applied on an image (using convolution, namely linear spatial-invariant blur) you should use deconvolution which, as the name suggests, is the inverse operation of ...

### Digital Distortion effect algorithm

You can write the body of the function directly into Wolfram Alpha and it plots it. It looks like a waveshaper to me, and those can be used as you describe. But there was an error in the formula, see ...

### Algorithm(s) to mix audio signals without clipping

Lower the global volume. Impulse Tracker classically outputs channels at about 33% volume max by default. That seems to be both loud enough for music with few channels (4-channel Amiga MODs) and soft ...

### Camera Calibration using Single Input Image

Take a look at this paper: "Straight lines have to be straight" by Faugeras et al., http://link.springer.com/article/10.1007/PL00013269#page-1 It is straightforward to implement, but essentially ...

### What is done to minimize distortion due to the hold operation?

I agree with Jim Clay's answer, but I think it is important to point out two things. First of all, there are no phase distortions due to the hold operation, just a simple delay of half a sampling ...

### What is done to minimize distortion due to the hold operation?

What you are describing is the distortion introduced by an ideal digital-to-analog converter (DAC) in the analog domain. Two things are typically done to reduce this distortion: analog filtering ...

### Total Harmonic Distortion calculation and its origins (accepted answer)

You can create non-linear digital systems (an example would be a system that finds the absolute value of the input). You can also simulate an analog non-linear system using DSP. The easiest way is to ...

### Distances In a Single Image With Some Real References

Well, you can have an approximation. Because the forms in the image are not regular or completely planar, it is hard if not impossible to know the camera distortion. But if you know the size of the wheels ...

### How to estimate radial distortion from lens characteristics?

There is a lot of variability because distortion can be compensated for optically and lens designs differ. Some lenses are marketed as rectilinear, most not. You would be making the distortion worse ...

### How to analyze image quality? (accepted answer)

You are apparently in the context of no-reference, reference-free or blind image quality assessment. The topic is quite active, and I am not sure people have already a completely accepted framework ...

### Can IR emitter signal be distorted by curved glass housing around receiver? (accepted answer)

Short answer: no. Long answer: you can of course create shadowing that way, and that would disturb operation. And of course, a glass wall will refract infrared just as it refracts any other light. ...
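Since the excerpts above describe waveshaping as applying a memoryless nonlinearity y(t) = g(x(t)), here is a small illustrative sketch of my own (not from any of the quoted answers), using the common tanh soft clipper as g:

```python
import numpy as np

def waveshape(x, drive=4.0):
    """Memoryless waveshaper y = g(x) with g = normalized tanh soft clipping.
    'drive' (an illustrative parameter) controls how hard the signal is
    pushed into the nonlinearity; output stays within [-1, 1]."""
    return np.tanh(drive * x) / np.tanh(drive)

# Apply to one cycle of a sine: harmonics appear because g is nonlinear.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
clean = np.sin(2 * np.pi * t)
distorted = waveshape(clean)
```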
{}
# Infinite Series $\sum\limits_{k=1}^{\infty}\frac{1}{(mk^2-n)^2}$

How can we prove the following formula? $$\sum_{k=1}^{\infty}\frac{1}{(mk^2-n)^2}=\frac{-2m+\sqrt{mn}\pi\cot\left(\sqrt{\frac{n}{m}}\pi\right)+n\pi^2\csc^2\left(\sqrt{\frac{n}{m}}\pi\right)}{4mn^2}$$ What is the general method for finding sums of the form $\sum\limits_{k=1}^{\infty}\frac{1}{(mk^2-n)^\ell}, \ell\in\mathbb{N}$?

• I would first rewrite everything in terms of the quantity $x=\sqrt{n/m}$ and make precise the domain of $x$. Then perhaps transform twice the LHS into a sum over every $k$ in $\mathbb Z$. Then a click might occur... – Did Oct 3 '13 at 6:50

$$\frac{\sin z}{z} = \prod_{k=1}^{\infty}\left(1 - \frac{z^2}{k^2\pi^2}\right)$$ Taking the logarithm, substituting $z = \pi\sqrt{x}$, and differentiating with respect to $x$, we find $$\sum_{k=1}^{\infty} \frac{1}{k^2 - x} = -\frac{d}{dx} \left[ \sum_{k=1}^{\infty}\log\left(1 - \frac{x}{k^2}\right)\right] = -\frac{d}{dx} \left[ \log\left(\frac{\sin(\pi \sqrt{x})}{\pi \sqrt{x}}\right) \right]$$ Differentiating both sides with respect to $x$ another $\ell - 1$ times and then dividing by $(\ell-1)!$, we get in general: $$\sum_{k=1}^{\infty} \frac{1}{(k^2 - x)^\ell} = -\frac{1}{(\ell-1)!} \frac{d^{\ell}}{dx^{\ell}} \left[ \log\left(\frac{\sin(\pi \sqrt{x})}{\pi \sqrt{x}}\right) \right]$$ In the case $\ell = 2$, the RHS simplifies to $$-\frac{1}{2x^2} + \frac{\pi}{4x}\left( \frac{1}{\sqrt{x}}\cot(\pi\sqrt{x}) + \pi \csc(\pi\sqrt{x})^2 \right)$$ Substituting $x = \frac{n}{m}$ will give you the formula you have for $\ell = 2$. Formulas for other $\ell$ can be obtained by taking the corresponding number of derivatives.

• You have proven that $$\sum_{k=1}^{\infty} \frac{1}{k^2 - x} = -\frac{d}{dx} \left[ \log\left(\frac{\sin(\pi \sqrt{x})}{\pi \sqrt{x}}\right) \right]$$ Does there exist a similar formula for $\sum_{k=1}^{\infty} \frac{1}{k^3 - x}$? – user91500 Oct 3 '13 at 8:12
• @Artin I'm not aware of a formula like that for $\sum_{k=1}^{\infty} \frac{1}{k^3-x}$. One can probably cook one up by first performing a partial fraction decomposition of $\frac{1}{k^3-x}$ wrt $k$ and then using the infinite product expansion of the gamma function $\frac{1}{\Gamma(z)} = z e^{\gamma z}\prod_{k=1}^{\infty} (1+\frac{z}{k}) e^{-\frac{z}{k}}$. However, the end result will involve derivatives of the gamma function at complex points. It doesn't look very useful for practical purposes. – achille hui Oct 3 '13 at 12:45

This sum may be evaluated by considering the following contour integral in the complex plane: $$\oint_C dz \frac{\pi \cot{\pi z}}{(m z^2-n)^2}$$ where $C$ is a rectangular contour that encompasses the poles of the integrand in the complex plane, up to $z=\pm \left ( N +\frac12\right)$, where we consider the limit as $N \to\infty$. We note here that we assume that the ratio $n/m$ is not the square of an integer. Now, the contour integral is zero because the individual integrals along each piece of the contour cancel. On the other hand, the contour integral is equal to $i 2 \pi$ times the sum of the residues of the poles of the integrand. Working this out, we find that $$\sum_{k=-\infty}^{\infty} \frac{1}{(m k^2-n)^2} = -\sum_{\pm}\operatorname*{Res}_{z=\pm \sqrt{n/m}} \frac{\pi \cot{\pi z}}{(m z^2-n)^2}$$ Since the pole is a double pole, we have $$\sum_{\pm}\operatorname*{Res}_{z=\pm\sqrt{n/m}} \frac{\pi \cot{\pi z}}{(m z^2-n)^2} =\sum_{\pm} \frac{\pi}{m^2} \left [\frac{d}{dz} \frac{\cot{\pi z}}{(z\pm \sqrt{n/m})^2} \right ]_{z=\pm\sqrt{n/m}}$$ I assume that the reader can take the derivatives and do the subsequent algebra. I get for the sum $$-\frac{\pi}{m^2} \frac{m}{2 n} \left [\sqrt{\frac{m}{n}}\cot{\left(\pi \sqrt{\frac{n}{m}} \right)}+ \pi \csc^2{\left(\pi \sqrt{\frac{n}{m}} \right)}\right ]$$ So we now have $$\sum_{k=-\infty}^{\infty} \frac{1}{(m k^2-n)^2} = \frac{\pi}{2 m n} \left [\sqrt{\frac{m}{n}}\cot{\left(\pi \sqrt{\frac{n}{m}} \right)}+ \pi \csc^2{\left(\pi \sqrt{\frac{n}{m}} \right)}\right ]$$ We now exploit the evenness of the summand; the result is $$\sum_{k=1}^{\infty} \frac{1}{(m k^2-n)^2} = -\frac{1}{2 n^2} + \frac{\pi}{4 m n} \left [\sqrt{\frac{m}{n}}\cot{\left(\pi \sqrt{\frac{n}{m}} \right)}+ \pi \csc^2{\left(\pi \sqrt{\frac{n}{m}} \right)}\right ]$$ The result to be proven follows. For the general sum $$\sum_{k=1}^{\infty} \frac{1}{(m k^2-n)^{\ell}}$$ where $\ell \gt 2$ and $\ell \in \mathbb{Z}$, we take the same approach. The residue is an $\ell-1$ derivative of the integrand.

• You are very fast. – Felix Marin Oct 3 '13 at 7:20
• @FelixMarin: Thanks. I am very efficient at odd hours when I should be asleep. – Ron Gordon Oct 3 '13 at 7:21

$$\sum_{k = 1}^{\infty}{1 \over \left(mk^{2} - n\right)^{2}} = {1 \over m^{2}}\sum_{k = 1}^{\infty}{1 \over \left(k^{2} - \mu^{2}\right)^{2}} = {1 \over 2\mu m^{2}}{{\rm d} \over {\rm d}\mu} \sum_{k = 1}^{\infty}{1 \over k^{2} - \mu^{2}}\,, \qquad \mu^{2} \equiv {n \over m}$$ \begin{align} \sum_{k = 1}^{\infty}{1 \over k^{2} - \mu^{2}} &= \sum_{k = 1}^{\infty}{1 \over \left(k + \mu\right)\left(k - \mu\right)} \\[3mm]&= {1 \over \mu^{2}} + {\Psi\left(\mu\right) - \Psi\left(-\mu\right) \over \mu - \left(-\mu\right)} = {1 \over \mu^{2}} + {1 \over 2}\,{\Psi\left(\mu\right) - \Psi\left(-\mu\right) \over \mu} \\[3mm]&={1 \over 2\mu^{2}} - {\pi\cot\left(\pi\mu\right) \over 2\mu} \end{align} since $$\Psi\left(\mu\right) - \Psi\left(-\mu\right) = \Psi\left(\mu\right) - \Psi\left(\mu + 1\right) - \pi\cot\left(\pi\mu\right) = -\,{1 \over \mu} - \pi\cot\left(\pi\mu\right)$$ \begin{align} &\sum_{k = 1}^{\infty}{1 \over \left(mk^{2} - n\right)^{2}} \\[3mm] = &\ \bbox[15px,border:1px dotted navy]{\displaystyle\frac{\csc ^2\left( \pi\sqrt{\frac{n}{m}}\right) \left[\vphantom{\LARGE A}\pi m \sqrt{\frac{n}{m}} \sin \left(2 \pi\sqrt{\frac{n}{m}}\right)+2 m \cos \left(2 \pi\sqrt{\frac{n}{m}}\right)-2 m+2 \pi^{2} n\right]}{8 mn^{2}}} \end{align} ${\tt\mbox{I need additional information about}}$ $m$ ${\tt\mbox{and}}$ $n$.
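A quick numerical check of the closed form in the question (my addition; the sample values m = 3 and n = 2 are arbitrary) makes it easy to catch sign slips like the ones corrected above:

```python
import math

m, n = 3.0, 2.0
# Direct partial sum of the series.
partial = sum(1.0 / (m * k * k - n) ** 2 for k in range(1, 200001))

# Closed form from the question: (-2m + sqrt(mn)*pi*cot(pi*r) + n*pi^2*csc(pi*r)^2) / (4mn^2).
r = math.sqrt(n / m)
closed = (-2 * m
          + math.sqrt(m * n) * math.pi / math.tan(math.pi * r)
          + n * math.pi ** 2 / math.sin(math.pi * r) ** 2) / (4 * m * n ** 2)

print(partial, closed)  # the two values should agree to many digits
```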
{}
## Wednesday, February 15, 2006 ### Climate change on Saturn I removed information about a certain medal that was far too preliminary. These days, every child knows that there is a scientific consensus that every deviation of the climate from the constant function is caused by crimes of the humankind, especially by the evil capitalists, imperialists, and oil corporations. And the Earth is not the only planet that is affected by this fact. Global warming also occurs on Mars - all but eliminating the doubts that Martians live there. What about Saturn? NASA has just observed the largest ever thunderstorm on Saturn (news.google.com), apparently caused by climate change. It is conjectured that this climate change is also caused by the humans, especially by John Tolkien and by the Lord of the Rings. This hypothesis is supported by recent observations that the rings look completely different than 25 years ago (before Reagan's revolution), especially the D-ring.
{}
# Tag Info

11

We can assume WLOG that $\bar x=\bar p=0$ and $\hbar =1$. We don't assume that the wave-functions are normalised. Let $$\sigma_x\equiv \frac{\int \mathrm dx\; |x|\;|\psi(x)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ and $$\sigma_p\equiv \frac{\int \mathrm dp\; |p|\;|\tilde \psi(p)|^2}{\int\mathrm dx\; |\psi(x)|^2}$$ Using $\int\mathrm dp\ |p|\;\ldots$ ...

6

The temperature limit for laser cooling is not related to gravity but to the always-present momentum kick during absorption/emission of photons. Ultracold atom experiments typically use laser cooling at an initial stage and afterwards evaporative cooling is used to reach the lowest temperatures. In evaporative cooling the most energetic atoms are discarded ...

4

I went back to the derivation of the Heisenberg uncertainty principle and tried to modify it. Not sure if what I've come up with is worth anything, but you'll be the judge. The original derivation: let $\hat{A} = \hat{x} - \bar{x}$ and $\hat{B} = \hat{p} - \bar{p}$. Then the inner product of the state $| \phi\rangle = \left(\hat{A} + i \lambda \ldots\right)$ ...

4

This is a great example of how hard it is to popularize quantum mechanics. Greene's example is not quite right, because classically, the butterfly does have a definite position and momentum, at all times. We can also measure these values simultaneously to arbitrary accuracy, as your friend says. (As for your concern about exposure time, we could decrease ...

4

Summary: using the entropic uncertainty principle, one can show that $\mu_q\mu_p\geq\frac{\pi}{4e}$, where $\mu$ is the mean deviation. This corresponds to $F\geq\frac{\pi^2}{4e}=0.9077$ using the notations of AccidentalFourierTransform's answer. I don't think this bound is optimal, but didn't manage to find a better proof. To simplify the expressions, I'll assume $\hbar=1$, ...

3

It cannot be proven, because "wave-particle duality" is not a mathematical statement. It most definitely is not "logically true". Can you try to make it mathematical? A mathematical framework: the "complementarity principle" was introduced in order to better understand some features of quantum mechanics in the early days. The problem is that if you consider ...

2

I) In this answer we will consider the microscopic description of classical E&M only. The Lorentz force reads $$\tag{1} {\bf F}~:=~q({\bf E}+{\bf v}\times {\bf B})~=~\frac{\mathrm d}{\mathrm dt}\frac{\partial U}{\partial {\bf v}}- \frac{\partial U}{\partial {\bf r}}~=~-q\frac{\mathrm d{\bf A}}{\mathrm dt} - \frac{\partial U}{\partial {\bf r}},$$ ...

1

Models based on string theory have consistent quantization of gravity. Within these, there is theoretical work carried out for generalizing the uncertainty principle for quantum gravity. One example generalizes the usual space-momentum uncertainty (formula 15), and another examines an uncertainty in space-time (formula 2.2). It is ...

1

The point dipole is an approximation from classical physics; note that it also involves an infinite field strength in its center, where the field amplitude is not differentiable. I think such a source is not compatible with the common approach to quantum mechanics. If you take such a very small, subwavelength source, it is true that the evanescent near ...

1

The uncertainty principle never said that nothing can be measured simultaneously with accuracy. The uncertainty principle states that it is not possible to measure two canonically conjugate quantities at the same time with accuracy. For example, you cannot measure the x component of momentum $p_x$ and the x coordinate position simultaneously with accuracy. But the x ...

1

There is yet another solution (maybe more elementary)$^1$, with some components of the answers from Qmechanic and JoshPhysics (currently I'm taking my first QM course and I don't quite understand the solution of Qmechanic, and this answer complements JoshPhysics's answer). The solution uses the Heisenberg equations: the time evolution of an operator ...
{}
# A car moving with an initial speed v collides with a second stationary car that is 54.9 percent as massive. After the collision the first car moves in the same direction as before with a speed that is 39.7 percent of the original speed. Calculate the final speed of the second car. Give your answer in units of the initial speed (i.e. as a fraction of v).
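The excerpt stops before the solution, so here is a hedged worked sketch of my own using conservation of momentum, writing $m$ for the first car's mass:

$$m v = m\,(0.397\,v) + (0.549\,m)\,v_2 \quad\Longrightarrow\quad v_2 = \frac{1 - 0.397}{0.549}\,v \approx 1.098\,v.$$

The lighter target car ends up slightly above the incoming car's original speed, which momentum conservation permits when the struck car is less massive.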
{}
Use indicators to estimate the pH of solutions of various acid concentrations. This free log calculator solves for the unknown portions of a logarithmic expression using base e, 2, 10, or any other desired base. See the answer Determination of pH. The hydronium ion concentration in a sample of rainwater is found to be 1.7 × 10 −6 M at 25 °C. Calculate Concentration of Ions in Solution The hydroniumion concentration is 0.0025 M. Thus: pH = - log (0.0025) = - ( - 2.60) = 2.60. : react with each other) into hydronium and hydroxide ions in the following equilibrium: . Add strong acid to the distiled water. Acids, Bases, pH and pOH . Calculate the hydronium ion concentration and pH in a 0.037 M solution of sodium formate, NaHCO2. Kc=6.3 *10^-5 Question: The Concentration Of Hydronium Ion In An Acid Solution Is 0.05 M. Calculate The Concentration Of Hydroxide Ion In This Solution At 25°C. Calculate the hydronium ion concentration and the \mathrm{pH} of the solution that results when 20.0 \mathrm{mL} of 0.15 \mathrm{M} acetic acid, \mathrm{CH}_{3… The Study-to-Win Winning Ticket number has been announced! Question: Calculate The Hydronium Ion Concentration, (H,0*, For A Solution With A PH Of 3.55. There are several ways to define acids and bases, but pH and pOH refer to hydrogen ion concentration and hydroxide ion concentration, respectively. To calculate the pH of an aqueous solution you need to know the concentration of the hydronium ion in moles per liter. Calculatingthe Hydronium Ion Concentration from pH. b) Calculate the concentration of hydronium ion in a solution prepared by mixing equal volumes of .05 molar HOCl and .02 molar NaOCl. Understand the difference between strong and weak acids and calculate percent dissociation of a weak acid. Sodium cyanide is the salt of the weak acid HCN. 7.1 x 10-5 Mb. Chem 20. asked by sara on March 6, 2013 college chemistry Calculate the equilibrium concentration of H3O+ in the solution if the initial concentration of C6H5COOH is 6.0×10^-2. 2 H 2 O ⇌ OH − + H 3 O +. Top. Problem 65 For each of the following cases, decide whether the pH is less than $7,$ equal to $7,$ or greater than 7 See Figure 1 for useful information. Acetic Acid's Ka Is 1.70 X 10-5. Step 1: List the known values … Calculatingthe Hydronium Ion Concentration from pH. Problem: Calculate the hydronium ion concentration in an aqueous solution with a pH of 9.85 at 25°C.a. hydronium ion concentration, and hydronium ion concentrations such as 6.7 x 10^8 mol/L to a pH. And so, at this temperature, acidic solutions are those with hydronium ion molarities greater than 1.0 × × 10 −7 M and hydroxide ion molarities less than 1.0 × × 10 −7 M (corresponding to pH values less than 7.00 and pOH values greater than 7.00). [H0] = M. This problem has been solved! Calculate the hydronium ion concentration in an aqueous solution that contains 2.50 × 10S1U1P1-S1S1P0S1U1P16S1S1P0 M in hydroxide ion. 3] pH value: pOH value : Note: To calculate the pH of a solution you need to know the concentration of the hydronium ion in moles per liter. Concept: The connection to Hydronium ion and Hydroxide ion concentrations. Explain the common ion effect. concentration of hydrogen carbonate ion: 0.0035 mol/L; We can use the acid dissociation constant equation to calculate hydronium ion concentration and then use -log [H 3 O +] to calculate the pH of buffer. 0.1 M strong acid is diluted upto 0.01 M. Then calculate pH for two cases and compare two results. 
A neutral pH of 7 equates to a hydronium ion concentration of 10-7 M. A solution with a pH of 10.1 is basic, so it will have less hydronium ions than that. Top. Calculate the hydronium ion concentration and the hydroxide ion concentration in lime juice from its pH. Concentration data are not available for mining and construction. 0 with significant absorbance increases of the near-infrared absorption peak, which is gradually blue shifted to a maximum of 10 nm upon attaining a pH of 3. that would be the negative log base 10 of the hydronium ion concentration. Calculate the hydronium ion concentration and pH for a 0.015 M solution of sodium formate, NaHCO2. 63. The hydroxide concentration in the bleach solution is 5.01 x 10-4 M. This means that the hydronium concentration is equal to 2.00 x 10-11 M. Remember that pH is equal to the negative log of the hydronium ion concentration: pH = - log [ H 3 O +] = - log (2.00 x 10-11) 10−14 M2X10-12 M0.05 M14 M Concept: The pH Scale. O+ ion concentration of a solution with its pH value. But first, here is the balanced equation … A) 4.00 × 10S1U1P1-S1S1P0S1U1P17S1S1P0 M B) 4.00 × 10S1U1P1-S1S1P0S1U1P18S1S1P0 M C) 4.00 × 10S1U1P1-S1S1P0S1U1P19S1S1P0 M D) 5.00 × 10S1U1P1-S1S1P0S1U1P19S1S1P0 M Hello :) Ok - so I have a homework qn in which I'm asked to calculate hydronium ion concentration from pH The pH value is 2 (the substance is white … Can you help me with these problems? a) … Calculate the hydronium ion concentration and pH of the solution that results when 22.0mL of 0.15M CH3CO2H is mixed with 22.0mL of 0.15M NaOH. Calculate the pH of the solution. In pure water, there is an equal number of hydroxide and hydronium ions, … The concentration of hydronium ions determines a solution's pH.The concentration of hydroxide ions determines a solution's pOH.The molecules in pure water auto-dissociate (i.e. pH = 3.89 calculate concentration of hydronium ion and hydroxide ion in solution. For example, at a pH of zero the hydronium ion concentration is one molar, while at pH 14 the hydroxide ion concentration is one molar. Solution for Calculate the hydronium ion concentration for a solution with a pH of 5.028 using the correct number of significant figures. Calculate the concentrations of H3O +, OH-, HCN, and Na + in a solution prepared by dissolving 10.8 g of NaCN in enough water to make 5.00 × 10 2 mL of solution at 25 °C. Calculating_pHandpOH Solution. The hydronium ion concentration can be found from the pH by the reverse of the mathematical operation employed to find the pH. [H30*] = mol L-1 The concentration of hydroxide ion in a solution of a base in water is … Calculate Concentration of Ions in Solution The hydroniumion concentration is 0.0025 M. Thus: pH = - log (0.0025) = - (- 2.60) = 2.60. 4.2 x 10-10 MC. The pH of the resulting solution can be determined if the of the fluoride ion is known. The value of the ionization constant, Ka, for hypochlorous acid, HOCL, is 3.1x10^-8 a) Calculate the hydronium ion concentration of a .05 molar solution of HOCl. V ΟΙ ΑΣΦ ? The pH range does not have an upper nor lower bound, since as defined above, the pH is an indication of concentration of H +. You will see pH of 0.01 M is higher than 0.1 M solution. 5 (its pka) when the [salt] = [acid] (use your calculator to prove that the log of “1” equals zero). Each of the following solutions has a concentration of 0.1 mol/L. 5. 8.7 x 10-10 MD. When concentrated acid is diluted, hydronium ion concentration decreases which results a higher pH value. 
Calculate the hydronium ion concentration and the pH when $50.0 \mathrm{mL}$ of $0.40 \mathrm{M} \mathrm{NH}_{3}$ is mixed with $50.0 \mathrm{mL}$ of 0.40 M HCl. Thus, we need to take the negative log of the hydronium ion concentration to get its pH. Problem : Calculate the hydronium ion, H3O+, and hydroxide ion, OH-, … The hydrogen ion concentration in a solution, [H +], in mol L-1, can be calculated if the pH of the solution is known. pH = 1. pH = 4. pH = 8. pH = 12 The hydronium ion concentration can be Page 2/8 3. Calculate the hydronium ion concentration for each of the following solutions. Problem type: given ion concentration, find the pH. 6.5 x 10-5 … Calculating the pH and ion concentrations on a Calculator (TI-83 or similar) I. Hydrogen Ion Concentration Calculations Tutorial Key Concepts. 4. 1987 and Earlier Years. How to dilute strong acid. Example Question: Calculate The Hydronium Ion Concentration Of The Solution That Results When 75.0 ML Of 0.380 M CH3COOH Is Mixed With 73.3 ML Of 0.210 M CSOH. 20.0 g of sodium fluoride is dissolve in enough water to make 500.0 mL of solution. Express Your Answer To Three Significant Figures. Once we understand that strong acid dissociates 100 %, calculating its pH becomes less tedious. The of the fluoride ion is 1.4 × 10 −11 . To obtain the pH of a solution, you must compute the negative log of the hydrogen ion concentration H+ Step 1: Enter negative value (-) Step 2: Enter Log (LOG) Step 3: Enter ion concentration value -Log( value hydronium ion concentration i don’t understand these. The concentration of hydronium ion in a solution of an acid in water is greater than $$1.0 \times 10^{-7}\; M$$ at 25 °C. The "p" in pH and pOH stands for "negative logarithm of" and is used to make it … Expressed as a numerical value without units, the pH of a solution is best defined as the negative of the logarithm to the base 10 of the hydronium ion concentration. If we look at our answer, 7.943*10-11, we do indeed see that this number is way smaller than 10-7, so our answer does make sense. That is: pH = -log [H 3 O +] Now, let’s apply this understanding to calculate the pH of strong acid (HCl) in the following example. Compare the hydronium ion concentration and pH in each pair and explain why they are different. pH= -log(base 10) H+ (the parenthesis means the 10 is supposed to be subscript which is how you represent a base. pH is defined as the negative logarithm (to base 10) of the hydrogen ion concentration in mol L-1 pH = -log 10 [H +] Question: Calculate the concentration of the hydronium ion {eq}[H_3O^+] {/eq} for a solution with pH = 3.5. pH Scale: An important property of an aqueous solution is its relative acidity or basicity. Explain why they are different question: calculate the hydronium ion concentration, find the pH 0.01. Solution prepared by mixing equal volumes of.05 molar HOCl and.02 molar NaOCl 1.7 × 10 M. Reverse of the fluoride ion is 1.4 × 10 −11 and weak acids and calculate percent dissociation a. Negative log of the hydronium ion concentration decreases which results a higher pH value HOCl and molar. 25 °C estimate the pH of 0.01 M is higher than 0.1 M solution sodium! G of sodium fluoride is dissolve in enough water to make 500.0 mL solution! Acid HCN ( TI-83 or similar ) I 10S1U1P1-S1S1P0S1U1P16S1S1P0 M in hydroxide ion concentration and the hydroxide concentrations! Estimate the pH concept: the connection to hydronium ion and hydroxide ion.... 
Converting between pH and hydronium ion concentration: pH is defined as the negative log, base 10, of the hydronium ion concentration, pH = −log[H3O+], so a concentration can be converted to a pH and, conversely, the hydronium ion concentration can be found from the pH as [H3O+] = 10^−pH. Once you understand that a strong acid dissociates essentially 100%, calculating its pH becomes much less tedious. When a concentrated strong acid is diluted, the hydronium ion concentration decreases, which results in a higher pH value — comparing the two cases, the pH of a 0.01 M strong acid solution is higher than that of a 0.1 M solution. Hydronium and hydroxide ion concentrations are related through the autoionization of water, 2 H2O ⇌ OH− + H3O+ (the reverse of the neutralization reaction), so either can be calculated from the other.

Typical exercises of this type:

• Calculate the hydronium ion concentration, [H3O+], for a solution with a pH of 3.55.
• Calculate the hydronium ion concentration for a solution with a pH of 5.028, using the correct number of significant figures.
• The hydronium ion concentration in a sample of rainwater is found to be 1.7 × 10^−6 mol/L at 25 °C; calculate its pH and its hydroxide ion concentration.
• Convert a hydronium ion concentration such as 6.7 × 10^−8 mol/L to a pH, and find the pH of an aqueous solution that contains 2.50 × 10^−6 M hydroxide ion.
• Calculate the hydronium ion concentration in lime juice from its pH, and use indicators to estimate the pH of a solution.
• Calculate the hydronium ion concentration, [H3O+], for a 0.015 M solution of sodium formate, NaHCO2. (Sodium cyanide is the analogous salt of the weak acid HCN.)
• The base-ionization constant of the fluoride ion is 1.4 × 10^−11; 20.0 g of sodium fluoride is dissolved in enough water to make 500.0 mL of solution — calculate the pH.
• Calculate the pH of a solution prepared by mixing equal volumes of 0.05 M HOCl and 0.02 M NaOCl.
• Distinguish between strong and weak acids, calculate the percent dissociation of a weak acid, and compare the hydronium ion concentration and pH in each pair of solutions, explaining why they are different.
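As a worked instance of the conversion, using two of the values above:

$$[\mathrm{H_3O^+}] = 10^{-\mathrm{pH}} = 10^{-3.55} \approx 2.8 \times 10^{-4}\ \mathrm{mol/L},$$

and, in the other direction, for the rainwater sample:

$$\mathrm{pH} = -\log_{10}\!\left(1.7 \times 10^{-6}\right) \approx 5.77.$$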
{}
Lemma 97.27.8. In Situation 97.27.1 assume given a closed subset $Z \subset S$ such that

1. the inverse image of $Z$ in $X'$ is $T'$,
2. $U' \to S \setminus Z$ is a closed immersion,
3. $W \to S_{/Z}$ is a closed immersion.

Then there exists a solution $(f : X' \to X, T, a)$ and moreover $X \to S$ is a closed immersion.

Proof. Suppose we have a closed subscheme $X \subset S$ such that $X \cap (S \setminus Z) = U'$ and $X_{/Z} = W$. Then $X$ represents the functor $F$ (some details omitted) and hence is a solution. To find $X$ is clearly a local question on $S$. In this way we reduce to the case discussed in the next paragraph.

Assume $S = \mathop{\mathrm{Spec}}(A)$ is affine. Let $I \subset A$ be the radical ideal cutting out $Z$. Write $I = (f_1, \ldots , f_ r)$. By assumption we are given

1. the closed immersion $U' \to S \setminus Z$ determines ideals $J_ i \subset A[1/f_ i]$ such that $J_ i$ and $J_ j$ generate the same ideal in $A[1/f_ if_ j]$,
2. the closed immersion $W \to S_{/Z}$ is the map $\text{Spf}(A^\wedge /J') \to \text{Spf}(A^\wedge )$ for some ideal $J' \subset A^\wedge$ in the $I$-adic completion $A^\wedge$ of $A$.

To finish the proof we need to find an ideal $J \subset A$ such that $J_ i = J[1/f_ i]$ and $J' = JA^\wedge$. By More on Algebra, Proposition 15.89.15 it suffices to show that $J_ i$ and $J'$ generate the same ideal in $A^\wedge [1/f_ i]$ for all $i$.

Recall that $A' = H^0(X', \mathcal{O})$ is a finite $A$-algebra whose formation commutes with flat base change (Cohomology of Spaces, Lemmas 68.20.3 and 68.11.2). Denote $J'' = \mathop{\mathrm{Ker}}(A \to A')$ [1]. We have $J_ i = J''A[1/f_ i]$ as follows from base change to the spectrum of $A[1/f_ i]$. Observe that we have a commutative diagram

$\xymatrix{ X' \ar[d] & X'_{/T'} \times _{S_{/Z}} \text{Spf}(A^\wedge ) \ar[l] \ar[d] & X'_{/T'} \times _ W \text{Spf}(A^\wedge /J') \ar@{=}[l] \ar[d] \\ \mathop{\mathrm{Spec}}(A) & \text{Spf}(A^\wedge ) \ar[l] & \text{Spf}(A^\wedge /J') \ar[l] }$

The middle vertical arrow is the completion of the left vertical arrow along the obvious closed subsets. By the theorem on formal functions we have

$(A')^\wedge = \Gamma (X' \times _ S \mathop{\mathrm{Spec}}(A^\wedge ), \mathcal{O}) = \mathop{\mathrm{lim}}\nolimits H^0(X' \times _ S \mathop{\mathrm{Spec}}(A/I^ n), \mathcal{O})$

See Cohomology of Spaces, Theorem 68.22.5. From the diagram we conclude that $J'$ maps to zero in $(A')^\wedge$. Hence $J' \subset J'' A^\wedge$. Consider the arrows

$X'_{/T'} \to \text{Spf}(A^\wedge /J''A^\wedge ) \to \text{Spf}(A^\wedge /J') = W$

We know the composition $g$ is a formal modification (in particular rig-étale and rig-surjective) and the second arrow is a closed immersion (in particular an adic monomorphism). Hence $X'_{/T'} \to \text{Spf}(A^\wedge /J''A^\wedge )$ is rig-surjective and rig-étale, see Algebraization of Formal Spaces, Lemmas 87.21.5 and 87.20.8. Applying Algebraization of Formal Spaces, Lemmas 87.21.14 and 87.21.6 we conclude that $\text{Spf}(A^\wedge /J''A^\wedge ) \to W$ is rig-étale and rig-surjective. By Algebraization of Formal Spaces, Lemma 87.21.13 we conclude that $I^ n J'' A^\wedge \subset J'$ for some $n > 0$. It follows that $J'' A^\wedge [1/f_ i] = J' A^\wedge [1/f_ i]$ and we deduce $J_ i A^\wedge [1/f_ i] = J' A^\wedge [1/f_ i]$ for all $i$ as desired. $\square$

[1] Contrary to what the reader may expect, the ideals $J$ and $J''$ won't agree in general.
{}
# Can we recover the adjacency matrix of a graph from its square?

For the sake of this question, a graph here has a finite number of vertices, with undirected simple edges, no loops, and no weights or labels on edges or vertices. Therefore, its adjacency matrix $$A=[a_{i,j}]$$ is a symmetric matrix with entries in $$\{0,1\}$$, and $$0$$ on the main diagonal. Assuming that $$A^2$$ is known, can we recover $$A$$?

The question comes from my self-study of graph theory and I have no idea how to solve it. What I know:

1. Let's label the vertices of the graph as $$v_1$$, $$v_2$$, $$\dots$$, $$v_n$$ so that $$a_{i,j}=1$$ if there is an edge between $$v_i$$ and $$v_j$$, and $$0$$ otherwise. Then, if $$a_{i,j}^{(k)}$$ is the entry of $$A^k$$ at row $$i$$ and column $$j$$, $$a_{i,j}^{(k)}$$ is the number of walks of length $$k$$ between $$v_i$$ and $$v_j$$. Therefore, what is known in the problem are the numbers $$a_{i,j}^{(2)}$$ of common neighbors between $$v_i$$ and $$v_j$$.
2. If the $$i$$th row of $$A^2$$ is composed of $$0$$s, then $$v_i$$ is an isolated point of the graph. (If $$v_i$$ is not isolated, then there is a walk $$v_i-v_j-v_i$$, so $$a_{i,i}^{(2)}\ge 1$$.) So we can simplify the problem by assuming that the graph has no isolated point.
3. $$a_{i,i}^{(2)}=1$$ is equivalent to $$\deg(v_i)=1$$ (end vertex).
4. There are results about square roots of positive semi-definite matrices; $$A^2$$ is positive semi-definite, but $$A$$ itself generally is not, so $$A$$ need not be the positive semi-definite square root of $$A^2$$ — and that square root would not have its entries in $$\{0,1\}$$ anyway.
5. For $$n=3$$, by looking at the squares of the adjacency matrices of the few possible graphs with $$3$$ vertices, the answer is yes.

In my question, I assume that it is known that $$S=A^2$$ is the square of the adjacency matrix of a graph, and I wonder if there is another graph whose adjacency matrix also has $$S$$ for its square. So a reformulation of the question is:

Do there exist two adjacency matrices $$A$$ and $$B$$ (as defined in the first paragraph) such that $$A^2=B^2$$?

• Your last bolded question may need disambiguation - "two adjacency matrices for non-equivalent graphs". – Nij Jan 26 at 8:08
• If there is actually non-uniqueness of squares of graphs, my hunch is that counterexamples can be found among bipartite graphs. Jan 26 at 8:11
• Section 3 in this article seems to address your question. This MSE post might also be relevant. – Sil Jan 26 at 10:54
• I deleted my answer, it was stupid Jan 26 at 13:00

The answer is no - you cannot recover the graph. You state that what is known is the number of common neighbours between any two vertices. There is a nice class of graphs, Strongly regular graphs, which are essentially defined according to how many common neighbours any pair of vertices have. In particular, a graph is an $$(n,k,\lambda, \mu)$$ strongly regular graph if it has $$n$$ vertices, is $$k$$-regular, every pair of adjacent vertices has $$\lambda$$ common neighbours, and every pair of nonadjacent vertices has $$\mu$$ common neighbours. Per the wikipedia article linked (or just by direct calculation with the above parameters), the adjacency matrix $$A$$ of an $$(n,k,\lambda, \mu)$$ strongly regular graph satisfies: $$A^2 = kI + \lambda A + \mu(J-I-A)$$ where $$J$$ is an all-$$1$$s matrix and $$I$$ is the $$n\times n$$ identity matrix. Thus, any two strongly regular graphs with the same parameters, in which $$\lambda = \mu$$, have the same squared adjacency matrix!
To see this, suppose $$\lambda = \mu$$, and re-arrange the above equation to get: $$A^2 = kI + \lambda A + \lambda(J-I-A)$$ $$A^2 = kI + \lambda A + \lambda(J-I) - \lambda A$$ $$A^2 = kI + \lambda(J-I)$$

And indeed there are two strongly regular graphs, with the same parameters, that have $$\mu = \lambda$$. They are the (4,4) Rook Graph and the Shrikhande Graph, both of which are $$(16,6,2,2)$$ strongly regular graphs.

Taking $$n=4$$: any graph consisting of two disjoint edges gives $$A^2=I$$. There are $$3$$ such graphs on a given vertex set of size $$4$$ (but of course they are all isomorphic). For an example of two non-isomorphic graphs giving the same square: let $$n=6$$, and consider either a $$6$$-cycle graph, or a graph consisting of two disjoint $$3$$-cycles.

• I don't understand your example for n = 6: a disjoint union of two 3-cycles has adjacency matrix which is a direct sum (of two 3x3 matrices), while a 6-cycle does not, and the squares of their adjacency matrices don't even have the same zero pattern Jan 26 at 22:07
• @math54321 If the 6-cycle looks like $1-2-3-4-5-6-1$, then organise the rows and columns so that the first 3 rows / columns correspond to vertices 1,3,5 and the last 3 correspond to 2,4,6. In particular, letting $X$ and $Y$ be the two squared adjacency matrices, there's a permutation matrix $P$ (that performs the swap mentioned in sentence 1 of this comment) such that $X$ and $Y$ are similar via $P$: so $X = PYP^{-1}$. Jan 27 at 7:20
• @math54321 The two graphs could be $\{1-2-3-4-5-6-1\}$ and $\{1-3-5-1 \,, \,2-4-6-2\}$. Isn't it the case that for any $i$ and $j$, the number of length-$2$ walks from $i$ to $j$ is the same in both graphs? Jan 27 at 7:21
• Ah, I see — @BrandonduPreez and I responded almost simultaneously :) Jan 27 at 7:22
• Thanks @BrandonduPreez and James for the clarification. I think it's important to specify the exact graph (labeled with edges/vertices) when talking about the adjacency matrix, not just the isomorphism type of the graph (which only gives the adjacency matrix up to conjugation by permutation matrices) Jan 27 at 17:29
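The $$n=6$$ example is easy to verify by machine. A minimal sketch in Python/NumPy (not from the thread; vertices relabeled $$0$$–$$5$$, with the cycle $$0-1-2-3-4-5-0$$ and the triangles $$\{0,2,4\}$$ and $$\{1,3,5\}$$ — the relabeling described in the comments above):

```python
import numpy as np

def adjacency(n, edges):
    # Build the 0/1 symmetric adjacency matrix of a simple undirected graph.
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

# 6-cycle 0-1-2-3-4-5-0
C6 = adjacency(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)])
# Two disjoint triangles 0-2-4-0 and 1-3-5-1
T2 = adjacency(6, [(0, 2), (2, 4), (4, 0), (1, 3), (3, 5), (5, 1)])

assert not np.array_equal(C6, T2)        # different (non-isomorphic) graphs...
assert np.array_equal(C6 @ C6, T2 @ T2)  # ...with the same squared adjacency matrix
```

With this particular labeling the two squares come out literally equal, with no extra permutation needed.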
{}
# Isospin question

1. Jun 23, 2007

### malawi_glenn

This is from Krane, p. 389:

The neutron and the proton are treated as two different states of a single particle, the nucleon. The nucleon is assigned a fictitious spin vector, called isospin. The nucleon has isospin number t = ½; a proton has $m_{t} = 1/2$ and a neutron has $m_{t} = - 1/2$. The isospin obeys the same rules as angular momentum vectors. The third component of a nucleus's isospin is:

$$T_{3} = \frac{1}{2} (Z-N)$$

For any value of $T_{3}$, the total isospin $T$ can take any value at least as great as $|T_{3}|$. We consider as an example the two-nucleon system, which can have T of 0 or 1. There are thus four possible 3-axis components: $T_{3} = 1$ (two protons); $T_{3} = - 1$ (two neutrons), and two combinations with $T_{3} = 0$ (one neutron and one proton). The first two states must have T = 1, while the latter two can have T = 0 and T = 1.

- - -

Now this is really confusing me. I think that, according to the statement "For any value of $T_{3}$, the total isospin $T$ can take any value at least as great as $|T_{3}|$", the two proton system can therefore have T = 0 or 1. And the same thing regarding the 2N system. And also, how can there be two combinations of P-N that give $T_{3} = 0$? And why isn't just T = 0 allowed? Should I try to think "backwards": given a value of T, what values of $T_{3}$ can I have, and what combinations of N and P do they represent?

Cheers

2. Jun 23, 2007

### Meir Achuz

"The two proton system can therefore have T = 0 or 1. And the same thing regarding the 2N system."

The two-p system has T_3=+1, so T cannot equal zero. The two-n system has T_3=-1, so T cannot equal zero.

Last edited: Jun 23, 2007

3. Jun 23, 2007

### Meir Achuz

T_3=0 can come from the two different combinations (pn+np)/sqrt{2} for T=1, and (pn-np)/sqrt{2} for T=0.

4. Jun 24, 2007

### malawi_glenn

okay, I think I got it now. Thanx a lot dude! =)
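For reference, the combinations in posts 2 and 3 are just the standard triplet/singlet states obtained by coupling two isospin-1/2 nucleons, exactly as for two ordinary spin-1/2 angular momenta:

$$|T{=}1, T_{3}{=}{+}1\rangle = |pp\rangle, \qquad |T{=}1, T_{3}{=}0\rangle = \tfrac{1}{\sqrt{2}}\big(|pn\rangle + |np\rangle\big), \qquad |T{=}1, T_{3}{=}{-}1\rangle = |nn\rangle,$$

$$|T{=}0, T_{3}{=}0\rangle = \tfrac{1}{\sqrt{2}}\big(|pn\rangle - |np\rangle\big).$$

So $T_{3}=\pm 1$ occurs only in the $T=1$ triplet (which is why the pp and nn systems cannot have T = 0), while $T_{3}=0$ appears twice: once in the triplet and once in the $T=0$ singlet.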
{}
# Tag Info

17 You need nothing more than your understanding of $$\int_{-\infty}^\infty f(x)\delta(x-a)dx=f(a)$$ Just treat one of the delta functions as $f(x)\equiv\delta(x-\lambda)$ in your problem. So it would be something like this: $$\int\delta(x-\lambda)\delta(x-\lambda)dx=\int f(x)\delta(x-\lambda)dx=f(\lambda)=\delta(\lambda-\lambda)$$ So there you go.

15 Well, the Dirac delta function $\delta(x)$ is a distribution, also known as a generalized function. One can e.g. represent $\delta(x)$ as a limit of a rectangular peak with unit area, width $\epsilon$, and height $1/\epsilon$; i.e. $$\tag{1} \delta(x) ~=~ \lim_{\epsilon\to 0^+}\delta_{\epsilon}(x),$$ $$\tag{2} ...$$

15 You're totally right. The Wikipedia definition of renormalization is obsolete, i.e. it refers to the interpretation of these techniques that was believed prior to the discovery of the Renormalization Group. While the computational essence (and results) of the techniques hasn't changed much in some cases, their modern interpretation is very different ...

13 I have written a pedagogical article about renormalization and the renormalization group and I would be happy to have your opinion about it. It is published in American Journal of Physics. You'll find it also on ArXiv: A hint of renormalization. B. Delamotte

13 These are all good questions. Perhaps I can answer a few of them at once. The equation describing the violation of current conservation is $$\partial^\mu j_\mu=f(g)\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$$ where $f(g)$ is some function of the coupling constant. It is not possible to write any other candidate answer by dimensional analysis and ...

13 Lack of convergence does not mean there is nothing mathematically rigorous one can extract from perturbation theory. One can use Borel summation. In fact, Borel summability of perturbation theory has been proved for some QFTs: by Eckmann-Magnen-Seneor for $P(\phi)$ theories in 2d, see this article. by Magnen-Seneor for $\phi^4$ in 3d, see this article. by ...

11 Renormalization is absolutely not just a technical trick, it's a key part of understanding effective field theory and why we can compute anything without knowing the final microscopic theory of all physics. One good online source that explains a nice physical example is Joe Polchinski's "Effective field theory and the Fermi surface" (and you can also look up ...

10 I think you raise a very important question, but I think you make it sound more trivial than it is. The point is: a lot of physicists would like to have alternative expansions, but it is very difficult to come up with one. If you've got some suggestions, don't hesitate to put them forward. The standard expansion starts from the time evolution operator ...

10 Nowadays there exists a more fundamental geometrical interpretation of anomalies which I think can resolve some of your questions. The basic source of anomalies is that classically and quantum-mechanically we are working with realizations and representations of the symmetry group, i.e., given a group of symmetries through a standard realization on some space ...

8 Are you essentially asking about non-perturbative approaches to QFT? Lattice QCD (based on Monte-Carlo sampling) and various strong-coupling/weak-coupling dualities (like AdS/CFT) come to mind as most prominent examples. This is more of a hint than a real answer, of course.

8 This is a very interesting question which is usually overlooked.
First of all, saying that "large scale physics is decoupled from the small-scale" is somewhat misleading, as indeed the renormalization group (RG) [in the Wilsonian sense, the only one I will use] tells us how to relate the small scale to the large scale! But usually what people mean by that ...

7 I'm not sure about it, but my understanding of this is that the $\int_\Lambda^\infty$ term is essentially constant between different processes, because whatever physics happens at high energies should not be affected by the low-energy processes we are able to control. That way, we can meaningfully calculate differences between two integrals, and the ...

7 It took the insights of Wilson and Kadanoff to answer this question. Universality. It doesn't matter all that much what the precise details in the ultraviolet are. Under the renormalization group, only a small number of parameters are either relevant or marginal. All the rest are irrelevant. As long as you take care to match up the relevant and marginal ...

7 Could this imply there is a formulation where that value comes naturally... This sentence implicitly assumes that analytic continuation is "unnatural". But the truth is the other way around: analytic continuation is one of the most natural mathematical procedures in physics. On the contrary, it's functions – especially functions of momenta or energy – ...

7 Suppose I want to show $$\int \delta(x-a)\delta(x-b) dx = \delta(a-b)$$ To do that, I need to show $$\int g(a)\int \delta(x-a)\delta(x-b) dx\, da = \int g(a)\delta(a-b)\, da$$ for any function $g(a)$. $$LHS = \int \int g(a) \delta(x-a)da \ \delta(x-b) dx =\int g(x)\delta(x-b)dx =g(b)$$ But the RHS clearly $= g(b)$ too. The result follows putting ...

6 We set the permittivity $\varepsilon=1$ from now on. Let us first rephrase the question a bit. Instead of starting from the potential $$\Phi=\frac{1}{r} \qquad \mathrm{and} \qquad \Phi=\frac{1}{2r^2}, \qquad r\neq 0,$$ respectively, let us assume that the electric field has been given as $$\vec{E}=\frac{\vec{r}}{r^3} \qquad \mathrm{and} ...$$

6 In some sense, I understand this question of yours as regarding more "mathematically precise" approaches to QFT: at the end of the day, your question implies "non-perturbative definitions of QFT" in one form or another — after all, if you can use some other tool, why not turn the problem around and define your theory based on how you can use such a tool? ...

6 By definition, a renormalizable quantum field theory (RQFT) has the following two properties (only the first one matters in regard to this question): i) Existence of a formal continuum limit: The ultraviolet cut-off may be taken to infinity, the physical quantities are independent of the regularization procedure (and of the renormalization subtraction ...

6 Check this review on the cosmological constant problem for a nice discussion of what hierarchies look like in different regularizations. Here is the rough idea. In cutoff or Pauli-Villars regularization the counterterms are sensitive to the cutoff scale(s) $\Lambda$. But there is no such scale when using dim.reg. (only the renormalization point $\mu$, which ...

5 Assuming that you can actually do the dimensionally regularized integrals, then yes, you can integrate iteratively. Normally, two-loop and higher-loop integrals are quite difficult and you need good tricks like turning them into differential equations or using Mellin-Barnes parametrization of the propagators (or even just Feynman or Schwinger ...
5 The answer is no: the generalized function (= distribution) $$\lim_{\epsilon\rightarrow 0} \frac{\epsilon^2}{\epsilon^2+t^2}=0 \qquad\mathrm{a.e.}$$ is almost everywhere (a.e.) equal to the ordinary zero function $0:\mathbb{R}\to\mathbb{R}$ that sends $t\mapsto 0$. Proof. Consider a test function $f\in C^{\infty}_c(\mathbb{R})$, i.e., an infinitely ...

5 It looks like a delta-function. However, because $\epsilon / (\epsilon^2+t^2)$ — you should omit one $\epsilon$ in the numerator, to get the right integral equal to one, by the way — decreases too slowly as $|t|\to\infty$, as $1/t^2$, it will only work as the Dirac delta distribution for test functions that decrease at infinity or at least increase slower ...

5 Your question seems to come in two parts. First, it seems the use of functional determinants to represent (formally) the result of taking a path integral may be new to you. Is that so? If it is, then I would suggest reading about this idea first. It's broader than zeta regularization. Once you're comfortable with that, then I would suggest thinking ...

5 Although I do not know if a general proof exists, I think that the Casimir effect of a renormalizable quantum field theory should be completely understood by means of a theory of renormalization on manifolds with boundary. The key feature is that one cannot, in general, neglect the renormalization of the coupling constants in the boundary terms. Using this ...

5 In the case of equal masses, there is an analytical solution (of this diagram, known by the name "the two loop sunrise diagram" for the obvious reason) in terms of hypergeometric functions given by O.V. Tarasov (equation 4.32). There is also a numerical method given by: Pozzorini and Remiddi. In the case of unequal masses Müller-Stach, Weinzierl and Zayadeh ...

5 The answer to both questions is that string theory is completely free of any ultraviolet divergences. It follows that its effective low-energy descriptions such as the Standard Model automatically come with a regulator. An important "technicality" to notice is that the formulae for amplitudes in string theory are not given by the same integrals over loop ...

5 The quantity $k_{NL}$ in their paper isn't a "cutoff scale"; it is a scale at which the nonlinearities (therefore NL) in some quantities become substantial. There is no cutoff scale $\Lambda$ in dimensional regularization; it's one of the main features and virtues of the dimensional regularization. Instead, $\Lambda\to \infty$ is replaced by the $\epsilon = ...$

Only top voted, non community-wiki answers of a minimum length are eligible
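As a small numerical illustration of the nascent-delta discussion above — a sketch assuming NumPy and SciPy are available, with the Lorentzian normalized as $\frac{1}{\pi}\frac{\epsilon}{\epsilon^2+t^2}$ so that it integrates to one for every $\epsilon>0$:

```python
import numpy as np
from scipy.integrate import quad

def lorentzian(t, eps):
    # Nascent delta function: unit integral for every eps > 0,
    # concentrating at t = 0 as eps -> 0+.
    return eps / (np.pi * (eps**2 + t**2))

f = lambda t: np.exp(-t**2)  # smooth, rapidly decreasing test function; f(0) = 1

for eps in (1.0, 0.1, 0.01):
    val, _ = quad(lambda t: lorentzian(t, eps) * f(t), -np.inf, np.inf)
    print(f"eps = {eps}: integral = {val:.6f}")  # tends to f(0) = 1 as eps -> 0
```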
{}
# The 229-thorium isomer: doorway to the road from the atomic clock to the nuclear clock

@article{Thirolf2019The2I,
  title={The 229-thorium isomer: doorway to the road from the atomic clock to the nuclear clock},
  author={Peter G. Thirolf and Benedict Seiferle and Lars C. von der Wense},
  journal={Journal of Physics B: Atomic, Molecular and Optical Physics},
  year={2019},
  volume={52}
}

• Published 25 September 2019 • Physics • Journal of Physics B: Atomic, Molecular and Optical Physics

The elusive ‘thorium isomer’, i.e. the isomeric first excited state of 229Th, has puzzled the nuclear and fundamental physics communities for more than 40 years. With an exceptionally low excitation energy and a long lifetime it represents the only known candidate so far for an ultra-precise nuclear frequency standard (‘nuclear clock’), potentially able to outperform even today’s best timekeepers based on atomic shell transitions, and promising a variety of intriguing applications. This…

16 Citations

• Physics, Nature Reviews Physics • 2021 — The 229Th nucleus has an isomeric state at an energy of about 8 eV above the ground state, several orders of magnitude lower than typical nuclear excitation energies. This has inspired the …
• Physics, Quantum Science and Technology • 2021 — The low-energy, long-lived isomer in 229Th, first studied in the 1970s as an exotic feature in nuclear physics, continues to inspire a multidisciplinary community of physicists. It has stimulated …
• Physics, Chemistry, Atoms • 2022 — The first nuclear excited state in 229Th possesses the lowest excitation energy of all currently known nuclear levels. The energy difference between the ground- and first-excited (isomeric) state …
• Physics • 2020 — The proposal for the development of a nuclear optical clock has triggered a multitude of experimental and theoretical studies. In particular the prediction of an unprecedented systematic frequency …
• Physics, Physical Review Letters • 2020 — A measurement of the low-energy (0–60 keV) γ-ray spectrum produced in the α decay of 233U using a dedicated cryogenic magnetic microcalorimeter and four complementary evaluation schemes is presented.
• Physics, Annalen der Physik • 2020 — The Gamma Factory initiative proposes to develop novel research tools at CERN by producing, accelerating, and storing highly relativistic, partially stripped ion beams in the SPS and LHC storage …
• Physics, Annalen der Physik • 2020 — Precision spectroscopy of atoms and molecules allows one to search for and to put stringent limits on the variation of fundamental constants. These experiments are typically interpreted in terms of …
• Physics, Physical Review D • 2021 — We present a class of models in which the coupling of the photon to an ultralight scalar field that has a time-dependent vacuum expectation value causes the fine structure constant to oscillate in …

## References (showing 1–10 of 158)

• Physics, Nature • 2016 — The direct detection of this nuclear state of 229mTh is reported, which is further confirmation of the existence of the isomer and lays the foundation for precise studies of its decay parameters.
• Physics, Nature • 2018 — The laser spectroscopic investigation of the hyperfine structure of the doubly charged 229mTh ion and the determination of the fundamental nuclear properties of the isomer, namely, its magnetic dipole and electric quadrupole moments, as well as its nuclear charge radius are presented.
• Physics • 2016 — We investigate a potential candidate for a future optical clock: the nucleus of the isotope 229Th.
Over the past 40 years of research, various experiments have found evidence for the existence of an …

• Physics • 2010 — The discovery of the low-lying isomeric nuclear state of 229Th at 7.6 ± 0.5 eV above the ground state opened a new field of research as a bridge between nuclear and atomic physics. Since indirect …
• Physics • 2015 — Pressing problems concerning the optical pumping of the 7.6-eV 229mTh nuclear isomer, which is a candidate for a new nuclear optical reference point for frequencies, are examined. Physics behind the …
• Physics • 2012 — The $^{229}\mathrm{Th}$ nucleus possesses the lowest-energy nuclear isomeric state. Two widely accepted indirect measurements of the transition energy place it within reach of existing laser …
• Physics, Physical Review Letters • 2010 — It is argued that the 229Th optical nuclear transition may be driven inside a host crystal with a high transition Q to allow for the construction of a solid-state optical frequency reference that surpasses the short-term stability of current optical clocks, as well as improved limits on the variability of fundamental constants.
• Physics, Hyperfine Interactions • 2019 — Abstract: 229Th is the only nucleus currently under investigation for the development of a nuclear optical clock (NOC) of ultra-high accuracy. The insufficient knowledge of the first nuclear excitation …
• Physics, Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment • 2014
{}
# Eastern Manufacturing is involved with several situations that possibly involve contingencies

P 13-6 Various contingencies

Eastern Manufacturing is involved with several situations that possibly involve contingencies. Each is described below. Eastern’s fiscal year ends December 31, and the 2011 financial statements are issued on March 15, 2012.

a. Eastern is involved in a lawsuit resulting from a dispute with a supplier. On February 3, 2012, judgment was rendered against Eastern in the amount of $107 million plus interest, a total of $122 million. Eastern plans to appeal the judgment and is unable to predict its outcome, though it is not expected to have a material adverse effect on the company.

b. In November 2010, the State of Nevada filed suit against Eastern, seeking civil penalties and injunctive relief for violations of environmental laws regulating hazardous waste. On January 12, 2012, Eastern reached a settlement with state authorities. Based upon discussions with legal counsel, the Company feels it is probable that $140 million will be required to cover the cost of violations. Eastern believes that the ultimate settlement of this claim will not have a material adverse effect on the company.

c. Eastern is the plaintiff in a $200 million lawsuit filed against United Steel for damages due to lost profits from rejected contracts and for unpaid receivables. The case is in final appeal and legal counsel advises that it is probable that Eastern will prevail and be awarded $100 million.

d. At March 15, 2012, the Environmental Protection Agency is in the process of investigating possible soil contamination at various locations of several companies, including Eastern. The EPA has not yet proposed a penalty assessment. Management feels an assessment is reasonably possible, and if an assessment is made, an unfavorable settlement of up to $33 million is reasonably possible.

Required:
1. Determine the appropriate means of reporting each situation. Explain your reasoning.
2. Prepare any necessary journal entries and disclosure notes.

Situation #1

Solution: This represents a loss contingency. On the basis of the information, Eastern is not able to predict the outcome of the appeal. Additionally, the result is not anticipated to have a material adverse effect on the company. Therefore, Eastern would not record a $122 million loss; it would, however, provide a disclosure...
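By way of contrast with situation a: in situation b the loss is both probable and reasonably estimable, so under the usual loss-contingency guidance it would be accrued rather than merely disclosed. A sketch of the accrual entry (the $140 million figure comes from the problem; the account titles are illustrative, not the textbook's printed solution):

    Loss — litigation ........................ 140,000,000
        Liability — litigation ........................ 140,000,000

Situation c, by the same guidance, is a gain contingency and would not be accrued, and situation d, being only reasonably possible, would call for disclosure alone.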
{}
# fox flux, three years later

Post Syndicated from Eevee original https://eev.ee/dev/2020/08/04/fox-flux-three-years-later/

I’m working on a video game! Like, a serious one.

## The past

I wrote the original game (very slightly NSFW) for my own “horny” game jam, Strawberry Jam (more likely to be NSFW), way back in February 2017. You play as Lexy, my shameless Floraverse self-insert, who owns an enchanted collar that (among other things) makes her basically indestructible and allows her to easily transform into… whatever, given some kind of sensible trigger. And then you do some puzzle-platforming to collect “strawberry hearts” and gain access to new areas, much of which (surprise!) involves getting turned into things.

For example, this chain-link fence blocks you:

But if you let that green blob in the grass turn you into slime, you can walk right through it.

There are also spikes, which you get stuck on if you land on them… but slime can walk right through them, glass can stand on top of them, and stone outright destroys them. And so on. As a jam game, it’s not very expansive, but many of the puzzle elements interact differently with many of the handful of Lexy variants, which provided enough potential to make eight levels.

### Post-jam

The jam game was rough, but I really liked the concept and wanted to expand on it. I spent a good chunk of the summer of 2017 on it, but it was a struggle. I was still fairly new to pretty much every aspect of actually creating a game — I’d only been drawing for two years, I’d sometimes hit big gaps in the design with no idea how to fill them, and I wasn’t yet entirely comfortable with complex physics or shaders.

The art in particular was a huge problem; it took me a long time to produce sprites that I was only passably happy with. My spouse Ash is an artist, and we’ve made several games together where they produced all the art, but this was my idea and I was determined to draw it myself.

Then 2018 hit, which was a whole entire mess, and I didn’t really touch fox flux at all for over a year. I made a couple of other games with Ash, some finished, some not, and kept drawing intermittently.

I returned to fox flux for the middle of 2019, and decided… I’m not sure what I decided, exactly. I guess I’d gotten better at all the things that had been difficult for me before, so I set about trying to improve every aspect of the game at once.

• I realized the (many, many) improved sprites I’d drawn in 2017 were not actually very good, and drew a new Lexy design from scratch that absolutely blew me away… which meant throwing away all the existing art.
• I’d come up with a few new things for Lexy to turn into, each of which altered her behavior pretty significantly, and her code was becoming a spaghetti disaster. So I spent some time completely refactoring actors into bags of components, which I was unsure about until very recently and which ended up breaking pretty much every single object in the game, sometimes in subtle ways.
• I decided to add water, which unraveled into a whole pile of decisions and problems.
• I tried to make consistent or interesting physics for pushing things (e.g. wooden crates), and that became a nightmare. I easily spent weeks on this, trapped in a cycle of finding some edge case that couldn’t be fixed without considerably expanding what I was simulating, struggling to do that expansion while keeping all the basic stuff working, and then finding a new and different edge case.
Did I mention that I tried to do all of these things at the same time, while also trying to nail down the design of a game that’s naturally prone to a combinatoric explosion of interactions?

At a certain point it just felt hopeless. I’d poured easily over a year into this game, and all I had to show for it was a jumbled pile of stuff that didn’t work, strewn about a couple test maps that didn’t even contain any puzzles.

## The present

I don’t know what happened, exactly. I’d given up on the heavily-simulated push physics last year, at least, so that wasn’t so much of a concern any more. But I still had a mess. I’d long since written git status off as unusable.

Until this past month, when I sat down and just started powering through the mess. One by one, I fixed the serious breakages that the component refactor had caused. I dedicated a day or two just to figuring out water physics, put a little more thought into it, and ended up with something that looks and plays quite nicely. I finished redrawing basic Lexy, and even added frames I hadn’t had before.

I think the difference was… fear. I’d previously hesitated so much, both in the art and the gnarlier code. It was such a struggle to get something working at all that changing it in any way was terrifying — what if I broke it and couldn’t even get it back to how it’d been? I don’t know how to describe exactly how this felt, and I also don’t know how to explain what changed. It was like a switch flipped.

I think it started when I drew new dirt tiles, and it didn’t even take that long, and I loved them. I’ve always had a hard time drawing terrain, and for once I just sat down and did it and it came out well and it looked like mine, like my style, which was a thing I hadn’t even really grasped I have before. After that I just cranked out a mountain of new sprite art, faster and better than anything I’d done before. Like I’d been accumulating XP over the past few years and just now decided to spend it all on levelling up.

Over the past six weeks, I have:

• Redesigned the terrain
• Vastly improved the palette
• Completely finished redrawing Lexy
• Redesigned the HUD
• Mocked up a new dialogue layout
• Drawn a new font
• Drawn and implemented new consistent level entrances
• Animated a treasure chest opening cutscene
• Animated getting a key
• Added a completely new tally at the end of a level
• Added transitions for entering and leaving levels
• Redrawn the old gecko as a much more visible bananalizard
• Animated the hearts and several other pickups
• Ported the original forest levels to use all the new stuff
• I don’t even know, there has been just so much

Just look at the style evolution! God damn. Here’s that same level from above:

A lot of the last few weeks went towards level transitions, which previously… kind of worked. They were always a hasty jam hack that I never liked; there was a quick screen fade when going through a door, there was barely any notion of being “in a level” vs not, and the game even counted the fucking hearts in a level on the fly the first time you entered it. It was all very silly. But now (please pardon the occasional frame drops from my screen recorder):

I finally feel like I’m making some real progress. I finally feel like this could be something I take seriously, that it could be a real game, something more than half an hour long. At some point it just became an absolute joy to look at and run around in.

### The idea

The basic concept is the same, but I want to add some structure to it.
The jam game was four single-room levels you could tackle in any order without much guidance, then another set of the same. Which is fine, but doesn’t give me much wiggle room in the design.

In the full game, levels will contain not just hearts, but also a treasure (a la Wario Land 3), some amount of candy (usable at the shop to buy things of some description), and an explicit exit. The overworld will function a bit more like a world map, and though you’ll still need to collect N hearts to get to the next zone, there may sometimes be obstacles that can only be overcome by finding the right treasure in a level.

I also intend to give Lexy some active abilities, for example this blown kiss (recorded with older art) that can toggle pink objects between two states:

I even have a plot in mind! The jam game had only a teeny tiny one.

## The future

Ash is currently busy with their own game, so I think this is gonna be The Thing I Do for a while. To that end, I’m in the middle of setting up some infrastructure.

Also, I recently created a secret Discord channel on the same server, where I intend to do planning and design work that I’m not ready to make public yet! Spoilers will abound, but if you’re interested and okay with that, you can get in by pledging at least $4 on Patreon and letting me know to give you the role. (I don’t use Patreon’s native Discord integration because it does rude things like forcibly rejoin you to the server even if you manually leave.)

### Specific priorities

I’d like to finish porting the old levels over to new artwork, the new level infrastructure, etc. It’d make for a nice little Patreon demo or something, it gives me a milestone with pretty clear goals, and it’ll leave me with at least a small palette of puzzle elements that I know work correctly.

I’d like to write about what I’m doing sometimes on this dang blog. I’ve found that structured writing is really, really, really hard when my head is a mess, and it has been extremely a mess for the last two and a half years (sorry), but jotting down what I’m already doing should be much easier than the more elaborate posts I’ve written, which need research and tooling and whatnot.

I have a good handful of puzzle elements — some of which even work — and a bunch of ideas for more, but I haven’t actually tried building levels since I made the original game! That’s kind of the important part, so I’d love to do some of it now that the dust is finally settling.

I still have some design decisions to make, though they’re getting trickier since I’ve already decided all the easy stuff. But I’ll save that for the generous folks who give me four dollars, I guess.

### The elephant in the room

So. As I mentioned at the beginning, this game was originally made for a “horny” game jam. Given that it’s mostly platforming, you might be wondering why that is.

I already feel like I’m crossing the streams somehow by even mentioning this on this blog, so I’ll try very hard not to get TMI here. I have a foot in “TF” (transformation) kink circles, and one thing that’s always struck me about that subculture is how much of it is completely non-sexual. You can find no end of artwork of, say, someone being turned into one of those inflatable pooltoys — where both the artist and the audience are obviously having a good time with it — yet with no hint of sexual elements whatsoever. It’s a form of sexuality that doesn’t need to be sexual at all.

I started Strawberry Jam because I wanted to see some adult games that were more creative with their gameplay.
Much of the genre consists of otherwise regular games that occasionally show you some explicit artwork, and while that’s a perfectly fine way to design a game, I felt that the medium surely had more potential. It turns out that a non-sexual fantasy kink works wonders as a gameplay element; rather than just giving you a picture, the game takes a concept and has you experience it yourself, even figure out by experimentation how it’s altered the way you interact with the world.

This puts me in a slightly awkward position. I do, genuinely and platonically, love these kinds of gameplay themes! I adore changes in how you perceive or interact with a world — the dark world in Metroid Prime 2, the time reversal in Braid, the “dimension” swapping in Quantum Conundrum, etc. I think this is a great concept that anyone can have a good time with, and I feel like this game is a love letter to the Wario Land series.

At the same time, I do also appreciate the kink inspiration. Even Lexy’s collar was originally conceived as a gimmick I could use for drawing adult artwork. The jam game contains a lot of suggestive dialogue, since Lexy herself also appreciates the kink aspect. And that was a lot of fun to write, and I’m sure it enhanced the experience for other folks with similar leanings.

But this is such a good concept that I want it to be playable as just a regular puzzle-platformer as well. I think it would have fairly broad appeal, and I don’t want to hamstring myself by totally fucking weirding people out when it dawns on them that “oh the dev is kinda Into This huh”. And yet I don’t want to completely sterilize the game, either, because… well, ultimately, it’s my game and I like the suggestive parts.

This is a tough line to draw, and I’m not yet sure how to do it. I’ve considered just making alternative dialogue that you can opt into when you start the game, but given that Lexy already speaks differently depending on what form she’s in, I have no idea how feasible that is.

I don’t know how to gauge this. I’ve always been up to my armpits in the side of the internet that just posts porn and talks about sexuality casually, whereas I’m dimly aware that most people see sexuality as this completely distinct part of life that you hide in a small box, far away from the eyes of polite society. But maybe I’m overestimating that? Does anyone actually care if the protagonist of a game comments “hey this is hot” about something weird but innocuous?

Or maybe that’s exactly where the line is. I remember Nier: Automata, a game that is all too happy to show off the protagonist’s immaculately-rendered ass, which is clearly meant for the enjoyment of both the creator and the players. But nobody comments on it within the game, which makes it seem incidental, somehow. I can’t explain why that is, and it feels slightly dishonest to me.

Am I overthinking this? If you’re not involved in any kind of kink circles and played the original jam game, I’m curious to hear how it read to you. Was it at all uncomfortable, like perhaps the game was expecting you to heavily empathize with a feeling you don’t share at all? Or does putting that feeling on a character, rather than aiming it at the human player, make it something you can easily shrug off? The full game will have more stuff going on, so there should be lots more dialogue that isn’t solely about Lexy’s feelings, if that helps.

Hm, I thought I would have more to say here!
I have a lot of ideas, but only a handful of them are implemented yet, and I guess it’s hard to show what a game will be like before most of it works. I hope this is enough to whet some appetites, at least! I haven’t been excited like this about anything in far too long.

# Old CSS, new CSS

Post Syndicated from Eevee original https://eev.ee/blog/2020/02/01/old-css-new-css/

I first got into web design/development in the late 90s, and only as I type this sentence do I realize how long ago that was. And boy, it was horrendous. I mean, being able to make stuff and put it online where other people could see it was pretty slick, but we did not have very much to work with.

I’ve been taking for granted that most folks doing web stuff still remember those days, or at least the decade that followed, but I think that assumption might be a wee bit out of date. Some time ago I encountered a tweet marvelling at what we had to do without border-radius. I still remember waiting with bated breath for it to be unprefixed! But then, I suspect I also know a number of folks who only tried web design in the old days, and assume nothing about it has changed since.

I’m here to tell all of you to get off my lawn. Here’s a history of CSS and web design, as I remember it.

(Please bear in mind that this post is a fine blend of memory and research, so I can’t guarantee any of it is actually correct, especially the bits about causality. You may want to try the W3C’s history of CSS, which is considerably shorter, has a better chance of matching reality, and contains significantly less swearing.)

(Also, this would benefit greatly from more diagrams, but it took long enough just to write.)

## The very early days

In the beginning, there was no CSS. This was very bad.

My favorite artifact of this era is the book that taught me HTML: O’Reilly’s HTML: The Definitive Guide, published in several editions in the mid to late 90s. The book was indeed about HTML, with no mention of CSS at all. I don’t have it any more and can’t readily find screenshots online, but here’s a page from HTML & XHTML: The Definitive Guide, which seems to be a revision (I’ll get to XHTML later) with much the same style.

Here, then, is the cutting-edge web design advice of 199X:

Clearly delineate headers and footers with horizontal rules.

No, that’s not a border-top. That’s an <hr>. The page title is almost certainly centered with, well, <center>.

The page uses the default text color, background, and font. Partly because this is a guidebook introducing concepts one at a time; partly because the book was printed in black and white; and partly, I’m sure, because it reflected the reality that coloring anything was a huge pain in the ass.

Let’s say you wanted all your <h1>s to be red, across your entire site. You had to do this:

    <H1><FONT COLOR=red>...</FONT></H1>

every single goddamn time. Hope you never decide to switch to blue!

Oh, and everyone wrote HTML tags in all caps. I don’t remember why we all thought that was a good idea. Maybe this was before syntax highlighting in text editors was very common (read: I was 12 and using Notepad), and uppercase tags were easier to distinguish from body text.

Keeping your site consistent was thus something of a nightmare. One solution was to simply not style anything, which a lot of folks did. This was nice, in some ways, since browsers let you change those defaults, so you could read the Web how you wanted.
A clever alternate solution, which I remember showing up in a lot of Geocities sites, was to simply give every page a completely different visual style. Fuck it, right? Just do whatever you want on each new page. That trend was quite possibly the height of web design.

Damn, I miss those days. There were no big walled gardens, no Twitter or Facebook. If you had anything to say to anyone, you had to put together your own website. It was amazing. No one knew what they were doing; I’d wager that the vast majority of web designers at the time were clueless hobbyist tweens (like me) all copying from other clueless hobbyist tweens. Half the Web was fan portals about Animorphs, with inexplicable splash pages warning you that their site worked best if you had a 640×480 screen. (Any 12-year-old with insufficient resolution should, presumably, buy a new monitor with their allowance.) Everyone who was cool and in the know used Internet Explorer 3, the most advanced browser, but some losers still used Netscape Navigator so you had to put a “Best in IE” animated GIF on your splash page too.

This was also the era of “web-safe colors” — a palette of 216 colors, where every channel was one of 00, 33, 66, 99, cc, or ff — which existed because some people still had 256-color monitors! The things we take for granted now, like 24-bit color.

In fact, a lot of stuff we take for granted now was still a strange and untamed problem space. You want to have the same navigation on every page on your website? Okay, no problem: copy/paste it onto each page. When you update it, be sure to update every page — but most likely you’ll forget some, and your whole site will become an archaeological dig into itself, with strata of increasingly bitrotted pages.

Much easier was to use frames, meaning the browser window is split into a grid and a different page loads in each section… but then people would get confused if they landed on an individual page without the frames, as was common when coming from a search engine like AltaVista. (I can’t believe I’m explaining frames, but no one has used them since like 2001. You know iframes? The “i” is for inline, to distinguish them from regular frames, which take up the entire viewport.)

PHP wasn’t even called that yet, and nobody had heard of it. This weird “Perl” and “CGI” thing was really strange and hard to understand, and it didn’t work on your own computer, and the errors were hard to find and diagnose, and anyway Geocities didn’t support it.

If you were really lucky and smart, your web host used Apache, and you could use its “server side include” syntax to do something like this:

    <HTML>
    <HEAD>
    <TITLE>My Website</TITLE>
    </HEAD>
    <BODY>
    <!--#include virtual="/header.html" -->
    <!--#include virtual="/nav.html" -->

    (actual page content goes here)

    <!--#include virtual="/footer.html" -->
    </BODY>
    </HTML>

Mwah. Beautiful. Apache would see the special comments, paste in the contents of the referenced files, and you’re off to the races. The downside was that when you wanted to work on your site, all the navigation was missing, because you were doing it on your regular computer without Apache, and your web browser thought those were just regular HTML comments. It was impossible to install Apache, of course, because you had a computer, not a server.

Sadly, that’s all gone now — paved over by homogenous timelines where anything that wasn’t made this week is old news and long forgotten. The web was supposed to make information eternal, but instead, so much of it became ephemeral. I miss when virtually everyone I knew had their own website. Having a Twitter and an Instagram as your entire online presence is a poor substitute.
So, let’s look at the Space Jam website.

## Case study: Space Jam

Space Jam, if you’re not aware, is the greatest movie of all time. It documents Bugs Bunny’s extremely short-lived basketball career, playing alongside a live action Michael Jordan to save the planet from aliens for some reason. It was followed by a series of very successful and critically acclaimed RPG spinoffs, which describe the fallout of the Space Jam and are extremely canon.

And we are truly blessed, for 24 years after it came out, its website is STILL UP. We can explore the pinnacle of 1996 web design, right here, right now.

First, notice that every page of this site is a static page. Not only that, but it’s a static page ending in .htm rather than .html, because people on Windows versions before 95 were still beholden to 8.3 filenames. Not sure why that mattered in a URL, as if you were going to run Windows 3.11 on a Web server, but there you go.

The CSS for the splash page looks like this:

Haha, just kidding! What the fuck is CSS? Space Jam predates it by a month. (I do see a single line in the page source, but I’m pretty sure that was added much later to style some legally obligatory policy links.)

Notice the extremely precise positioning of these navigation links. This feat was accomplished the same way everyone did everything in 1996: with tables.

In fact, tables have one functional advantage over CSS for layout, which was very important in those days, and not only because CSS didn’t exist yet. You see, you can ctrl-click to select a table cell and even drag around to select all of them, which shows you how the cells are arranged and functions as a super retro layout debugger. This was great because the first meaningful web debug tool, Firebug, wasn’t released until 2006 — a whole decade later!

The markup for this table is overflowing with inexplicable blank lines, but with those removed, it looks like this:

    [some 38 lines of nested <TABLE>, <TR>, and <TD ALIGN=...> markup omitted]

That’s the first two rows, including the logo. You get the idea. Everything is laid out with align and valign on table cells; rowspans and colspans are used frequently; and there are some <br>s thrown in for good measure, to adjust vertical positioning by one line-height at a time.

Other fantastic artifacts to be found on this page include this header, which contains Apache SSI syntax! This must’ve quietly broken when the site was moved over the years; it’s currently hosted on Amazon S3. You know, Amazon? The bookstore?

    [a seven-line header snippet containing <!--#include--> directives omitted]

Okay, let’s check out jam central. I’ve used my browser dev tools to reduce the viewport to 640×480 for the authentic experience (although I’d also have lost some vertical space to the title bar, taskbar, and five or six IE toolbars).

Note the frames: the logo in the top left leads back to the landing page, cleverly saving screen space on repeating all that navigation, and the top right is a fucking ad banner which has been blocked like seven different ways. All three parts are separate pages.

Note also the utterly unreadable red text on a textured background, one of the truest hallmarks of 90s web design. “Why not put that block of text on an easier-to-read background?” you might ask. You imbecile. How would I possibly do that? Only the <body> has a background attribute! I could use a table, but tables only support solid background colors, and that would look so boring!

But wait, what is this new navigation widget? How are the links all misaligned like that?
Is this yet another table? Well, no, although filling a table with chunks of a sliced-up image wasn’t uncommon. But this is an imagemap, a long-forgotten HTML feature. I’ll just show you the source:

    <IMG SRC="..." BORDER=0 USEMAP="#map">
    <MAP NAME="map">
    <AREA SHAPE="rect" COORDS="..." HREF="...">
    <AREA SHAPE="rect" COORDS="..." HREF="...">
    ...
    </MAP>

I assume this is more or less self-explanatory. The usemap attribute attaches an image map, which is defined as a bunch of clickable areas, beautifully encoded as inscrutable lists of coordinates or something. And this stuff still works! This is in HTML! You could use it right now! Probably don’t though!

### The thumbnail grid

Let’s look at one more random page here. I’d love to see some photos from the film. (Wait, photos? Did we not know what “screenshots” were yet?)

Another frameset, but arranged differently this time.

    [the one-line <BODY BACKGROUND=... BGCOLOR=...> tag omitted]

They did an important thing here: since they specified a background image (which is opaque), they also specified a background color. Without it, if the background image failed to load, the page would be white text on the default white background, which would be unreadable.

(That’s still an important thing to keep in mind. I feel like modern web development tends to assume everything will load, or sees loading as some sort of inconvenience to be worked around, but not everyone is working on a wired connection in a San Francisco office twenty feet away from a backbone.)

But about the page itself. Thumbnail grids are a classic problem of web design, dating all the way back to… er… well, at least as far back as Space Jam. The main issue is that you want to put things next to each other, whereas HTML defaults to stacking everything in one big column. You could put all the thumbnails inline, in a single row of (wrapping) text, but that wouldn’t be much of a grid — and you usually want each one to have some sort of caption.

Space Jam’s approach was to use the only real tool anyone had in their toolbox at the time: a table. It’s structured like this:

    <TABLE>
    <TR> <TD><A HREF="..."><IMG SRC="..."></A> <TD>... <TD>...
    ...
    </TABLE>

A 3×3 grid of thumbnails, left to the browser to arrange. (The last image, on a row of its own, isn’t actually part of the table.) This can’t scale to fit your screen, but everyone’s screen was pretty tiny back then, so that was slightly less of a concern. They didn’t add captions here, but since every thumbnail is wrapped in a table cell, they easily could have.

This was the state of the art in thumbnail grids in 1996. We’ll be revisiting this little UI puzzle a few times; you can see live examples (and view source for sample markup) on a separate page.

But let’s take a moment to appreciate the size of the “full-size, full-color, internet-quality” movie screenshots on my current monitor. Hey, though, they’re less than 16 KB! That’ll only take nine seconds to download.

(I’m reminded of the problem of embedded video, which wasn’t solved until HTML5’s <video> tag some years later. Until then, you had to use a binary plugin, and all of them were terrible.)

(Oh, by the way: images within links, by default, have a link-colored border around them. Image links are usually self-evident, so this was largely annoying, and until CSS you had to disable them for every single image with <img border=0>.)

## The regular early days

So that’s where we started, and it sucked. If you wanted any kind of consistency on more than a handful of pages, your options were very limited, and they were pretty much limited to a whole lot of copying and pasting. The Space Jam website opted to, for the most part, not bother at all — as did many others.

Then CSS came along, it was a fucking miracle.
All that inline repetition went away. You want all your top-level headings to be a particular color? No problem:

```css
H1 {
    color: #FF0000;
}
```

Bam! You’re done. No matter how many <h1>s you have in your document, every single one of them will be eye-searing red, and you never have to think about it again. Even better, you can put that snippet in its own file and have that questionable aesthetic choice applied to every page of your whole site with almost no effort!

The same applied to your gorgeous tiling background image, the colors of your links, and the size of the font in your tables. (Just remember to wrap the contents of your <style> tags in HTML comments, or old browsers without CSS support will display them as text.)

You weren’t limited to styling tags en masse, either. CSS introduced “classes” and “IDs” to target only specifically flagged elements. A selector like P.important would only affect <P CLASS="important">, and #header would only affect <H1 ID="header">. (The difference is that IDs are intended to be unique in a document, whereas classes can be used any number of times.) With these tools, you could effectively invent your own tags, giving you a customized version of HTML specific to your website!

This was a huge leap forward, but at the time, no one (probably?) was thinking of using CSS to actually arrange the page. When CSS 1 was made a recommendation in December ‘96, it barely addressed layout at all. All it did was divorce HTML’s existing abilities from the tags they were attached to. We had font colors and backgrounds because <FONT COLOR> and <BODY BACKGROUND> existed. The only feature that even remotely affected where things were positioned was the float property, the equivalent of <IMG ALIGN>, which pulled an image to the side and let text flow around it, like in a magazine article. Hardly whelming.

This wasn’t too surprising. HTML hadn’t had any real answers for layout besides tables, and the table properties were too complicated to generalize in CSS and too entangled with the tag structure, so there was nothing for CSS 1 to inherit. It merely reduced the repetition in what we were already doing with e.g. <FONT> tags — making Web design less tedious, less error-prone, less full of noise, and much more maintainable. A pretty good step forward, and everyone happily adopted it for that, but tables remained king for arranging your page.

That was okay, though; all your blog really needed was a header and a sidebar, which tables could do just fine, and it wasn’t like you were going to overhaul that basic structure very often. Copy/pasting a few lines of <TABLE BORDER=0> and <TD WIDTH=20%> wasn’t nearly as big a deal.

For some span of time — I want to say a couple years, but time passes more slowly when you’re a kid — this was the state of the Web. Tables for layout, CSS for… well, style. Colors, sizes, bold, underline. There was even this sick trick you could do with links, where a single a:hover { text-decoration: underline } rule made them underlined only while the mouse was pointing at them. Tubular!

(Fun fact: HTML email is still basically trapped in this era.)

(And here’s about where I come in, at the ripe old age of 11, with no clue what I was doing and mostly learning from other 11-year-olds who also had no clue what they were doing. But that was fine; a huge chunk of the Web was 11-year-olds making their own websites, and it was beautiful. Why would you go to a business website when you could take a peek into the very specific hobbies of someone on the other side of the planet?)
## The dark times

A year and a half later, in mid ‘98, we were gifted CSS 2. (I love the background on this page, by the way.) This was a modest upgrade that addressed a few deficiencies in various areas, but most interesting was the addition of a couple positioning primitives: the position property, which let you place elements at precise coordinates, and the inline-block display mode, which let you stick an element in a line of text like you could do with images.

Such tantalizing fruit, just out of reach! Using position seemed nice, but pixel-perfect positioning was at serious odds with the fluid design of HTML, and it was difficult to make much of anything that didn’t fall apart on other screen sizes or have other serious drawbacks. The humble inline-block thing seemed interesting enough; after all, it solved the core problem of HTML layout, which is putting things next to each other. But at least for the moment, no browser implemented it, and it was largely ignored.

I can’t say for sure if it was the introduction of positioning or some other factor, but something around this time inspired folks to try doing layout in CSS. Ideally, you would completely divorce the structure of your page from its appearance. A website even came along to take this principle to the extreme — CSS Zen Garden is still around, and showcases the same HTML being radically transformed into completely different designs by applying different stylesheets.

Trouble was, early CSS support was buggy as hell. In retrospect, I suspect browser vendors merely plucked the behavior off of HTML tags and called it a day. I’m delighted to say that RichInStyle still has an extensive list of early browser CSS bugs up; here are some of my favorites:

- IE 3 would ignore all but the last <style> tag in a document.
- IE 3 ignored pseudo-classes, so a:hover would be treated as a.
- IE 3 and IE 4 treated auto margins as zero. Actually, I think this one might’ve persisted all the way to IE 6. But that was okay, because IE 6 also incorrectly applied text-align: center to block elements.
- If you set a background image to an absolute URL, IE 3 would try to open the image in a local program, as though you’d downloaded it.
- Netscape 4 understood an ID selector like #id, but ignored h1#id as invalid.
- Netscape 4 didn’t inherit properties — including font and text color! — into table cells.
- Netscape 4 applied properties on <li> to the list marker, rather than the contents.
- If the same element had both float and clear (not unreasonable), Netscape 4 for Mac crashed.

This is what we had to work with. And folks wanted to use CSS to lay out an entire page? Ha.

Yet the idea grew in popularity. It even became a sort of elitist rallying cry, a best practice used to beat other folks over the head. Tables for layout are just plain bad, you’d hear! They confuse screenreaders, they’re semantically incorrect, they interact poorly with CSS positioning! All of which is true, but it was a much tougher pill to swallow when the alternative was—

Well, we’ll get to that in a moment. First, some background on the Web landscape circa 2000.

### The end of the browser wars and subsequent stagnation

The short version is: this company Netscape had been selling its Navigator browser (to businesses; it was free for personal use), and then Microsoft entered the market with its completely free Internet Explorer browser, and then Microsoft had the audacity to bundle IE with Windows. Can you imagine? An operating system that comes with a browser?
This was a whole big thing. Microsoft was sued over it, and lost, and the consequence was basically nothing. But it wouldn’t have mattered either way, because they’d already done it, and it had worked. IE pretty much annihilated Netscape’s market share.

Both browsers were buggy as hell, and differently buggy as hell, so a site built exclusively against one was likely to be a big mess when viewed in the other — which meant that as Netscape’s market share dropped, web designers paid less and less attention to it, so less of the Web worked in it, and its market share dropped further. Sucks for you if you didn’t use Windows, I guess. Which is funny, because there was an IE for Mac (version 5), and it was generally less buggy than IE 6.

(Incidentally, Bill Gates wasn’t so much a brilliant nerd as an aggressive and ruthless businessman who made his fortune by deliberately striving to annihilate any competition standing in his way, making computing worse overall as a result, just saying.)

By the time Windows XP shipped in mid 2001, with Internet Explorer 6 built in, Netscape had gone from a juggernaut to a tiny niche player.

And then, having completely and utterly dominated, Microsoft stopped. Internet Explorer had seen a release every year or so since its inception, but IE 6 was the last release for more than five years. It was still buggy, but that was less noticeable when there was no competition, and it was good enough. Windows XP, likewise, was good enough to take over the desktop, and there wouldn’t be another Windows for just as long.

The W3C, the group who writes the standards (not to be confused with W3Schools, who are shady SEO leeches), also stopped. HTML had seen several revisions throughout the mid 90s, and then froze as HTML 4. CSS had gotten one update after only a year and a half, and then no more; the minor update CSS 2.1 wouldn’t hit Candidate Recommendation status until early 2004, and took another seven years to be finalized. With IE 6’s dominance, it was as if the entire Web was frozen in time. Standards didn’t matter, because there was effectively only one browser, and whatever it did became the de facto standard.

As the Web grew in popularity, IE’s stranglehold also made it difficult to use any platform other than Windows, since IE was Windows-only and it was a coin flip whether a website would actually work with any other browser. (One begins to suspect that monopolies are bad. There oughta be a law!)

In the meantime, Netscape had put themselves in an even worse position by deciding to do a massive rewrite of their browser engine, culminating in the vastly more standards-compliant Netscape 6 — at the cost of several years away from the market while IE was kicking their ass. It never broke 10% market share, while IE’s would peak at 96%. On the other hand, the new engine was open sourced as the Mozilla Application Suite, which would become important in a few years.

Before we get to that, some other things were also happening.

### Quirks mode

All early CSS implementations were riddled with bugs, but one in particular is perhaps the most infamous CSS bug of all time: the box model bug.

You see, a box (the rectangular space taken up by an element) has several measurements: its own width and height, then surrounding whitespace called padding, then an optional border, then a margin separating it from neighboring boxes. CSS specifies that these properties are all additive.
A box with these styles:

```css
width: 100px;
padding: 10px;
border: 2px solid black;
```

…would thus be 124 pixels wide, from border to border: 100 pixels of content, plus 10 pixels of padding and 2 pixels of border on each side.

IE 4 and Netscape 4, on the other hand, took a different approach: they treated width and height as measuring from border to border, and subtracted the border and padding to get the width of the content itself. The same box in those browsers would be 100 pixels wide from border to border, with 76 pixels remaining for the content.

This conflict with the spec was not ideal, and IE 6 set out to fix it. Unfortunately, simply making the change would mean completely breaking the design of a whole lot of websites that had previously worked in both IE and Netscape. So the IE team came up with a very strange compromise: they declared the old behavior (along with several other major bugs) “quirks mode” and made it the default. The new “strict mode” or “standards mode” had to be opted into, by placing a “doctype” at the beginning of your document, before the <html> tag. It would look something like this:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
```

Everyone had to paste this damn mess of a line at the top of every single HTML document for years. (HTML5 would later simplify it to <!DOCTYPE html>.)

In retrospect, it’s a really strange way to opt into correct CSS behavior; doctypes had been part of the HTML spec since way back when it was an RFC. I’m guessing the idea was that, since nobody bothered actually including one, it was a convenient way to allow opting in without requiring proprietary extensions just to avoid behavior that had been wrong in the first place. Good for the IE team!

The funny thing is, quirks mode still exists and is still the default in all browsers, twenty years later! The exact quirks have varied over time, and in particular neither Chrome nor Firefox uses the IE box model even in quirks mode, but there are still quite a few other emulated bugs.

Modern browsers also have an “almost standards” mode, which emulates only a single quirk, perhaps the second most infamous one: if a table cell contains only a single image, the space under the baseline is removed. Under normal CSS rules, the image sits within a line of (otherwise empty) text, which requires some space reserved underneath for descenders — the tails on letters like y. Early browsers didn’t handle this correctly, and some otherwise strict-mode websites from circa 2000 rely on it — e.g., by cutting up a large image and arranging the chunks in table cells, expecting them to display flush against each other — hence the intermediate mode to keep them limping along.

But getting back to the past: while this was certainly a win for standards (and thus interop), it created a new problem. Since IE 6 dominated, and doctypes were optional, there was little compelling reason to bother with strict mode. Other browsers ended up emulating the quirks, and the non-standard behavior became its own de facto standard. Web designers who cared about this sort of thing (and to our credit, there were a lot of us) made a rallying cry out of enabling strict mode, since it was the absolute barest minimum step towards ensuring compatibility with other browsers.

### The rise and fall of XHTML

Meanwhile, the W3C had lost interest in HTML in favor of developing XHTML, an attempt to redesign HTML with the syntax of XML rather than SGML. (What on Earth is SGML, you ask? I don’t know. Nobody knows. It’s the grammar HTML was built on, and that’s the only reason anyone has heard of it.) To their credit, there were some good reasons to do this at the time.
HTML was generally hand-written (as it still is now), and anything hand-written is likely to have the occasional bug. Browsers weren’t in the habit of rejecting buggy HTML outright, so they had various error-correction techniques — and, as with everything else, different browsers handled errors differently. Slightly malformed HTML might appear to work fine in IE 6 (where “work fine” means “does what you hoped for”), but turn into a horrible mess in anything else.

The W3C’s solution was XML, because their solution to fucking everything in the early 2000s was XML. If you’re not aware, XML takes a much more explicit and aggressive approach to error handling — if your document contains a parse error, the entire document is invalid. That means if you bank on XHTML and make a single typo somewhere, nothing at all renders. Just an error.

This sucked. It sounds okay on the face of things, but consider: generic XML is usually assembled dynamically with libraries that treat a document as a tree you manipulate, then turn it all into text when you’re done. That’s great for the common use of XML as data serialization, where your data is already a tree and much of the XML structure is simple and repetitive and easy to squirrel away in functions.

HTML is not like that. An HTML document has little reliable repeating structure; even this blog post, constructed mostly from <p> tags, also contains surprise <em>s within body text and the occasional <h2> between paragraphs. That’s not fun to express as a tree. And this is a big deal, because server-side rendering was becoming popular around the same time, and generated HTML was — still is! — put together with templates that treat it as a text stream.

If HTML were only ever written as complete static documents, then XHTML might have worked out — you write a document, you see it in your browser, you know it works, no problem. But generating it dynamically, and risking that particular edge cases might replace your entire site with an unintelligible browser error? That sucks. It certainly didn’t help that we were just starting to hear about this newfangled Unicode thing around the same time, and it wasn’t always clear how exactly to make that work, and one bad UTF-8 sequence is enough for an entire XML document to be considered malformed!

And so, after some dabbling, XHTML was largely forgotten. Its legacy lives on in two ways:

- It got us all to stop using uppercase tag names! So long <BODY>, hello <body>. XML is case-sensitive, you see, and all the XHTML tags were defined in lowercase, so uppercase tags simply would not work. (Fun fact: to this day, JavaScript APIs report HTML tag names in uppercase.) The increased popularity of syntax highlighting probably also had something to do with this; we weren’t all still using Notepad as we had been in 1997.
- A bunch of folks still think self-closing tags are necessary. You see, HTML has two kinds of tags: containers like <p>...</p> and markers like <br>. Since a <br> can’t possibly contain anything, there’s no such thing as </br>. XML, as a generic grammar, doesn’t have this distinction; every tag must be closed, but as a shortcut, you can write <br/> to mean <br></br>. XHTML has been dead for years, but for some reason, I still see folks write <br/> in regular HTML documents. Outside of XML, that slash doesn’t do anything; HTML5 defines it for compatibility reasons, but it’s silently ignored.
It’s even actively harmful, since it might lead you to believe that <script/> is an empty <script> tag — but in HTML, it definitely is not!

I do miss one thing about XHTML, though. You could combine it with XSLT, the XML templating meta-language, to do in-browser templating (i.e., slot page-specific contents into your overall site layout) with no scripting required. It’s the only way that’s ever been possible, and it was cool as all hell when it worked, but the drawbacks were too severe when it didn’t. Also, XSLT is totally fucking incomprehensible.

### The beginning of CSS layout

Back to CSS! You’re an aspiring web designer. For whatever reason, you want to try using this CSS thing to lay out your whole page, even though it was clearly intended just for colors and stuff. What do you do?

As I mentioned before, your core problem is putting things next to each other. Putting things on top of each other is a non-problem — that’s the normal behavior of HTML. The whole reason everyone uses tables is that you can slop stuff into table cells and have it laid out side by side, in columns.

Well, tables seem to be out. CSS 2 had added some element display modes that corresponded to the parts of a table, but to use them, you’d have to have the same three levels of nesting as real tables: the table itself, then a row, then a cell. That doesn’t seem like a huge step up, and anyway, IE won’t support them until the distant future. There’s that position thing, but it seems to make things overlap more often than not. Hmm.

What does that leave? Only one tool, really: float.

I said that float was intended for magazine-style “pull” images, which is true, but CSS had defined it fairly generically. In principle, it could be applied to any element. If you wanted a sidebar, you could tell it to float to the left and be 20% the width of the page, and you’d get something like this:

```
+---------+
| sidebar | Hello, and welcome to my website!
|         |
+---------+
```

Alas! Floating has the secondary behavior that text wraps around it. If your page text was ever longer than your sidebar, it would wrap around underneath the sidebar, and the illusion would shatter.

But hey, no problem. CSS specified that floats don’t wrap around each other, so all you needed to do was float the body as well!

```
+---------+ +-----------------------------------+
| sidebar | | Hello, and welcome to my website! |
|         | |                                   |
+---------+ | Here's a longer paragraph to show |
            | that my galaxy brain CSS float    |
            | nonsense prevents text wrap.      |
            +-----------------------------------+
```

This approach worked, but its limitations were much more obvious than those of tables. If you added a footer, for example, it would try to fit to the right of the body text — remember, all of this is “pull” floats, so as far as the browser is concerned, the “cursor” is still at the top. So now you need clear, which bumps an element down below all floats, to fix that. And if you made the sidebar 20% wide and the body 80% wide, then any margin between them would add to that 100%, making the page wider than the viewport, so now you have an ugly horizontal scrollbar, and you have to do some goofy math to fix that as well. If you had borders or backgrounds on either part, it was a little conspicuous that they were different heights, so now you have to do some truly grotesque stuff to fix that too.
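Stacked together, the typical recipe looked something like this (a sketch, with invented selectors, and the margin fudged into the widths):

```css
/* the classic two-column float layout, per the description above */
#sidebar { float: left; width: 20%; }
#content { float: left; width: 78%; }  /* 2% sacrificed to the margin gods */
#footer  { clear: both; }              /* else it crawls up beside the text */
```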
And the more conscientious authors noticed that screenreaders would read the entire sidebar before getting to the body text, which is a pretty rude thing to subject blind visitors to, so they came up with yet more elaborate setups to have a three-column layout with the middle column appearing first in the HTML.

The result was a design that looked nice and worked well and scaled correctly, but backed by a weird mess of CSS. None of what you were writing actually corresponded to what you wanted — these are major parts of your design, not one-off pull quotes! It was difficult to understand the relationship between the layout-related CSS and what appeared on the screen, and that would get much worse before it got better.

### Thumbnail grid 2

Armed with a new toy, we can improve that thumbnail grid. The original table-based layout was, even if you don’t care about tag semantics, incredibly tedious. Now we can do better!

```html
<ul class="thumbnail-grid">
    <li><a href="..."><img src="..."></a> caption</li>
    <li><a href="..."><img src="..."></a> caption</li>
    <li><a href="..."><img src="..."></a> caption</li>
    ...
</ul>
```

This is the dream of CSS: your HTML contains the page data in some sensible form, and then CSS describes how it actually looks. Unfortunately, with float as the only tool available to us, the results are a bit rough. This new version does adapt better to various screen sizes, but it requires some hacks: the cells have to be a fixed height, centering the whole grid is fairly complicated, and the grid effect falls apart entirely with wider elements. It’s becoming clear that what we want is something more like a table, but with a flexible number of columns. This is just faking it.

You also need this weird “clearfix” thing, an incantation that would become infamous during this era. Remember that a float doesn’t move the “cursor” — a fake idea I’m using, but close enough. That means that this <ul>, which is full only of floated elements, has no height at all. It ends exactly where it begins, with all the floated thumbnails spilling out below it. Worse, because any subsequent elements don’t have any floated siblings, they’ll ignore the thumbnails entirely and render normally from just below the empty “grid” — producing an overlapping mess!

The solution is to add a dummy element at the end of the list which takes up no space, but has the CSS clear: both — bumping it down below all floats. That effectively pushes the bottom of the <ul> under all the individual thumbnails, so it fits snugly around them. Browsers would later support the ::before and ::after “generated content” pseudo-elements, which let us avoid the dummy element entirely. Stylesheets from the mid-00s were often littered with stuff like this:

```css
.thumbnail-grid::after {
    content: '';
    display: block;
    clear: both;
}
```

Still, it was better than tables.

### DHTML

As a quick aside into the world of JavaScript: the newfangled position property did give us the ability to do some layout things dynamically. I heartily oppose such heresy, not least because no one has ever actually done it right, but it was nice for some toys.

Thus began the era of “dynamic HTML” — i.e., HTML affected by JavaScript, a term that has fallen entirely out of favor now that we can’t even make a fucking static blog without JavaScript any more. In the early days it was much more innocuous, with teenagers putting sparkles that trailed behind your mouse cursor, or little analog clocks that ticked by in real time. The most popular source of these things was Dynamic Drive, a site that miraculously still exists and probably has a bunch of toys not updated since the early 00s.
But if you don’t like digging, here’s an example: every year (except this year, when I forgot, oops), I like to add confetti and other nonsense to my blog on my birthday. I’m very lazy, so I started this tradition by using a script I found somewhere, originally intended for snowflakes. It works by placing a bunch of images on the page, giving them position: absolute, and meticulously altering their coordinates over and over.

Contrast this with the version I wrote from scratch a couple years ago, which has only a tiny bit of JS to set up the images, then lets the browser animate them with CSS. It’s slightly less featureful, but it lets the browser do all the work, possibly even with hardware acceleration. How far we’ve come.
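The CSS-driven version is mostly declarative. This is a sketch with invented names and numbers, not my actual confetti code:

```css
/* define the animation once... */
@keyframes fall {
    from { transform: translateY(-2em); }
    to   { transform: translateY(100vh); }  /* off the bottom of the screen */
}

/* ...and let the browser run a copy of it for every piece of confetti */
.confetti {
    position: absolute;
    animation: fall 5s linear infinite;
}
```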
## Web 2.0

Dark times can’t last forever. A combination of factors dragged us towards the light.

One of the biggest was Firefox — or, if you were cool, originally Phoenix and then Firebird — which hit 1.0 in Nov ‘04 and went on to take a serious bite out of IE. That rewritten Netscape 6 browser core, the heart of the Mozilla Suite, had been extracted into a standalone browser. It was quick, it was simple, it was much more standards-compliant, and absolutely none of that mattered. No, Firefox really got a foothold because it had tabs. IE 6 did not have tabs; if you wanted to open a second webpage, you opened another window. It fucking sucked, man. Firefox was a miracle.

Firefox wasn’t the first tabbed browser, of course; the full Mozilla Suite’s browser had them, and the obscure (but scrappy!) Opera had had them for ages. But it was Firefox that took off, for various reasons, not least of which was that it didn’t have a giant fucking ad bar at the top like Opera did.

Designers did push for Firefox on standards grounds, of course; it’s just that that angle primarily appealed to other designers, not so much to their parents. One of the most popular and spectacular demonstrations was the Acid2 test, intended to test a variety of features of then-modern Web standards. It had the advantage of producing a cute smiley face when rendered correctly, and a fucking nightmare hellscape in IE 6. Early Firefox wasn’t perfect, but it was certainly much closer, and you could watch it make progress until it fully passed with the release of Firefox 3.

It also helped that Firefox had a faster JavaScript engine, even before JIT caught on. Much, much faster. Like, as I recall, IE 6 implemented getElementById by iterating over the entire document, even though IDs are unique. Glance at some old jQuery release announcements; they usually have some performance charts, and everything else absolutely dwarfs IE 6 through 8.

Oh, and there was that whole thing where IE 6 was a giant walking security hole, especially with its native support for arbitrary binary components that only needed a “yes” click on an arcane dialog to get full and unrestricted access to your system. Probably didn’t help its reputation.

Anyway, with something other than IE taking serious market share, even the most ornery designers couldn’t just target IE 6 and call it a day any more. Now there was a reason to use strict mode, a reason to care about compatibility and standards — which Firefox was making a constant effort to follow better, while IE 6 remained stagnant.

(I’d argue that this effect opened the door for OS X to make some inroads, and also for the iPhone to exist at all. I’m not kidding! Think about it: if the iPhone browser hadn’t actually worked with anything because everyone was still targeting IE 6, it’d basically have been a more expensive Palm. Remember, at first Apple didn’t even want native apps; it bet on the Web.)

(Speaking of which, Safari was released in Jan ‘03, based on a fork of the KHTML engine used in KDE’s Konqueror browser. I think I was using KDE at the time, so this was very exciting, but no one else really cared about OS X and its 2% market share.)

Another major factor appeared on April Fools’ Day, 2004, when Google announced Gmail.

Ha, ha! A funny joke. Webmail that isn’t terrible? That’s a good one, Google.

Oh. Oh, fuck. Oh they’re not kidding. How the fuck does this even work

The answer, as every web dev now knows, is XMLHttpRequest — named for the fact that nobody has ever once used it to request XML. Apparently it was invented by Microsoft for use with Exchange, then cloned early on by Mozilla, but I’m just reading this from Wikipedia and you can do that yourself. The important thing is, it lets you make an HTTP request from JavaScript. You could now update only part of a page with new data, completely in the background, without reloading. Nobody had heard of this thing before, so when Google dropped an entire email client based on it, it was like fucking magic.
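If you missed this era: the incantation looked roughly like this (the URL and element ID are invented for illustration):

```js
// ask the server for a fragment of HTML, then swap it in -- no reload
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('inbox').innerHTML = xhr.responseText;
    }
};
xhr.open('GET', '/inbox-fragment.html', true);
xhr.send();
```

(In IE 6 you had to write new ActiveXObject('Microsoft.XMLHTTP') instead of that first line, because of course you did.)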
Arguably the whole thing was a mistake and has led to a hell future where static pages load three paragraphs of text in the background using XHR for no goddamn reason, but that’s a different post.

Along similar lines, August 2006 saw the release of jQuery, a similar miracle. Not only did it paper over the differences between IE’s “JScript” APIs and the standard approaches taken by everyone else (which had been done before by other libraries), but it made it very easy to work with whole groups of elements at a time, something that had historically been a huge pain in the ass. Now you could fairly easily apply CSS all over the place from JavaScript! Which is a bad idea! But everything was so bad that we did it anyway!

Hold on, I hear you cry. These things are about JavaScript! Isn’t this a post about CSS?

You’re absolutely right! I mention the rise of JavaScript because I think it led directly to the modern state of CSS, thanks to an increase in one big factor:

### Ambition

Firefox showed us that we could have browsers that actually, like, improve — every new improvement on Acid2 was exciting. Gmail showed us that the Web could do more than show plain text with snowflakes in front. And folks started itching to get fancy.

The problem was, browsers hadn’t really gotten any better yet. Firefox was faster in some respects, and it adhered more closely to the CSS spec, but it didn’t fundamentally do anything browsers weren’t supposed to be able to do already. Only the tooling had improved, and that mostly affected JavaScript. CSS was a static language, so you couldn’t write a library to make it better. Generating CSS with JavaScript was a possibility, but boy oh boy is that ever a bad idea.

Another problem was that CSS 2 was only really good at styling rectangles. That was fine in the 90s, when every OS had the aesthetic of rectangles containing more rectangles. But now we were in the days of Windows XP and OS X, where everything was shiny and glossy and made of curvy plastic. It was a little embarrassing to have rounded corners and neatly shaded swooshes in your file browser and nowhere on the Web.

Thus began a new reign of darkness.

### The era of CSS hacks

Designers wanted a lot of things that CSS just could not offer.

- Rounded corners were a big one. Square corners had fallen out of vogue, and now everyone wanted buttons with round corners, since they were The Future. (Native buttons also went out of vogue, for some reason.) Alas, CSS had no way to do this. Your options were:
  1. Make a fixed-size background image of a rounded rectangle and put it on a fixed-size button. Maybe drop the text altogether and just make the whole thing an image. Eugh.
  2. Make a generic background image and scale it to fit. More clever, but the corners might end up not round.
  3. Make the rounded rectangle, cut out the corners and edges, and put them in a 3×3 table with the button label in the middle. Even better, use JavaScript to do this on the fly.
  4. Fuck it, make your entire website one big Flash app lol
- Another problem was that IE 6 didn’t understand PNGs with 8-bit alpha; it could only correctly display PNGs with 1-bit alpha, i.e. every pixel is either fully opaque or fully transparent, like GIFs. You had to settle for jagged edges, bake a solid background color into the image, or apply various fixes that centered around this fucking garbage nonsense:

  ```css
  filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='bite-my-ass.png');
  ```

- Along similar lines: gradients and drop shadows! You can’t have fancy plastic buttons without those. But here you were basically stuck with making images again.
- Translucency was a bit of a mess. Most browsers supported the CSS 3 opacity property from very early on… except IE, which needed another wacky Microsoft-specific filter thing. And if you wanted only the background translucent, you’d need a translucent PNG, which… well, you know.
- Since the beginning, jQuery shipped with built-in animated effects like fadeIn, and they started popping up all over the place. It was kind of like the Web equivalent of how every Linux user in the mid-00s (and I include myself in this) used that fucking Compiz cube effect. Obviously you need JavaScript to trigger an element’s disappearance in most interesting cases, but using it to control the actual animation was a bit heavy-handed and put a strain on browsers. Tabbed browsing compounded this, since browsers were largely single-threaded, and for various reasons, every open page ran in the same thread.
- Oh! Alternating background colors on table rows. This has since gone out of style, but I think that’s a shame, because man, did it make tables easier to read. But CSS had no answer for this — what we now write as tr:nth-child(odd) simply didn’t exist yet — so you had to either give every other row a class like <tr class="odd"> (hope the table’s generated with code!) or do some jQuery nonsense.
- CSS 2 introduced the > child selector, so you could write stuff like ul.foo > li to style special lists without messing up nested lists, and IE 6! Didn’t! Fucking! Support! It!

All of those are merely aesthetic concerns, though. If you were interested in layout, well, the rise of Firefox had made your life at once much easier and much harder. Remember inline-block? Firefox 2 actually supported it! It was buggy and hidden behind a vendor prefix, but it more or less worked, which let designers start playing with it. And then Firefox 3 supported it more or less fully, which felt miraculous.
Version 3 of our thumbnail grid is as simple as a width and inline-block:

```css
.thumbnails li {
    display: inline-block;
    width: 250px;
    margin: 0.5em;
    vertical-align: top;
}
```

The general idea of inline-block is that the inside acts like a block, but the block itself is placed in regular flowing text, like an image. Each thumbnail is thus contained in a box, but the boxes all lie next to each other, and because of their equal widths, they flow into a grid. And since it’s functionally a line of text, you don’t have to work around any weird impact on the rest of the page like you did with floats.

Sure, this had some drawbacks. You couldn’t do anything with the leftover space, for example, so there was a risk of a big empty void on the right with pathological screen sizes. You still had the problem of breaking the grid with a wide cell. But at least it’s not floats.

One teeny problem: IE 6. It did technically support inline-block, but only on elements that were naturally inline — ones like <b> and <i>, not <li>. So, not the ones you’d actually want (or think) to use inline-block on. Sigh.

Lucky for us, at some point an absolute genius discovered hasLayout, an internal optimization in IE that marks whether an element… uh… has… layout. Look, I don’t know. Basically, it changes the rendering path for an element — making it differently buggy, like quirks mode on a per-element basis! The upshot is that the above works in IE 6 if you add a couple lines:

```css
.thumbnails li {
    display: inline-block;
    width: 250px;
    margin: 0.5em;
    vertical-align: top;
    *zoom: 1;
    *display: inline;
}
```

The leading asterisks make the property invalid, so browsers should ignore the whole line… but for some reason I cannot begin to fathom, IE 6 ignores the asterisks and accepts the rest of the rule. (Almost any punctuation worked, including a hyphen or — my personal favorite — an underscore.) The zoom property is a Microsoft extension that scales stuff, with the side effect that it grants the mystical property of “layout” to the element as well. And display: inline should make each element spill its contents into one big line of text, but IE treats an inline element that has “layout” roughly like an inline-block.

And here we saw the true potential of CSS messes. Browser-specific rules, with deliberately bad syntax that one browser would ignore, to replicate an effect that still isn’t clearly described by what you’re writing. Entire tutorials written to explain how to accomplish something simple, like a grid, but have it actually work on most people’s browsers. You’d also see * html, html > /**/ body, and all kinds of other nonsense. Here’s a full list! And remember that “clearfix” hack from before? The full version, compatible with every browser, is a bit worse:

```css
.clearfix:after {
    visibility: hidden;
    display: block;
    font-size: 0;
    content: " ";
    clear: both;
    height: 0;
}
.clearfix { display: inline-block; }
/* start commented backslash hack \*/
* html .clearfix { height: 1%; }
.clearfix { display: block; }
/* close commented backslash hack */
```

Is it any wonder folks started groaning about CSS? This was an era of blind copy/pasting in the frustrated hopes of making the damn thing work.
Case in point: someone (I dug up the original source once, but can’t find it now) had the bone-headed idea of always setting body { font-size: 62.5% }, due to a combination of “relative units are good” and wanting to override the seemingly massive default browser font size of 16px (which, it turns out, is correct) and dealing with IE bugs. The appeal was that 62.5% of 16px is exactly 10px, so 1em = 10px and the math becomes easy. He walked it back a short time later, but the damage had been done, and now thousands of websites start off that way as a “best practice”. Which means if you want to change your browser’s default font size in either direction, you’re screwed — scale it down and a bunch of the Web becomes microscopic; scale it up and everything will still be much smaller than you asked for; scale it up more to compensate and everything that actually respects your decision will be ginormous. At least we have better page zoom now, I guess.

Oh, and do remember: Stack Overflow didn’t exist yet. This stuff was passed around purely by word of mouth. If you were lucky, you knew about some of the websites about websites, like QuirksMode and Eric Meyer’s website.

In fact, check out Meyer’s css/edge site for some wild examples of stuff folks were doing, even with just CSS 1, as far back as 2002. I still think complexspiral is pure genius, even though you could do it nowadays with opacity and just one image. The approach in raggedfloat wouldn’t get native support in CSS until a few years ago, with shape-outside! He also brought us the CSS reset, eliminating differences between browsers’ default styles.

(I cannot overstate how much of a CSS pioneer Eric Meyer is. When his young daughter Rebecca died six years ago, she was uniquely immortalized with her own CSS color name, rebeccapurple. That’s how highly the Web community thinks of him. Also, I have to go cry a bit over that story now.)

## The future arrives, gradually

Designers and developers were pushing the bounds of what browsers were capable of. Browsers were handling it all somewhat poorly. All the fixes and workarounds and libraries were arcane, brittle, error-prone, and/or heavy. Clearly, browsers needed some new functionality. But just slopping something in wouldn’t help; Microsoft had done plenty of that, and it had mostly made a mess.

Several struggling attempts began. With the W3C’s head still squarely up its own ass — even explicitly rejecting proposed enhancements to HTML in favor of snorting XML — some folks from (active) browser vendors Apple, Mozilla, and Opera decided to make their own clubhouse. The WHATWG came into existence in June 2004, and they began work on HTML5. (It would end up defining error handling very explicitly, which completely obviated the need for XHTML and eliminated a number of security concerns when working with arbitrary HTML. Also it gave us some new goodies, like native audio, video, and form controls for dates and colors and other stuff that had been clumsily handled by JavaScript-powered custom controls. And, um, still often are.)

Then there was CSS 3. I’m not sure when it started to exist. It emerged slowly, struggling, like a chick hatching from an egg and taking its damn sweet fucking time to actually get implemented anywhere.

I’m having to do a lot of educated guessing here, but I think it began with border-radius. Specifically, with -moz-border-radius. I don’t know when it was first introduced, but the Mozilla bug tracker has mentions of it as far back as 1999. See, Firefox’s own UI is rendered with CSS.
If Mozilla wanted to do something that couldn’t be done with CSS, they added a property of their own, prefixed with -moz- to indicate it was their own invention. And when there’s no real harm in doing so, they leave the property accessible to websites as well.

My guess, then, is that the push for CSS 3 really began when Firefox took off and designers discovered -moz-border-radius. Suddenly, built-in rounded corners were available! No more fucking around in Photoshop; you only needed to write a single line! Practically overnight, everything everywhere had its corners filed down.

And from there, things snowballed. Common problems were addressed one at a time by new CSS features, which were clustered together into a new CSS version: CSS 3. The big ones were solutions to the design problems mentioned before:

- Rounded corners, provided by border-radius.
- Gradients, provided by linear-gradient() and friends.
- Multiple backgrounds, which weren’t exactly a pressing concern, but which turned out to make some other stuff easier.
- Translucency, provided by opacity and colors with an alpha channel.
- Box shadows.
- Text shadows, which had been in CSS 2 but were dropped in 2.1 and never implemented anyway.
- Border images, so you could do even fancier things than mere rounded borders.
- Transitions and animations, now doable with ease without needing jQuery (or any JS at all).
- :nth-child(), which solved the alternating-rows problem with pure CSS.
- Transformations. Wait, what? This kinda leaked in from SVG, which browsers were also being expected to implement, and which is built heavily around transforms. The code was already there, so, hey, now we can rotate stuff with CSS! Couldn’t do that before. Cool.
- Web fonts, which had been in CSS for some time but had only ever been implemented in IE, and only with some goofy DRM-laden font format. Now we weren’t limited to the four bad fonts that ship with Windows and that no one else has!

These were pretty great! They didn’t solve any layout problems, but they did address aesthetic issues that designers had been clumsily working around with loads of images and/or JavaScript. That meant less stuff to download and more text used instead of images, both of which were pretty good for the Web.

The grand irony is that all the stuff you could do with these features went out of style almost immediately, and now we’re back to flat rectangles again.

### Browser prefixing hell

Alas! All was still not right with the world. Several of these new gizmos were, I believe, initially developed by browser vendors and prefixed. Some later ones were designed by the CSS committee but implemented by browsers while the design was still in flux, and thus also prefixed. So began prefix hell, which continues to this day.

Mozilla had -moz-border-radius, so when Safari implemented it, it was named -webkit-border-radius (“WebKit” being the name of Apple’s KHTML fork). Then the CSS 3 spec standardized it and called it just border-radius. That meant that if you wanted to use rounded borders, you actually needed to give three rules:

```css
element {
    -moz-border-radius: 1em;
    -webkit-border-radius: 1em;
    border-radius: 1em;
}
```

The first two made the effect actually work in current browsers, and the last one was future-proofing: when browsers implemented the real rule and dropped the prefixed ones, it would take over. You had to do this every fucking time, since CSS isn’t a programming language and has no macros or functions or the like.
Sometimes Opera and IE would have their own implementations, with -o- and -ms- prefixes, bringing the total to five copies. It got much worse with gradients, whose syntax went through a number of major incompatible revisions, so you couldn’t even rely on copy/pasting and changing the property name!

And plenty of folks, well, fucked it up. I can’t blame them too much; I mean, this sucks. But enough pages used only the prefixed forms, and not the final form, that browsers had to keep supporting the prefixed forms for longer than they would’ve liked, to avoid breaking stuff. And if the prefixed form still works and it’s what you’re used to writing, then maybe you still won’t bother with the unprefixed one.

Worse, some people would only use the form that worked in their pet choice of browser. This got especially bad with the rise of mobile web browsers. The built-in browsers on iOS and Android are Safari (WebKit) and Chrome (originally WebKit, now a fork), so you only “needed” to use the -webkit- properties. Which made things difficult for Mozilla when it released Firefox for Android.

Hey, remember that whole debacle with IE 6? Here we are again! It was bad enough that Mozilla eventually decided to implement a number of -webkit- properties, which remain supported even in desktop Firefox to this day. The situation is goofy enough that Firefox now supports some effects only via these properties, like -webkit-text-stroke, which isn’t being standardized. Even better, Chrome’s current forked engine is called Blink, so technically it shouldn’t be using -webkit- properties either. And yet, here we are. At least it’s not as bad as the user agent string mess.

Browser vendors have pretty much abandoned prefixing now; instead, they hide experimental features behind flags (so they’ll only work on the developer’s machine), and new features are theoretically designed to be smaller and easier to stabilize.

This mess was probably a huge motivating factor in the development of Sass and LESS, two languages that produce CSS. Or… two CSS preprocessors, maybe. They have very similar goals: both add variables, functions, and some form of macros to CSS, allowing you to eliminate a lot of the repetition and browser hacks and other nonsense from your stylesheets. Hell, this blog still uses SCSS, though its use has gradually decreased over time.
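The pitch practically wrote itself. Here’s the idea sketched in SCSS (the mixin name is whatever you like):

```scss
// write the pile of prefixes once...
@mixin rounded($radius) {
    -moz-border-radius: $radius;
    -webkit-border-radius: $radius;
    border-radius: $radius;
}

// ...then never think about it again
.fancy-button {
    @include rounded(1em);
}
```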
### Flexbox

But then, like an angel descending from heaven… flexbox.

Flexbox has been around for a long time — allegedly it had partial support in Firefox 2, back in 2006! It went through several incompatible revisions and took ages to stabilize. Then IE took ages to implement it, and you don’t really want to rely on layout tools that only work for half your audience. It’s only relatively recently (2015? Later?) that flexbox has had sufficiently broad support to use safely. And I could swear I still run into folks whose current Safari doesn’t recognize it at all without prefixing, even though Safari supposedly dropped the prefixes five years ago…

Anyway, flexbox is a CSS implementation of a pretty common GUI layout tool: you have a parent with some children, and the parent has some amount of space available, and it gets divided automatically between the children. You know: it puts things next to each other.

The general idea is that the browser computes how much space the parent has available and the “initial size” of each child, figures out how much extra space there is, and distributes it according to the flexibleness of each child. Think of a toolbar: you might want each button to have a fixed size (a flex of 0), but want spacers that share any leftover space equally, so you’d give them a flex of 1.

Once that’s done, you have a number of quality-of-life options at your disposal, too: you can distribute the extra space between the children instead, you can tell the children to stretch to the same height or align them in various ways, and you can even have them wrap into multiple rows if they won’t all fit!

With this, we can take yet another crack at that thumbnail grid:

```css
.thumbnail-grid {
    display: flex;
    flex-wrap: wrap;
}
.thumbnail-grid li {
    flex: 1 0 250px;
}
```

This is miraculous. I forgot all about inline-block overnight and mostly salivated over this until it was universally supported. It even expresses very clearly what I want.

…almost. It still has the problem that too-wide cells will break the grid, since it’s still a horizontal row wrapped onto several independent lines. It’s pretty damn cool, though, and solves a number of other layout problems. Surely this is good enough.

Unless…?

I’d say mass adoption of flexbox marked the beginning of the modern era of CSS. But there was one lingering problem…

### The slow, agonizing death of IE

IE 6 took a long, long, long time to go away. It didn’t drop below 10% market share (still a huge chunk) until early 2010 or so.

Firefox hit 1.0 at the end of 2004. IE 7 wasn’t released until two years later; it offered only modest improvements; it suffered from compatibility problems with stuff built for IE 6; and the IE 6 holdouts (many of whom were not Computer People) generally saw no reason to upgrade. Vista shipped with IE 7, but Vista was kind of a flop — I don’t believe it ever came close to overtaking XP, not in its entire lifetime.

Other factors included corporate IT policies, which often take the form of “never upgrade anything, ever” — and often for good reason, as I heard endless tales of internal apps that only worked in IE 6 for all manner of horrifying reasons. Then there was the entirety of South Korea, which was legally required to use IE 6 because they’d enshrined in law some security requirements that could only be implemented with an IE 6 ActiveX control. So if you maintained a website that was used — or worse, required — by people who worked for businesses or lived in other countries, you were pretty much stuck supporting IE 6.

Folks making little personal tools and websites abandoned IE 6 compatibility early on and plastered their sites with increasingly obnoxious banners taunting anyone who dared show up using it… but if you were someone’s boss, why would you tell them it’s okay to drop 20% of your potential audience? Just work harder!

The tension grew over the years, as CSS became more capable and IE 6 remained an anchor. It still didn’t even understand PNG alpha without workarounds, and meanwhile we were starting to get more critical features like native video in HTML5. The workarounds grew messier, and the list of features you basically just couldn’t use grew longer. (I’d show you what my blog looks like in IE 6, but I don’t think it can even connect — the TLS it supports is so ancient and broken that it’s been disabled on most servers!)

Shoutouts, by the way, to some folks on the YouTube team, who in July 2009 added a warning banner imploring IE 6 users to switch to anything else — without asking anyone for approval. “Within one month… over 10 percent of global IE6 traffic had dropped off.” Not all heroes wear capes.
I’d mark the beginning of the end as the day YouTube actually dropped IE 6 support — March 13, 2010, almost nine years after its release. I don’t know how much of a direct impact YouTube has on corporate users or the South Korean government, but a massive web company dropping an entire browser sends a pretty strong message.

There were other versions of IE, of course, and many of them were messy headaches in their own right. But each subsequent one became less of a pain, and nowadays you don’t even have to think too much about testing in IE (now Edge). Just in time for Microsoft to scrap their own rendering engine and turn their browser into a Chrome clone.

## Now

CSS is pretty great now. You don’t need weird fucking hacks just to put things next to each other. Browser dev tools are built in now, and they’re fucking amazing — Firefox has started specifically warning you when some CSS properties won’t take effect because of the values of others! Obscure implicit side effects like “stacking contexts” (whatever those are) can now be set explicitly, with properties like isolation: isolate.

In fact, let me just list everything I can think of that you can do in CSS now. This isn’t a guide to all possible uses of styling, but if your CSS knowledge hasn’t been updated since 2008, I hope this whets your appetite. And this stuff is just CSS! So many things that used to be impossible or painful or require clumsy plugins are now natively supported — audio, video, custom drawing, 3D rendering… not to mention the vast ergonomic improvements to JavaScript.

### Layout

- A grid container can do pretty much anything tables can do, and more, including automatically determining how many columns will fit. It’s fucking amazing. More on that below.
- A flexbox container lays out its children in a row or column, allowing each child to declare its “default” size and what proportion of leftover space it wants to consume. Flexboxes can wrap, rearrange children without changing source order, and align children in a number of ways.
- Columns will pour text into, well, multiple columns.
- The box-sizing property lets you opt into the IE box model on a per-element basis, for when you need an entire element to take up a fixed amount of space and need padding/borders to subtract from that.
- display: contents dumps an element’s contents out into its parent, as if it weren’t there at all.
- display: flow-root is basically an automatic clearfix, only a decade too late.
- width can now be set to min-content, max-content, or the fit-content() function for more flexible behavior.
- white-space: pre-wrap preserves whitespace, but breaks lines where necessary to avoid overflow. Also useful is pre-line, which collapses sequences of spaces down to a single space, but preserves literal newlines.
- text-overflow cuts off text with an ellipsis (or a custom character) when it would overflow, rather than simply truncating it. Also specced is the ability to fade the text out, but that’s as yet unimplemented.
- shape-outside alters the shape used when wrapping text around a float. It can even use the alpha channel of an image as the shape.
- resize gives an arbitrary element a resize handle (as long as it has overflow).
- writing-mode sets the direction that text flows.
If your design needs to work for multiple writing modes, a number of CSS properties that mention left/right/top/bottom have alternatives that describe directions in terms of the writing mode: inset-block and inset-inline for position, block-size and inline-size for width/height, border-block and border-inline for borders, and similar for padding and margins.

### Aesthetics

- Transitions smoothly interpolate a value whenever it changes, whether due to an effect like :hover or e.g. a class being added from JavaScript. Animations are similar, but play a predefined animation automatically. Both can use a number of different easing functions.
- border-radius rounds off the corners of a box. The corners can all be different sizes, and can be circular or elliptical. The curve also applies to the border, background, and any box shadows.
- Box shadows can be used for the obvious effect of casting a drop shadow. You can also use multiple shadows and inset shadows for a variety of clever effects.
- text-shadow does what it says on the tin, though you can also stack several of them for a rough approximation of a text outline.
- transform lets you apply an arbitrary matrix transformation to an element — that is, you can scale, rotate, skew, translate, and/or do perspective transforms, all without affecting layout.
- filter (distinct from the IE 6 one) offers a handful of specific visual filters you can apply to an element. Most of them affect color, but there’s also a blur() and a drop-shadow() (which, unlike box-shadow, applies to an element’s appearance rather than its containing box).
- linear-gradient(), radial-gradient(), the new and less-supported conic-gradient(), and their repeating-* variants all produce gradient images and can be used anywhere in CSS that an image is expected, most commonly as a background-image.
- scrollbar-color changes the scrollbar color, with the downside of reducing the scrollbar to a very simple thumb-and-track in current browsers.
- background-size: cover and contain will scale a background image proportionally, either big enough to completely cover the element (even if cropped) or small enough to exactly fit inside it (even if it doesn’t cover the entire background).
- object-fit is a similar idea, but for non-background media like <img>s. The related object-position is like background-position.
- Multiple backgrounds are possible, which is especially useful with gradients — you can stack multiple gradients, other background images, and a solid color on the bottom.
- text-decoration is fancier than it used to be; you can now set the color of the line and use several different kinds of lines, including dashed, dotted, and wavy.
- CSS counters can be used to number arbitrary elements in an arbitrary way, exposing the counting ability of <ol> to any set of elements you want.
- The ::marker pseudo-element allows you to style a list item’s marker box, or even replace it outright with a custom counter. Browser support is spotty, but improving. Similarly, the @counter-style at-rule implements an entirely new counter style (like 1 2 3, i ii iii, A B C, etc.) which you can then use anywhere, though only Firefox supports it so far.
- image-set() provides a list of candidate images and lets the browser choose the most appropriate one based on the pixel density of the user’s screen.
- @font-face defines a font that can be downloaded, though you can avoid figuring out how to use it correctly by using Google Fonts.
• pointer-events: none makes an element ignore the mouse entirely; it can’t be hovered, and clicks will go straight through it to the element below.

• image-rendering can force an image to be resized nearest-neighbor rather than interpolated, though browser support is still spotty and you may need to also include some vendor-specific properties.

• clip-path crops an element to an arbitrary shape. There’s also mask for arbitrary alpha masking, but browser support is spotty and hoo boy is this one complicated.

### Syntax and misc

• @supports lets you explicitly write different CSS depending on what the browser supports, though it’s nowhere near as useful nowadays as it would’ve been in 2004.

• A > B selects immediate children. A + B selects the immediately following (element) sibling. A ~ B selects all following (element) siblings.

• Square brackets can do a bunch of stuff to select based on attributes; most obvious is input[type=checkbox], though you can also do interesting things with matching parts of <a href>.

• There are a whole bunch of pseudo-classes now. Many of them are for form elements: :enabled and :disabled; :checked and :indeterminate (which also apply to radio buttons and <option>s); :required and :optional; :read-write and :read-only; :in-range/:out-of-range and :valid/:invalid (for use with HTML5 client-side form validation); :focus and :focus-within; and :default (which selects the default form button and any pre-selected checkboxes, radio buttons, and <option>s).

• For targeting specific elements within a set of siblings, we have: :first-child, :last-child, and :only-child; :first-of-type, :last-of-type, and :only-of-type (where “type” means tag name); and :nth-child(), :nth-last-child(), :nth-of-type(), and :nth-last-of-type() (to select every second, third, etc. element).

• :not() inverts a selector.

• :empty selects elements with no children and no text.

• :target selects the element jumped to with a URL fragment (e.g. if the address bar shows index.html#foo, this selects the element whose ID is foo).

• ::before and ::after should have two colons now, to indicate that they create pseudo-elements rather than merely scoping the selector they’re attached to.

• ::selection customizes how selected text appears; ::placeholder customizes how placeholder text (in text fields) appears.

• Media queries do just a whole bunch of stuff so your page can adapt based on how it’s being viewed. The prefers-color-scheme media query tells you if the user’s system is set to a light or dark theme, so you can adjust accordingly without having to ask.

• You can write translucent colors as #rrggbbaa or #rgba, as well as using the rgba() and hsla() functions.

• Angles can be described as fractions of a full circle with the turn unit. Of course, deg and rad (and grad) are also available.

• CSS variables (officially, “custom properties”) let you specify arbitrary named values that can be used anywhere a value would appear. You can use this to reduce the amount of CSS fiddling that needs doing in JavaScript (e.g., recolor a complex part of a page by setting a CSS variable instead of manually adjusting a number of properties), or have a generic component that reacts to variables set by an ancestor.

• calc() computes an arbitrary expression and updates automatically (though it’s somewhat obviated by box-sizing).

• The vw, vh, vmin, and vmax units let you specify lengths as a fraction of the viewport’s width or height, or whichever of the two is bigger/smaller.

Phew! I’m sure I’m forgetting plenty and folks will have even longer lists of interesting tidbits in the comments.
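Before moving on, here’s a tiny sketch tying a few of these together: custom properties, transitions, border-radius, box shadows, and a media query. The class name and values are invented for illustration.

```css
/* class name and values invented for illustration */
.card {
    --accent: rebeccapurple;
    border: 1px solid var(--accent);
    border-radius: 0.5em;
    box-shadow: 0 1px 2px rgba(0, 0, 0, 0.25);
    transition: transform 0.2s ease-out, box-shadow 0.2s ease-out;
}
.card:hover {
    /* lift the card slightly; the transition animates both changes */
    transform: translateY(-2px);
    box-shadow: 0 4px 8px rgba(0, 0, 0, 0.25);
}
@media (prefers-color-scheme: dark) {
    /* recolor the whole component by changing one variable */
    .card { --accent: plum; }
}
```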
Thanks for saving me some effort! Now I can stop browsing MDN and do this final fun part.

### State of the art thumbnail grid

At long last, we arrive at the final and objectively correct way to construct a thumbnail grid: using CSS grid. You can tell this is the right thing to use because it has “grid” in the name. Modern CSS features are pretty great about letting you say the thing you want and having it happen, rather than trying to coax it into happening implicitly via voodoo. And it is oh so simple:

```css
.thumbnail-grid {
    display: grid;
    grid: auto-flow / repeat(auto-fit, minmax(250px, 1fr));
}
```

Done! That gives you a grid. You have myriad other twiddles to play with, just as with flexbox, but that’s the basic idea. You don’t even need to style the elements themselves; most of the layout work is done in the container.

The grid shorthand property looks a little intimidating, but only because it’s so flexible. It’s saying: fill the grid one row at a time, generating as many rows as necessary; make as many 250px columns as will fit, and share any leftover space between them equally.

CSS grids are also handy for laying out <dl>s, something that’s historically been a massive pain to make work — a <dl> contains any number of <dt>s followed by any number of <dd>s (including zero), and the only way to style this until grid was to float the <dt>s, which meant they had to have a fixed width. Now you can just tell the <dt>s to go in the first column and <dd>s to go in the second, and grid will take care of the rest.

And laying out your page? That whole sidebar thing? Check out how easy that is:

```css
body {
    display: grid;
    grid-template:
        "header        header        header"
        "left-sidebar  main-content  right-sidebar"
        "footer        footer        footer"
        / 1fr 6fr 1fr
    ;
}
body > header {
    grid-area: header;
}
#left-sidebar {
    grid-area: left-sidebar;
}
/* ... etc ... */
```

Done. Easy. It doesn’t matter what order the parts appear in the markup, either.

### On the other hand

The web is still a little bit of a disaster. A lot of folks don’t even know that flexbox and grid are supported almost universally now; but given how long it took to get from early spec work to broad implementation, I can’t really blame them. I saw a brand new little site just yesterday that consisted mostly of a huge list of “thumbnails” of various widths, and it used floats! Not even inline-block! I don’t know how we managed to teach everyone about all the hacks required to make that work, but somehow haven’t gotten the word out about flexbox.

But far worse than that: I still regularly encounter sites that do their entire page layout with JavaScript. If you use uMatrix, your first experience is with a pile of text overlapping a pile of other text. Surely this is a step backwards? What are you possibly doing that your header and sidebar can only be laid out correctly by executing code? It’s not like the page loads with no CSS — nothing in plain HTML will overlap by default! You have to tell it to do that!

And then there’s the mobile web, which despite everyone’s good intentions, has kind of turned out to be a failure. The idea was that you could use CSS media queries to fit your normal site on a phone screen, but instead, most major sites have entirely separate mobile versions. Which means that either the mobile site is missing a bunch of important features and I’ll have to awkwardly navigate that on my phone anyway, or the desktop site is full of crap that nobody actually needs.
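For what it’s worth, the mechanics the spec had in mind really are only a few lines. A minimal sketch, building on the body grid above, with an invented breakpoint:

```css
/* on narrow screens, stack everything into a single column */
@media (max-width: 40em) {
    body {
        grid-template:
            "header"
            "main-content"
            "left-sidebar"
            "right-sidebar"
            "footer"
            / 1fr;
    }
}
```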
(Meanwhile, Google’s own Android versions of Docs/Sheets/etc. have, like, 5% of the features of the Web versions? Not sure what to make of that.)

Hmm. Strongly considering writing something that goes more into detail about improvements to CSS since the Firefox 3 era, similar to the one I wrote for JavaScript. But this post is long enough.

## Some futures that never were

I don’t know what’s coming next in CSS, especially now that flexbox and grid have solved all our problems. I’m vaguely aware of some work being done on more extensive math support, and possibly some functions for altering colors like in Sass. There’s a painting API that lets you generate backgrounds on the fly with JavaScript using the canvas API, which is… quite something. Apparently it’s now in spec that you can use attr() (which evaluates to the value of an HTML attribute) as the value for any property, which seems cool and might even let you implement HTML tables entirely in CSS, but you could do the same thing with variables. I mean, um, custom properties. I’m more excited about :is(), which matches any of a list of selectors, and subgrid, which lets you add some nesting to a grid but keep grandchildren still aligned to it.

Much easier is to list some things that were the future, but fizzled out.

• display: run-in has been part of CSS since version 2 (way back in ’98), but it’s basically unsupported. The idea is that a “run-in” box is inserted, inline, into the next block, so something like this:

```html
<h3 style="display: run-in">Title</h3>
<p>Paragraph</p>
<p>Paragraph</p>
```

displays like this:

Title Paragraph

Paragraph

And, ah, hm, I’m starting to see why it’s unsupported. It used to exist in WebKit, but was apparently so unworkable as to be removed six years ago.

• “Alternate stylesheets” were popular in the early 00s, at least on a few of my friends’ websites. The idea was that you could list more than one stylesheet for your site (presumably for different themes), and the browser would give the user a list of them. Alas, that list was always squirrelled away in a menu with no obvious indication of when it was actually populated, so in the end, everyone who wanted multiple themes just implemented an in-page theme switcher themselves. This feature is still supported, but apparently Chrome never bothered implementing it, so it’s effectively dead.

• More generally, the original CSS spec clearly expects users to be able to write their own CSS for a website — right in paragraph 2 it says …the reader may have a personal style sheet to adjust for human or technological handicaps. Hey, that sounds cool. But it never materialized as a browser feature. Firefox has userContent.css and some URL selectors for writing per-site rules, but that’s relatively obscure. Still, there’s clearly demand for the concept, as evidenced by the popularity of the Stylish extension — which does just this. (Too bad it was bought by some chucklefucks who started using it to suck up browser data to sell to advertisers. Use Stylus instead.)

• A common problem (well, for me) is that of styling the label for a checkbox, depending on its state. Styling the checkbox itself is easy enough with the :checked pseudo-selector. But if you arrange a checkbox and its label in the obvious way:

```html
<label><input type="checkbox"> description of the checkbox</label>
```

…then CSS has no way to target either the <label> element or the text node.
jQuery’s (originally custom) selector engine offered a custom :has() pseudo-class, which could be used to express this:

```css
/* checkbox label turns bold when checked */
label:has(input:checked) {
    font-weight: bold;
}
```

Early CSS 3 selector discussions seemingly wanted to avoid this, I guess for performance reasons? The somewhat novel alternative was to write out the entire selector, but be able to alter which part of it the rules affected with a “subject” indicator. At first this was a pseudo-class:

```css
label:subject input:checked {
    font-weight: bold;
}
```

Then later, they introduced a ! prefix instead:

```css
!label input:checked {
    font-weight: bold;
}
```

Thankfully, this was decided to be a bad idea, so the current specced way to do this is… :has()! Unfortunately, it’s only allowed when querying from JavaScript, not in a live stylesheet, and nothing implements it anyway. 20 years and I’m still waiting for a way to style checkbox labels.

• <style scoped> was an attribute that would’ve made a <style> element’s CSS rules only apply to other elements within its immediate parent, meaning you could drop in arbitrary (possibly user-written) CSS without any risk of affecting the rest of the page. Alas, this was quietly dropped some time ago, with shadow DOM suggested as a wildly inappropriate replacement.

• I seem to recall that when I first heard about Web components, they were templates you could use to reduce duplication in pure HTML? But I can’t find any trace of that concept now, and the current implementations require JavaScript to define them, so there’s nothing declarative linking a new tag to its implementation. Which makes them completely unusable for anything that doesn’t have a compelling reason to rely on JS. Alas.

• <blink> and <marquee>. RIP. Though both can be easily replicated with CSS animations.

## That’s it

You’re still here? It’s over. Go home. And maybe push back against Blink monoculture and use Firefox, including on your phone, unless for some reason you use an iPhone, which forbids other browser engines, which is far worse than anything Microsoft ever did, but we just kinda accept it for some reason.

# Advent calendar 2019

Post Syndicated from Eevee original https://eev.ee/release/2019/12/01/advent-calendar-2019/

Happy new year! For December, I had the absolutely ludicrous idea to do an advent calendar, whereupon I would make and release a thing every day until Christmas. It didn’t go quite as planned! But some pretty good stuff still came out of it.

Day 1: I started out well enough with the Doom text generator (and accompanying release post), which does something simple that I’ve wanted for a long time but never seen anywhere: generate text using the Doom font. Most of the effort here was just in hunting down the fonts and figuring out how they worked; the rest was gluing them together with the canvas API. It could be improved further, but it’s pretty solid and useful as-is!

Day 2: I tried another thing I’d always wanted: making a crossword! (Solve interactively on squares.io!) I didn’t expect it to take all day, but it did, and even then I found a typo that I didn’t have time to fix, and I had to rush with the clues. All in all, an entertaining but way too difficult first attempt. I’d love to try doing this more, though.

Day 3: I’ve made a couple SVG visualizations before — most notably in my post on Perlin noise — and decided to take another crack at it.
The result was a visualization of all six modern trig functions, showing the relationships between them in two different ways. I’m pretty happy with how this turned out, and delighted that I learned some relationships I didn’t know about before, either! I do wish I’d drawn some of the similar triangles to make the relationships more explicit, but I ran out of time — just orienting the text correctly took ages, especially since a lot of it needed different placement in all four quadrants. I vaguely intended to get around to doing a couple more of these, but it didn’t end up happening.

Days 4 and 7: I love the PICO-8’s built-in tracker, which makes way more sense to me than any “real” tracker, and set out to replicate it for the web. The result is PICOtracker! Unfortunately, this one didn’t get fully finished (yet) — it can play back sounds and music from the hardcoded Under Construction cart, but doesn’t support editing yet. Most of my time went to figuring out the Web Audio API, figuring out what the knobs in the PICO-8 tracker actually do (and shoutout to picolove for acting as a source code reference), and figuring out how to weld the two together. I definitely want to revisit this in the near future!

Day 5: I’d been recently streaming Eternal Doom III and was almost done, and I keep being really lazy about putting Doom streams on YouTube, so I finished up the game (which took far, far longer than I expected) and posted the whole thing as a playlist. It spans like 24 hours. Good if you, uh, just want some Doom noise to listen to in the background.

Day 6: I’d expected Eternal Doom to be a quick day so I could have a break, and it was not. So I took an explicit day off.

Days 8 and 13: I made flathack, a web roguelike with only one floor! The idea came from having played NetHack a great many times, and having seen the first floor much more than any other part of the dungeon — so why not make that the whole game? It needs a lot more work, but I’m happy to have finally published a roguelike, and I think it already serves its intended purpose at least a little bit: it’s a cute little timewaster that doesn’t keep killing you.

Days 9–12: I got food poisoning. It sucked. A lot.

Days 14–20: Fresh off of making flathack in only two days, I got a bit too big for my britches and decided to try writing an interactive fiction game. In one day. Spoilers: it took more than one day. But I think the result is pretty charming: Star Anise Chronicles: Escape from the Chamber of Despair, a game about being a cat and causing wanton destruction, and also the first Star Anise Chronicles game to actually be published. A good chunk of the time was spent just drawing illustrations for it, which weren’t strictly necessary, but they add a lot to the game and they did get me back in an art mood.

Day 21: I feel like I’ve been scared of color for a long time, and that’s no good, so I drew and colored something.

Day 22: I drew some weird porn, and colored it too! Porn is just a blast to draw, and it’d been a while. I’ll let you find the link on the calendar if you really want it.

Day 23: Did not exist, due to becoming nocturnal.

Days 24–28: I started a big reference of a bunch of my Flora characters way back in November 2018, but I tried to paint it when I didn’t know what I wanted in a painting style, and eventually I gave up. Flat colors are better for references anyway, so I tried again, and this time I finished!
I’m really happy with how it came out — I feel like I’m finally starting to get the hang of art, maybe, just as I hit five years of trying. Again, it’s wildly NSFW, but the link is on the calendar.

All told, I didn’t quite end up with 25 distinct things, but I did make some interesting stuff — some of which I’d been thinking about for a long time — and I’ll call that a success. I’d love to get flathack to the point that it’s worth playing repeatedly, make more crosswords, and finish PICOtracker — but those will have to wait, since my GAMES MADE QUICK??? FOUR jam is coming up in a few days! And speaking of which, I need to put a bunch of this stuff on Itch!

# Old CSS, new CSS

Post Syndicated from Eevee original https://eev.ee/blog/2019/09/07/old-css-new-css/

I first got into web design/development in the late 90s, and only as I type this sentence do I realize how long ago that was. And boy, it was horrendous. I mean, being able to make stuff and put it online where other people could see it was pretty slick, but we did not have very much to work with.

I’ve been taking for granted that most folks doing web stuff still remember those days, or at least the decade that followed, but I think that assumption might be a wee bit out of date. A little while ago I encountered a tweet incredulous at the lack of border-radius back in the day. I still remember waiting with bated breath for it to be unprefixed!

But then, I suspect I also know a number of folks who only tried web design in the old days, and assume nothing about it has changed since. I’m here to tell all of you to get off my lawn. Here’s a history of CSS and web design, as I remember it.

## The very early days

In the beginning, there was no CSS. This was very bad.

My favorite artifact of this era is the book I learned HTML from, HTML: The Definitive Guide, published in several editions in the mid to late 90s. The book was indeed about HTML, with no mention of CSS at all. I don’t have it any more and can’t readily find screenshots online, but here’s a page from HTML & XHTML: The Definitive Guide, which seems to be a revision (I’ll get to XHTML later) with much the same style.

Here, then, is the cutting-edge web design advice of 199X: Clearly delineate headers and footers with horizontal rules. No, that’s not a border-top. That’s an <hr>. The page title is almost certainly centered with, well, <center>. The page uses the default text color, background, and font. Partly because this is a guidebook introducing concepts one at a time; partly because the book was printed in black and white; and partly, I’m sure, because it reflected the reality that coloring anything was a huge pain in the ass.

Let’s say you wanted all your <h1>s to be red, across your entire site. You had to do this:

```html
<H1><FONT COLOR=red>...</FONT></H1>
```

… every single goddamn time. Hope you never decide to switch to blue!

Oh, and everyone wrote HTML tags in all caps. I don’t remember why we all thought that was a good idea. Maybe this was before syntax highlighting in text editors was very common (read: I was 12 and using Notepad), and uppercase tags were easier to distinguish from body text.

Keeping your site consistent was thus something of a nightmare. One solution was to simply not style anything, which a lot of folks did. This was nice, in some ways, since browsers let you change those defaults, so you could read the web how you wanted.

A clever alternate solution, which I remember showing up in a lot of Geocities sites, was to simply give every page a completely different visual style. Fuck it, right?
Just do whatever you want on each new page. That trend was quite possibly the height of web design.

Damn, I miss those days. There were no big walled gardens, no Twitter or Facebook. If you had anything to say to anyone, you had to put together your own website. It was amazing. No one knew what they were doing; I’d wager that the vast majority of web designers at the time were clueless hobbyist tweens (like me) all copying from other clueless hobbyist tweens. Half the web was fan portals about Animorphs, with inexplicable splash pages warning you that their site worked best if you had a 640×480 screen. (Anyone else should, I don’t know, get a new monitor?) Everyone who was cool and in the know used Internet Explorer 3, the most advanced browser, but some losers still used Netscape Navigator so you had to put a “Best in IE” animated GIF on your splash page too.

This was also the era of “web-safe colors” — a palette of 216 colors, where every channel was one of 00, 33, 66, 99, cc, or ff — which existed because some people still had 256-color monitors! The things we take for granted now, like 24-bit color.

In fact, a lot of stuff we take for granted now was still a strange and untamed problem space. You want to have the same navigation on every page on your website? Okay, no problem: copy/paste it onto each page. When you update it, be sure to update every page — but most likely you’ll forget some, and your whole site will become an archaeological dig into itself, with strata of increasingly bitrotted pages.

Much easier was to use frames, meaning the browser window is split into a grid and a different page loads in each section… but then people would get confused if they landed on an individual page without the frames, as was common when coming from a search engine like AltaVista. (I can’t believe I’m explaining frames, but no one has used them since like 2001. You know iframes? The “i” is for inline, to distinguish them from regular frames, which take up the entire viewport.)

PHP wasn’t even called that yet, and nobody had heard of it. This weird “Perl” and “CGI” thing was really strange and hard to understand, and it didn’t work on your own computer, and the errors were hard to find and diagnose, and anyway Geocities didn’t support it.

If you were really lucky and smart, your web host used Apache, and you could use its “server side include” syntax to do something like this:

```html
<BODY>
<!--#include virtual="/header.html" -->

(actual page content goes here)

<!--#include virtual="/footer.html" -->
</BODY>
```

Mwah. Beautiful. Apache would see the special comments, paste in the contents of the referenced files, and you’re off to the races. The downside was that when you wanted to work on your site, all the navigation was missing, because you were doing it on your regular computer without Apache, and your web browser thought those were just regular HTML comments. It was impossible to install Apache, of course, because you had a computer, not a server.

Sadly, that’s all gone now — paved over by homogenous timelines where anything that wasn’t made this week is old news and long forgotten. The web was supposed to make information eternal, but instead, so much of it became ephemeral. I miss when virtually everyone I knew had their own website. Having a Twitter and an Instagram as your entire online presence is a poor substitute.

So let’s look at the Space Jam website.

## Case study: Space Jam

Space Jam, if you’re not aware, is the greatest movie of all time and focuses on Bugs Bunny’s brief basketball career, playing alongside a live action Michael Jordan.
It was followed by a series of very successful and high-praised RPG spinoffs, which tell the story of the fallout of the Space Jam and are extremely canon. And we are truly blessed, for 23 years after it came out, its website is STILL UP. We can explore the pinnacle of 1996 web design, right here, right now.

First, notice that every page of this site is a static page. Not only that, but it’s a static page ending in .htm rather than .html, because people on Windows versions before 95 were still beholden to 8.3 filenames. Not really sure why that mattered in a URL, but there you go.

The CSS for the splash page looks like this:

Haha, just kidding! There’s no CSS at all. I see a single line in the page source, but I’m pretty sure that was added much later to style some policy links.

Next, notice the extremely precise positioning of these navigation links. This feat was accomplished the same way everyone did everything in 1996: with tables. In fact, tables have one advantage over CSS for layout: you can ctrl-click to select a table cell and drag around to select all of them, which shows you how the cells are arranged and is kind of like a super retro layout debugger. This was great because the first meaningful web debug tool, Firebug, wasn’t released until 2006 — a whole ten years later!

The markup for this table is overflowing with inexplicable blank lines, but with those removed, it looks like this:

[38 lines of table markup]

That’s the first two rows, including the logo. You get the idea. Everything is laid out with align and valign on table cells; rowspans and colspans are used frequently; and there are some <br>s thrown in for good measure, to adjust vertical positioning by one line-height at a time.

Other fantastic artifacts to be found on this page include this header, which contains Apache SSI syntax! This must’ve quietly broken when the site was moved over the years; it’s currently hosted on Amazon S3. You know, Amazon? The bookstore?

[7 lines of header markup]

Okay, let’s check out jam central. I’ve used my browser dev tools to reduce the viewport to 640×480 for the authentic experience (although I’d also have lost some vertical space to the title bar, taskbar, and five or six IE toolbars).

Note the frames: the logo in the top left leads back to the landing page, cleverly saving screen space on repeating all that navigation, and the top right is a fucking ad banner which has been blocked like seven different ways. All three parts are separate pages.

Note also the utterly unreadable red text on a textured background, one of the truest hallmarks of 90s web design. “Why not put that block text on a solid background?” you might ask. You imbecile. How would I possibly do that? Only the <body> has a background attribute! I could use <table bgcolor>, but then I’d have to use a solid color, and that would look so boring!

But wait, what is this new navigation widget? How are the links all misaligned like that? Is this yet another table? Well, no, although filling a table with chunks of a sliced-up image wasn’t uncommon. But this is an imagemap, a long-forgotten HTML feature. I’ll just show you the source:

[8 lines of imagemap markup]

I assume this is more or less self-explanatory. The usemap attribute attaches an image map, which is defined as a bunch of clickable areas, beautifully encoded as inscrutable lists of coordinates or something. And this stuff still works! This is in HTML! You could use it right now!
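If you’ve never seen one, a minimal imagemap looks something like this sketch; the filenames and coordinates are invented for illustration, not taken from the actual Space Jam source:

```html
<IMG SRC="nav.gif" ALT="navigation" USEMAP="#navmap">
<MAP NAME="navmap">
<!-- illustrative only: each AREA is a clickable region, given as x,y pixel coordinates -->
<AREA SHAPE="rect" COORDS="16,42,196,62" HREF="jamcentral.htm" ALT="Jam Central">
<AREA SHAPE="circle" COORDS="150,120,40" HREF="lineup.htm" ALT="Lineup">
</MAP>
```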
Let’s look at one more random page here. I’d love to see some photos from the film. (Wait, photos? Did we not know what screenshots were yet?) Another frameset, but arranged differently this time.

They did an important thing here: since they specified a background image (which is opaque), they also specified a background color. Without it, if the background image failed to load, the page would be white text on the default white background, which would be unreadable. This is still an important thing to keep in mind, by the way. I feel like modern web development tends to assume everything will load, or sees loading as some sort of inconvenience to be worked around.

Anyway, there’s not much to say here. The grid of thumbnails is, of course, done with a table. The thumbnails themselves are 72×43 pixels, and the “full-size, full-color, internet-quality” images are 360×216. Hey, though, they’re only like 16 KB! That’ll only take nine seconds to download.

## The regular early days

So that’s where we started. This all sucked, obviously. If you wanted any kind of consistency on more than a handful of pages, your options were very limited. And then CSS came along, and it was a fucking miracle. You could put borders on stuff! You could set colors without having to copy-paste them everywhere! You could just write HTML and another file somewhere else would make it look like the rest of your website, which was important, because we still didn’t understand what “server-side scripting” was or how to use it to make anything more interesting than a hit counter! (I absolutely wrote a hit counter.)

A website even came along to take this principle to the extreme — CSS Zen Garden is still around, and showcases the same HTML being radically transformed into completely different designs by applying different stylesheets.

But we’re not quite there yet.

First came CSS 1, which became a W3C Recommendation in late ’96. I don’t know that this has been explicitly stated, but CSS was clearly designed to divorce the existing layout and appearance capabilities of HTML from the markup structure. That’s why even CSS 1 had the float property — it encapsulated the same functionality as the <img align> attribute. The white-space property exposed what <nobr> and <pre> did. That meant you could use a <p> to indicate a paragraph, and have it still mean “this is a paragraph” while styling it however you wanted.

Beyond that, though, there was very little to CSS 1. You could set the font, color, and background of basically anything (though color keywords weren’t even defined yet, and were left up to the implementation!); you could set margins, borders, padding, and width/height; and that was pretty much it. Even the position property didn’t exist yet.

So while CSS 1 was a tremendous help for trivial aesthetics like colors and fonts, it was useless for layout. Everyone picked it up as a really convenient way to keep the theme consistent, but tables remained king for actually laying out your page.

• alt stylesheets!! when did those come along.
• reader stylesheets, much much less useful nowadays
• blink and marquee
• css2 not until mid 98
• browser wars, css included
• nothing fucking worked, people stuck to tables for arranging things and css for light details
• quirks mode! introduced by ie4? (list quirks, i love the img in a table cell one)

So, along came CSS, and it was a fucking miracle. Now you just needed to put that color in a single file:

```css
h1 {
    color: navy;
}
```

And it would apply to every <h1> on every page you included that file in!
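And “included that file in” was itself just one line in each page’s <head>; the filename here is whatever you called your stylesheet:

```html
<LINK REL="stylesheet" TYPE="text/css" HREF="style.css">
```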
This changes everything.

This is about where I come in and where my perspective really starts. But please bear in mind that I was like 11, with no idea what I was doing, talking to mostly other 11-year-olds. I’m sure the paid web devs putting together MSN or whatever had a slightly better grasp on things, but honestly — who the hell was going to business websites in 1998? Like, why would you even do that? The web was for random people to make their own little quirky beautiful things.

very, very long sigh

… Anyway, CSS basically killed all those <body> attributes and <font> tags and whatnot overnight. Budding web developers would get very haughty about how much better CSS was, and how you clearly had no idea what you were doing if you still used <font size=+1>. And so began best practice snobbery.

For some span of time — I want to say a couple years, but time passes weirdly when you’re a kid — this was the state of the web. Tables were still used for layout, but CSS was used for, well, style. Colors, sizes, bold, underline. There was even this sick trick you could do with links where they’d only be underlined when the mouse was pointing at them. Incredible!

But there were plenty of things you couldn’t do. Rather a lot, in fact. Here are some that I remember, and the workarounds we were stuck with.

• You couldn’t, of course, put rounded corners on anything. You had to make four round-corner images yourself and put them in the corners of a 3×3 <table> (!). Or, if you knew the size of the element ahead of time, you could just make a single background image (!!). Of course, Internet Explorer 6 didn’t understand when an image in this newfangled “PNG” format had an 8-bit alpha channel, so you’d have to either use a GIF with jagged edges, or just include the page’s background color in the corner images so they could be opaque (!!!).

• Speaking of opacity, that didn’t exist at all. The opacity property was new in CSS 3, and IE choked on PNGs, so if you wanted any sort of translucency, the solution was usually: don’t.

• What the fuck is a “web font”? Your font options are, uh, whatever’s installed on your computer. I’m sure everyone else has the same fonts you do. Everyone’s using Windows XP, right? (There was once a time where people could configure their web browser to use fonts of their choosing, and it would actually matter, but those days seem to be long past. Alas, the stack of defaults means any website that doesn’t specify a font would be ugly for most people.)

• No box shadows, no text shadows.

• The +, ~, and > CSS combinators didn’t work in IE until 7, so they effectively didn’t exist.

• XXX

I cannot stress enough that Stack Overflow did not yet exist. This stuff was picked up from various websites about websites, like quirks mode and Eric Meyer’s website. (Eric Meyer is a CSS pioneer. When his young daughter Rebecca died five years ago, she was uniquely immortalized with a CSS color name, rebeccapurple. That’s how highly the web community thinks of him. Also I have to go cry a bit over that story now.)

XXX complexspiral?

## The struggle begins

• browser wars around the same time

Then, something happened. I don’t know how, or when, or why exactly. But people started to do website layout using CSS, too.

In hindsight, this was clearly absurd, and let me tell you why.

CSS in its original form, which we’ll say for convenience is CSS 2.1, was designed like it was for articles — and nothing else. You could play with margins and font sizes and even colors, all you wanted.
But you could not put things next to each other.

Okay, that’s not entirely true, but it is mostly true.

CSS was, as I understand it, designed to extract out all the presentational stuff that people were already doing with HTML. So it inherited a lot of the HTML model, just abstracted away from specific tags. That meant there were really only two layout models.

Inline layout was for stuff like <b> and <i>: variations on style within a line of wrapping text. Block layout was for stuff like <p> and <hr>: a vertical stack of elements that each stretched across the full width of the page (or whatever container).

So if you want to have a navigation sidebar on the left side of your website, what do you do? You can’t use blocks, because those stack vertically. That leaves inline, but… that’s for flowing text, not putting large complex blocks next to each other. CSS had also introduced “absolute positioning” for sticking elements at a precise position on the page, but people had all kinds of different screen sizes and that approach was remarkably inflexible.

What does that leave? Only one option.

And it’s only in hindsight that I can truly appreciate how ghastly this was. You see, the HTML <img> element normally sits in the flow of text, as an inline element. But if you set its align attribute to left or right, the image would jump out of the flow of text and shift to one side of the document, with text flowing around it. And CSS had absorbed this functionality in a general way, as the float property.

CSS floats were only designed for one thing: flowing text around an image, like you might see in a magazine article. That’s it. But they were also the only way to put two things next to each other. Because if you have two floats in the same place, then one will butt up against the other.

And so it begins.

### The float hack era

Now, to be fair, this wasn’t a bad idea. Crafting layouts out of tables was hellish nonsense, and it suffered from the same problems as embedding colors in your HTML: if you ever wanted to make a slight adjustment, you’d have to repeat it on every single page. Surely, the ideal would be for the markup to describe the content and the CSS to describe how to arrange it on screen.

XXX how a float layout actually worked
XXX why it was brittle etc, but the biggest problem was:

### The dreadnaught: Internet Explorer 6

released in 2001, right around when the css layout concept was taking off. completely ate market share in XXX, over 95%. basically abandoned for the next five years, and we discovered some problems; then firefox came along and was taking the world by storm, so suddenly everyone had to make their carefully-hacked websites work in both ie6 and firefox, and that was proving to be a problem, because ie6 had a lot of fundamental and severe bugs.

• acid2!!
• what were the biggest things in css3? mention those
• no web fonts, so people basically used whatever shipped with windows, which sucked if you weren’t on windows
• using <!-- at the start of your stylesheet lmfao
• the bad old days: i came in shortly before the long winter of ie6
• browser wars basically came down to just ie6 and a handful of people using like, konqueror on linux. (hi! ps that became webkit)
• let me explain ie6 to you:
• ie6 box model, solved by strict mode? A LOT of stuff was “solved” by strict mode
• margin: auto treated like 0, but you could use text-align on the parent to center
• “min-height” bug: no overflow, it just makes stuff taller
• no inline-block! i remember being very excited for firefox 4(?) for this reason; was a huge refactor by david baron
• oh, the cross-browser hacks. oh my god. parsing hacks: conditional comments, putting a squiggle before a css prop name, ie6 can’t parse >
• zoom
• ie6 filters to fix stuff sometimes
• xhtml??
• shenanigans: complexspiral, etc
• “dhtml”, dynamicdrive (oreilly book from 2007) (dynamicdrive was 2000)
• pushback against tables, which some folks didn’t really understand; it was a good idea, but css was still designed for print, kinda, so didn’t have a ton of layout options
• everything done with floats, the only way to put stuff side by side
• “clearfix”, sob. could do it with ::after but i don’t think ie6 supported those
• css reset
• lot of stuff done in flash because it was easier
• started making progress when firefox came out
• the miraculous turning point came when ie6 was dead, but it was a slow, agonizing death — no real moment, kind of up to everyone individually. if you were doing web dev for a job, maybe even 2% ie6 traffic was enough to keep support for it. but wow, what a nightmare that was
• youtube dropped it March 13, 2010, which i think was the first serious big-name shun and imo marked the end
• transitions? jquery. that’s it. an era where “modern” websites had everything sliding around with jquery effects, sort of like the linux 3d compiz cube thing
• nowadays, css flexbox and grid, holy crap. rounded borders, sure, but also drop shadows, text shadows, transforms, transitions/animations, filters, svg filters. “finally a css feature that lets me say what i want and have it happen, rather than try to coax it into happening implicitly”
• fun facts: (Fun fact: HTML email is still basically trapped in this era.)
• grand irony: as soon as rounded glossy bubble buttons became easy to do with css, they went out of style! now we’re back to stuff we could’ve done pretty easily in 1996, except for the round avatars i guess

# No More Forgetting to Input ERP Charges – Hello Automated ERP!

Post Syndicated from Grab Tech original https://engineering.grab.com/automated-erp-charges

ERP, standing for Electronic Road Pricing, is a system used to manage road congestion in Singapore. Drivers are charged when they pass through ERP gantries during peak hours. ERP rates vary for different roads and time periods based on the traffic conditions at the time. This encourages people to change their mode of transport, travel route or time of travel during peak hours. ERP is seen as an effective measure in addressing traffic conditions and ensuring drivers continue to have a smooth journey.

Did you know that Singapore has a total of 79 active ERP gantries? Did you also know that every ERP gantry changes its fare 10 times a day on average? For example, total ERP charges for a journey from Ang Mo Kio to Marina will cost $10 if you leave at 8:50am, but $4 if you leave at 9:00am on a working day!

Imagine how troublesome it would have been for Grab’s driver-partners who, on top of having to drive and check navigation, would also have had to remember each and every gantry they passed, calculating their total fare and then manually entering the charges to the total ride cost at the end of the ride. In fact, based on our driver-partners’ feedback, missing out on ERP charges was listed as one of their top-most pain points. Not only did the drivers find the entire process troublesome, this also led to earnings loss as they would have had to bear the cost of the ERP fares.
We’re glad to share that, as of 15th March 2019, we’ve successfully resolved this pain point for our driver-partners by introducing automated ERP fare calculation!

So, how did we achieve automating the ERP fare calculation for our driver-partners? How did we manage to reduce the number of trips where drivers would forget to enter the ERP fare to almost zero? Read on!

## How we approached the Problem

The question we wanted to solve was – how do we create an impactful feature to make sure that driver-partners have one less thing to handle when they drive?

We started by looking at the problem at hand. ERP fares in Singapore are very dynamic; they change based on the day and time.

Caption: Example of ERP fare changes on a normal weekday in Singapore

We wanted to create a system which can identify the dynamic ERP fares at any given time and location, while simultaneously identifying when a driver-partner has passed through any of these gantries.

However, that wasn’t enough. We wanted this feature to be scalable to every country that Grab is in – like Indonesia, Thailand, Malaysia, Philippines, Vietnam. We started studying the ERP (or tolls – as it is known locally) system in other countries. We realized that every country has its own style of calculating toll. While in Singapore ERP charges for cars and taxis are the same, Malaysia applies different charges for cars and taxis. Similarly, Vietnam has different tolls for 4-seaters and 7-seaters. Indonesia and Thailand have couple gantries, where you pay only at one of the gantries. Suppose A and B are couple gantries: if you passed through A, you won’t need to pay at B, and vice versa.

This is where our Ops team came to the rescue!

## Boots on the Ground!

Collecting all the ERP or toll data for every country is no small feat, recalls Robinson Kudali, program manager for the project. “We had around 15 people travelling across the region for 2-3 weeks, working on collecting data from every possible source in every country.”

Getting the right geographical coordinates for every gantry is very important. We track driver GPS pings frequently, identify the nearest road to that GPS ping and check the presence of a gantry using its coordinates. The entire process requires you to be very accurate; an incorrect gantry location can easily lead to us miscalculating the fare.

Bayu Yanuaragi, our regional mapops lead, explains – “To do this, the first step was to identify all toll gates for all expressways & highways in the country. The team used various mapping software to locate and plot all entry & exit gates using map sources, open data and more importantly government data as references. Each gate was manually plotted using satellite imagery and aligned with our road layers in order to extract the coordinates with a unique gantry ID.”

Location precision is vital in creating the dataset as it dictates whether a toll gate will be detected by the Grab app or not. The next step was to identify the toll charge from one gate to another. Accuracy of the toll charge per segment directly reflects on the fare that the passenger pays after the trip.

Caption: ERP gantries visualisation on our map – The purple bars are the gantries that we drew on our map

Once the data compilation is done, the team would then conduct fieldwork to verify its integrity. If data gaps are identified, modifications would be made accordingly. Upon submission of the output, stack engineers would perform a higher-level quality check of the content in staging.
Lastly, we worked with a local team of driver-partners who volunteered to make sure the new system was fully operational and the prices were correct. Inconsistencies observed were reported by these driver-partners, and then corrected in our system.

## Closing the loop

Creating a strong dataset did help us in predicting correct fares, but we needed something which allows us to handle the dynamic behaviour of changing toll statuses too. For example, the Singapore government revises ERP fares every quarter, while there could also be ad-hoc changes like activating or deactivating gantries on an on-going basis.

Garvee Garg, Product Manager for this feature, explains: “Creating a system that solves the current problem isn’t sufficient. Your product should be robust enough to handle all future edge case scenarios too. Hence we thought of building a feedback mechanism with drivers.”

In case our ERP fare estimate isn’t correct or there are changes in ERPs on-ground, our driver-partners can provide feedback to us. This feedback flows directly to the Customer Experience team, who do the initial investigation, and from there to our Ops team. A dedicated person from the Ops team checks the validity of the feedback and recommends updates. It only takes 1 day on average to update the data from when we receive the feedback from the driver-partner.

However, validating the driver feedback was a time-consuming process. We needed a tool which could ease the life of the Ops team by helping them debug each and every case. Hence the ERP Workflow tool came into the picture.

99% of the time, feedback from our driver-partners is about error cases. When feedback comes in, this tool allows the Ops team to check the entire ride history of the driver and map the driver’s ride trajectory against all the underlying ERP gantries at that particular point in time. The Ops team can then identify whether the ERP fare calculated by our system, or the one reported by the driver, is correct.

## This is only the beginning

By creating a system that can automatically calculate and key in ERP fares for each trip, Grab is proud to say that our driver-partners can now drive with less hassle and focus more on the road, which will bring the ride experience and safety for both the driver and the passengers to a new level!

The Automated ERP feature is currently live in Singapore and we are now testing it with our driver-partners in Indonesia and Thailand. Next up, we plan to pilot in the Philippines and Malaysia and soon every country that Grab is in – so stay tuned for even more innovative ideas to enhance your experience on our super app!

To know more about what Grab has been doing to improve the convenience and experience for both our driver-partners and passengers, check out other stories on this blog!

# Guiding you Door-to-Door via our Super App!

Post Syndicated from Grab Tech original https://engineering.grab.com/poi-entrances-venues-door-to-door

Remember landing at an airport or going to your favourite mall and the hassle of finding the pickup spot when you booked a cab? When there are about a million entrances, it can get particularly annoying trying to find the right pickup location!

Rolling out across South East Asia is a brand new booking experience from Grab, designed to make it easier for you to make a booking at large venues like airports, shopping centers, and tourist destinations!
With the new booking flow, it will not only be easier to select one of the pre-designated Grab pickup points, you can also find text and image directions to help you navigate your way through the venue for a smoother rendezvous with your driver!

## Inspiration behind the work

Finding the pick-up point closest to you, let alone predicting it, is incredibly challenging, especially when you are inside huge buildings or in crowded areas. Neeraj Mishra, Product Owner for Places at Grab, explains: “We rely on GPS data to understand the user’s location, which can be tricky when you are indoors or surrounded by skyscrapers. Since the satellite signal has to go through layers of concrete and steel, it becomes weak, which adds to the inaccuracy. Furthermore, ensuring that passengers and drivers have the same pick-up point in mind can be tricky, especially with venues that have multiple entrances.”

Grab’s data analysis revealed that “rendezvous distance” (walking distance between the selected pick-up point and where the car is waiting) is more than twice the Grab average when the booking is made from large venues such as airports.

To solve this issue, Grab launched “Entrances” (the green dots on the map) last year, which lists the various pick-up points available at a particular building, and shows them on the map, allowing users to easily choose the one closest to them, and ensuring their drivers know exactly where they want to be picked up from. Since then, Grab has created more than 120,000 such entrances, and we are delighted to inform you that the average rendezvous distance across all countries has been steadily going down!

## One problem remained

But there was still one common pain-point to be solved. Just because a passenger has selected the pick-up point closest to them, doesn’t mean it’s easy for them to find it. This is particularly challenging at very large venues like airports and shopping centres, and especially difficult if the passenger is unfamiliar with the venue, for example – a tourist landing at Jakarta Airport for the very first time.

To deliver an even smoother booking and pick-up experience, Grab has rolled out a new feature called Venues – the first in the region – that will give passengers in-app photo and text directions to the pick-up point closest to them.

## Let’s break it down! How does it work?

Whether you are a local or a foreigner on a holiday or business trip, fret not if you are not too familiar with the place that you are in! Let’s imagine that you are now at Singapore Changi Airport: your new booking experience will look something like this!

Step 1: Fire up the Grab app and click on Transport. You will see a welcome screen showing you where you are!

Step 2: On the booking screen, you will see a new pickup menu with a list of available pickup points. Confirm the pickup point you want and make the booking!

Step 3: Once you’ve been allocated a driver, tap on the bubble to get directions to your pick-up point!

Step 4: Follow the landmarks and walking instructions and you’ve arrived at your pick-up point!

## Curious about how we got this done?

### Data-Driven Decisions

Based on a thorough data analysis of historical bookings, Grab identified key venues across our markets in Southeast Asia. Then we dispatched our Operations team to the ground, to identify all pick-up points and perform a detailed on-ground survey of the venue.
### Operations Team’s Leg Work

Nagur Hassan, Operations Manager at Grab, explains the process: “For the venue survey process, we send a team equipped with the tools required to capture the details, like cameras, wifi and bluetooth scanners, etc. Once inside the venue, the team identifies strategic landmarks and clear direction signs that are related to drop-off and pick-up points. The team also captures turn-by-turn walking directions to make it easier for Grab users to navigate – for instance, walk towards Starbucks and take a left near the H&M store. All the photos and documentation taken on the sites are then brought back to the office for further processing.”

### Quality Assurance

Once the data is collected, our in-house team checks the quality of the images and data. We also mask people’s faces and the number plates of vehicles to hide any identity-related information. As of today, we have collected 3400+ images for 1900+ pick-up points belonging to 600 key venues! This effort took more than 3000 man-hours in total! And we aim to cover more than 10,000 such venues across the region in the next few months.

## This is only the beginning

We’re constantly striving to improve the location accuracy of our passengers by using advanced Machine Learning and a constant feedback mechanism. We understand GPS may not always be the most accurate determination of your current location, especially in crowded areas and skyscraper districts. This is just the beginning and we’re planning to launch some very innovative features in the coming months! So stay tuned for more!

# Cheezball Rising: Collision detection, part 1

Post Syndicated from Eevee original https://eev.ee/blog/2018/11/28/cheezball-rising-collision-detection-part-1/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console! GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I bash my head against a rock. Sorry, I mean I bash Star Anise against a rock. It’s about collision detection.

Previously: I draw some text to the screen. Next: more collision detection, and fixed-point arithmetic.

## Recap

Last time I avoided doing collision detection by writing a little dialogue system instead. It was cute, and definitely something that needed doing, but something much more crucial still looms. I’ve put it off as long as I can. If I want to get anywhere with actual gameplay, I’m going to need some collision detection.

## Background and upfront decisions

Collision detection is hard. It’s a lot of math that happens a few pixels at a time. Small mistakes can have dramatic consequences, yet be obscure enough that you don’t even notice them. Even using an off-the-shelf physics engine often requires dealing with a mountain of subtle quirks. And did I mention I have to do it on a Game Boy?

Someday I’ll write an article about everything I’ve picked up about collision detection, but I haven’t yet, so you get the quick version.

The problem is that an object is moving around, and it should be unable to move into solid objects. There are two basic schools of thought about the solution.

Discrete collision observes that an object moves in steps — a little chunk of movement every frame — and simply teleports the object to its new location, then checks whether it now overlaps anything.
(Note that all of these diagrams show very exaggerated motion. In most games, objects are slow and frames are short, so nothing moves more than a pixel or two at a time. That’s another reason collision detection is hard: the steps are so small that it can be difficult to see what’s actually going on.)

If it does overlap, you might try to push it out of whatever it’s overlapping, or you might cancel the movement entirely and simply not move the object that frame.

Both approaches have drawbacks. Pushing an object out of an obstacle isn’t too difficult a problem, but it’s possible that the object will be pushed out into another obstacle, and now you have a complicated problem. (At this point, though, you could just give up and fall back to cancelling the movement.)

But cancelling the movement means that an object might get “stuck” a pixel or two away from a wall and never be able to butt up against it. The faster the object is trying to move, the bigger the risk that this might happen. That said, this is exactly how the original Doom engine handles collision, and it seems to work well enough there. On the other hand, Doom is first-person so you can’t easily tell if you’re butting right up against a wall; a pixel gap is far more obvious in a game like this. On the other other hand, Doom also has bugs where a fast monster can open a locked door from its other side, because the initial teleport briefly moves the monster far enough into the door that it’s touching the other (unlocked) side.

Either way, discrete collision has one other big drawback: tunnelling. Since the movement is done by teleporting, a very fast object might teleport right past a thin barrier. Only the new position is checked for collisions, so the barrier is never noticed. (This is how you travel to parallel universes in Mario 64 — by building up enough speed that Mario teleports through walls without ever momentarily overlapping them.)

There are some other potential gotchas, though they’re rare enough that I’ve never seen anyone mention them. One that stands out to me is that you don’t know the order in which an object collided with obstacles, which might make a difference if the obstacles have special behavior when collided with and the order of that behavior matters.

Continuous collision detection observes that game physics are trying to simulate continuous motion, like happens in the real world, and tries to apply that to movement as well. Instead of teleporting, objects slide until they hit something. Tunnelling is thus impossible, and there’s no need to handle overlaps since they’re prevented in the first place.

This has some clear advantages, in that it eliminates all the pitfalls of discrete collision! It even functions as a superset — if you want some object to act discretely, you could simply teleport it and then attempt to “move” it along the zero vector.

That said, continuous collision introduces some of its own problems. The biggest (for my purposes, anyway) is that it’s definitely more complicated to implement. “Sliding” means figuring out which obstacle would be hit first. You can do raycasting in the direction of movement and see what the ray hits first, though that’s imprecise and opens you up to new kinds of edge cases. If you’re lucky, you’re using something like Unity and can cast the entire shape as a single unit. Otherwise, well, you have to do a bunch of math to find everything in the swept path, then sort them in the order they’d be hit.

The other big problem is that it’s more work at runtime.
With discrete collision, you only need to check for collisions in the new location. That only costs more time when a lot of objects are bunched together in one place, which is unlikely. With continuous collision, everything along the swept path needs to be examined, and that means that the faster an object moves, the more expensive its movement becomes. So, not quite a golden bullet for the tunnelling problem. But that’s not a surprise; the only way to prevent tunnelling is to check for objects between the start and end positions.

Which, then, do I want to implement here?

For platforms without floating point (including the PICO-8 and Game Boy), there’s a third, hybrid option. If everything’s expressed with integers (or fixed point), then the universe has a Planck length: a minimum distance that every other distance must be an integral multiple of. You can thus fake continuous collision by doing repeated steps of discrete collision, one Planck length at a time. Objects will be collided with in the correct order, and you can simply stop at the first overlap. Of course, this eats up a lot of time, since it involves doing collision detection numerous times per object per frame. So unless your Planck length is really big, I’m not sure it’s worth it.

Instead, I’m going to try for continuous collision. It’s closer to “correct” (whatever that means), and it’s what I did for all of my other games so far. It’s definitely harder, thornier, more complicated, and slower, but dammit I like it. It should also save me from encountering surprise bugs later on, which means I can write collision code once and then pretty much forget about it. Ideal.

## Getting started

Star Anise is the only entity at the moment, so as a first pass, I’m only going to implement collision with the world. World collision is much easier! Everything is laid out in a fixed grid, so I already know where the cells are. Finding potential overlaps is fairly simple, and best of all, I don’t need to sort anything to know what order the cells are in.

Right away, I find I have another decision to make. I would normally want to use vector math here — the motion is some distance in some direction, and hey, that’s a vector. But vectors take up twice as much space (read: twice as many registers), and a lot of vector operations rely on division or square roots, which are non-trivial on this hardware.

With a great reluctant sigh, I thus commit to one more approximation, one made on 8-bit hardware since time immemorial. I won’t actually move in the direction of motion; instead, I’ll move along the x-axis, then move along the y-axis separately. Diagonal movement could theoretically cut across some corners (or be unable to fit through very tight gaps), but those are very minor and unlikely inconveniences. More importantly, this handwaving can’t allow any impossible motion.

I’ve already taken for granted that entities will all be axis-aligned rectangles. I’m definitely not dealing with slopes on a goddamn Game Boy. That was hard enough to do from scratch on a modern computer.
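To pin down the shape of per-axis movement, here's a quick Python sketch. It uses the one-Planck-step-at-a-time variant from above, the one I just decided against, purely because it's the shortest thing that demonstrates the axis split; grid.blocked() is a hypothetical solidity test.

```python
def sign(n):
    return (n > 0) - (n < 0)

def move_axis(entity, grid, dx=0, dy=0):
    # The "fake continuous" variant: repeat one-pixel discrete steps,
    # so obstacles are hit in the correct order and we stop at the first.
    for _ in range(abs(dx or dy)):
        entity.x += sign(dx)
        entity.y += sign(dy)
        if grid.blocked(entity):    # hypothetical solidity test
            entity.x -= sign(dx)    # back out of the collision
            entity.y -= sign(dy)
            break

def move_entity(entity, grid, dx, dy):
    # One axis at a time; a diagonal resolves as an L-shape.
    move_axis(entity, grid, dx=dx)
    move_axis(entity, grid, dy=dy)
```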
But I’m getting ahead of myself. First things first: you may recall that Star Anise’s movement is a bit of a hack. Pressing a direction button only adds to or subtracts from the sprite coordinates in the OAM buffer; his position isn’t actually stored in RAM anywhere. In fact, thanks to my slightly nonlinear storytelling across these posts, his movement isn’t stored anywhere either! The input-reading code writes directly to the OAM buffer. Whoops.

I intended to fix that later, and now it’s later, so here we go.

```
; Somewhere in RAM, before anise_facing etc
anise_x: db
anise_y: db
```

So far, so good. OAM is populated in two places (and I should fix that later, too): once during setup, and once in the main game loop. Both will need to be updated to use these values. Setup needs to initialize them first, of course:

```
ld a, 64
ld [anise_x], a
ld [anise_y], a
; ... initialize anise_facing, etc ...
```

And now the OAM setup can be fixed. But, surprise! I left myself another hardcoded knot to untangle: even the relative positions of the sprites are hardcoded. Okay, so, those need to be put somewhere too. Eventually I’m going to need some kinda entity structure, but since there’s only one entity, I’ll just slap it into a constant somewhere.

(I guess my programming philosophy is leaking out a bit here. Don’t worry about structure until you need it, and you don’t need it until you need it twice. Once code works for one thing, it’s relatively straightforward to make it work for n things, and you have fewer things to worry about while you’re just trying to make something work.)

```
; In ROM somewhere
ANISE_SPRITE_POSITIONS:
db -2, -20
db -8, -14
db 0, -14
```

It’s not immediately obvious from looking at these numbers, but I’m taking Star Anise’s position to mean the point on the ground between his feet. That’s the best approximation of where he is, after all. (Early in game development, it seems natural to treat position as the upper-left corner of the sprite, so you can simply draw the sprite at the entity’s position — but that tangles the world model up with the sprite you happen to have at the moment. Imagine the havoc it’d wreak if you changed the size of the sprite later!)

Okay, now I can finally— What? How does the code know there are exactly 3 sprites, on this byte-level platform? Because I’m hardcoding it. Shut up already I’ll fix it later

```
; Load the x and y coordinates into the b and c registers
ld hl, anise_x
ld b, [hl]
inc hl
ld c, [hl]
; Leave hl pointing at the sprite positions, which are
; ordered so that hl+ will step through them correctly
ld hl, ANISE_SPRITE_POSITIONS

; ANTENNA
; x-coord
; The x coordinate needs to be added to the sprite offset,
; AND the built-in OAM offset (8, 16).  Reading the sprite
; offset first allows me to use hl+.
ld a, [hl+]
add a, b
add a, 8
; Previously, hl pointed into the OAM buffer and advanced
; throughout this code, but now I'm using hl for something
; else, so I use direct addresses of positions within the
; buffer.  Obviously this is a kludge and won't work once
; I stop hardcoding sprites' positions in OAM, but, you
; know, I'll fix it later.
ld [oam_buffer + 1], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 0], a
; This stuff is still hardcoded.
; chr index
xor a
ld [oam_buffer + 2], a
; attributes
ld [oam_buffer + 3], a
; The rest of this is not surprising.
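; (Added note: each OAM entry is four bytes -- Y position, X position,
; chr index, then attributes -- and the hardware draws an object at
; (X - 8, Y - 16), which is why 8 and 16 get added above.  The LEFT
; and RIGHT parts below follow the same four-byte pattern starting at
; oam_buffer + 4 and oam_buffer + 8.)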
; LEFT PART
; x-coord
ld a, [hl+]
add a, b
add a, 8
ld [oam_buffer + 5], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 4], a
; chr index
ld a, 2
ld [oam_buffer + 6], a
; attributes
ld a, %00000001
ld [oam_buffer + 7], a

; RIGHT PART
; x-coord
ld a, [hl+]
add a, b
add a, 8
ld [oam_buffer + 9], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 8], a
; chr index
ld a, 4
ld [oam_buffer + 10], a
; attributes
ld a, %00000001
ld [oam_buffer + 11], a
```

Boot up the game, and… it looks the same! That’s going to be a running theme for a little bit here. Sorry, this isn’t a particularly screenshot-heavy post. It’s all gonna be math and code for a while.

Now I need to split apart the code that reads input and applies movement to OAM. Reading input gets much simpler, since it doesn’t have to do anything any more, just compute a dx and dy. This code does still have looming questions, such as how to handle pressing two opposite directions (which is impossible on hardware but easy on an emulator), or whether diagonal movement should be fixed so that Anise doesn’t move at $$\sqrt{2}$$ times his movement speed. Later. Seriously, the actual code has so many XXX and TODO and FIXME comments that I edit out of these posts.

```
; Anise update loop
; Stick dx and dy in the b and c registers.
ld a, [buttons]
; b/c: dx/dy
ld b, 0
ld c, 0
bit PADB_LEFT, a
jr z, .skip_left
dec b
.skip_left:
bit PADB_RIGHT, a
jr z, .skip_right
inc b
.skip_right:
bit PADB_UP, a
jr z, .skip_up
dec c
.skip_up:
bit PADB_DOWN, a
jr z, .skip_down
inc c
.skip_down:

; For now just add b and c to Anise's coordinates.  This
; is where collision detection will go in a moment!
ld a, [anise_x]
add a, b
ld [anise_x], a
ld a, [anise_y]
add a, c
ld [anise_y], a
```

All that’s left is to more explicitly update the OAM buffer! This code ends up looking fairly similar to the setup code. So similar, in fact, that I wonder if these blocks should be merged, but I’ll do that later:

```
; Load x and y into b and c
ld hl, anise_x
ld b, [hl]
inc hl
ld c, [hl]
; Point hl at the sprite positions
ld hl, ANISE_SPRITE_POSITIONS

; ANTENNA
; x-coord
ld a, [hl+]
add a, b
add a, 8
ld [oam_buffer + 1], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 0], a

; LEFT PART
; x-coord
ld a, [hl+]
add a, b
add a, 8
ld [oam_buffer + 5], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 4], a

; RIGHT PART
; x-coord
ld a, [hl+]
add a, b
add a, 8
ld [oam_buffer + 9], a
; y-coord
ld a, [hl+]
add a, c
add a, 16
ld [oam_buffer + 8], a
```

Phew! And the game plays exactly the same as before. Programming is so rewarding. On to the main course!

## Collision detection, sort of

So. First pass. Star Anise can only collide with the map. Ah, but first, what size is Star Anise himself? I’ve only given him a position, not a hitbox. I could use his sprite as the hitbox, but with his helmet being much bigger than his body, that’ll make it seem like he can’t get closer than a foot to anything else. I’d prefer if he had an explicit radius.

```
; in ROM somewhere
ANISE_RADIUS: db 3
```

Remember, Star Anise’s position is the point between his feet. This describes his hitbox as a square, centered at that point, with sides 6 pixels long. The top and bottom edges of his hitbox are thus at y - r and y + r, which makes for some pleasing symmetry.
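In plain Python terms (a sketch for reference, not game code), the hitbox is just the position plus or minus the radius on each axis:

```python
ANISE_RADIUS = 3

def hitbox_edges(x, y, r=ANISE_RADIUS):
    # The position is the point between his feet; the hitbox is a
    # square centered on it, so every edge is position +/- radius.
    left, right = x - r, x + r
    top, bottom = y - r, y + r
    return left, top, right, bottom
```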
(Making hitboxes square doesn’t save a lot of effort or anything, but switching to rectangles later on wouldn’t be especially difficult either.)

### The plan

My plan for moving rightwards, which I came up with after a lot of very careful and very messy sketching, looks like this:

1. Figure out which rows I’m spanning.
2. Move right until the next grid line. No new obstacle can possibly be encountered until then, so there’s nothing to check. (Unless I’m somehow already overlapping an obstacle, of course, but then I’d rather be able to move out of the obstacle than stay stuck and possibly softlock the game.)
3. In the next grid column, check every cell that’s in a spanned row. If any of those cells block us, stop here. Otherwise, move to the next grid line (8 pixels).
4. Repeat until I run out of movement. (It’s very unlikely the previous step would happen more than once; an entity would have to move more than 8 pixels per frame, which is 3 entire screen widths per second.)

Here’s a diagram. In this case, step 3 checks two cells for each column, but it might check more or fewer depending on how the entity is positioned. (It’ll never need to check more than one cell more than the entity’s height.)

Seems straightforward enough. But wait!

### Edge case

I’ll save you a bunch of debugging anguish on my part and skip to the punchline: there’s an edge case. I mean, literally, the case of when the entity’s edge is already against a grid line. That’ll happen fairly frequently — every time an entity collides with the map, it’ll naturally stop with its edge aligned to the grid.

The problem is all the way back in step 1. Remember, I said that to figure out which grid row or column a point belongs to, I need to divide by 8 (or shift right by 3). So the rows an entity spans must count from its top edge divided by 8, to its bottom edge divided by 8. Right?

Well… Everything’s fine until the entity’s bottom edge is exactly flush with the grid line, as in the last example. Then it seems to be jutting into the row below, even though no part of it is actually inside that row. If the entity tried to move rightwards from here, it might get blocked on something in row 1! Even worse, if row 1 were a solid wall that it had just run into, it wouldn’t be able to move left or right at all!

What happened here? There’s a hint in how I laid out the diagram. There’s something akin to the fencepost problem here. I’ve been talking about rows and columns of the grid as if they were regions — “row 1” labels a rectangular strip of the world. But pixel coordinates don’t describe regions! They describe points. A pixel is a square area, but a pixel coordinate is the point at the upper left corner of that area. In the incorrect example, the bottom of the entity is at y = 8, even though the row of pixels described by y = 8 doesn’t contain any part of the hitbox. I’m using the coordinate of the pixel’s top edge to describe a box’s bottom edge, and it falls apart when I try to reinterpret that coordinate as a region. In terms of area, y = 8 really names the first row of pixels that the entity doesn’t overlap.

To work around this, I need to adjust how I convert a coordinate to the corresponding grid cell, but only when that coordinate describes the right or bottom of a bounding box. Bottom pixel 8 should belong to row 0, but 9 should still end up in row 1.

As luck would have it, I’m using integers for coordinates, which means there’s a Planck length — a minimum distance of which all other distances are a multiple.
That length is, of course, 1 pixel. If I subtract that length from a bottom coordinate, I get the next nearest coordinate going upwards. If the original coordinate was on a grid line, it’ll retreat back into the cell above; otherwise, it’ll stay in the same cell. You can check this with the diagram, if you need some convincing.

(This works for any fixed point system; integers are the special case of fixed point with zero fractional bits. It would not work so easily with floating point — subtracting the smallest possible float value will usually do nothing, because there’s not enough precision to express the difference. But then, if you have floating point, you probably have division and can write vector-based collision instead of taking grid-based shortcuts.)

All that is to say, I just need to subtract 1 before shifting. For clarity, I’ll write these as macros to convert a coordinate in a to a grid cell. I call the top or left conversion inclusive, because it includes the pixel the coordinate refers to; conversely, the bottom and right conversion is exclusive, like how a bottom of 8 actually excludes the pixels at y = 8.

```
; Given a point on the top or left of a box, convert it to the
; containing grid cell.
ToInclusiveCell: MACRO
; This is just floor division
srl a
srl a
srl a
ENDM

; Given a point on the bottom or right of a box, convert it to
; the containing grid cell.
ToExclusiveCell: MACRO
; Deal with the exclusive edge by subtracting the planck
; length, then flooring
dec a
srl a
srl a
srl a
ENDM
```

At last, I can write some damn code!

### Some damn code

```
; Here, b and c contain dx and dy, the desired movement.

; First, figure out which columns we might collide with.
; The NEAREST is the first one to our right that we're not
; already overlapping, i.e. the one /after/ the one
; containing our right edge.  That's Exc(x + r) + 1.
; The FURTHEST is the column that /will/ contain our right
; edge.  That's Exc(x + r + dx).
ld hl, ANISE_RADIUS
; Put the NEAREST column in d
ld a, [anise_x]     ; a = x
add a, [hl]         ; a = x + r
ld e, a             ; e = x + r
ToExclusiveCell
inc a               ; a = Exc(x + r) + 1
ld d, a             ; d = Exc(x + r) + 1
; Put the FURTHEST column in e
ld a, e             ; a = x + r
add a, b            ; a = x + r + dx
ToExclusiveCell
ld e, a             ; e = Exc(x + r + dx)

; Loop over columns in [d, e].
; If d > e, this movement doesn't cross a grid line, so
; nothing can stop us and we can skip all this logic.
ld a, e
cp d
jp c, .done_x
; We don't need dx for now, so stash bc for some work space
push bc
.x_row_scan:
; For each column we might cross: check whether any of the
; rows we span will block us.
; Hm.  This code probably should've been outside the loop.
ld a, [anise_y]
ld hl, ANISE_RADIUS
sub a, [hl]
ToInclusiveCell
ld b, a             ; b = minimum y
ld a, [anise_y]
add a, [hl]
ToExclusiveCell
ld c, a             ; c = maximum/current y
.x_column_scan:
; Put the cell's row and column in bc, and call a function
; to check its "map flags".  I'll define that in a moment,
; but for now I'll assume that if bit 0 is set, that means
; the cell is solid.
; This is also why the inner loop counts down with c, not
; up with b: get_cell_flags wants the y coord in c, and
; this way, it's already there!
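; (Added note: at this point b holds the minimum row and c the current
; row, but get_cell_flags wants the cell's x in b and y in c.  So the
; row bounds get pushed, b is borrowed for the column -- which lives
; in d -- and the bounds are popped back after the call.)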
push bc
ld b, d
call get_cell_flags
pop bc
; If this produces zero, we can skip ahead
and a, $01
jr z, .not_blocked

; We're blocked!  Stop here.  Set x so that we're butted
; against this cell, which means subtract our radius from
; its x coordinate.
; Note that this can't possibly move us further than dx,
; because dx was /supposed/ to move us INTO this cell.
ld a, d
; This is a /left/ shift three times, for cell -> pixel
sla a
sla a
sla a
sub a, [hl]
ld [anise_x], a
; Somewhat confusing pop, to restore dx and dy.
pop bc
jp .done_x

.not_blocked:
; Not blocked, so loop to the next cell in this column
dec c
ld a, c
cp b
jr nc, .x_column_scan

; Finished checking one column successfully, so continue on
; to the next one
inc d
ld a, e
cp d
jr nc, .x_row_scan

; Done, and we never hit anything!  Update our position to
; what was requested
pop bc
ld a, [anise_x]
add a, b
ld [anise_x], a
```

I’ve also gotta implement get_cell_flags, which is slightly uglier than I anticipated.

```
; Fetches properties for the map cell at the given coordinates.
; In: bc = x/y coordinates
; Out: a = flags
get_cell_flags:
push hl
push de
; I have to figure out what char is at these coordinates,
; which means consulting the map, which means doing math.
; The map is currently 16 (big) tiles wide, or 32 chars,
; so the byte for the indicated char is at b + 32 * c.
ld hl, TEST_MAP_1
; Add x coordinate.  hl is 16 bits, so extend b to 16 bits
; using the d and e registers separately, then add.
ld d, 0
ld e, b
add hl, de
; Add y coordinate, with stride of 32, which we can do
; without multiplying by shifting left 5.  Alas, there are
; no 16-bit shifts, so I have to do this by hand.
; First get the 5 high bits by copying y into d, then
; shifting the 3 low bits off the right end.
ld d, c
srl d
srl d
srl d
; Then get the low 3 bits into the high 3 by swapping,
; shifting, and masking them off.
ld a, c
swap a
sla a
and a, $e0
ld e, a
; Not sure that was really any faster than just shifting
; left through the carry flag 5 times.  Oh well.  Add.
add hl, de
; At last, we know the char.  I don't have real flags at
; the moment, so I just hardcoded the four chars that make
; up the small rock tile.
ld a, [hl]
cp a, 2
jr z, .blocking
cp a, 3
jr z, .blocking
cp a, 12
jr z, .blocking
cp a, 13
jr z, .blocking
jr .not_blocking
; The rest should not be too surprising.
.blocking:
ld a, 1
jr .done
.not_blocking:
xor a
.done:
pop de
pop hl
ret
```

And that’s it!

## That’s not it

The code I wrote only applies when moving right. It doesn’t handle moving left at all. And here I run into a downside of continuous collision, at least in this particular case. Because of the special behavior of right/bottom edges, I can’t simply flip a sign to make this code work for leftwards movement as well. For example, the set of columns I might cross going rightwards is calculated exclusively, because my right edge is the one in front… but if I’m moving leftwards, it’s calculated inclusively. Those columns are also in reverse order and thus need iterating over backwards, so an inc somewhere becomes a dec, and so on.

I have two uncomfortable options for handling this. One is to add all the required conditional tests and jumps, but that adds a decent CPU cost to code that’s fairly small and potentially very hot, and complicates code that’s a bit dense and delicate to begin with.
The other option is to copy-paste the whole shebang and adjust it as needed to go leftwards. Guess which I did!

```
ld a, b
cp a, $80
jp nc, .negative_x
.positive_x:
; ... everything above ...
jp .done_x
.negative_x:
; ... everything above, flipped ...
.done_x:
```

Ugh. Don’t worry, though — it gets worse later on! I could copy-paste for y movement too and give myself a total of four blocks of similar code, but I’ll hold off on that for now.

Ah. You want the payoff, don’t you. Well, I’m warning you now: the next post gets much hairier, and if I show you a GIF now, there won’t be any payoff next time. You sure? Really? No going back!

I admit, this was pretty damn satisfying the first time it actually worked. Collision detection is a pain in the ass, but it’s the first step to making a game feel like a game. Games are about working within limitations, after all!

## An aside: debugging

I’ve made this adventure seem much easier than it actually was by eliding all the mistakes. I made a lot of mistakes, and as I said upfront, it can be very difficult to notice heisenbugs or figure out exactly what’s causing them.

One thing that helped tremendously near the beginning was to hack Star Anise to have a fourth sprite: a solid black 6×6 square under his feet. That let me see where he was actually supposed to be able to stand. Highly recommend it. All I did was copy/paste everywhere that mentioned his sprites to add a fourth one, and position it centered under his feet. (On any other system, I’d just draw collision rectangles everywhere, but the Game Boy is sprite-based so that’s not really gonna fly.)

I also had pretty good success with writing intermediate values to unused bytes in RAM, so I could inspect them in mGBA’s memory viewer even after the movement was finished. And of course, as an absolute last resort, bgb has an interactive graphical debugger. (Nothing against bgb per se; I just prefer not to rely on closed-source software running in Wine if I can at all get away with it.)

## To be continued

Obviously, this isn’t anywhere near done. There’s no concept of collision with other entities, and before that’s even a possibility, I need a concept of other entities. I left myself a long trail of do-it-laters. There are even risks of overflow and underflow in a couple places, which I didn’t bother pointing out because I completely overhaul this code later. But it’s a big step forward, and now I just need a few more big steps forward. (I say, four months later, long after all those steps are done.)

I already have some future ideas in mind, like: what if a map tile weren’t completely solid, but had its own radius? Could I implement corner cutting, where the game gently guides you if you get stuck on a corner by only a single pixel? What about having tiles that are 45° angles, just to cut down on the overt squareness of the map? Well. Maybe, you know, later.

Anyway, that brings us up to commit da7478e. It’s all downhill from here.

Next time: more collision detection, and fixed-point arithmetic!

# Cheezball Rising: Opening a dialogue

Post Syndicated from Eevee original https://eev.ee/blog/2018/10/09/cheezball-rising-opening-a-dialogue/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console! GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!
In this issue, I draw some text!

Previously: I get a Game Boy to meow.
Next: collision detection, ohh nooo

## Recap

The previous episode was a diversion (and left an open problem that I only solved after writing it), so the actual state of the game is unchanged. Where should I actually go from here? Collision detection is an obvious place, but that’s hard. Let’s start with something a little easier: displaying scrolling dialogue text. This is likely to be a dialogue-heavy game, so I might as well get started on that now.

## Planning

On any other platform, I’d dive right into it: draw a box on the screen somewhere, fill it with text. On the Game Boy, it’s not quite that simple. I can’t just write text to the screen; I can only place tiles and sprites.

Let’s look at how, say, Pokémon Yellow handles its menu. This looks — feels — like it’s being drawn on top of the map, and that sub-menus open on top of other menus. But it’s all an illusion! There’s no “on top” here. This is a completely flat image made up of tiles, like anything else. This is why Pokémon has such a conspicuously blocky font: all the glyphs are drawn to fit in a single 8×8 char, so “drawing” text is as simple as mapping letters to char indexes and drawing them onto the background. The map and the menu are all on the same layer, and the game simply redraws whatever was underneath when you close something. Part of the illusion is that the game is clever enough to hide any sprites that would overlap the menu — because sprites would draw on top! (The Game Boy Color has some twiddles for controlling this layering, but Yellow was originally designed for the monochrome Game Boy.)

A critical reason that this actually works is that in Pokémon, the camera is always aligned to the grid. It scrolls smoothly while you’re walking, but you can’t actually open the menu (or pick up an item, or talk to someone, or do anything else that might show text) until you’ve stopped moving. If you could, the menu would be misaligned, because it’s part of the same grid as the map!

This poses a slight problem for my game. Star Anise isn’t locked to the grid like the Pokémon protagonist is, and unlike Link’s Awakening, I do want to have areas larger than the screen that can scroll around freely. I know offhand that there are a couple ways to do this. One is the window, an optional extra opaque layer that draws on top of the background, with its top-left corner anchored to any point on the screen. Another is to change some display registers in the middle of the screen redrawing. If you’re thinking of any games with a status bar at the bottom or right, chances are they use the window; games with a status bar at the top have to use display register tricks. But I don’t want to worry about any of this right now, before I even have text drawing. I know it’s possible, so I’ll deal with it later. For now, drawing directly onto the background is good enough.

### Font decisions

Let’s get back to the font itself. I’m not in love with the 8×8 aesthetic; what are my other options? I do like the text in Oracle of Ages, so let’s have a look at that:

Ah, this is the same approach again, except that letters are now allowed to peek up into the char above. So these are 8×16, but the letters all occupy a box that’s more like 6×9, offering much more familiar proportions. Oracle of Ages is designed for the Game Boy Color, which has twice as much char storage space, so it makes sense that they’d take advantage of it for text like this.
It’s not bad, but the space it affords is still fairly… limited. Only 16 letters will fit in a line, just as with Pokémon, and that means a lot of carefully wording things to be short and use mostly short words as well. That’s not gonna cut it for the amount of dialogue I expect to have.

(You may be wondering, as I did, how Oracle pulled off this grid-aligned textbox. In small buildings and the overworld, each room is exactly the size of the screen, so there’s no scrolling and no worry about misaligned text. But how does the game handle showing text inside a dungeon, where a room is bigger than the screen and can scroll freely? The answer is: it doesn’t! The textbox is just placed as close as possible to the position shown in this screenshot, so the edges might be misaligned by up to 4 pixels. In 20 years, I never noticed this until I thought to check how they were handling it. I’m sure there’s a lesson, here.)

What other options do I have? It seems like I’m limited to multiples of 8 here, surely. (The answer may be obvious to some of you, but shh, don’t read ahead.)

The answer lies in the very last game released for the Game Boy Color: Harry Potter and the Chamber of Secrets. Whatever deep secrets were learned during the Game Boy’s lifetime will surely be encapsulated within this, er, movie tie-in game.

Hot damn. That is a ton of text in a relatively small amount of space! And it doesn’t fit the grid! How did they do that?

The answer is… exactly how you’d think! With a fixed-width font like in Pokémon and Zelda games, the entire character set is stored in VRAM, and text is drawn by drawing a string of characters. With a variable-width font like in Harry Potter, a block of VRAM is reserved for text, and text is drawn into those chars, in software. Essentially, some chars are used like a canvas and have text rendered to them on the fly. The contents of the background layer might look like this in the two cases:

Some pros of this approach:

- Since the number of chars required is constant and the font is never loaded directly into char memory, the font can have arbitrarily many glyphs in it. Multiple fonts could be used at the same time, even. (Of course, if you have more than 256 glyphs, you’ll have to come up with a multi-byte encoding for actually storing the text…)
- A lot more text can fit in one line while still remaining readable.
- It has the potential to look very cool. I definitely want to squeeze every last drop of fancy-pants graphical stuff that I can from this hardware.

And, cons:

- It’s definitely more complicated! But I only have to write the code once, and since the game won’t be doing anything but drawing dialogue while the box is up, I don’t think I’ll be in danger of blowing my CPU budget.
- Colored text becomes a bit trickier. But still possible, so, we can worry about that later.
- Fixed text that doesn’t scroll, like on menus and whatnot, will be something of a problem — this whole idea relies on amortizing the text rendering across multiple frames. On the other hand, this game shouldn’t have too much of that, and this sounds like a good excuse to hand-draw fixed text (which can then be much more visually interesting). At worst, I could just render the fixed text ahead of time.

Well, I’m sold. Let’s give it a shot.

## First pass

Well, I want to do something on a button press, so, let’s do that. A lot of games (older ones especially) have bugs from switching “modes” in the same frame that something else happens.
I don’t entirely understand why that’s so common and should probably ask some speedrunners, but I should be fine if I do mode-switching first thing in the frame, and then start over a new frame when switching back to “world” mode. Right? Sure.

```
; ... button reading code in main loop ...
bit BUTTON_A, a
jp nz, .do_show_dialogue
; ... main loop ...
; Loop again when done
jp vblank_loop

.do_show_dialogue:
call show_dialogue
jp vblank_loop
```

The extra level of indirection added by .do_show_dialogue is just so the dialogue code itself isn’t responsible for knowing where the main loop point is; it can just ret.

Now to actually do something. This is a first pass, so I want to do as little as possible. I’ll definitely need a palette for drawing the text — and here I’m cutting into my 8-palette budget again, which I don’t love, but I can figure that out later. (Maybe with some shenanigans involving changing the palettes mid-redraw, even.)

```
PALETTE_TEXT:
; Black background, white text... then gray shadow, maybe?
dcolor $000000
dcolor $ffffff
dcolor $999999
dcolor $666666

show_dialogue:
; Have to disable the LCD to do video work.  Later I can do
; a less jarring transition
DisableLCD

; Copy the palette into slot 7 for now
ld a, %10111000
ld [rBCPS], a
ld hl, PALETTE_TEXT
REPT 8
ld a, [hl+]
ld [rBCPD], a
ENDR
```

I also know ahead of time what chars will need to go where on the screen, so I can fill them in now. Note that I really ought to blank them all out, especially since they may still contain text from some previous dialogue, but I don’t do that yet.

An obvious question is: which tiles? I think I said before that with 512 chars available, and ¾ of those still being enough to cover the entire screen in unique chars, I’m okay with dedicating a quarter of my space to UI stuff, including text. To keep that stuff “out of the way”, I’ll put them at the “end” — bank 1, starting from $80.

I’m thinking of having characters be about the same proportions as in the Oracle games. Those games use 5 rows of tiles, like this:

```
top of line 1
bottom of line 1
top of line 2
bottom of line 2
blank
```

Since the font is aligned to the bottom and only peeks a little bit into the top char, the very top row is mostly blank, and that serves as a top margin. The bottom row is explicitly blank for a bottom margin that’s nearly the same size. The space at the top of line 2 then works as line spacing.

I’m not fixed to the grid, so I can control line spacing a little more explicitly. But I’ll get to that later and do something really simple for now, where ff is a blank tile:

```
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|80|82|84|86|88|8a|8c|8e|90|92|94|96|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|81|83|85|87|89|8b|8d|8f|91|93|95|97|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
```

This gives me a canvas for drawing a single line of text. The staggering means that the first letter will draw to adjacent chars $80 and $81, rather than distant cousins like $80 and $a0.
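Computing those staggered indexes is simple arithmetic; here it is as a quick Python sketch (illustrative only):

```python
def text_char_index(column, row):
    # The text canvas interleaves chars vertically: column n of the top
    # text row is $80 + 2n, and the char directly below it is one more.
    assert row in (0, 1)
    return 0x80 + column * 2 + row

# The first letter draws into the adjacent chars $80 and $81:
assert text_char_index(0, 0) == 0x80
assert text_char_index(0, 1) == 0x81
```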
You may notice that the below code updates chars across the entire width of the grid, not merely the screen. There’s not really any good reason for that.

```
; Fill text rows with tiles (blank border, custom tiles)
; The screen has 144/8 = 18 rows, so skip the first 14 rows
ld hl, $9800 + 32 * 14
; Top row, all tile 255
ld a, 255
ld c, 32
.loop1:
ld [hl+], a
dec c
jr nz, .loop1
; Text row 1: 255 on the edges, then middle goes 128, 130, ...
ld a, 255
ld [hl+], a
ld a, 128
ld c, 30
.loop2:
ld [hl+], a
add a, 2
dec c
jr nz, .loop2
ld a, 255
ld [hl+], a
; Text row 2: same as above, but middle is 129, 131, ...
ld a, 255
ld [hl+], a
ld a, 129
ld c, 30
.loop3:
ld [hl+], a
add a, 2
dec c
jr nz, .loop3
ld a, 255
ld [hl+], a
; Bottom row, all tile 255
ld a, 255
ld c, 32
.loop4:
ld [hl+], a
dec c
jr nz, .loop4
```

Now I need to repeat all of that, but in bank 1, to specify the char bank (1) and palette (7) for the corresponding tiles. Those are the same for the entire dialogue box, though, so this part is easier.

```
; Switch to VRAM bank 1
ld a, 1
ldh [rVBK], a

ld a, %00001111     ; bank 1, palette 7
ld hl, $9800 + 32 * 14
ld c, 32 * 4        ; 4 rows
.loop5:
ld [hl+], a
dec c
jr nz, .loop5

EnableLCD
```

Time to get some real work done. Which raises the question: how do I actually do this?

If you recall, each 8-pixel row of a char is stored in two bytes. The two-bit palette index for each pixel is split across the corresponding bit in each byte. If the leftmost pixel is palette index 01, then bit 7 in the first byte will be 0, and bit 7 in the second byte will be 1.

Now, a blank char is all zeroes. To write a (left-aligned) glyph into a blank char, all I need to do is… well, I could overwrite it, but I could just as well OR it. To write a second glyph into the unused space, all I need to do is shift it right by the width of the space used so far, and OR it on top. The unusual split layout of the palette data is actually handy here, because it means the size of the shift matches the number of pixels, and I don’t have to worry about overflow.

```
0 0 0 0 0 0 0 0  <- blank glyph
1 1 1 1 0 0 0 0  <- some byte from the first glyph
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
1 1 1 1 0 0 0 0  <- ORed together to display first character

0 0 0 0 0 1 1 1  <- some byte from the second glyph, shifted by 4 (plus a kerning pixel)
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
1 1 1 1 0 1 1 1  <- ORed together to display first two characters
```

The obvious question is, well, what happens to the bits from the second character that didn’t fit? I’ll worry about that a bit later.

Oh, and finally, I’ll need a font, plus some text to display. This is still just a proof of concept, so I’ll add in a couple glyphs by hand.

```
; somewhere in ROM
font:
; A
; First byte indicates the width of the glyph, which I need
; to know because the width varies!
db 6
dw `00000000
dw `00000000
dw `01110000
dw `10001000
dw `10001000
dw `10001000
dw `11111000
dw `10001000
dw `10001000
dw `10001000
dw `10001000
dw `00000000
dw `00000000
dw `00000000
dw `00000000
dw `00000000
; B
db 6
dw `00000000
dw `00000000
dw `11110000
dw `10001000
dw `10001000
dw `10001000
dw `11110000
dw `10001000
dw `10001000
dw `10001000
dw `11110000
dw `00000000
dw `00000000
dw `00000000
dw `00000000
dw `00000000

text:
; Shakespeare it ain't.
; Need to end with a NUL here so I know where the text
; ends.  This isn't C, there's no automatic termination!
db "ABABAAA", 0
```
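(Those backtick literals are RGBDS “graphics” constants, as I understand them: each digit is a 2-bit palette index, and the assembler splits a row into the two bit-planes of the native char format. A quick Python sanity check of that encoding, not part of the build:)

```python
def encode_row(digits):
    # One plane holds every pixel's low bit, the other every high bit;
    # each digit here is a 2-bit palette index for one pixel.
    lo = hi = 0
    for ch in digits:
        px = int(ch)
        lo = (lo << 1) | (px & 1)
        hi = (hi << 1) | (px >> 1)
    return lo, hi

# "1" pixels set only the low plane, so the A's crossbar row becomes:
assert encode_row("11111000") == (0b11111000, 0b00000000)
```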
And here we go!

```
; ----------------------------------------------------------
; Setup done!  Real work begins here
; b: x-offset within current tile
; de: text cursor + current character tiles
; hl: current VRAM tile being drawn into
ld b, 0
ld de, text
ld hl, $8800

; This loop waits for the next vblank, then draws a letter.
; Text thus displays at ~60 characters per second.
.next_letter:
; This is probably way more LCD disabling than is strictly
; necessary, but I don't want to worry about it yet
EnableLCD
call wait_for_vblank
DisableLCD

ld a, [de]          ; get current character
and a               ; if NUL, we're done!
jr z, .done
inc de              ; otherwise, increment

; Get the glyph from the font, which means computing
; font + 33 * a.
; A little register juggling.  hl points to the current
; char in VRAM being drawn to, but I can only do a 16-bit
; add into hl.  de I don't need until the next loop,
; since I already read from it.  So I'm going to push de
; AND hl, compute the glyph address in hl, put it in de,
; then restore hl.
push de
push hl
; The text is written in ASCII, but the glyphs start at 0
sub a, 65
ld hl, font
ld de, 33           ; 1 width byte + 16 * 2 tiles
; This could probably be faster with long multiplication
and a
.letter_stride:
jr z, .skip_letter_stride
add hl, de
dec a
jr .letter_stride
.skip_letter_stride:
; Move the glyph address into de, and restore hl
ld d, h
ld e, l
pop hl

; Read the first byte, which is the character width.  This
; overwrites the character, but I have the glyph address,
; so I don't need it any more
ld a, [de]
inc de

; Copy into current chars
; Part 1: Copy the left part into the current chars
push af             ; stash width
; A glyph is two chars or 32 bytes, so row_copy 32 times
ld c, 32
; b is the next x position we're free to write to.
; Incrementing it here makes the inner loop simpler, since
; it can't be zero.  But it also means two jumps per loop,
; so, ultimately this was a pretty silly idea.
inc b
.row_copy:
ld a, [de]          ; read next row of character
; Shift right by b places with an inner loop
push bc             ; preserve b while shifting
dec b
.shift:             ; shift right by b bits
jr z, .done_shift
srl a
dec b
jr .shift
.done_shift:
pop bc
; Write the updated byte to VRAM
or a, [hl]          ; OR with current tile
ld [hl+], a
inc de
dec c
jr nz, .row_copy
pop af              ; restore width

; Part 2: Copy whatever's left into the next char
; TODO :)

; Cleanup for next iteration
; Undo the b increment from way above
dec b
; It's possible I overflowed into the next column, in which
; case I want to leave hl where it is: pointing at the next
; column.  Otherwise, I need to back it up to where it was.
; Of course, I also need to update b, the x offset.
add a, b            ; a <- new x offset
; If the new x offset is 8 or more, that's actually the next
; column
cp a, 8
jr nc, .wrap_to_next_tile
ld bc, -32          ; a < 8: back hl up
add hl, bc
jr .done_wrap
.wrap_to_next_tile:
sub a, 8            ; a >= 8: subtract tile width
ld b, a
.done_wrap:
; Either way, store the new x offset into b
ld b, a

; And loop!
pop de              ; pop text pointer
jr .next_letter

.done:
; Undo any goofy stuff I did, and get outta here
EnableLCD
; Remember to reset bank to 0!
xor a
ldh [rVBK], a
ret
```

Phew!
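If all the register juggling obscures the algorithm, here's the same per-letter blit sketched in Python (hypothetical names, and the Part 2 spill into the next char is still a TODO here, just like in the assembly):

```python
def draw_letter(vram, font, letter, char_addr, x_offset):
    # vram is a bytearray; font maps a letter to (width, rows), where
    # rows is the glyph's 32 bytes: two chars' worth of bitplane data.
    width, rows = font[letter]
    for i, row in enumerate(rows):
        # Shift the row right to the current x offset and OR it in.
        # Bits that fall off the right edge are lost -- that's the TODO.
        vram[char_addr + i] |= row >> x_offset
    x_offset += width
    if x_offset >= 8:
        # The next letter starts in the next column of chars.
        char_addr += 32     # two chars = 32 bytes
        x_offset -= 8
    return char_addr, x_offset
```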
That was a lot, but hopefully it wasn’t too bad. I hit a few minor stumbling blocks, but as I recall, most of them were of the “I get the conditions backwards every single time I use cp augh” flavor. (In fact, if you look at the actual commit the above is based on, you may notice that I had the condition at the very end mixed up! It’s a miracle it managed to print part of the second letter at all.)

There are a lot of caveats in this first pass, including that there’s nothing to erase the dialogue box and reshow the map underneath it. (But I might end up using the window for this anyway, so there’s no need for that.) As a proof of concept, though, it’s a great start!

That’s the letter A, followed by the first two pixels of the letter B. I didn’t implement the part where letters spill into the next column, yet. Guess I’d better do that!

## Second pass

One of the big problems with the first pass was that I had to turn the screen off to do the actual work safely. Shifting a bunch of bytes by some amount is a little slow, since I can only shift one bit at a time and have to do it within a loop, and vblank only lasts for about 6.5% of the entire duration of the frame. If I continued like this, the screen would constantly flicker on and off every time I drew a new letter. Yikes.

I’ll solve this the same way I solve pretty much any other vblank problem: do the actual work into a buffer, then just copy that buffer during vblank. Since I intend to draw no more than one character per frame, and each character glyph is no wider than a single char column, I only need a buffer big enough to span two columns. Text covers two rows, also, so that’s four tiles total.

I also need to zero out the tile buffer when I first start drawing text — otherwise it may still have garbage left over from the last time text was displayed! — and this seems like a great opportunity to introduce a little fill function. Maybe then I’ll do the right damn thing and clear out other stuff on startup.

```
; Utility code section

; fill c bytes starting at hl with a
; NOTE: c must not be zero
fill:
ld [hl+], a
dec c
jr nz, fill
ret

; ...

; Stick this at a fixed nice address for now, just so it's easy
; for me to look at and debug
SECTION "Text buffer", WRAM0[$C200]
text_buffer:
; Text is up to 8x16 but may span two columns, so carve out
; enough space for four tiles
ds $40

show_dialogue:
DisableLCD
; ... setup stuff ...
EnableLCD

; Zero out the tile buffer
xor a
ld hl, text_buffer
ld c, $40
call fill
```

That first round of disabling and enabling the LCD is still necessary, because the setup work takes a little time, but I can get rid of that later too. For now, the priority is fixing the text scroll (and supporting text that spans more than one tile).

The code is the same up until I start copying the glyph into the tiles. Now it doesn’t go to VRAM, but into the buffer. There’s another change here, too. Previously, I shifted the glyph right, letting bits fall off the right end and disappear. But the bits that drop off the end are exactly the bits that I need to draw to the next char. I could do a left shift to retrieve them, but I had a different idea: rotate the glyph instead.

Say I want to draw a glyph offset by 3 pixels.
Then I want to do this:

```
abcdefgh <- original glyph bits
fghabcde <- rotate right 3
00011111 <- mask, which is just $ff shifted right 3
000abcde <- rotated glyph AND mask gives the left part
11100000 <- mask, inverted
fgh00000 <- rotated glyph AND inverted mask gives the right part
```

The time and code savings aren’t huge, exactly, and nothing else is going on while text is rendering so it’s not like time is at a premium here. But hey this feels clever so let’s do it.

```
; Copy into current chars
push af             ; stash width
ld c, 32            ; 32 bytes per row
ld hl, text_buffer  ; new!
; This is still silly.
inc b
.row_copy:
ld a, [de]          ; read next row of character
; Rotate right by b - 1 pixels -- remember, b contains the
; x-offset within the current tile where to start drawing
push bc             ; preserve b while shifting
ld c, $ff           ; initialize the mask
dec b
jr z, .skip_rotate
.rotate:
; Rotate the glyph (a), but shift the mask (c), so that the
; left end of the mask fills up with zeroes
rrca
srl c
dec b
jr nz, .rotate
.skip_rotate:
push af             ; preserve glyph
and a, c            ; mask right pixels
; Draw to left half of text buffer
or a, [hl]          ; OR with current tile
ld [hl+], a
; Write the remaining bits to right half
ld a, c             ; put mask in a...
cpl                 ; ...to invert it
ld c, a             ; then put it back
pop af              ; restore unmasked glyph
and a, c            ; mask left pixels
ld [hl+], a         ; and store them!
; Clean up after myself, and loop to the next row
inc de              ; next row of glyph
pop bc              ; restore counter!
dec c
jr nz, .row_copy
pop af              ; restore width
```

The use of the stack is a little confusing (and don’t worry, it only gets worse in later posts). Note for example that c is used as the loop counter, but since I don’t actually need its value within the body of the loop, I can push it right at the beginning and use c to hold the mask, then pop the loop counter back into place at the end.

This is where I first started to feel register pressure, especially when addresses eat up two of them. My options are pretty limited: I can store stuff on the stack, or store stuff in RAM. The stack is arguably harder to follow (and easier to fuck up, which I’ve done several times), but either way there’s the register ambiguity. Which is shorter/faster? Well:

- A push/pop pair takes 2 bytes and 7 cycles.
- Immediate writing to RAM and immediate reading back from it takes 6 bytes and 8 cycles, and can only be done with a, so I’d probably have to copy into and out of some other register too.
- Putting an address in hl, writing to it, then reading from it takes 5 bytes and 7 cycles, but requires that I can preserve hl. (On the other hand, if I can preserve the value of hl across a loop or something, then it’s amortized away and the read/write is only 2 bytes and 3 cycles. But if that’s the case, chances are that I’m not under enough register pressure to need using RAM in the first place.)
- Parts of high RAM ($ff80 and up) are available for program use, and they can be read or written with the same instructions that operate on the control knobs starting at $ff00. A high RAM read and write takes 4 bytes and 6 cycles, which isn’t too bad, but once again I have to go through the a register so I’ll probably need some other copies.

Stack it is, then.

Anyway! Where were we. I need to now copy the buffer into VRAM. You may have noticed that the buffer isn’t quite populated in char format.
Instead, it’s populated like one big 16-pixel char, with the first 16 bits corresponding to the 16 pixels spanning both columns. VRAM, of course, expects to get all the pixels from the first column, then all the pixels from the second column. If that’s not clear, here’s what I have (where the bits are in order from left to right, top to bottom):

```
AAAAAAAA BBBBBBBB <- high bits for first row of pixels
aaaaaaaa bbbbbbbb <- low bits for first row of pixels
... other rows ...
```

And here’s what I need to put in VRAM:

```
AAAAAAAA <- high bits for first row of left column of pixels
aaaaaaaa <- low bits for first row of left column of pixels
... other rows of left column ...
BBBBBBBB <- high bits for first row of right column of pixels
bbbbbbbb <- low bits for first row of right column of pixels
... other rows of right column ...
```

I hope that makes sense! To fix this, I use two loops (one for each column), and in each loop I copy every other byte into VRAM. That deinterlaces the buffer.

```
; Draw the buffered tiles to vram
; The text buffer is treated like it's 16 pixels wide, but
; VRAM is of course only 8 pixels wide, so we need to do
; this in two iterations: the left two tiles, then the right
pop hl              ; restore hl (VRAM)
push af             ; stash width, again
call wait_for_vblank  ; always wait before drawing
push bc
push de

; Draw the left two tiles
ld c, $20
ld de, text_buffer
.draw_left:
ld a, [de]
; This double inc fixes the interlacing
inc de
inc de
ld [hl+], a
dec c
jr nz, .draw_left

; Draw the right two tiles
ld c, $20
; This time, start from the SECOND byte, which will grab
; all the bytes skipped by the previous loop
ld de, text_buffer + 1
.draw_right:
ld a, [de]
inc de
inc de
ld [hl+], a
dec c
jr nz, .draw_right

pop de
pop bc
pop af              ; restore width, again
```

Just about done! There’s one last thing to do before looping to the next character. If this character did in fact span both columns, then the buffer needs to be moved to the left by one column. Here’s a simplified diagram, pretending chars are 5×5 and I just drew a B:

```
+-----+-----+.....+
| A  B|B    |     .
|A A B| B   |     .
|AAA B|B    |     .
|A A B| B   |     .
|A A B|B    |     .
+-----+-----+.....+
```

The left column is completely full, so I don’t need to buffer it any more. The next character wants to draw in the last partially full column, which here is the one containing the B; it’ll also want an empty right column to overflow into if necessary.

```
; Increment the pixel offset and deal with overflow
add a, b            ; a <- new x offset
; Regardless of whether this glyph overflowed, the VRAM
; pointer was left at the beginning of the next (empty)
; column, and it needs rewinding to the right column
ld bc, -32          ; move the VRAM pointer back...
add hl, bc          ; ...to the start of the char
cp a, 8
jr nc, .wrap_to_next_char
; The new offset is less than 8, so this character didn't
; actually draw anything in the right column.  Move the
; VRAM pointer back a second time, to the left column,
; which still has space left
add hl, bc
jr .done_wrap
.wrap_to_next_char:
; The new offset is 8 or more, so this character drew into
; the next char.  Subtract 8, but also shift the text buffer
; by copying all the "right" chars over the "left" chars
sub a, 8            ; a >= 8: subtract char width
push hl
push af
; The easy way to do this is to walk backwards through the
; buffer.
; This leaves garbage in the right column, but
; that's okay -- it gets overwritten in the next loop,
; before the buffer is copied into VRAM.
ld hl, text_buffer + $40 - 1
ld c, $20
.shift_buffer:
ld a, [hl-]
ld [hl-], a
dec c
jr nz, .shift_buffer
pop af
pop hl
.done_wrap:
ld b, a             ; either way, store into b

; Loop
pop de              ; pop text pointer
jp .next_letter
```

And the test run: Hey hey, success!

## Quick diversion: Anise corruption

I didn’t mention it above because I didn’t actually use it yet, but while doing that second pass, I split the button-polling code out into its own function, read_input. I thought I might need it in dialogue as well (which has its own vblank loop and thus needs to do its own polling), but I didn’t get that far yet, so it’s still only called from the main loop.

While testing out the dialogue, I notice a teeny tiny problem. Well, yes, obviously there’s the problem of the textbox drawing underneath the player. Which is mostly a problem because the textbox doesn’t go away, ever. I’ll worry about that later. The other problem is that Anise’s sprite is corrupt. Again. Argh!

A little investigation suggests that, once again, I’m blowing my vblank budget. But this time, it’s a little more reasonable. Remember, I’m overwriting Anise’s sprite after handling movement. That means I do a bunch of logic followed by writing to char data. No wonder there’s a problem. I must’ve just slightly overrun vblank when I split out read_input (or checked for the dialogue button press in the first place?), since call has a teeny tiny bit of overhead.

That approach is a little inconsistent, as well. Remember how I handle OAM: I write to a buffer, which is then copied to real OAM during the next vblank. But I’m updating the sprite immediately. That means when Anise turns, the sprite updates on the very next frame, but the movement isn’t visible until the frame after that. Whoops.

So, a buffer! I could make this into a more general mechanism later, but for now I only care about fixing Anise. I can revisit this when I have, uh, a second sprite.

```
; in ram somewhere
anise_sprites_address: dw
```

Now, Anise is composed of three objects, which is six chars, which is 96 bytes. The fastest way to copy bytes by hand is something like this:

```
ld hl, source
ld de, destination
ld c, 96
.loop:
ld a, [hl+]
ld [de], a
inc de
dec c
jr nz, .loop
```

Each iteration of the loop copies 1 byte and takes 7 cycles. (It’s possible to shave a couple cycles off in some specific cases, and unrolling would save some time, but let’s stay general for now.) That’s 672 cycles, plus 10 for the setup, minus one on the final jr, for 681 total. But vblank only lasts 1140 cycles! That’s more than half the budget blown for updating a single entity. This can’t possibly work.

Enter a feature exclusive to the Game Boy Color: GDMA, or general DMA. This is similar to OAM DMA, except that it can copy (nearly) anything to anywhere. Also (unlike OAM DMA), the CPU pauses while the copy is taking place, so there’s no need to carefully time a busy loop. It’s configured by writing to five control registers (which takes 5 cycles each), and then it copies two bytes per cycle, for a total of 73 cycles. That’s 9.3 times faster. Seems worth a try.

(Note that I’m not using double-speed CPU mode yet, as an incentive to not blow my CPU budget early on. Turning that on would halve the time taken by the manual loop, but wouldn’t affect GDMA.)
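Spelling out that arithmetic:

$$96 \times 7 + 10 - 1 = 681 \text{ cycles for the manual loop}$$

$$5 \times 5 + \frac{96}{2} = 73 \text{ cycles for GDMA}, \qquad \frac{681}{73} \approx 9.3$$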
GDMA has a couple restrictions: most notably, it can only copy multiples of 16 bytes, and only to/from addresses that are aligned to 16 bytes. But each char is 16 bytes, so that works out just fine.

The five GDMA registers are, alas, simply named 1 through 5. The first two are the source address; the next two are the destination address; the last is the amount to copy. Or, well, it’s the amount to copy, divided by 16, minus 1. (The high bit is reserved for turning on a different kind of DMA that operates a bit at a time during hblanks.) Writing to the last register triggers the copy.

Plugging in this buffer is easy enough, then:

```
; Update Anise's current sprite.  Use DMA here because...
; well, geez, it's too slow otherwise.
ld hl, anise_sprites_address
ld a, [hl+]
ld [rHDMA1], a
ld a, [hl]
ld [rHDMA2], a
; I want to write to $8000 which is where Anise's sprite is
; hardcoded to live, and the top three bits are ignored so
; that the destination is always in VRAM, so $0000 works too
ld a, HIGH($0000)
ld [rHDMA3], a
ld a, LOW($0000)
ld [rHDMA4], a
; And copy!
ld a, (32 * 3) / 16 - 1
ld [rHDMA5], a
```

Finally, instead of actually overwriting Anise’s sprite, I write the address of the new sprite into the buffer:

```
; Store the new sprite address, to be updated during vblank
ld a, h
ld [anise_sprites_address], a
ld a, l
ld [anise_sprites_address + 1], a
```

And done! Now I can walk around just fine. It looks basically like the screenshot from the previous section, so I don’t think you need a new one.

Note that this copy will always happen, since there’s no condition for skipping it when there’s nothing to do. That’s fine for now; later I’ll turn this into a list, and after copying everything I’ll simply clear the list. Crisis averted, or at least deferred until later. Back to the dialogue!

## Interlude: A font

Writing out the glyphs by hand is not going to cut it. It was fairly annoying for two letters, let alone an entire alphabet. Nothing about this part was especially interesting.

I used LÖVE’s font format, which puts all glyphs in a single horizontal strip. The color of the top-left pixel is used as a sentinel; any pixel in the top row that’s the same color indicates the start of a new glyph. (I note that LÖVE actually recommends against using this format, but the alternatives are more complicated and require platform-specific software — whereas I can slop this format together in any image editor without much trouble.)

I then turned this into Game Boy tiles much the same way as with the sprite loader, except with the extra logic to split on the sentinel pixels and pad each glyph to eight pixels wide. I won’t reproduce the whole script here, but it’s on GitHub if you want to see it.

The font itself is, well, a font? I initially tried to give it a little personality, but that made some of the characters weirdly wide and was a bit hard to read, so I revisited it and ended up with this:

I like it, at least! The characters all have shadows built right in, and you can see at the end that I was starting to play with some non-ASCII characters. Because I can do that!

## Third pass

One major obstacle remains: I can only have one line of text right now, when there’s plenty of space for two. The obvious first thing I need to do is alter the dialogue box’s char map. It currently has a whole char’s worth of padding on every side. What a waste.
I want this instead:

```
+--+--+--+--+--+--+--+--+--+--+--+--+---+
|80|82|84|86|88|8a|8c|8e|90|92|94|96|...|
+--+--+--+--+--+--+--+--+--+--+--+--+---+
|81|83|85|87|89|8b|8d|8f|91|93|95|97|...|
+--+--+--+--+--+--+--+--+--+--+--+--+---+
|a8|aa|ac|ae|b0|b2|b4|b6|b8|ba|bc|be|...|
+--+--+--+--+--+--+--+--+--+--+--+--+---+
|a9|ab|ad|af|b1|b3|b5|b7|b9|bb|bd|bf|...|
+--+--+--+--+--+--+--+--+--+--+--+--+---+
```

The second row begins with char $a8 because that’s $80 + 40, the two chars used by each of the twenty columns on the first line.

Obviously I’ll need to change the setup code to make the above pattern. But while I’m in here… remember, the setup code is the only remaining place that disables the LCD to do its work. Can I do everything within vblank instead? I’m actually not sure, but there’s an easy way to reduce the CPU cost. Instead of setting up the whole dialogue box at once, I can do it one row at a time, starting from the bottom. That will cut the vblank pressure by a factor of four, and it’ll create a cool slide-up effect when the dialogue box opens! Let’s give it a try.

I’ll move the real code into a function, since it’ll run multiple times now. I’ll also introduce a few constants, since I’m getting tired of all the magic numbers everywhere.

```
SCREEN_WIDTH_TILES EQU 20
CANVAS_WIDTH_TILES EQU 32
SCREEN_HEIGHT_TILES EQU 18
CANVAS_HEIGHT_TILES EQU 32
BYTES_PER_TILE EQU 16
TEXT_START_TILE_1 EQU 128
TEXT_START_TILE_2 EQU TEXT_START_TILE_1 + SCREEN_WIDTH_TILES * 2

; Fill a row in the tilemap in a way that's helpful to dialogue.
; hl: where to start filling
; b: tile to start with
fill_tilemap_row:
    ; Populate bank 0, the tile proper
    xor a
    ldh [rVBK], a
    ld c, SCREEN_WIDTH_TILES
    ld a, b
.loop0:
    ld [hl+], a
    ; Each successive tile in a row increases by 2!
    add a, 2
    dec c
    jr nz, .loop0

    ; Populate bank 1, the bank and palette
    ld a, 1
    ldh [rVBK], a
    ld a, %00001111     ; bank 1, palette 7
    ld c, SCREEN_WIDTH_TILES
    dec hl
.loop1:
    ld [hl-], a
    dec c
    jr nz, .loop1
    ret
```

Now replace the setup code with four calls to this function, waiting for vblank between successive calls.

```
    ; Row 4
    ld hl, $9800 + CANVAS_WIDTH_TILES * (SCREEN_HEIGHT_TILES - 1)
    ld b, TEXT_START_TILE_2 + 1
    call fill_tilemap_row
    ; Row 3
    call wait_for_vblank
    ld hl, $9800 + CANVAS_WIDTH_TILES * (SCREEN_HEIGHT_TILES - 2)
    ld b, TEXT_START_TILE_2
    call fill_tilemap_row
    ; Row 2
    call wait_for_vblank
    ld hl, $9800 + CANVAS_WIDTH_TILES * (SCREEN_HEIGHT_TILES - 3)
    ld b, TEXT_START_TILE_1 + 1
    call fill_tilemap_row
    ; Row 1
    call wait_for_vblank
    ld hl, $9800 + CANVAS_WIDTH_TILES * (SCREEN_HEIGHT_TILES - 4)
    ld b, TEXT_START_TILE_1
    call fill_tilemap_row
```

Cool. I have a full font now, too, so I might as well try it out with some more interesting text.

```
SECTION "Font", ROMX
text:
    db "The quick brown fox jumps over the lazy dog's back. AOOWWRRR!!!!", 0
```

Now I just need to— oh, hang on.

(I did also change the initial value for the x-offset to 4 rather than 0, so the text doesn’t start against the left edge of the screen.)

Well. Not really. The code I wrote doesn’t actually know when to stop writing, so it continues off the end of the first line and onto the second. You may notice the conspicuous number of extra spaces in the new text.

Still, it looks right, and this was a lot of effort already, and it’s not actually plugged into anything yet, so I called this a success and shelved it for now. Quit while you’re ahead, right?
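The tile numbering is fiddly enough that it's worth recomputing outside the ROM. Here's a throwaway Python check of the layout above; my own scratch code, mirroring the constants from the assembly:

```python
# Recompute the dialogue char map from the constants above, to
# confirm the second line really does start at $a8.
SCREEN_WIDTH_TILES = 20
TEXT_START_TILE_1 = 128
TEXT_START_TILE_2 = TEXT_START_TILE_1 + SCREEN_WIDTH_TILES * 2

def row(start):
    # Each column is a top/bottom pair of chars, so successive
    # columns are 2 apart
    return [start + 2 * i for i in range(SCREEN_WIDTH_TILES)]

for start in (TEXT_START_TILE_1, TEXT_START_TILE_1 + 1,
              TEXT_START_TILE_2, TEXT_START_TILE_2 + 1):
    print(' '.join(format(tile, '02x') for tile in row(start)))
```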
## Future work

Obviously this is still a bit rough.

That thing where the player can walk on top of the textbox is a bit of a problem, since the same thing happens if the textbox opens while the player is near the bottom of the screen. There are a couple solutions to this, and they’ll really depend on how I end up deciding to display the box.

I actually wanted the glyphs to be drawn a little lower than normal on the top line, to add half a char or so of padding around them, but I tried it and got a buffer overrun that I didn’t feel like investigating. That’s an obvious thing to fix next time I touch this code.

What about word wrapping? I’ve written about that before and clearly have strong opinions about it, but I really don’t want to do dynamic word wrapping with a variable-width font on a Game Boy. Instead, I’ll probably store dialogue in some other format and use another converter script to do the word-wrapping ahead of time. That’ll also save me from writing large amounts of dialogue in, um, assembly. And if/when I want any fancy-pants special effects within dialogue, I can describe them with a human-readable format and then convert that to more assembly-friendly bytecode instructions.

The dialogue box still doesn’t go away, partly because it draws right on top of the map, and I don’t have any easy way to repair the map right now. I’ll probably switch to one of those other mechanisms for showing the box later that won’t require clobbering the map, and then this problem will pretty much solve itself.

What about menus? Those will either have to go inside the dialogue box (which means the question being asked isn’t visible, oof), or they’ll have to go in a smaller box above it like in Pokémon. But the latter solution means I can’t use the window or display trickery — both of those only work reliably for horizontal splits. I’m not quite sure how to handle this, yet.

And then, what of portraits? Most games get away without them by having a silent protagonist, which makes it obvious who’s talking. But Anise is anything but silent, so I need a stronger indicator. I obviously can’t overlay a big transparent portrait on the background, like I do in my LÖVE games. I think I can reserve space for them in the status bar, which will go underneath the dialogue box. I’ll have to see how it works out. Maybe I could also use a different text color for every speaker?

After all that, I can start worrying about other frills like colored text and pauses and whatever. Phew.

## To be continued

That brings us up to commit a173db, which is slightly beyond the second release (which includes a one-line textbox)! Also that was three months ago oh dear. I think I’ll be putting out a new release soon, stay tuned!

Next time: collision detection! I am doomed.

# Cheezball Rising: Opening a dialogue

Post Syndicated from Eevee original https://eev.ee/blog/2018/09/08/cheezball-rising-opening-a-dialogue/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I draw some text!

Previously: I get a Game Boy to meow. Next: collision detection, ohh nooo

## Recap

The previous episode was a diversion (and left an open problem that I only solved after writing it), so the actual state of the game is unchanged.
Where should I actually go from here? Collision detection is an obvious place, but that’s hard. Let’s start with something a little easier: displaying scrolling dialogue text. This is likely to be a dialogue-heavy game, so I might as well get started on that now. ## Planning On any other platform, I’d dive right into it: draw a box on the screen somewhere, fill it with text. On the Game Boy, it’s not quite that simple. I can’t just write text to the screen; I can only place tiles and sprites. Let’s look at how, say, Pokémon Yellow handles its menu. This looks — feels — like it’s being drawn on top of the map, and that sub-menus open on top of other menus. But it’s all an illusion! There’s no “on top” here. This is a completely flat image made up of tiles, like anything else. This is why Pokémon has such a conspicuously blocky font: all the glyphs are drawn to fit in a single 8×8 char, so “drawing” text is as simple as mapping letters to char indexes and drawing them onto the background. The map and the menu are all on the same layer, and the game simply redraws whatever was underneath when you close something. Part of the illusion is that the game is clever enough to hide any sprites that would overlap the menu — because sprites would draw on top! (The Game Boy Color has some twiddles for controlling this layering, but Yellow was originally designed for the monochrome Game Boy.) A critical reason that this actually works is that in Pokémon, the camera is always aligned to the grid. It scrolls smoothly while you’re walking, but you can’t actually open the menu (or pick up an item, or talk to someone, or do anything else that might show text) until you’ve stopped moving. If you could, the menu would be misaligned, because it’s part of the same grid as the map! This poses a slight problem for my game. Star Anise isn’t locked to the grid like the Pokémon protagonist is, and unlike Link’s Awakening, I do want to have areas larger than the screen that can scroll around freely. I know offhand that there are a couple ways to do this. One is the window, an optional extra opaque layer that draws on top of the background, with its top-left corner anchored to any point on the screen. Another is to change some display registers in the middle of the screen redrawing. The Oracle games combine both features to have a status bar at the top of the screen but a scrolling map underneath. But I don’t want to worry about any of this right now, before I even have text drawing. I know it’s possible, so I’ll deal with it later. For now, drawing directly onto the background is good enough. ### Font decisions Let’s get back to the font itself. I’m not in love with the 8×8 aesthetic; what are my other options? I do like the text in Oracle of Ages, so let’s have a look at that: Ah, this is the same approach again, except that letters are now allowed to peek up into the char above. So these are 8×16, but the letters all occupy a box that’s more like 6×9, offering much more familiar proportions. Oracle of Ages is designed for the Game Boy Color, which has twice as much char storage space, so it makes sense that they’d take advantage of it for text like this. It’s not bad, but the space it affords is still fairly… limited. Only 16 letters will fit in a line, just as with Pokémon, and that means a lot of carefully wording things to be short and use mostly short words as well. That’s not gonna cut it for the amount of dialogue I expect to have. What other options do I have? 
It seems like I’m limited to multiples of 8 here, surely. (The answer may be obvious to some of you, but shh, don’t read ahead.)

The answer lies in the very last game released for the Game Boy Color: Harry Potter and the Chamber of Secrets. Whatever deep secrets were learned during the Game Boy’s lifetime will surely be encapsulated within this, er, movie tie-in game.

Hot damn. That is a ton of text in a relatively small amount of space! And it doesn’t fit the grid! How did they do that?

The answer is… exactly how you’d think! With a fixed-width font like in Pokémon and Zelda games, the entire character set is stored in VRAM, and text is drawn by drawing a string of characters. With a variable-width font like in Harry Potter, a block of VRAM is reserved for text, and text is drawn into those chars, in software. Essentially, some chars are used like a canvas and have text rendered to them on the fly. The contents of the background layer might look like this in the two cases:

Some pros of this approach:

• Since the number of chars required is constant and the font is never loaded directly into char memory, the font can have arbitrarily many glyphs in it. Multiple fonts could be used at the same time, even. (Of course, if you have more than 256 glyphs, you’ll have to come up with a multi-byte encoding for actually storing the text…)
• A lot more text can fit in one line while still remaining readable.
• It has the potential to look extremely cool and maybe even vaguely technically impressive.

And, cons:

• It’s definitely more complicated! But I only have to write the code once, and since the game won’t be doing anything but drawing dialogue while the box is up, I don’t think I’ll be in danger of blowing my CPU budget.
• Colored text becomes a bit trickier. But still possible, so, we can worry about that later.

Well, I’m sold. Let’s give it a shot.

## First pass

Well, I want to do something on a button press, so, let’s do that. A lot of games (older ones especially) have bugs from switching “modes” in the same frame that something else happens. I don’t entirely understand why that’s so common and should probably ask some speedrunners, but I should be fine if I do mode-switching first thing in the frame, and then start over a new frame when switching back to “world” mode. Right? Sure.

```
    ; ... button reading code in main loop ...
    bit BUTTON_A, a
    jp nz, .do_show_dialogue
    ; ... main loop ...
    ; Loop again when done
    jp vblank_loop

.do_show_dialogue:
    call show_dialogue
    jp vblank_loop
```

The extra level of indirection added by .do_show_dialogue is just so the dialogue code itself isn’t responsible for knowing where the main loop point is; it can just ret.

Now to actually do something. This is a first pass, so I want to do as little as possible. I’ll definitely need a palette for drawing the text — and here I’m cutting into my 8-palette budget again, which I don’t love, but I can figure that out later. (Maybe with some shenanigans involving changing the palettes mid-redraw, even.)

```
PALETTE_TEXT:
    ; Black background, white text... then gray shadow, maybe?
    dcolor $000000
    dcolor $ffffff
    dcolor $999999
    dcolor $666666

show_dialogue:
    ; Have to disable the LCD to do video work.  Later I can do
    ; a less jarring transition
    DisableLCD

    ; Copy the palette into slot 7 for now
    ld a, %10111000
    ld [rBCPS], a
    ld hl, PALETTE_TEXT
    REPT 8
    ld a, [hl+]
    ld [rBCPD], a
    ENDR
```

I also know ahead of time what chars will need to go where on the screen, so I can fill them in now. Note that I really ought to blank them all out, especially since they may still contain text from some previous dialogue, but I don’t do that yet.

An obvious question is: which tiles? I think I said before that with 512 chars available, and ¾ of those still being enough to cover the entire screen in unique chars, I’m okay with dedicating a quarter of my space to UI stuff, including text. To keep that stuff “out of the way”, I’ll put them at the “end” — bank 1, starting from $80.

I’m thinking of having characters be about the same proportions as in the Oracle games. Those games use 5 rows of tiles, like this:

```
top of line 1
bottom of line 1
top of line 2
bottom of line 2
blank
```

Since the font is aligned to the bottom and only peeks a little bit into the top char, the very top row is mostly blank, and that serves as a top margin. The bottom row is explicitly blank for a bottom margin that’s nearly the same size. The space at the top of line 2 then works as line spacing.

I’m not fixed to the grid, so I can control line spacing a little more explicitly. But I’ll get to that later and do something really simple for now, where $ff is a blank tile:

```
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|80|82|84|86|88|8a|8c|8e|90|92|94|96|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|81|83|85|87|89|8b|8d|8f|91|93|95|97|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|ff|...|
+--+--+--+--+--+--+--+--+--+--+--+--+--+---+
```

This gives me a canvas for drawing a single line of text. The staggering means that the first letter will draw to adjacent chars $80 and $81, rather than distant cousins like $80 and $a0.

You may notice that the below code updates chars across the entire width of the grid, not merely the screen. There’s not really any good reason for that.

```
    ; Fill text rows with tiles (blank border, custom tiles)
    ; The screen has 144/8 = 18 rows, so skip the first 14 rows
    ld hl, $9800 + 32 * 14
    ; Top row, all tile 255
    ld a, 255
    ld c, 32
.loop1:
    ld [hl+], a
    dec c
    jr nz, .loop1
    ; Text row 1: 255 on the edges, then middle goes 128, 130, ...
    ld a, 255
    ld [hl+], a
    ld a, 128
    ld c, 30
.loop2:
    ld [hl+], a
    add a, 2
    dec c
    jr nz, .loop2
    ld a, 255
    ld [hl+], a
    ; Text row 2: same as above, but middle is 129, 131, ...
    ld a, 255
    ld [hl+], a
    ld a, 129
    ld c, 30
.loop3:
    ld [hl+], a
    add a, 2
    dec c
    jr nz, .loop3
    ld a, 255
    ld [hl+], a
    ; Bottom row, all tile 255
    ld a, 255
    ld c, 32
.loop4:
    ld [hl+], a
    dec c
    jr nz, .loop4
```

Now I need to repeat all of that, but in bank 1, to specify the char bank (1) and palette (7) for the corresponding tiles. Those are the same for the entire dialogue box, though, so this part is easier.

```
    ; Switch to VRAM bank 1
    ld a, 1
    ldh [rVBK], a
    ld a, %00001111     ; bank 1, palette 7
    ld hl, $9800 + 32 * 14
    ld c, 32 * 4        ; 4 rows
.loop5:
    ld [hl+], a
    dec c
    jr nz, .loop5

    EnableLCD
```

Time to get some real work done. Which raises the question: how do I actually do this?

If you recall, each 8-pixel row of a char is stored in two bytes.
The two-bit palette index for each pixel is split across the corresponding bit in each byte. If the leftmost pixel is palette index 01, then bit 7 in the first byte will be 0, and bit 7 in the second byte will be 1.

Now, a blank char is all zeroes. To write a (left-aligned) glyph into a blank char, all I need to do is… well, I could overwrite it, but I could just as well OR it. To write a second glyph into the unused space, all I need to do is shift it right by the width of the space used so far, and OR it on top. The unusual split layout of the palette data is actually handy here, because it means the size of the shift matches the number of pixels, and I don’t have to worry about overflow.

```
0 0 0 0 0 0 0 0  <- blank glyph

1 1 1 1 0 0 0 0  <- some byte from the first glyph
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
1 1 1 1 0 0 0 0  <- ORed together to display first character

0 0 0 0 0 1 1 1  <- some byte from the second glyph, shifted by 4 (plus a kerning pixel)
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
1 1 1 1 0 1 1 1  <- ORed together to display first two characters
```

The obvious question is, well, what happens to the bits from the second character that didn’t fit? I’ll worry about that a bit later.

Oh, and finally, I’ll need a font, plus some text to display. This is still just a proof of concept, so I’ll add in a couple glyphs by hand.

```
; somewhere in ROM
font:
    ; A
    ; First byte indicates the width of the glyph, which I need
    ; to know because the width varies!
    db 6
    dw `00000000`
    dw `00000000`
    dw `01110000`
    dw `10001000`
    dw `10001000`
    dw `10001000`
    dw `11111000`
    dw `10001000`
    dw `10001000`
    dw `10001000`
    dw `10001000`
    dw `00000000`
    dw `00000000`
    dw `00000000`
    dw `00000000`
    dw `00000000`
    ; B
    db 6
    dw `00000000`
    dw `00000000`
    dw `11110000`
    dw `10001000`
    dw `10001000`
    dw `10001000`
    dw `11110000`
    dw `10001000`
    dw `10001000`
    dw `10001000`
    dw `11110000`
    dw `00000000`
    dw `00000000`
    dw `00000000`
    dw `00000000`
    dw `00000000`

text:
    ; Shakespeare it ain't.
    ; Need to end with a NUL here so I know where the text
    ; ends.  This isn't C, there's no automatic termination!
    db "ABABAAA", 0
```

And here we go!

```
    ; ----------------------------------------------------------
    ; Setup done!  Real work begins here
    ; b: x-offset within current tile
    ; de: text cursor + current character tiles
    ; hl: current VRAM tile being drawn into
    ld b, 0
    ld de, text
    ld hl, $8800

    ; This loop waits for the next vblank, then draws a letter.
    ; Text thus displays at ~60 characters per second.
.next_letter:
    ; This is probably way more LCD disabling than is strictly
    ; necessary, but I don't want to worry about it yet
    EnableLCD
    call wait_for_vblank
    DisableLCD

    ld a, [de]          ; get current character
    and a               ; if NUL, we're done!
    jr z, .done
    inc de              ; otherwise, increment

    ; Get the glyph from the font, which means computing
    ; font + 33 * a.
    ; A little register juggling.  hl points to the current
    ; char in VRAM being drawn to, but I can only do a 16-bit
    ; add into hl.  de I don't need until the next loop,
    ; since I already read from it.  So I'm going to push de
    ; AND hl, compute the glyph address in hl, put it in de,
    ; then restore hl.
    push de
    push hl
    ; The text is written in ASCII, but the glyphs start at 0
    sub a, 65
    ld hl, font
    ld de, 33           ; 1 width byte + 16 * 2 tiles
    ; This could probably be faster with long multiplication
    and a
.letter_stride:
    jr z, .skip_letter_stride
    add hl, de
    dec a
    jr .letter_stride
.skip_letter_stride:
    ; Move the glyph address into de, and restore hl
    ld d, h
    ld e, l
    pop hl

    ; Read the first byte, which is the character width.  This
    ; overwrites the character, but I have the glyph address,
    ; so I don't need it any more
    ld a, [de]
    inc de

    ; Copy into current chars
    ; Part 1: Copy the left part into the current chars
    push af             ; stash width
    ; A glyph is two chars or 32 bytes, so row_copy 32 times
    ld c, 32
    ; b is the next x position we're free to write to.
    ; Incrementing it here makes the inner loop simpler, since
    ; it can't be zero.  But it also means two jumps per loop,
    ; so, ultimately this was a pretty silly idea.
    inc b
.row_copy:
    ld a, [de]          ; read next row of character
    ; Shift right by b places with an inner loop
    push bc             ; preserve b while shifting
    dec b
.shift:
    ; shift right by b bits
    jr z, .done_shift
    srl a
    dec b
    jr .shift
.done_shift:
    pop bc
    ; Write the updated byte to VRAM
    or a, [hl]          ; OR with current tile
    ld [hl+], a
    inc de
    dec c
    jr nz, .row_copy
    pop af              ; restore width

    ; Part 2: Copy whatever's left into the next char
    ; TODO :)

    ; Cleanup for next iteration
    ; Undo the b increment from way above
    dec b
    ; It's possible I overflowed into the next column, in which
    ; case I want to leave hl where it is: pointing at the next
    ; column.  Otherwise, I need to back it up to where it was.
    ; Of course, I also need to update b, the x offset.
    add a, b            ; a <- new x offset
    ; If the new x offset is 8 or more, that's actually the next
    ; column
    cp a, 8
    jr nc, .wrap_to_next_tile
    ld bc, -32          ; a < 8: back hl up
    add hl, bc
    jr .done_wrap
.wrap_to_next_tile:
    sub a, 8            ; a >= 8: subtract tile width
    ld b, a
.done_wrap:
    ; Either way, store the new x offset into b
    ld b, a
    ; And loop!
    pop de              ; pop text pointer
    jr .next_letter

.done:
    ; Undo any goofy stuff I did, and get outta here
    EnableLCD
    ; Remember to reset bank to 0!
    xor a
    ldh [rVBK], a
    ret
```

Phew! That was a lot, but hopefully it wasn’t too bad. I hit a few minor stumbling blocks, but as I recall, most of them were of the “I get the conditions backwards every single time I use cp augh” flavor. (In fact, if you look at the actual commit the above is based on, you may notice that I had the condition at the very end mixed up! It’s a miracle it managed to print part of the second letter at all.)

There are a lot of caveats in this first pass, including that there’s nothing to erase the dialogue box and reshow the map underneath it. (But I might end up using the window for this anyway, so there’s no need for that.) As a proof of concept, though, it’s a great start!

That’s the letter A, followed by the first two pixels of the letter B. I didn’t implement the part where letters spill into the next column, yet. Guess I’d better do that!

## Second pass

One of the big problems with the first pass was that I had to turn the screen off to do the actual work safely. Shifting a bunch of bytes by some amount is a little slow, since I can only shift one bit at a time and have to do it within a loop, and vblank only lasts for about 6.5% of the entire duration of the frame.
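That spilling is the heart of the second pass below, and it's simple enough to prototype first. Here's a little Python model of splitting one glyph row across two adjacent chars; my own sketch, not code from the repo:

```python
# Model of drawing one 8-pixel glyph row at pixel offset x, split
# across the current char ("left") and the next one over ("right").
def blit_row(row_byte, x, left, right):
    # Treat the pair as one 16-pixel row: the glyph row starts in
    # the high 8 bits, then slides right by x pixels.
    spread = (row_byte << 8) >> x
    return left | (spread >> 8), right | (spread & 0xFF)

# A 4px-wide glyph row at x=0, then a 5px one at x=5 (4px + 1 kerning)
left, right = blit_row(0b11110000, 0, 0, 0)
left, right = blit_row(0b11111000, 5, left, right)
print(format(left, '08b'), format(right, '08b'))  # 11110111 11000000
```

The CPU can't shift a 16-bit value directly, which is why the assembly version instead rotates the byte and masks it twice, once for each half.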
```
SECTION "Text buffer", WRAM0[$C200]
text_buffer:
    ; Text is up to 8×16 but may span two columns, so carve out
    ; enough space for four tiles
    ds $40

SECTION "Text rendering", ROM0

PALETTE_TEXT:
    dcolor $000000
    dcolor $ffffff
    dcolor $999999
    dcolor $666666

show_dialogue:
    ; TODO blank out the second half of bank 1 before all this,
    ; maybe on the fly to average out the cpu time
    ; TODO get rid of this with a slide-up effect
    DisableLCD

    ; Set up palette
    ld a, %10111000
    ld [rBCPS], a
    ld hl, PALETTE_TEXT
    REPT 8
    ld a, [hl+]
    ld [rBCPD], a
    ENDR

    ; Fill text rows with tiles (blank border, custom tiles)
    ld hl, $9800 + 32 * 14
    ; Top row, all tile 255
    ld a, 255
    ld c, 32
.loop1:
    ld [hl+], a
    dec c
    jr nz, .loop1
    ; Text row 1: 255 on the edges, then middle goes 128, 130, ...
    ld a, 255
    ld [hl+], a
    ld a, 128
    ld c, 30
.loop2:
    ld [hl+], a
    add a, 2
    dec c
    jr nz, .loop2
    ld a, 255
    ld [hl+], a
    ; Text row 2: same as above, but middle is 129, 131, ...
    ld a, 255
    ld [hl+], a
    ld a, 129
    ld c, 30
.loop3:
    ld [hl+], a
    add a, 2
    dec c
    jr nz, .loop3
    ld a, 255
    ld [hl+], a
    ; Bottom row, all tile 255
    ld a, 255
    ld c, 32
.loop4:
    ld [hl+], a
    dec c
    jr nz, .loop4

    ; Repeat all of the above, but in bank 1, which specifies the
    ; character bank and palette.  Luckily, that's the same for
    ; everyone.
    ld a, 1
    ldh [rVBK], a
    ld a, %00001111     ; bank 1, palette 7
    ld hl, $9800 + 32 * 14
    ld c, 32 * 4        ; 4 rows
.loop5:
    ld [hl+], a
    dec c
    jr nz, .loop5

    EnableLCD

    ; Zero out the tile buffer
    xor a
    ld hl, text_buffer
    ld c, $40
    call fill

    ; ----------------------------------------------------------
    ; Setup done!  Real work begins here
    ; b: x-offset within current tile
    ; de: text cursor + current character tiles
    ; hl: current VRAM tile being drawn into + buffer pointer
    ld b, 0
    ld de, text
    ld hl, $8800
    ; The basic problem here is to shift a byte and split it
    ; across two other bytes, like so:
    ;             yyyyy YYY
    ;    xxx00000 00000000
    ;    ↓
    ;    xxxyyyyy YYY00000
    ; To do this, we rotate the byte, mask the low bits, OR them
    ; with the first byte, restore it, mask the high bits, and
    ; then store that directly as the second byte (which should
    ; be all zeroes anyway).
.next_letter:
    ld a, [de]          ; get current character
    and a               ; if NUL, we're done!
    jp z, .done
    inc de              ; otherwise, increment

    ; Get the font character
    push de             ; from here, de is tiles
    ; Alas, I can only add to hl, so I need to compute the font
    ; character address in hl and /then/ put it in de.  But I
    ; already pushed de, so I can use that as scratch space.
    push hl
    sub a, 65           ; TODO temporary
    ld hl, font
    ld de, 33           ; 1 width byte + 16 * 2 tiles
    ; TODO can we speed striding up with long mult?
    and a
.letter_stride:
    jr z, .skip_letter_stride
    add hl, de
    dec a
    jr .letter_stride
.skip_letter_stride:
    ld d, h             ; move char tile addr to de
    ld e, l

    ld a, [de]          ; read width
    inc de
    ; Copy into current tiles
    push af             ; stash width
    ld c, 32            ; 32 bytes per row
    ld hl, text_buffer
    inc b               ; FIXME? this makes the loop simpler since i only test after the dec, but it also is the 1px kerning between characters...
.row_copy:
    ld a, [de]          ; read next row of character
    ; Rotate right by b - 1 pixels
    push bc             ; preserve b while shifting
    ld c, $ff           ; create a mask
    dec b
    jr z, .skip_rotate
.rotate:
    rrca
    srl c
    dec b
    jr nz, .rotate
.skip_rotate:
    push af
    and a, c            ; mask right pixels
    ; Draw to left half of text buffer
    or a, [hl]          ; OR with current tile
    ld [hl+], a
    ; Write the remaining bits to right half
    ld a, c             ; put mask in a...
    cpl                 ; ...to invert it
    ld c, a             ; then put it back
    pop af              ; restore unmasked pixels
    and a, c            ; mask left pixels
    ld [hl+], a         ; and store them!
    ; Loop and cleanup
    inc de              ; next row of character
    pop bc              ; restore counter!
    dec c
    jr nz, .row_copy
    pop af              ; restore width

    ; Draw the buffered tiles to vram
    ; The text buffer is treated like it's 16 pixels wide, but
    ; VRAM is of course only 8 pixels wide, so we need to do
    ; this in two iterations: the left two tiles, then the right
    ; TODO explain this with a fucking diagram because i feel
    ; like i'm wrong about it anyway
    pop hl              ; restore hl (VRAM)
    push af             ; stash width, again
    call wait_for_vblank    ; always wait before drawing
    push bc
    push de
    ; Draw the left two tiles
    ld c, $20
    ld de, text_buffer
.draw_left:
    ld a, [de]
    inc de
    inc de
    ld [hl+], a
    dec c
    jr nz, .draw_left
    ; Draw the right two tiles
    ld c, $20
    ld de, text_buffer + 1
.draw_right:
    ld a, [de]
    inc de
    inc de
    ld [hl+], a
    dec c
    jr nz, .draw_right
    pop de
    pop bc
    pop af              ; restore width, again

    ; Increment the pixel offset and deal with overflow
    ; TODO it's possible we're at 9 pixels wide, thanks to the
    ; kerning pixel, uh oh.  but that pixel would be empty,
    ; right?  wait, no, it comes /before/... well fuck
    ; TODO actually that might make something weird happen due
    ; to the inc b above, maybe...?
    add a, b            ; a <- new x offset
    ld bc, -32          ; move the VRAM pointer back...
    add hl, bc          ; ...to the start of the tile
    cp a, 8
    jr nc, .wrap_to_next_tile
    ; The new offset is less than 8, so this character didn't
    ; draw into the next tile.  Move the VRAM pointer back
    ; another two tiles, to the column we started in
    add hl, bc
    jr .done_wrap
.wrap_to_next_tile:
    ; The new offset is 8 or more, so this character drew into
    ; the next tile.  Subtract 8, but also shift the text buffer
    ; by copying all the "right" tiles over the "left" tiles
    sub a, 8            ; a >= 8: subtract tile width
    push hl
    push af
    ld hl, text_buffer + $40 - 1
    ld c, $20
.shift_buffer:
    ld a, [hl-]
    ld [hl-], a
    dec c
    jr nz, .shift_buffer
    pop af
    pop hl
.done_wrap:
    ld b, a             ; either way, store into b

    ; Loop
    pop de              ; pop text pointer
    jp .next_letter

.done:
    EnableLCD           ; TODO get rid of me with a buffer
    ; Remember to reset bank to 0!
    xor a
    ldh [rVBK], a
    ret

wait_for_vblank:
    xor a               ; clear the vblank flag
    ld [vblank_flag], a
.vblank_loop:
    halt                ; wait for interrupt
    ld a, [vblank_flag] ; was it a vblank interrupt?
    and a
    jr z, .vblank_loop  ; if not, keep waiting
    ret
```

• future ideas: how will this work with a status bar, how do i do portraits, how do i hide sprites behind this, how do i handle the map not being aligned (contrast with pokemon which draws the entire menu on the background)

lingering problems
– note on word wrapping
• alignment, window
• prompts will probably have to go inside the text box? hmm. that’s tricky.
• portraits!

content/2016-10-20-word-wrapping-dialogue.markdown
– the dialogue box does not actually go away. but i think the window will solve this

## To be continued

This work doesn’t correspond to a commit at all; it exists only as a local stash.
I’ll clean it up later, once I figure out what to actually do with it.

Next time: dialogue! With moderately less suffering along the way!

# Cheezball Rising: Resounding failure

Post Syndicated from Eevee original https://eev.ee/blog/2018/09/06/cheezball-rising-resounding-failure/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I cannot get a goddamn Game Boy to meow at me.

Previously: maps and sprites. Next: text!

## Recap

With the power of Aseprite, Tiled, and some Python I slopped together, the game has evolved beyond Test Art and into Regular Art.

I’ve got so much work to do on this, so it’s time to prioritize. What is absolutely crucial to this game?

The answer, of course, is to make Anise meow. Specifically, to make him AOOOWR.

## Brief audio primer

What we perceive as sound is the vibration of our eardrums, caused by vibration of the air against them. Eardrums can only move along a single axis (in or out), so no matter what chaotic things the air is doing, what we hear at a given instant is flattened down to a single scalar number: how far the eardrum has displaced from its normal position. (There’s also a bunch of stuff about tiny hairs in the back of your ear, but, close enough. Also it’s really two numbers since you have two ears, but stereo channels tend to be handled separately.)

Digital audio is nothing more than a sequence of those numbers. Of course, we can’t record the displacement at every single instant, because there are infinitely many instants; instead, we take measurements (samples) at regular intervals. The interval is called the sample rate, is usually a very small fraction of a second, and is generally measured in Hertz/Hz (which just means “per second”). A very common sample rate is 44100 Hz, which means a measurement was taken every 0.0000227 seconds.

I say “measurement” but the same idea applies for generating sounds, which is what the Game Boy does. Want to make a square wave? Just generate a block of all the same positive sample, then another block of all the same negative sample, and alternate back and forth. That’s why it’s depicted as a square — that’s the graph of how the samples vary over time.

Okay! I hope that was enough because it’s like 80% of everything I know about audio. Let’s get to the Game Boy.

## Game Boy audio

The Game Boy contains, within its mysterious depths, a teeny tiny synthesizer. It offers a vast array of four whole channels (instruments) to choose from: a square wave, also a square wave, a wavetable, and white noise. They can each be controlled with a handful of registers, and will continually produce whatever tone they’re configured for. By changing their parameters at regular intervals, you can create a pleasing sequence of varying tones, which you humans call “music”.

Making music is, I’m sure, going to be an absolute nightmare. What music authoring tools am I possibly going to dig up that exactly conform to the Game Boy hardware? I can’t even begin to imagine what this pipeline might look like.

Luckily, that’s not what this post is about, because I chickened out and tried something way easier instead.
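(The "alternating blocks of samples" idea from the primer is short enough to show in Python, by the way. This is just my own illustration, not anything the Game Boy runs:)

```python
# Generate one second of a 440 Hz square wave at 44100 Hz, as a
# plain list of samples: the "alternating blocks" described above.
SAMPLE_RATE = 44100
FREQ = 440

samples = []
half_period = SAMPLE_RATE // (FREQ * 2)   # samples per half-cycle
while len(samples) < SAMPLE_RATE:
    samples.extend([0.5] * half_period)   # block of high samples
    samples.extend([-0.5] * half_period)  # block of low samples
```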
Before I set out into the wilderness myself, I did want to get an emulator to create any kind of noise at all, just to give myself a starting point. There are an awful lot of audio twiddles, so I dug up a Game Boy sound tutorial. I became a little skeptical when the author admitted they didn’t know what a square wave was, but they did provide a brief snippet of code at the end that’s claimed to produce a sound:

```
NR52_REG = 0x80;
NR51_REG = 0x11;
NR50_REG = 0x77;

NR10_REG = 0x1E;
NR11_REG = 0x10;
NR12_REG = 0xF3;
NR13_REG = 0x00;
NR14_REG = 0x87;
```

That’s C, written for the much-maligned GBDK, which for some reason uses regular assignment to write to a specific address? It’s easy enough to translate to rgbasm:

```
    ; Enable sound globally
    ld a, $80
    ldh [rAUDENA], a
    ; Enable channel 1 in stereo
    ld a, $11
    ldh [rAUDTERM], a
    ; Set volume
    ld a, $77
    ldh [rAUDVOL], a

    ; Configure channel 1.  See below
    ld a, $1e
    ldh [rAUD1SWEEP], a
    ld a, $10
    ldh [rAUD1LEN], a
    ld a, $f3
    ldh [rAUD1ENV], a
    ld a, $00
    ldh [rAUD1LOW], a
    ld a, $85
    ldh [rAUD1HIGH], a
```

It sounds like this.

Some explanation may be in order. This is a big ol’ mess and you could just as well read the wiki’s article on the sound controller, so feel free to skip ahead a bit.

First, the official names for all of the sound registers are terrible. They’re all named “NRxy” — “noise register” perhaps? — where x is the channel number (or 5 for master settings) and y is just whatever. Thankfully, hardware.inc provides some aliases that make a little more sense, and those are what I’ve used above.

The very first thing I have to do is set the high bit of AUDENA (NR52), which toggles sound on or off entirely. The sound system isn’t like the LCD, which I might turn off temporarily while doing a lot of graphics loading; when the high bit of AUDENA is off, all the other sound registers are wiped to zero and cannot be written until sound is enabled again.

The other important master registers are AUDVOL (NR50) and AUDTERM (NR51). Both of them are split into two identical nybbles, each controlling the left or right output channel. AUDVOL controls the master volume, from 0 to 7. (As I understand it, the high bit is used to enable audio output from extra synthesizer hardware on the cartridge, a feature I don’t believe any game ever actually used.) AUDTERM enables channels/instruments, one bit per channel. The above code turns on channel 1, the square wave, at max volume in stereo.

Then there’s just, you know, sound stuff.

AUD1HIGH (NR14) and AUD1LOW (NR13) are a bit of a clusterfuck, and one shared by all except the white noise channel. The high bit of AUD1HIGH is the “init” bit and triggers the sound to actually play (or restart), which is why it’s set last. The second highest bit, bit 6, controls timing: if it’s set, then the channel will only play for as long as a time given by AUD1LEN; if not, the channel will play indefinitely.

Finally, the interesting part: the lower three bits of AUD1HIGH and the entirety of AUD1LOW combine to make an 11-bit frequency. Or, rather, if those 11 bits are $$n$$, then the frequency is $$\frac{131072}{2048-n}$$. (Since their value appears in the denominator, they really express… inverse time, not frequency, but that’s neither here nor there.) The code above sets that 11-bit value to $500, for a frequency of 171 Hz, which in A440 is about an F3.

AUD1SWEEP (NR10) can automatically slide the frequency over time.
It distinguishes channel 1 from channel 2, which is otherwise identical but doesn’t have sweep functionality. The lower three bits are the magnitude of each change; bit 3 is a sign bit (0 for up, 1 for down), and bits 6–4 are a time that control how often the frequency changes. (Setting the time to zero disables the sweep.) Given a magnitude of $$n$$ and time $$t$$, every $$\frac{t}{128}$$ seconds, the frequency is multiplied by $$1 \pm \frac{1}{2^n}$$.

Note that when I say “frequency” here, I’m referring to the 11-bit “frequency” value, not the actual frequency in Hz. A “frequency” of $400 corresponds to 128 Hz, but halving it to $200 produces 85 Hz, a decrease of about a third. Doubling it is impossible, because $800 doesn’t fit in 11 bits. This setup seems, ah, interesting to make music with. Can’t wait!

The above code sets this register to $1e, so $$t = 1$$, $$n = 6$$, and the frequency is decreasing; thus every $$\frac{1}{128}$$ seconds, the “frequency” drops by $$\frac{1}{64}$$.

Next is AUD1LEN (NR11), so named because its lower six bits set how long the sound will play. Again we have inverse time: given a value $$t$$ in the low six bits, the sound will play for $$\frac{64-t}{256}$$ seconds. Here those six bits are $10, or 16, so the sound lasts for $$\frac{48}{256} = \frac{3}{16} = 0.1875$$ seconds. Except… as mentioned above, this only applies if bit 6 of AUD1HIGH is set, which it isn’t, so this doesn’t apply at all and there’s no point in setting any of these bits. Hm.

The two high bits of AUD1LEN select the duty cycle, which is how long the square wave is high versus low. (A “normal” square wave thus has a duty of 50%.) Our value of 0 selects 12.5% high; the other values are 25% for 1, 50% for 2, or 75% for 3. I do wonder if the author of this code meant to use 50% duty and put the bit in the wrong place? If so, AUD1LEN should be $80, not $10.

Finally, AUD1ENV selects the volume envelope, which can increase or decrease over time. Curiously, the resolution is higher here than in AUDVOL — the entire high nybble is the value of the envelope. This value can be changed automatically over time in increments of 1: bit 3 controls the direction (0 to decrease, 1 to increase) and the low three bits control how often the value changes, counted in $$\frac{1}{64}$$ seconds. For our value of $f3, the volume starts out at max and decreases every $$\frac{3}{64}$$ seconds, so it’ll stop completely (or at least be muted?) after fifteen steps or $$\frac{45}{64} \approx 0.7$$ seconds.

And hey, that’s all more or less what I see if I record mGBA’s output in Audacity!

Boy! What a horrible slog. Don’t worry; that’s a good 75% of everything there is to know about the sound registers. The second square wave is exactly the same except it can’t do a frequency sweep. The white noise channel is similar, except that instead of frequency, it has a few knobs for controlling how the noise is generated. And the waveform channel is what the rest of this post is about—

“Hang on!” I hear you cry. “That’s a mighty funny-looking ‘square’ wave.”

It sure is! The Game Boy has some mighty funny sound hardware. Don’t worry about it. I don’t have any explanation, anyway. I know the weird slope shapes are due to a high-pass filter capacitor that constantly degrades the signal gradually towards silence, but I don’t know why the waveform isn’t centered at zero. (Note that mGBA has a bug and currently generates audio inverted, which is hard to notice audibly but which means the above graph is upside-down.)
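Those formulas are easy to trip over, so here's a quick Python crib sheet for the values above; my own sanity check, not code from the game:

```python
# Sanity-check the square channel math above.
def square_freq(n):            # 11-bit "frequency" value -> Hz
    return 131072 / (2048 - n)

def length_secs(t):            # low 6 bits of AUD1LEN -> seconds
    return (64 - t) / 256

def envelope_secs(vol, step):  # AUD1ENV high nybble + low 3 bits
    return vol * step / 64

print(square_freq(0x500))      # ~170.7 Hz, roughly an F3
print(length_secs(0x10))       # 0.1875 s (ignored: the timed bit is off)
print(envelope_secs(0xf, 3))   # ~0.7 s until the envelope hits zero
```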
## The thing I actually wanted to do

Right, back to the thing I actually wanted to do. I have a sound. I want to play it on a Game Boy. I know this is possible, because Pokémon Yellow does it.

Channel 3 is a wavetable channel, which means I can define a completely arbitrary waveform (read: sound) and channel 3 will play it for me. The correct approach seems obvious: slice the sound into small chunks and ask channel 3 to play them in sequence. How hard could this possibly be?

### Channel 3

Channel 3 plays a waveform from waveform RAM, which is a block of 16 bytes in register space, from $FF30 through $FF3F. Each nybble is one sample, so I have 32 samples whose values can range from 0 to 15.

32 samples is not a whole lot; remember, a common audio rate is 44100 Hz. To keep that up, I’d need to fill the buffer almost 1400 times per second. I can use a lower sample rate, but what? I guess I’ll figure that out later. First things first: I need to take my sound and cram it into this format, somehow.

Here’s the sound I’m starting with. The original recording was a bit quiet, so I popped it open in Audacity and stretched it to max volume. I only have 4-bit samples, remember, and trying to cram a quiet sound into a low bit depth will lose most of the detail. (A very weird thing about sound is that samples are really just measurements of volume. Every feature of sound is nothing more than a change in volume.)

Now I need to turn this into a sequence of nybbles. From previous adventures, I know that Python has a handy wave module for reading sample data directly from a WAV file, and so I wrote a crappy resampler:

```python
import wave

TARGET_RATE = 32768

with wave.open('aowr.wav') as w:
    nchannels, sample_width, framerate, nframes, _, _ = w.getparams()

    outdata = bytearray()
    gbdata = bytearray()
    frames_per_note = framerate // TARGET_RATE
    nybble = None
    while True:
        data = w.readframes(frames_per_note)
        if not data:
            break

        n = 0
        total = 0
        # Left and right channels are interleaved; this will pick up
        # data from only channel 0
        for i in range(0, len(data), nchannels * sample_width):
            frame = int.from_bytes(data[i : i + sample_width], 'little', signed=True)
            n += 1
            total += frame

        # Crush the new sample to a nybble
        crushed_frame = int(total / n) >> (sample_width * 8 - 4)

        # Expand it back to the full sample size, to make a WAV
        # simulating how it should sound
        encoded_crushed_frame = (crushed_frame << (sample_width * 8 - 4)).to_bytes(2, 'little', signed=True)
        outdata.extend(encoded_crushed_frame * (nchannels * frames_per_note))

        # Combine every two nybbles together.  The manual shows that
        # the high nybble plays first.
        # WAV data is signed, but Game Boy nybbles are not, so add the
        # rough midpoint of 7
        if nybble is None:
            nybble = crushed_frame + 7
        else:
            byte = (nybble << 4) | (crushed_frame + 7)
            gbdata.append(byte)
            nybble = None

with wave.open('aowrcrush.wav', 'wb') as wout:
    wout.setparams(w.getparams())
    wout.writeframes(outdata)

with open('build/aowr.dat', 'wb') as f:
    f.write(gbdata)
```

This is incredibly bad. It integer-divides the original rate by the target rate, so if I try to resample 44100 to 32768, I’ll end up recreating the same sound again. I don’t know why I started with 32768, either. The resulting data is too big to even fit in a section!
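(It's easy to see why it didn't fit. A rough estimate, where the duration is my own guess at the length of the recording, purely for illustration:)

```python
# Two 4-bit samples per byte; the ~1.5 s length is an assumption
# for illustration, not a measurement.
ROM_BANK = 0x4000   # a section can't be bigger than its 16 KiB bank
duration = 1.5
for rate in (32768, 8820):
    size = int(rate * duration / 2)
    print(rate, size, size <= ROM_BANK)
```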
Kicking it down to 8192 is a bit better (5 samples to 1, so the real final rate is 8820), but if I get any smaller, too many samples cancel each other out and I end up with silence! I have no idea what I am doing help.

The aowrcrush.wav file sounds a little atrocious, fair warning. But it seems to be correct, if I open it alongside the original:

Crushing it to four bits caused the graph to stay fixed to only 16 possible values, which is why it’s less smooth. Reducing the sample rate made each sample last longer, which is why it’s made up of short horizontal chunks. (I resampled it back to 44100 for this comparison, so really it’s made of short horizontal chunks because each sample appears five times; Audacity wouldn’t show an actual 8192 Hz file like this.)

It doesn’t sound great, but maybe it’ll be softened when played through a Game Boy. Worst case, I can try cleaning it up later. Let’s get to the good part: playing it!

### Playing with channel 3

Here we go! First the global setup stuff I had before.

```
    ; Enable sound globally
    ld a, $80
    ldh [rAUDENA], a
    ; Map instruments to channels
    ld a, $44
    ldh [rAUDTERM], a
    ; Set volume
    ld a, $77
    ldh [rAUDVOL], a
```

Then some bits specific to channel 3.

```
    ld a, $80
    ldh [rAUD3ENA], a
    ld a, $ff
    ldh [rAUD3LEN], a
    ld a, $20
    ldh [rAUD3LEVEL], a

SAMPLE_RATE EQU 8192
CH3_FREQUENCY set 2048 - 65536 / (SAMPLE_RATE / 32)
    ld a, LOW(CH3_FREQUENCY)
    ldh [rAUD3LOW], a
    ld a, $80 | HIGH(CH3_FREQUENCY)
    ldh [rAUD3HIGH], a
```

Channel 3 has its own bit for toggling it on or off in AUD3ENA (NR30); none of the other bits are used. The other new register is AUD3LEVEL (NR32), which is sort of a global volume control. The only bits used are 6 and 5, which make a two-bit selector. The options are:

• 00: mute
• 01: play nybbles as given
• 10: play nybbles shifted right 1
• 11: play nybbles shifted right 2

Three of those are obviously useless, so 01 it is! That’s where I get the $20.

Figuring out the frequency is a little more clumsy. I used some rgbasm features here to do it for me, and it took a bit of fiddling to get it right. For example, why am I using 65536 instead of 131072, the factor I said was used for the square wave? The answer is that for the longest time I kept getting this absolutely horrible output, recorded directly from mGBA:

I had no idea what this was supposed to be. Turns out it’s, well, roughly what happens when you halve the Game Boy’s idea of frequency. I finally found out from the gbdev wiki that this coefficient was different; I’m guessing the factor of 2 has something to do with there being two nybbles per byte?

Then there’s the division by 32, which neither the manual nor the gbdev wiki mention. The frequency isn’t actually the time it takes to play one sample, but the time it takes to play the entire buffer. Which does make some sense — the “normal” use for channel 3 is as a custom instrument, so you’d want to apply the frequency to the entire waveform to get the right notes out. This was even more of a nightmare to figure out, since it produced… well, mostly just garbage. I’ll leave it to your imagination.

```
    ld a, 256 - 4096 / (SAMPLE_RATE / 32)
    ldh [rTMA], a
    ld a, 4
    ldh [rTAC], a
```

Oho! TMA and TAC are new. The CPU has a timer register, TIMA, which counts up every… well, every so often. It’s only a single byte, and when it overflows, it generates a timer interrupt. It then resets to the value of TMA. TAC is the timer controller. Bit 2 enables the timer, and the lower two bits select how fast the clock counts up.
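Both computed constants are easy to fumble, so here's the same arithmetic replayed in Python; again my own check, not part of the build:

```python
# The two magic constants above, computed in Python.
SAMPLE_RATE = 8192
buffer_rate = SAMPLE_RATE // 32   # whole 32-sample buffers per second: 256

ch3_frequency = 2048 - 65536 // buffer_rate
print(hex(ch3_frequency))         # 0x700: the 11-bit value for AUD3LOW/HIGH

tma = 256 - 4096 // buffer_rate
print(tma)                        # 240: reload value so TIMA overflows 256x/sec
```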
Above, I’m using clock speed 00, which is 4096 Hz. The expression for TMA computes SAMPLE_RATE / 32, which is the number of times per second that the entire waveform should play, and then divides that into 4096 to get the number of timer ticks that the waveform plays for. Subtract that from 256, and I have the value TIMA should start with to ensure that it overflows at the right intervals.

I note that this will cause a timer interrupt 256 times per second, which sounds like a lot on a CPU-constrained system. It’s only 4 or 5 interrupts per frame, though, so maybe it won’t intrude too much. I’ll burn down that bridge when I come to it.

Now I just need to enable timer interrupts:

```
start:
    ; Enable interrupts
    ld a, IEF_TIMER | IEF_VBLANK
    ldh [rIE], a
```

And of course do a call in the timer interrupt, which you may remember is a fixed place in the header:

```
SECTION "Timer overflow interrupt", ROM0[$0050]
    call update_aowr
    reti
```

One last gotcha: I discovered that timer interrupts can fire during OAM DMA, a time when most of the memory map is inaccessible. That’s pretty bad! So I also added di and ei around my DMA call.

Okay! I’m so close! All that’s left is the implementation of update_aowr.

### Updating the waveform

```
aowr:
    INCBIN "build/aowr.dat"
aowr_end:

; ...

update_aowr:
    push hl
    push bc
    push de
    push af

    ; The current play position is stored in music_offset, a
    ; word in RAM somewhere.  Load its value into de
    ld hl, music_offset
    ld d, [hl]
    inc hl
    ld e, [hl]

    ; Compare this to aowr_end.  If it's >=, we've reached the
    ; end of the sound, so stop here.  (Note that the timer
    ; interrupt will keep firing!  This code is a first pass.)
    ld hl, aowr_end
    ld a, d
    cp a, h
    jr nc, .done
    jr nz, .continue
    ld a, e
    cp a, l
    jr nc, .done
    jr z, .done
.continue:
    ; Copy the play position back into hl, and copy 16 bytes
    ; into waveform RAM.  This unrolled loop is as quick as
    ; possible, to keep the gap between chunks short.
    ld h, d
    ld l, e
_addr = _AUD3WAVERAM
    REPT 16
    ld a, [hl+]
    ldh [_addr], a
_addr = _addr + 1
    ENDR

    ; Write the new play position into music_offset
    ld d, h
    ld e, l
    ld hl, music_offset
    ld [hl], d
    inc hl
    ld [hl], e

.done:
    pop af
    pop de
    pop bc
    pop hl
    ret
```

Perfect! Let’s give it a try.

Hey, that’s not too bad! I can see wiring that up to a button and pressing it relentlessly. It’s a bit rough, but it’s not bad for this first attempt.

That was mGBA, though, and I’ve had surprising problems before because I was reading or writing when the actual hardware wouldn’t let me. I guess it wouldn’t hurt to try in bgb. (warning: very bad)

OH NO

What has happened.

## Tragedy

A lot of fussing around, reading about obscure trivia, and being directed to SamplePlayer taught me a valuable lesson: you cannot write to waveform RAM while the wave channel is playing.

Okay. No problem. I’ll just turn it off, write to wave RAM, then turn it back on. Turning it off clears the frequency, but that’s fine, I can just write it again.

```
    ; Disable channel 3 to allow writing to wave RAM
    xor a
    ldh [rAUD3ENA], a

    ; ... do the copy ...

    ld a, $80
    ldh [rAUD3ENA], a
    ld a, LOW(CH3_FREQUENCY)
    ldh [rAUD3LOW], a
    ld a, $80 | HIGH(CH3_FREQUENCY)
    ldh [rAUD3HIGH], a
```

Okay! Perfect! I’m so ready for a meow!!!

why god why

This is what I get in mGBA and SameBoy. Ironically, it plays fine in bgb. It seems I have come to an impasse.
### Why

After a Herculean amount of debugging and discussion with people who actually know what they’re talking about, here’s what I understand to be happening.

When the wave channel first starts playing, it doesn’t correctly read the very first nybble; instead, it uses the high nybble of whatever was already in its own internal buffer. Disabling the wave channel sets its internal buffer to all zeroes. I disable the wave channel every time it plays. Effectively, every 32nd sample starting with the first is treated as zero, which is the most extreme negative value, which is why the playback looks like this (bearing in mind that mGBA’s audio is currently upside-down):

For whatever reason, bgb doesn’t emulate this spiking, so it plays fine. I’m told the spiking also happens on actual hardware, but the speakers are cheap so it’s harder to notice.

SamplePlayer isn’t much help here, because it’s subject to the same problem.

### A ray of hope, dashed

But wait! There’s one last thing I can try. Pokémon Yellow has freeform sounds in it, and it doesn’t have this spiking! There’s even a fan disassembly of it!

Alas. Pokémon Yellow doesn’t use channel 3 to play back sounds. It uses channel 1.

How, you ask? Remember when I said earlier that hearing is really just detecting changes in volume? Pokémon Yellow plays a constant square wave and simply toggles it on and off, very rapidly. Channel 3 is 4-bit; the sounds Pokémon Yellow plays are 1-bit, on or off. It’s baffling, but it does work.

I don’t think it’ll work for me, since that means 32 times as many interrupts. In fact, Pokémon Yellow uses a busy loop as a timer, so it effectively freezes the entire rest of the game anytime it plays a Pikachu sound. I’d rather not do that, but… I don’t seem to have a lot of options.

And so I’ve reached a dead end. The spiking seems to be a fundamental bug with the Game Boy sound hardware. I’ve found evidence that it may even still exist in the GBA, which uses a superset of the same hardware. I can’t fix it, I don’t see how to work around it, and it sounds really incredibly bad. After days of effort trying to get this to work, I had to shelve it.

The title of this post is a sort of pun, you see, a play on words—

## To be continued

This work doesn’t correspond to a commit at all; it exists only as a local stash.

Next time: dialogue! And this time it works!

# Cheezball Rising: Maps and sprites

Post Syndicated from Eevee original https://eev.ee/blog/2018/07/15/cheezball-rising-maps-and-sprites/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I get a little asset pipeline working and finally have a real map.

Previously: spring cleaning. Next: resounding failure.

## Recap

The last post only covered some minor problems (including, I grant you, being totally broken), so the current state of the game is basically unchanged from before.

That grass pattern, the grass sprite itself, and the color scheme are all hardcoded — written directly into the source code, by hand. If this game is going to get very far at all, I urgently need a better way to inject some art.

## Constraints

The Game Boy imposes some fairly harsh constraints on the artwork — which is part of the charm!
But now I have to figure out how to work within those constraints most effectively. Here’s what I’ve got to work with. Bear in mind that I intend for the game to be based around 16×16, um, tiles. Okay, it’s extremely confusing that “tile” might refer either to the base size of the artwork or to the Game Boy’s native 8×8 tiles, so I’m going to call the art tiles and the Game Boy’s basic unit a character (which is what the manual does). • The background layer is a grid of 8×8 characters, each of which uses one of eight 4-color background palettes. • The object layer is a set of 8×16 character pairs, each of which uses one of eight 3-color object palettes. These palettes are 3-color because color 0 is always transparent. • No more than 40 objects can appear on screen at the same time. (There is a way to weasel past this limit, but it requires considerable trickery.) • No more than 10 objects can appear in the same row of pixels. (I believe this is a hard limit.) • There are three blocks of 256 chars each. I can divide this between the background and objects more or less however I want, though neither can have more than two blocks (= 512 chars). I’m intending for the game to be based around a 16×16 grid, a fairly common size for the Game Boy. That makes me a little concerned about the per-row object limit — each entity will need to have two Game Boy objects side by side, so I’m really limited to only five entities sharing the same row of pixels. I can’t do much about that quite yet (and only have one entity anyway), but it’s likely to affect how I design maps and draw sprites. The next biggest problem is colors. Each object palette can only have three colors, which in practice means a shadow/outline color, a highlight color, and a base color. This is why every NPC and overworld critter in Pokémon GSC and the Zeldas is basically monochromatic. They pull it off really well by making very effective use of the highlight and shadow colors. Since 16×16 sprites are composed of multiple Game Boy objects, it’s possible to overcome this limit by giving each part of the sprite a different palette. Unfortunately, objects being 8×16 means the sprites are split vertically, when it would be most useful to have different colors for e.g. the head and body. I wish the Game Boy supported 16×8 objects! That’d help a ton with the per-row limit, too. Alas, a few decades too late to change it now. As for the number of chars… well, let’s see. The whole screen is only 160×144, which is 20×18 or 360 chars, so I could allocate two blocks to the background and have 512 — more than enough to cover the entire screen in unique chars! (I expect one block to be more than enough for objects, since I can only show 80 object chars at once anyway.) On the other hand, I’ll need to reserve some of that space for text and UI and whatnot, and each 16×16 tile is composed of four chars. If I very generously allocate a whole block to window dressing (enough for all of ISO-8859-1?), that leaves 256 chars, which is 64 tiles, which is a tileset that fits in an eight-by-eight square. For comparison’s sake, even fox flux’s relatively limited tileset is a sixteen-tile square — four times as big. This feels a little dire. But how can it be dire, when I have enough sprite space to fill the screen and then some? Let’s see here. A pretty good chunk of the fox flux tileset is unused or outright blank. Some of these tiles are art for moving objects that happened to fit in the grid, and those wouldn’t be in the background tileset. 
And while all of the tiles are distinct, a lot of the basic terrain has some significant overlap:

All of the regions of the same color are identical. These 9 distinct tiles could fit into 20 chars if they shared the common parts, rather than the 36 required by naïvely cutting each one into four dedicated chars. (The fox flux grid is 32×32, so everything is twice as big as it will be on the Game Boy, but you get the idea.)

I’m feeling a little better about this, especially knowing I do have enough space to cover the whole screen. Worst case, I could draw the map as though it were a single bitmap. I don’t want to have to rely on that if I can get away with it, though — I suspect I’d need to constantly load chars on the fly, and copying stuff around eats into my CPU budget surprisingly quickly.

### Research

That does get me wondering: what, exactly, do the Oracle games do? I haven’t done any precise measurements, but I’m pretty sure they have more than sixty-four distinct map tiles throughout their large connected worlds. Let’s have a look!

Here I am in the graveyard near the start of Oracle of Ages. The “creepy tree” here is distinct and doesn’t really appear anywhere else, so I found it in the tile viewer (lower right) and will be keeping an eye on it. Note that only the left half of the face is visible; the right half is using the same tiles, flipped horizontally. (The colors are different because the tile viewer shows the literal colors, whereas the game itself is being drawn with a shader.)

Let’s walk left one screen.

Now, this is interesting. The creepy tree is still on the screen here, so its tiles are naturally still loaded. But a bunch of tiles on the left — parts of the dungeon entrance and other graveyard things — have been replaced by town tiles. I’m several screens away from the town!

The next screen up has no creepy trees, but its tiles remain. Of course, they’d have to, since the creepy tree is still visible during a transition. I have to go left from there before the tree disappears:

Wow! At a glance, this looks like enough tiles to draw the entire town.

This is fascinating. The Oracle games have several transitions between major areas, marked by fade-outs or palette changes — the purple-tinted graveyard is an obvious example. But it looks like there are also minor transitions that update the tileset while I’m still several screens away from where those tiles are used. The screens around the transition only use common tiles like grass and regular trees, so I never notice anything is happening. That’s cute, clever, and an easy way to make screen transitions work without having to figure out what tiles are becoming unused as they slide off the screen!

At this point I realize I may be getting ahead of myself. Screen transitions? I don’t have a map yet! Hell, I don’t even have a camera. Time to back up and make something I can build on.

## Designing a tileset

I’m pretty tired of manually translating art into bits. It’s 2018, dammit. I want to use all the regular tools I would use for this, I want the Game Boy’s limitations to be expressed as simply as possible, and I want minimal friction between the source artwork and the game.

Here’s my idea. I know I only have 8 palettes to work with, so I’m decreeing that tilesets will be stored as paletted PNGs. The first four colors in the image palette will become the first Game Boy palette; the next four colors become the second Game Boy palette; and so on.
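In code terms the convention is dead simple. Here’s a sketch of roughly what reading those palettes back out looks like with Pillow — this is illustrative, not the actual script, and the names are made up:

```python
from PIL import Image

def rgb555(r, g, b):
    # Pack a 24-bit color into the GBC's 15-bit format:
    # five bits per channel, red in the low bits
    return (r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10)

def dump_palettes(path):
    img = Image.open(path)
    assert img.mode == "P", "tileset must be an indexed PNG"
    flat = img.getpalette()  # flat list: [r0, g0, b0, r1, g1, b1, ...]
    # Every run of four consecutive palette entries is one GB palette
    count = min(len(flat) // 3, 8 * 4)
    for i in range(count):
        if i % 4 == 0:
            print(f"; palette {i // 4}")
        r, g, b = flat[i * 3:i * 3 + 3]
        print(f"    dw %{rgb555(r, g, b):016b}")

dump_palettes("tileset.png")
```

The key point is that the palette grouping lives entirely in the PNG itself, so the art tool is the single source of truth.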
If I then resize Aseprite’s palette panel to be four colors wide, I’ll have an instant view of all my available combinations of colors. This already has some problems — for starters, if the same color appears in multiple palettes (which will almost certainly happen, for the sake of cohesion), I’m very likely to confuse the hell out of myself. I also have no idea how to extend this into multiple tilesets, but for now I’ll pretend the entire game world only uses a single tileset.

I could instead dynamically infer the palettes based on what combinations of colors are actually used, but after more than a couple tiles, it would be a nightmare for a human to keep track of what those combinations are. With this approach, all a human needs to do is color-drop a pixel from a particular tile and look at what row the color’s in.

After a quick jaunt into the pixel mines, here are some tiles. Or, as viewed in Aseprite:

That’s only one palette, but hopefully you can see what I’m going for here. It’s enough to get started.

At this point, I started writing a little Python script that used Pillow to inspect the colors and pixels and dump them out to rgbasm-flavored source code. The script itself is not especially interesting: run through each 8×8 block of pixels, look at each pixel’s palette index, mod 4 to get the index within the Game Boy palette, print out as backtick literals. (I could spit out raw binary data, but I wanted to be able to inspect the intermediate form easily. Maybe later.) The results:

```
SECTION "Map dumping test", ROM0

TEST_PALETTES:
    dw %0101011110111101
    dw %0101011100011110
    dw %0100101010111100
    dw %0100011001111000
    ; ... enough zeroes to make eight palettes ...

; sorry, in the script I was calling them "tiles", not "chars"
TEST_TILES:
    ; tile 0 at 0, 0
    dw `00001000
    dw `00000000
    dw `00100000
    dw `00000000
    dw `00000000
    dw `00000000
    dw `20000000
    dw `20000002
    ; ... etc ...
```

And hey, I already have code that can load palettes and chars, so all I have to do is swap out the old labels for these ones. Now I have a tileset I can load into the game, which is very exciting, except that I can’t see any of them because I still don’t have a map. I could draw a test map by hand, I suppose, but the whole point of this exercise was to avoid ever doing that again.

## Drawing a map

In keeping with the “it’s 2018 dammit” approach, I elect to use Tiled for drawing the maps. I’ve used it for several LÖVE games, and while its general-purposeness makes it a little clumsy at times, it’s flexible enough to express basically anything.

I make a tileset and create a map. I choose 256×256 pixels (16×16 tiles), the same size as the Game Boy screen buffer, and fill it with arbitrary terrain. In retrospect, I probably should’ve made it the size of the screen, since I still don’t have a camera. Oh, well.

Here, I hit a minor roadblock. I want to do as much work as possible upfront, so I want to store the map in the ROM as chars, not tiles. That means I need to know what chars make up each tile, which is determined by the script that converts the image to char data. Multiple maps might use the same tileset, and a map might use multiple tilesets, so it seems like I’ll need some intermediate build assets with this information…

(In retrospect again, I realize that the game may need to know about tiles rather than just chars, since there’ll surely be at least a few map tiles that act like entities — switches and the like — and those need to function as single units.
I guess I’ll work that out later.)

This is all looking like an awful lot of messing around (and a lot of potential points of failure) before I can get anything on the dang screen. I waffle for a bit, then decide to start with a single step that simultaneously dumps the tiles and the map. I can split it up when I actually have more than one of either.

You can check out the resulting script if you like, but again, I don’t think it’s particularly interesting. It enforces a few more constraints than before, and adds a TEST_MAP_1 label containing all the char data, row by row. Loading that into VRAM is almost comically simple:

```
    ; Read from the test map
    ld hl, $9800
    ld de, TEST_MAP_1
    ld bc, 1024
    call copy16
```

The screen buffer is 32×32 chars, or 1024 bytes. As you may suspect, copy16 is like copy, but it takes a 16-bit count in bc.

```
; copy bc bytes from de to hl
; NOTE: bc must not be zero
copy16:
    ld a, [de]
    inc de
    ld [hl+], a
    dec bc
    ; dec bc doesn't set flags, so gotta check by hand
    ld a, b
    or a, c
    jr nz, copy16
    ret
```

Hm. It’s a little harder to justify the bc = 0 case as a feature here, since that would try to overwrite every single byte in the entire address space. Don’t do that, then.

Now, at long long last, I have a background with some actual art! It’s starting to feel like something! I’ve even got something resembling a workflow. All in a day’s work. Good time to call it, right?

Except I just wrote this char loading code… And there’s still one thing still hardcoded… I wonder if I could do something about that…?

## Sprites

Above, I conspicuously did not mention how I integrated the Python script into the build system. And, well, I didn’t do that. I ran it manually and put it somewhere and committed it all as-is. You currently (still!) can’t actually build the game without repeating my steps. You can’t even just put the output in the right place, because you also have to delete some debug output from the middle of the file.

It gets worse! Here’s how. I have some Anise walking sprites, too, drawn in Aseprite. They’re pretty cute and I’d love to have them in the game, now that I have some Real Art™ for the background. Why not throw these at the same script and hack them into animating?

Unfortunately, this introduces a bit of manual work, as animation often does. (My kingdom for a way to embed a small simple animation in a larger spritesheet in Aseprite!) I’ve typically animated every critter in its own Aseprite file — or stacked several vertically in the same file when their animations are similar enough — and then exported as a sheet with the frames running off horizontally. You can see this at work in fox flux, e.g. on its critter sheet.

But Star Anise introduces a wrinkle that prevents even that slightly clumsy workflow from working. You may have noticed that the walking sprite above blows the color budget considerably, using a whopping five colors. The secret is that Anise himself fits in a 16×16 square, and then his antenna is a third 8×16 sprite drawn on top. I can’t simply export him as a spritesheet, because the antenna needs to be separate, and it’s not even aligned to the grid. It doesn’t even stay in the same place consistently!

I could maybe hack something together that would automatically pull the incompatible pixels into a separate sprite. I might need to, since — spoiler alert — there are an awful lot of Lunekos in this game. For now, though, I did the dumbest thing that works and copied his frames to their own sheet by hand.
The background is actually cyan, not transparent. I had to do this because my setup expects multiple sets of four colors — the first color in an object palette is still there, even if it’s ignored — and only one color in an indexed PNG can be transparent. (Don’t @ me about PNG pixel formats.) I could’ve adjusted it to work with sets of three colors and put the transparent one at the end so the palette column trick still worked, but… this was easier.

Here’s the best part: I took the main function from my tile loading script, copy-pasted it within the same file, and edited the copy to dump these sprites sans map. So now not only is there no build system, but half of the loading script is inaccessible! Sorry. We’re getting into experiment territory and I am going to start making a lot of messes while I figure out what I actually want.

Using these within the game was just as easy as before — replace some labels with new ones — and the only real change was to use a third OAM slot for the antenna. (The antenna has to appear first; when sprites overlap, the one with the lowest index appears on top.) That did make updating OAM a little clumsy; you may recall that before, I loaded the x and y positions into b and c, updated them, then wrote them back into OAM:

```
    ; set b/c to the y/x coordinates
    ld hl, oam_buffer
    ld b, [hl]
    inc hl
    ld c, [hl]

    bit BUTTON_LEFT, a
    jr z, .skip_left
    dec c
.skip_left:
    bit BUTTON_RIGHT, a
    jr z, .skip_right
    inc c
.skip_right:
    bit BUTTON_UP, a
    jr z, .skip_up
    dec b
.skip_up:
    bit BUTTON_DOWN, a
    jr z, .skip_down
    inc b
.skip_down:

    ld [hl], c
    dec hl
    ld [hl], b

    ld a, c
    add a, 8
    ld hl, oam_buffer + 5
    ld [hl], a
    dec hl
    ld [hl], b
```

The above approach required that I hardcode the 8-pixel offset between the left and right halves. With the antenna in the mix, I would’ve had to hardcode another more convoluted offset, and I didn’t like the sound of that. So I changed it to inc and dec the OAM coordinates directly and immediately:

```
    ; Anise update loop
    ; set b/c to the y/x coordinates
    ld bc, 4
    bit BUTTON_LEFT, a
    jr z, .skip_left
    ld hl, oam_buffer + 1
    dec [hl]
    add hl, bc
    dec [hl]
    add hl, bc
    dec [hl]
.skip_left:
    ; ... etc ...
```

Eventually I should stop doing this and have an actual canonical x/y position for Anise somewhere. But I didn’t do that yet.

I did also take this opportunity to change my LCDC flags so that background chars start counting from zero at $9000, fixing the misunderstanding I had before. That’s nice.

Anyway, tada, Star Anise can slide around, but now with his antenna. Not good enough.

## Animating

It’s time to animate something. And this time around, all I’ve got are bytes to work with. Oh, boy!

Right out of the gate, I have two options. I could load all of Anise’s sprites into VRAM upfront and change the char numbers in OAM to animate him, or I could reserve some specific chars and overwrite them to animate him. The first choice makes sense for an entity that might exist multiple times at once, like enemies or… virtually anything in the game world, really. But there’s only ever one player, and he’s likely to have a whole lot of spritework, which I would prefer not to have clogging up my char space for the entire duration of the game. So while I might use the other approach for most other things, I’m going to animate Anise by overwriting the actual graphics. Every frame.

First things first. I’m going to need some state, which I’ve been avoiding by relying on OAM.
At the very least, I need to know which way Anise is facing — which isn’t necessarily the direction he’s moving, because he should keep his facing when he stops. I also need to know which animation frame he’s on, and how many LCD frames are left until he should advance to the next one. Let’s refer to the time between vblanks as a “tic” for now, to avoid the ambiguity of a “frame” when talking about animation.

A good start, then, would be some constants.

```
FACING_DOWN     EQU 0
FACING_UP       EQU 1
FACING_RIGHT    EQU 2
FACING_LEFT     EQU 3

ANIMATION_LENGTH EQU 5
```

ANIMATION_LENGTH is the length of every frame. I don’t especially want to give every frame its own distinct duration if I can avoid it; this will be complicated enough as it is. I fiddled with the frame duration in Aseprite for a bit and landed on 83ms as a nice speed, and that’s 5 tics.

I also need a place for this state, so I add some more stuff to my RAM block.

```
anise_facing:
    db
anise_frame:
    db
anise_frame_countdown:
    db
```

And initialize it in setup.

```
    ld a, FACING_DOWN
    ld [anise_facing], a
    ld a, ANIMATION_LENGTH
    ld [anise_frame_countdown], a
```

Presumably, one day, I’ll have multiple entities, and they’ll all share a similar structure, which I’ll have to traverse manually. For now, it’s easier to follow the code if I give every field its own label.

I have four levels of hierarchy here: the spriteset (which for now is always Anise’s), the pose (I only have one: walking), the facing, and the frame. I need to traverse all four, but luckily I can ignore the first two for now.

I don’t want to animate Anise when he’s not moving, so I changed the OAM updating code to also ld d, 1 if there’s any movement at all, and skip over all the animation stuff if d is still zero.

```
    ; ... read input ...

    ; This was before I knew the 'or a' trick; these two ops
    ; could be replaced with 'xor a; or d'
    ld a, d
    cp a, 0
    jp z, .no_movement

    ; ... all the animation code will go here ...

.no_movement:
    ; and after this we repeat the main loop
```

This does have the side effect that Anise will simply freeze in mid-walk when stopped, rather than returning to his standing pose. I still haven’t fixed that; I could special-case it, but I usually treat “standing” as its own one-frame animation, so it feels like something that ought to come when I implement poses.

Next I decrement the countdown, which is the number of tics left until the frame ought to change. If this is nonzero, I don’t need to do anything.

```
    ld a, [anise_frame_countdown]
    dec a
    ld [anise_frame_countdown], a
    jp nz, .no_movement
    ld a, ANIMATION_LENGTH
    ld [anise_frame_countdown], a
```

Again, this isn’t actually right. If Anise’s state changes, such as between standing and walking, then this should be ignored because he’s switching to a new animation. But this is a pose thing again, so I’m deferring it until later.

Next I need to advance the current frame. I don’t have modulo on hand and even simple ifs are kind of annoying, so I was naughty here and used bitops to roll from frame 3 to frame 0. This would obviously not work if the number of frames were not a power of two.

```
    ld a, [anise_frame]
    inc a
    and a, 4 - 1
    ld [anise_frame], a
```

Yet again, if Anise changes direction, the frame should be reset to zero… but it ain’t.

Now, let’s think for a second. I know what frame I want. I have a label for the upper-left corner of the spritesheet, and I want to get to the upper-left corner of the appropriate frame.
Each frame has 3 objects; each object has 2 chars; each char is 16 bytes.

```
    ld hl, ANISE_TEST_TILES
    ; Skip ahead 3 sprites * the current frame
    ld bc, 3 * 2 * 16
    ; Remember, zero iterations is also possible
    or a
    jr z, .skip_advancing_frame
.advance_frame:
    add hl, bc
    dec a
    jr nz, .advance_frame
.skip_advancing_frame:
    ; Copy the sprites into VRAM
    ; They're consecutive in both the data and VRAM, so only
    ; one copy is necessary.  And bc is already right!
    ld d, h
    ld e, l
    ld hl, $8000
    call copy16
```

Hey, look at that! Only one small problem: I forgot about facing, so Anise will always face forwards no matter how he moves. Whoops!

## Facing

I need to actually track which way Anise is facing, which is a surprisingly subtle question. He might even be facing away from his own direction of movement, if for example he was thrown backwards by some external force. A decent first approximation is to use the last button that was pressed. (That’s still not quite right — if you hold down, hold down+right, and then release right, he should obviously face down. But it’s a start.)

I don’t yet track which buttons were pressed this frame, but it’s easy enough to add. While I’m at it, I might as well track which buttons were released, too. I amend the input reading code thusly, based on the straightforward insight that a button was pressed this frame iff it is currently 1 and was previously 0.

```
    ; a now contains the current buttons
    ld hl, buttons
    ld b, [hl]  ; b <- previous buttons
    ld [hl], a  ; a -> current buttons
    cpl
    and a, b
    ld [buttons_released], a  ; a = ~new & old, i.e. released
    ld a, [hl]  ; a <- current buttons
    cpl
    or a, b
    cpl
    ld [buttons_pressed], a  ; a = ~(~new | old), i.e. pressed
```

I like that cute trick for getting the pressed buttons. I need a & ~b, but cpl only works on a, so I would’ve had to juggle a bunch of registers. But applying De Morgan’s law produces ~(~a | b), which only requires complementing a. (Full disclosure: I didn’t actually try register juggling, and for all I know it could end up shorter somehow.)

Next I check the just-pressed buttons and update facing accordingly. It looks a lot like the code for checking the currently-held buttons, except that I only use the first button I find.

```
    ld hl, anise_facing
    ld a, [buttons_pressed]
    bit BUTTON_LEFT, a
    jr z, .skip_left2
    ld [hl], FACING_LEFT
    jr .skip_down2
.skip_left2:
    ; ... you get the idea ...
```

And finally, amend the sprite choosing code to pick the right facing, too.

```
    ld hl, ANISE_TEST_TILES
    ; Skip ahead a number of /rows/, corresponding to facing
    ld a, [anise_facing]
    and a, %11  ; cap to 4, just in case
    jr z, .skip_stride_row
    ; This is like before, but times 4 frames
    ld bc, 4 * 3 * 2 * 16
.stride_row:
    add hl, bc
    dec a
    jr nz, .stride_row
.skip_stride_row:

    ; Bumping the frame here is convenient, since it leaves the
    ; frame in a for the next part
    ld a, [anise_frame]
    inc a
    and a, 4 - 1
    ld [anise_frame], a

    ; ... continue on with picking the frame ...
```

Hardcoding the number of frames here is… unfortunate. I should probably flip the spritesheet so the frames go down and each column is a facing; then there’ll always be a fixed number of columns to skip over. But who cares about that? Look at Anise go! Yeah!
Well, yes, there is one final problem, which is that the antenna is misaligned when walking left or right… because its positioning is different than when walking up or down, and I don’t have any easy way to encode that at the moment. It’s still like that, in fact. I’m sure I’ll fix it eventually.

## More vblank woes

I didn’t run into this problem until a little while later, but I might as well mention it now. The above code writes into VRAM in the middle of updating entities — updating them very simply, perhaps, but updating nonetheless. If that updating takes longer than vblank, the write will fail. I expected this, though not quite so soon. It’s a disadvantage of swapping the char data rather than the char references: 32× more writing to do, which will take 32× longer.

The solution is similar to what I do for OAM: defer the write until the next vblank. I’m already doing that with Anise’s position, anyway, and it makes no sense to have his position and animation updated on different frames. I ended up special-casing this for Anise, though it wouldn’t be too hard to extend this into a queue of tiles to copy. It’s nothing too world-shaking; I just store the address of Anise’s current sprite in RAM, then copy it over during vblank, just after the OAM DMA.

I did try doing this with one of the Game Boy Color’s new features, general-purpose DMA, which can copy from basically anywhere in ROM or RAM to basically anywhere in VRAM. It involves five registers: you write the source address in the first two, the destination in the next two, and the length in the fifth, which triggers the copy. The CPU simply freezes until the copy is done, so there are no goofy timing issues here.

```
    ld hl, anise_sprites_address
    ld a, [hl+]
    ld [rHDMA1], a
    ld a, [hl]
    ld [rHDMA2], a
    ld a, HIGH($0000)
    ld [rHDMA3], a
    ld a, LOW($0000)
    ld [rHDMA4], a
    ; To copy X bytes, write X / 16 - 1 to this register
    ld a, (32 * 3) / 16 - 1
    ld [rHDMA5], a
```

General-purpose DMA can copy 16 bytes every 8 cycles, or ½ cycle per byte. The fastest possible manual copy would be an unrolled series of ld a, [hl+]; ld [bc], a; inc bc, which takes a whopping 6 cycles per byte — twelve times slower! This is a neat feature. FYI, it’s also possible to have a copy done piecemeal during hblanks, though that sounds a bit fragile to me.

## Future work

I’ve laid some very basic groundwork here, and there’s plenty more to do, which I will get back to later! It’s just me hacking all this together, after all, and I like flitting between different systems.

I will definitely need to figure out how the heck multiple tilesets work and when they get switched out. How do I even use multiple tilesets, each with its own set of palettes? What’s the workflow if I want to use the same tiles with several different palettes, like how the graveyard in Oracle of Ages is tinted purple? And I didn’t even implement character de-duplication yet… which will require some metadata for each tile… aw, geez. And I still haven’t fixed the build system! Maybe you can understand why I’m hesitant to impose more structure on this idea quite yet.

## To be continued

That brings us to commit 59ff18. Except for a commit about the build that I skipped. Whatever.

This post has been a little more draining to write, perhaps because it forced me to confront and explain a bunch of hokey decisions. Next time: resounding failure!
# Cheezball Rising: Spring cleaning

Post Syndicated from Eevee original https://eev.ee/blog/2018/07/13/cheezball-rising-spring-cleaning/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console! GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I tidy up some of the gigantic mess I’ve made thusfar. Previously: writing a main loop, and finally getting something game-like.

## Recap

After only a few long, winding posts’ worth of effort, I finally have a game, if you define “game” loosely as a thing that reacts when you press buttons. Beautiful.

But to make an omelette, you need to break a few eggs, and if it’s your first omelette then you might break some glassware too. As tiny as this game is, a couple things could use improvement. Also, for narrative purposes, it’s much more interesting to put all these miscellaneous fixes together, rather than interrupting other posts with them. I didn’t actually do all this work in one lump in this order. Apologies to the die-hard non-fiction crowd.

## It’s totally broken

Ah, the elephant in the room. The end of the previous post aligned with the first demo build, but if you downloaded it and tried to play it, you may have seen something that looks more like this:

I said in the beginning that I liked mGBA and would be developing against it. That’s still true — it’s open source (and I’ve actually read some of it), it’s cross-platform, and it has some debug tools built in. I also said that emulators are primarily designed to accept correct games, not necessarily to reject incorrect games. And that’s still very true.

I discovered this problem myself a little later (after the events of the next post), while shopping around a bit for emulators explicitly focused on accuracy. The one I keep being told to use is bgb, but it’s for Windows and Wine is kind of annoying, so I was exploring my other options; I found SameBoy (primarily for Mac, but with Linux and Windows builds sans debug features) and Gambatte (cross-platform, and the core for RetroArch’s Game Boy emulation). All three of them looked like the screenshot above.

Something was going very wrong when writing to VRAM. You can’t write to VRAM while the LCD is redrawing, so the most obvious cause is that… well… maybe the LCD is redrawing during my setup code.

Remember, on an actual Game Boy, the system doesn’t immediately start running what’s on the cartridge — it scrolls in the Nintendo logo first (or on a Color, does a fancier logo with a cool fanfare). That’s done by a tiny internal program called the boot ROM, and the state of the LCD when the boot ROM hands over control is undefined. I’m sure it’s consistent, but it’s not anything in particular, and for all I know it might be when the LCD is halfway through a redraw.

(Side note: I am violating Nintendo’s game submission requirements by consistently referring to it as a “cartridge” when in fact it is properly called a Game Pak. My bad.)

So what we’re seeing above is the result of VRAM becoming locked and unlocked as the LCD draws (remember, after every row is an hblank, during which time VRAM is accessible), while I’m trying to copy blocks of data there. In fact, every emulator I’ve tried shows a slightly different form of corruption, since this problem is very sensitive to timing accuracy. Super interesting!
I could wait for vblank and try to squeeze in all my setup code there, maybe even split across several vblanks. But since this is setup code and doesn’t run during gameplay, there’s a much easier solution: turn the screen off. That’s done with a bit in the LCDC register, which I currently configure at the end of my setup code; all I need to do is move that to the beginning and clear the appropriate bit instead.

```
    ld a, %00010111  ; $91 plus bit 2, minus bit 7
    ld [$ff40], a
```

Then, of course, set it again once I’m done. I did this with a couple macros, since it’s only a few instructions and it seems like the kind of thing I might need again later.

```
DisableLCD: MACRO
    ld a, [$ff40]
    and a, %01111111
    ld [$ff40], a
ENDM

EnableLCD: MACRO
    ld a, [$ff40]
    or a, %10000000
    ld [$ff40], a
ENDM

; and, of course, stick an EnableLCD at the end of setup code
```

Note that when the screen is off, it’s off, and there are no vblank interrupts or anything else that might be triggered by the screen’s behavior. So, you know, don’t wait for vblank while the screen’s off. When the screen turns back on, it immediately starts redrawing from the first row, so don’t try to use VRAM right away either.

Finally, on the original Game Boy, do not turn off the screen when it’s not in vblank, or you might physically damage the screen. It’s fine on the Game Boy Color, but… hell, I’m gonna edit this to wait for vblank anyway. Feels kinda inappropriate to abruptly turn off the screen halfway through drawing.

Anyway, that solves my goofy corruption problems, and now the game looks the same on all of these emulators! I also reported this misbehavior, and it’s since been fixed, so recent dev builds of mGBA also correctly render garbage for the first release. See, by not targeting the most accurate emulators, I’ve caused another emulator to become more accurate!

## hardware.inc

I mentioned last time that I’d adopted hardware.inc. That’s in large part because I keep producing monstrosities like the previous snippet. Here are those macros with some symbolic constants:

```
DisableLCD: MACRO
    ld a, [rLCDC]
    and a, $ff & ~LCDCF_ON
    ld [rLCDC], a
ENDM

EnableLCD: MACRO
    ld a, [rLCDC]
    or a, LCDCF_ON
    ld [rLCDC], a
ENDM
```

A breath of fresh air! The $ff & is necessary because the argument needs to fit in a byte, but rgbasm’s integral preprocessor type is wider than a byte. I suppose I could also use LOW() here, or maybe there’s some other more straightforward solution.

## Rearranging the buttons

In the previous post, I read the button states and crammed them into a single byte. I had a choice of whether to put the dpad low or the buttons low, but it didn’t seem to matter, so I picked arbitrarily: buttons high, dpad low.

It turns out I chose wrong! Also, it turns out there’s a “wrong” here! I’ve heard two compelling reasons to do it the other way. For one, hardware.inc contains constants for the bit offsets of the buttons, and it assumes the dpad is high. Why is this arbitrary data layout decision embedded in a list of hardware constants? Possibly for the second reason: on the GBA, input is available as a single word, and the lowest byte contains bits for all the buttons on the Game Boy — in the same order, with the dpad high.

So I’m switching this around and using hardware.inc’s constants. Easy change.

## Fixing vblank

My original approach to waiting for vblank seemed simple enough: loop until vblank_flag is set, clear it, then continue on.
I’ve made a slight oversight here: what if the main loop does take longer than a frame? Then a vblank interrupt will fire in the middle of it and harmlessly set vblank_flag. But when the loop finally finishes and goes to wait for vblank again, the flag will already be set, and it’ll continue on immediately — regardless of the state of the screen! Whoops.

Again, the fix is simple: clear the flag before beginning to wait. And while I’m at it, I see other uses for waiting for vblank in the near future, so I may as well pull this out into a function.

```
; idle until next vblank
wait_for_vblank:
    xor a                ; clear the vblank flag
    ld [vblank_flag], a
.vblank_loop:
    halt                 ; wait for interrupt
    ld a, [vblank_flag]  ; was it a vblank interrupt?
    and a
    jr z, .vblank_loop   ; if not, keep waiting
    ret
```

## Copy function

So far, I’ve done an awful lot of runtime copying by using the preprocessor. Consider the code for copying the DMA routine into HRAM:

```
    ; Copy the little DMA routine into high RAM
    ld bc, dma_copy
    ld hl, $ff80
    REPT dma_copy_end - dma_copy
    ld a, [bc]
    inc bc
    ld [hl+], a
    ENDR
```

This will repeat the ld/inc/ld dance 13 times in the built ROM. Which is fine, except that I’m about to have places where I do much more copying, and there’s only so much space in the ROM, and this is kind of ridiculous. So I guess I will finally write a copy function. I’m calling it copy, not memcpy. What else am I going to copy, if not memory?

Attempt number 1 looked like this:

```
; copy d bytes from bc to hl
copy:
    ld a, [bc]
    inc bc
    ld [hl+], a
    dec d
    jr z, copy
    ret
```

I was then informed that it’s more idiomatic to use de as the source address and c as the count, possibly for some reason relating to the NES or SNES? I don’t remember. I’m totally on board for using c to mean a count, though, and started doing that elsewhere.

I went to change that, and actually make use of this function, and lo! I discovered a colossal bug. That last line, jr z, copy, will loop only if d was just decremented to zero. So this function will only ever copy one byte, unless you asked to copy only one byte, in which case it copies two. This is not the first time I’ve gotten a condition backwards. I’ll get used to it eventually, I’m sure.

Oh, one other minor problem: if you ask to copy zero bytes, you’ll actually copy 256, since the zero check only comes after the decrement. (This is a recurring annoyance, actually, and makes while loops surprisingly clumsy to express.) So far I’ve only ever needed to copy a constant amount, so this hasn’t been a problem, but… I’ll just leave a comment pretending it’s a feature.

```
; copy c bytes from de to hl
; NOTE: c = 0 means to copy 256 bytes!
copy:
    ld a, [de]
    inc de
    ld [hl+], a
    dec c
    jr nz, copy
    ret
```

And here it is in action:

```
    ; Copy the little DMA routine into high RAM
    ld de, dma_copy
    ld hl, $FF80
    ld c, dma_copy_end - dma_copy
    call copy
```

Cool.

Of course, this is now significantly slower than the original unrolled version. The original took 13 × (2 + 2 + 2) = 78 cycles; the function adds 6 cycles for the call, 4 cycles for the ret, and 13 × (1 + 3) = 52 for the counting and jumping. As c goes to infinity, the function takes about ⅔ longer than unrolling.

If I feel like it, I could mitigate this somewhat by partially unrolling. First I’d mask off some lower bits of c — say, the lowest two — and copy that many bytes.
Now the amount of copying left is a multiple of four, so I could shift c right twice and have another loop that copies four bytes at a time, amortizing the cost of the decrement and jump. It’s not urgent enough for me to want to bother yet, and it’ll make relatively little difference for small copies like this DMA one, but I’m strongly considering it for copying a 16-bit amount.

## Reset vectors

Now I have a couple utility functions like copy and wait_for_vblank. I don’t really care where they go, so I put them in their own SECTION and let the linker figure it out. It took a while for me to notice where, exactly, the linker had put them: at $0000!

These functions are small, and I have nothing explicitly placed before the interrupt handlers (which begin at $0040), so rgblink saw some empty space and filled it. The thing is, the Game Boy has eight instructions of the form rst $xx that act as fast calls — each one jumps to a fixed low address (a “reset vector”), using less time and space than a call would. And those fixed $xx addresses are… $00, and every eight bytes afterwards.

I don’t have any immediate use for these — eight bytes isn’t a lot, though I guess copy could fit in there — but I probably don’t want arbitrary code ending up where they go, so for now I’ll stub them out like I stubbed out the interrupt handlers.

(I have been advised of one very good use for reset vectors: putting a crash handler at $38. Why? Because rst $38 is encoded as $ff, which is a fairly common byte to encounter if you accidentally jump into garbage. A lot of the Game Boy’s RAM is even initialized to $ff at startup.)

## Idioms

I’m still discovering what’s considered idiomatic, but here are a couple tidbits.

The set of instructions is a little scattershot as far as arguments go. Several times early on, I wrote stuff like this:

```
    ld hl, some_address
    ld a, 133
    ld [hl], a
```

But I overlooked that there are instructions for both ld [hl], n8 and ld [n16], a, so the above can be reduced to two lines. There’s no such thing as ld [n16], n8, though.

A surprising number of instructions can use [hl] directly as an operand — even inc and dec, combining fetch/mutate/store into a single instruction.

xor a is twice as short and twice as fast as ld a, 0. I mean, we’re talking about a single byte and single cycle here, but no reason not to. (xor a really means xor a, a, but since every boolean op instruction takes a as the first argument anyway, it can be omitted. I don’t like to omit it in most cases, since xor b doesn’t mention a at all and that seems misleading, but it feels appropriate when combining a with itself.)

or a (equivalently, and a) is a quick way to test whether a is zero, since boolean ops set the zero flag.

## Color

This is neither here nor there, but since this post began with emulator differences, here’s another one.

The screen you’re reading this on is almost certainly backlit, but the original Game Boy Color screen was not. A fully white pixel on a Game Boy Color is turned off — it’s the color of the screen itself, in which you can probably see your own reflection. Which raises a tricky question: what color is that? The game thinks it’s pure white, but the screen was a sort of pale yellow. So how should it be rendered in an emulator, on a modern backlit LCD monitor?

Compounding this problem is that Game Boy Color games can also run on the Game Boy Advance, which showed the colors yet slightly differently. And, of course, even monitors may be calibrated differently, in which case it all goes out the window.
It’s interesting to see different emulators’ opinions of how to render color:

This is exactly the same ROM. The top left is mGBA out of the box, which shows colors completely unaltered — usually fairly saturated. The top right is mGBA with its “gba-colors” shader enabled, which is supposed to replicate how colors appear on a GBA screen, but seems passingly similar to a GBC too. Then on the bottom are two emulators renowned for their accuracy, here wildly disagreeing with each other. My Game Boy Color is currently in a box somewhere, and until I can find it, I can’t be sure who’s closer.

All of these are perfectly fine interpretations of the same art, though. I may or may not use the “gba-colors” shader, and may or may not fiddle with mGBA’s color settings over time. If the colors vary a bit in future screenshots, that’s probably why.

## To be continued

This post doesn’t really correspond to a particular commit very well, since it’s all little stuff I did here and there. I hope you’ve enjoyed the breather, because it’s all downhill from here. In a good way, I mean. Like a rollercoaster.

Next time: map and sprite loading, which will explain how I got from grass to the moon texture in the screenshots above!

# Cheezball Rising: Main loop, input, and a game

Post Syndicated from Eevee original https://eev.ee/blog/2018/07/05/cheezball-rising-main-loop-input-and-a-game/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console! GitHub has intermittent prebuilt ROMs, or you can get them a week early on Patreon if you pledge $4. More details in the README!

In this issue, I fill in the remaining bits necessary to have something that looks like a game. Previously: drawing a sprite.

## Recap

So far, I have this. It took unfathomable amounts of effort, but it’s something! Now to improve this from a static image to something a bit more game-like.

Quick note: I’ve been advised to use the de facto standard hardware.inc file, which gives symbolic names to all the registers and some of the flags they use. I hadn’t introduced it yet while doing the work described in this post, but for the sake of readability, I’m going to pretend I did and use that file’s constants in the code snippets here.

## Interrupts

To get much further, I need to deal with interrupts. And to explain interrupts, I need to briefly explain calls.

Assembly doesn’t really have functions, only addresses and jumps. That said, the Game Boy does have call and ret instructions. A call will push the PC register (program counter, the address of the current instruction) onto the stack and perform a jump; a ret will pop into the PC register, effectively jumping back to the source of the call. There are no arguments, return values, or scoping; input and output must be mediated by each function, usually via registers.

Of course, since registers are global, a “function” might trample over their values in the course of whatever work it does. A function can manually push and pop 16-bit register pairs to preserve their values, or leave it up to the caller for speed/space reasons. All the conventions are free for me to invent or ignore. A “function” can even jump directly to another function and piggyback on the second function’s ret, kind of like Perl’s goto &sub… which I realize is probably less common knowledge than how call/return work in assembly.

Interrupts, then, are calls that can happen at any time.
When one of a handful of conditions occurs, the CPU can immediately (or, rather, just before the next instruction) call an interrupt handler, regardless of what it was already doing. When the handler returns, execution resumes in the interrupted code. Of course, since they might be called anywhere, interrupt handlers need to be very careful about preserving the CPU state. Pushing af is especially important (and this is the one place where af is used as a pair), because a is necessary for getting almost anything done, and f holds the flags which most instructions will invisibly trample.

The Game Boy has five interrupts, each with a handler at a fixed address very low in ROM. Each handler only has room for eight bytes’ worth of instructions, which is enough to do a very tiny amount of work — or to just jump elsewhere. A good start is to populate each one with only the reti instruction, which returns as usual and re-enables interrupts. The CPU disables interrupts when it calls an interrupt handler (so they thankfully can’t interrupt themselves), and returning with only ret will leave them disabled.

```
; Interrupt handlers
SECTION "Vblank interrupt", ROM0[$0040]
    ; Fires when the screen finishes drawing the last physical
    ; row of pixels
    reti

SECTION "LCD controller status interrupt", ROM0[$0048]
    ; Fires on a handful of selectable LCD conditions, e.g.
    ; after repainting a specific row on the screen
    reti

SECTION "Timer overflow interrupt", ROM0[$0050]
    ; Fires at a configurable fixed interval
    reti

SECTION "Serial transfer completion interrupt", ROM0[$0058]
    ; Fires when the serial cable is done?
    reti

SECTION "P10-P13 signal low edge interrupt", ROM0[$0060]
    ; Fires when a button is released?
    reti
```

These will do nothing. I mean, obviously, but they’ll do even less than nothing until I enable them. Interrupts are enabled by the dedicated ei instruction, which enables any interrupts whose corresponding bit is set in the IE register ($ffff). So… which one do I want?

## Game loop

To have a game, I need a game loop. The basic structure of pretty much any loop looks like:

1. Set up the game.
2. Check for input.
3. Update the game state.
4. Draw the game state.
5. GOTO 2

(If you’ve never seen a real game loop written out before, LÖVE’s default loop is a good example, though even a huge system like Unity follows the same basic structure.)

The Game Boy seems to introduce a wrinkle here. I don’t actually draw anything myself; rather, the hardware does the drawing, and I tell it what to draw by using the palette registers, OAM, and VRAM. But in fact, this isn’t too far off from how LÖVE (or Unity) works! All the drawing I do is applied to a buffer, not the screen; once the drawing is complete, the main loop calls present(), which waits until vblank and then draws the buffer to the screen. So what you see on the screen is delayed by up to a frame, and the loop really has an extra “wait for vsync” step at 3½. Or, with a little rearrangement:

1. Set up the game.
2. Wait for vblank.
3. Draw the game state.
4. Check for input.
5. Update the game state.
6. GOTO 2

This is approaching something I can implement! It works out especially well because it does all the drawing as early as possible during vblank. That’s good, because the LCD operation looks something like this:

```
LCD redrawing...
LCD redrawing...
LCD redrawing...
LCD redrawing...
VBLANK
LCD idle
LCD idle
```

While the LCD is refreshing, I can’t (easily) update anything it might read from. I only have free control over VRAM et al.
during a short interval after vblank, so I need to do all my drawing work right then to ensure it happens before the LCD starts refreshing again. Then I’m free to update the world while the LCD is busy.

First, right at the entry point, I enable the vblank interrupt. It’s bit 0 of the IE register, but hardware.inc has me covered.

```
main:
    ; Enable interrupts
    ld a, IEF_VBLANK
    ldh [rIE], a
    ei
```

Next I need to make the handler actually do something. The obvious approach is for the handler to call one iteration of the game loop, but there are a couple problems with that. For one, interrupts are disabled when a handler is called, so I would never get any other interrupts. I could explicitly re-enable interrupts, but that raises a bigger question: what happens if the game lags, and updating the world takes longer than a frame? With this approach, the game loop would interrupt itself and then either return back into itself somewhere and cause untold chaos, or take too long again and eventually overflow the stack. Neither is appealing.

An alternative approach, which I found in gb-template but only truly appreciated after some thought, is for the vblank handler to set a flag and immediately return. The game loop can then wait until the flag is set before each iteration, just like LÖVE does. If an update takes longer than a frame, no problem: the loop will always wait until the next vblank, and the game will simply run more slowly.

```
SECTION "Vblank interrupt", ROM0[$0040]
    push hl
    ld hl, vblank_flag
    ld [hl], 1
    pop hl
    reti

; ...

SECTION "Important twiddles", WRAM0[$C000]
; Reserve a byte in working RAM to use as the vblank flag
vblank_flag: db
```

The handler fits in eight bytes — the linker would yell at me if it didn’t, since another section starts at $0048! — and leaves all the registers in their previous states. As I mentioned before, I originally neglected to preserve registers, and some zany things started to happen as a and f were abruptly altered in the middle of other code. Whoops!

Now the main loop can look like this:

```
main:
    ; ... bunch of setup code ...

vblank_loop:
    ; Main loop: halt, wait for a vblank, then do stuff
    ; The halt instruction stops all CPU activity until the
    ; next interrupt, which saves on battery, or at least on
    ; CPU cycles on an emulator's host system.
    halt

    ; The Game Boy has some obscure hardware bug where the
    ; instruction after a halt is occasionally skipped over,
    ; so every halt should be followed by a nop.  This is so
    ; ubiquitous that rgbasm automatically adds a nop after
    ; every halt, so I don't even really need this here!
    nop

    ; Check to see whether that was a vblank interrupt (since
    ; I might later use one of the other interrupts, all of
    ; which would also cancel the halt).
    ld a, [vblank_flag]
    ; This sets the zero flag iff a is zero
    and a
    jr z, vblank_loop

    ; This always sets a to zero, and is shorter (and thus
    ; faster) than ld a, 0
    xor a, a
    ld [vblank_flag], a

    ; Use DMA to update object attribute memory.
    ; Do this FIRST to ensure that it happens before the
    ; screen starts to update again.
    call $FF80

    ; ... update everything ...

    jp vblank_loop
```

It’s looking all the more convenient that I have my own copy of OAM — I can update it whenever I want during this loop! I might need similar facilities later on for editing VRAM or changing palettes.
## Doing something and reading input

I have a loop, but since nothing’s happening, that’s not especially obvious. Input would take a little effort, so I’ll try something simpler first: making Anise move around.

I don’t actually track Anise’s position anywhere right now, except for in the OAM buffer. Good enough. In my main loop, I add:

```
    ld hl, oam_buffer + 1
    ld a, [hl]
    inc a
    ld [hl], a
```

The second byte in each OAM entry is the x-coordinate, and indeed, this causes Anise’s torso to glide rightwards across the screen at 60ish pixels per second. Eventually the x-coordinate overflows, but that’s fine; it wraps back to zero and moves the sprite back on-screen from the left. Excellent. I mean, sorry, this is extremely hard to look at, but bear with me a second.

This would be a bit more game-like if I could control it with the buttons, so let’s read from them. There are eight buttons: up, down, left, right, A, B, start, select. There are also eight bits in a byte. You might suspect that I can simply read an I/O register to get the current state of all eight buttons at once.

Ha, ha! You naïve fool. Of course it’s more convoluted than that.

That single byte thing is a pretty good idea, though, so what I’ll do is read the input at the start of the frame and coax it into a byte that I can consult more easily later. Turns out I pretty much have to do that, because button access is slightly flaky. Even the official manual advises reading the buttons several times to get a reliable result. Yikes.

Here’s how to do it. The buttons are wired in two groups of four: the dpad and everything else. Reading them is thus also done in two groups of four. I need to use the P1 register, which I assume is short for “player 1” and is so named because the people who designed this hardware had also designed the two-player NES? Bits 5 and 6 of P1 determine which set of four buttons I want to read, and then the lower nybble contains the state of those buttons. Note that each bit is set to 1 if the button is released; I think this is a quirk of how they’re wired, and what I’m doing is extremely direct hardware access. Exciting! (Also very confusing on my first try, where Anise’s movement was inverted.)

The code, which is very similar to an example in the official manual, thus looks like this:

```
    ; Poll input
    ; The direct hardware access is nonsense and unreliable, so
    ; just read once per frame and stick all the button states
    ; in a byte

    ; Bit 6 means to read the dpad
    ld a, $20
    ldh [rP1], a
    ; But it's unreliable, so do it twice
    ld a, [rP1]
    ld a, [rP1]
    ; This is 'complement', and flips all the bits in a, so now
    ; set bits will mean a button is held down
    cpl
    ; Store the lower four bits in b
    and a, $0f
    ld b, a

    ; Bit 5 means to read the buttons
    ld a, $10
    ldh [rP1], a
    ; Apparently this is even more unreliable??  No, really, the
    ; manual does this: two reads, then six reads
    ld a, [rP1]
    ld a, [rP1]
    ld a, [rP1]
    ld a, [rP1]
    ld a, [rP1]
    ld a, [rP1]
    ; Again, complement and mask off the lower four bits
    cpl
    and a, $0f

    ; b already contains four bits, so I need to shift something
    ; left by four...  but the shift instructions only go one
    ; bit at a time, ugh!  Luckily there's swap, which swaps the
    ; high and low nybbles in any register
    swap a
    ; Combine b's lower nybble with a's high nybble
    or a, b
    ; And finally store it in RAM
    ld [buttons], a

; ...

SECTION "Important twiddles", WRAM0[$C000]
vblank_flag: db
buttons: db
```

Phew. That was a bit of a journey, but now I have the button state as a single byte. To help with reading the buttons, I’ll also define a few constants labeling the individual bits. (There are instructions for reading a particular bit by number, so I don’t need to mask a single bit out.)

```
; Constants
BUTTON_RIGHT  EQU 0
BUTTON_LEFT   EQU 1
BUTTON_UP     EQU 2
BUTTON_DOWN   EQU 3
BUTTON_A      EQU 4
BUTTON_B      EQU 5
BUTTON_START  EQU 6
BUTTON_SELECT EQU 7
```

Now to adjust the sprite position based on what directions are held down. Delete the old code and replace it with:

```
    ; Set b/c to the y/x coordinates
    ld hl, oam_buffer
    ld b, [hl]
    inc hl
    ld c, [hl]

    ; This sets the z flag to match a particular bit in a
    bit BUTTON_LEFT, a
    ; If z, the bit is zero, so left isn't held down
    jr z, .skip_left
    ; Otherwise, left is held down, so decrement x
    dec c
.skip_left:
    ; The other three directions work the same way
    bit BUTTON_RIGHT, a
    jr z, .skip_right
    inc c
.skip_right:
    bit BUTTON_UP, a
    jr z, .skip_up
    dec b
.skip_up:
    bit BUTTON_DOWN, a
    jr z, .skip_down
    inc b
.skip_down:

    ; Finally, write the new coordinates back to the OAM
    ; buffer, which hl is still pointing into
    ld [hl], c
    dec hl
    ld [hl], b
```

Miraculously, Anise’s torso now moves around on command! Neat! But this still looks really, really, incredibly bad.

## Aesthetics

It’s time to do something about this artwork. First things first: I’m really tired of writing out colors by hand, in binary, so let’s fix that. In reality, I did this bit after adding better art, but doing it first is better for everyone.

I think I’ve mentioned before that rgbasm has (very, very rudimentary) support for macros, and this seems like a perfect use case for one. I’d like to be able to write colors out in typical rrggbb hex fashion, so I need to convert a 24-bit color to a 16-bit one.

```
dcolor: MACRO
    ; $rrggbb -> gbc representation
_r = ((\1) & $ff0000) >> 16 >> 3
_g = ((\1) & $00ff00) >> 8 >> 3
_b = ((\1) & $0000ff) >> 0 >> 3
    dw (_r << 0) | (_g << 5) | (_b << 10)
ENDM
```

This is going to need a whole paragraph of caveats. A macro is contained between MACRO and ENDM. The assembler has a curious sort of universal assignment syntax, where even ephemeral constructs like macros are introduced by labels. Macros can take arguments, but they aren’t declared; they’re passed more like arguments to shell scripts, where the first argument is \1 and so forth. (There’s even a SHIFT command for accessing arguments beyond the ninth.) Also, passing strings to a macro is some kind of byzantine nightmare where you have to slap backslashes in just the right places and I will probably avoid doing it altogether if I can at all help it.

Oh, one other caveat: compile-time assignments like I have above must start in the first column. I believe this is because assignments are also labels, and labels have to start in the first column. It’s a bit weird and apparently rgbasm’s lexer is horrifying, but I’ll take it over writing my own assembler and stretching this project out any further.

Anyway, all of that lets me write dcolor $ff0044 somewhere and have it translated at compile time to the appropriate 16-bit value. (I used dcolor to parallel db and friends, but I’m strongly considering using CamelCase exclusively for macros? Guess it depends how heavily I use them.)

With that on hand, I can now doodle some little sprites in Aseprite and copy them in.
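(Before the art: a quick sanity check on that bit-shuffling. Here’s the same conversion mirrored in Python — a throwaway snippet I can poke at in a REPL, not part of the build:)

```python
def dcolor(rgb888):
    """Mirror the dcolor macro: $rrggbb -> 15-bit GBC color."""
    r = (rgb888 & 0xff0000) >> 16 >> 3
    g = (rgb888 & 0x00ff00) >> 8 >> 3
    b = (rgb888 & 0x0000ff) >> 0 >> 3
    return (r << 0) | (g << 5) | (b << 10)

# The light green used in the palette below comes out as $3b30
assert dcolor(0x80c870) == 0x3b30
```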
This part is not especially interesting and involves a lot of squinting at zoomed-in sprites.

```
SECTION "Sprites", ROM0
PALETTE_BG0:
    dcolor $80c870  ; light green
    dcolor $48b038  ; darker green
    dcolor $000000  ; unused
    dcolor $000000  ; unused
PALETTE_ANISE:
    dcolor $000000  ; TODO
    dcolor $204048
    dcolor $20b0b0
    dcolor $f8f8f8
GRASS_SPRITE:
    dw `00000000
    dw `00000000
    dw `01000100
    dw `01010100
    dw `00010000
    dw `00000000
    dw `00000000
    dw `00000000
EMPTY_SPRITE:
    dw `00000000
    dw `00000000
    dw `00000000
    dw `00000000
    dw `00000000
    dw `00000000
    dw `00000000
    dw `00000000
ANISE_SPRITE:
    ; ... I'll revisit this momentarily
```

Gorgeous. You may notice that I put the colors as data instead of inlining them in code, which incidentally makes the code for setting the palette vastly shorter as well:

```
    ; Start setting the first color, and advance the internal
    ; pointer on every write
    ld a, %10000000
    ; BCPS = Background Color Palette Specification
    ldh [rBCPS], a

    ld hl, PALETTE_BG0
    REPT 8
    ld a, [hl+]
    ; Same, but Data
    ld [rBCPD], a
    ENDR
```

Loading sprites into VRAM also becomes a bit less of a mess:

```
    ; Load some basic tiles
    ld hl, $8000

    ; Read the 16-byte empty sprite into tile 0
    ld bc, EMPTY_SPRITE
    REPT 16
    ld a, [bc]
    inc bc
    ld [hl+], a
    ENDR

    ; Read the grass sprite into tile 1, which immediately
    ; follows tile 0, so hl is already in the right place
    ld bc, GRASS_SPRITE
    REPT 16
    ld a, [bc]
    inc bc
    ld [hl+], a
    ENDR
```

Someday I should write an actual copy function, since at the moment, I’m using an alarming amount of space for pointlessly unrolled loops. Maybe later.

You may notice I now have two tiles, whereas before I was relying on filling the entire screen with one tile, tile 0. I want to dot the landscape with tile 1, which means writing a bit more to the actual background grid, which begins at $9800 and has one byte per tile.

```
    ; Fill the screen buffer with a pattern of grass tiles,
    ; where every 2x2 block has a single grass at the top left.
    ; Note that the buffer is 32x32 tiles, and it ends at $9c00
    ld hl, $9800
.screen_fill_loop:
    ; Use tile 1 for every other tile in this row.  Note that
    ; REPTed part increments hl /twice/, thus skipping a tile
    ld a, $01
    REPT 16
    ld [hl+], a
    inc hl
    ENDR

    ; Skip an entire row of 32 tiles, which will remain empty.
    ; There is almost certainly a better way to do this, but I
    ; didn't do it.  (Hint: it's ld bc, $20; add hl, bc)
    REPT 32
    inc hl
    ENDR

    ; If we haven't reached $9c00 yet, continue looping
    ld a, h
    cp a, $9C
    jr c, .screen_fill_loop
```

Sorry for all these big blocks of code, but check out this payoff!

POW! Gorgeous. And hey, why stop there? With a little more pixel arting against a very reduced palette…

```
SPRITE_ANISE_FRONT_1:
    dw `00000111
    dw `00001222
    dw `00012222
    dw `00121222
    dw `00121122
    dw `00121111
    dw `00121122
    dw `00121312
    dw `00121313
    dw `00012132
    dw `00001211
    dw `00000123
    dw `00100123
    dw `00011133
    dw `00000131
    dw `00000010
SPRITE_ANISE_FRONT_2:
    dw `11100000
    dw `22210000
    dw `22221000
    dw `22212100
    dw `22112100
    dw `11112100
    dw `22112100
    dw `21312100
    dw `31312100
    dw `23121000
    dw `11210000
    dw `32100000
    dw `32100000
    dw `33100000
    dw `13100000
    dw `01000000
```

Yes, I am having trouble deciding on a naming convention. This is now a 16×16 sprite, made out of two 8×16 parts.
This post has enough code blocks as it is, and the changes to make this work are relatively minor copy/paste work, so the quick version is:

1. Set the LCDC flag (bit 2, or LCDCF_OBJ16) that makes objects be 8×16. This mode uses pairs of tiles, so an object that uses either tile 0 or 1 will draw both of them, with tile 0 on top of tile 1.
3. Define a second sprite that’s 8 pixels to the right of the first one.
4. Remove the hard-coded object palette, and instead load the PALETTE_ANISE that I sneakily included above. This time the registers are called rOCPS and rOCPD.

Finally, extend the code that moves the sprite to also move the second half:

```
; Finally, write the new coordinates back to the OAM
; buffer, which hl is still pointing into
ld [hl], c
dec hl
ld [hl], b
; This bit is new: copy the x-coord into a so I can add 8
; to it, then store both coords into the second sprite's
; OAM data
ld a, c
add a, 8
; I could've written this the other way around, but I did
; not, I guess because this structure mirrors the above?
ld hl, oam_buffer + 5
ld [hl], a
dec hl
ld [hl], b
```

Cross my fingers, and… Hey hey hey! That finally looks like something!

## To be continued

It was a surprisingly long journey, but this brings us more or less up to commit 313a3e, which happens to be the first commit I made a release of! It’s been more than a week, so you can grab it on Patreon or GitHub. I strongly recommend playing it with a release of mGBA prior to 0.7, for… reasons that will become clear next time.

Next time: I’ll take a breather and clean up a few things.

# Cheezball Rising: Drawing a sprite

Post Syndicated from Eevee original https://eev.ee/blog/2018/06/21/cheezball-rising-drawing-a-sprite/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

source code • prebuilt ROMs (a week early for $4) • works best with mGBA

In this issue, I figure out how to draw a sprite. This part was hard.

## Recap

Welcome back! I’ve started cobbling together a Pygments lexer for RGBDS’s assembly flavor, so hopefully the code blocks are more readable, and will become moreso over time. When I left off last time, I had… um… this. This is all on the background layer, which I mentioned before is a fixed grid of 8×8 tiles. For anything that moves around freely, like the player, I need to use the object layer. So that’s an obvious place to go next. Now, if you remember, I can define tiles by just writing to video RAM, and I define palettes with a goofy system involving writing them one byte at a time to the same magic address. You might expect defining objects to do some third completely different thing, and you’d be right!

## Defining an object

Objects are defined in their own little chunk of RAM called OAM, for object attribute memory. They’re also made up of tiles, but each tile can be positioned at an arbitrary point on the screen. OAM starts at $fe00 and each object takes four bytes — the y-coordinate, the x-coordinate, the tile number, and some flags — for a total of 160 bytes. There are some curiosities, like how the top left of the screen is (8, 16) rather than (0, 0), but I’ll figure out what’s up with that later. (I suppose if zeroes meant the upper left corner, there’d be a whole stack of tile 0 there all the time.)

Here’s the fun part: I can’t write directly to OAM? I guess???
Come to think of it, I don’t think the manual explicitly says I can’t, but it’s strongly implied. Hmm. I’ll look into that. But I didn’t at the time, so I’ll continue under the assumption that the following nonsense is necessary. Because I “can’t” write directly, I need to use some shenanigans. First, I need something to write! This is an Anise game, so let’s go for Anise. I’m on my laptop at this point without access to the source code for the LÖVE Anise game I started, so I have to rustle up a screenshot I took.

Wait a second. Even on the Game Boy Color, tiles are defined with two bits per pixel. That means an 8×8 tile has a maximum of four colors. For objects, the first color is transparent, so I really have three colors — which is exactly why most Game Boy Color protagonists have a main color, an outline/shadow color, and a highlight color. Let’s check out that Anise in more detail. Hm yes okay that’s more than three colors. I guess I’m going to need to draw some new sprites from scratch, somehow. In the meantime, I optimistically notice that Star Anise’s body only uses three colors, and it’s 8×7! I could make a tile out of that! I painstakingly copy the pixels into a block of those backticks, which you can kinda see is his body if you squint a bit:

```
SECTION "Sprites", ROM0
ANISE_SPRITE:
    dw `00000000
    dw `00001333
    dw `00001323
    dw `10001233
    dw `01001333
    dw `00113332
    dw `00003002
    dw `00003002
```

The dw notation isn’t an opcode; it tells the assembler to put two literal bytes of data in the final ROM. A word of data. (Each row of a tile is two bytes, remember.) If you think about this too hard, you start to realize that both the data and code are just bytes, everything is arbitrary, and true meaning is found only in the way we perceive things rather than in the things themselves. Note I didn’t specify an exact address for this section, so the linker will figure out somewhere to put it and make sure all the labels are right at the end. Now I load this into tilespace, back in my main code:

```
; Define an object
ld hl, $8800
ld bc, ANISE_SPRITE
REPT 16
ld a, [bc]
ld [hl+], a
inc bc
ENDR
```

This copies 16 bytes, starting from the ANISE_SPRITE label, to $8800. Why $8800, not $8000? I’m so glad you asked! There are actually three blocks of tile space, each with enough room for 128 tiles: one at $8000, one at $8800, and one at $9000. Object tiles always use the $8000 block followed by the $8800 block, whereas background tiles can use either $8000 + $8800 or $9000 + $8800. By default, background tiles use $8000 + $8800. All of which is to say that I got very confused reading the manual (which spends like five pages explaining the above paragraph) and put the object tiles in the wrong place. Whoops. It’s fine; this just ends up being tile 128.

In my partial defense, looking at it now, I see the manual is wrong! Bit 4 of the LCD controller register ($ff40) controls whether the background uses tiles from $8000 + $8800 (1) or $9000 + $8800 (0). The manual says that this register defaults to $83, which has bit 4 off, suggesting that background tiles use $9000 + $8800 (i.e. start at $8800), but disassembly of the boot ROM shows that it actually defaults to $91, which has bit 4 on. Thanks a lot, Nintendo! That was quite a diversion. Here’s a chart of where the dang tiles live. Note that the block at $8800 is always shared between objects and background tiles. Oh, and on the Game Boy Color, all three blocks are twice as big thanks to the magic of banking. I’ll get to banking… much later.
```
       bit 4 ON (default)                    bit 4 OFF
       ------------------                    ---------
$8000  obj tiles 0-127    bg tiles 0-127
$8800  obj tiles 128-255  bg tiles 128-255   bg tiles 128-255
$9000                                        bg tiles 0-127
```

Hokay. What else? I’m going to need a palette for this, and I don’t want to use that gaudy background palette. Actually, I can’t — the background and object layers have two completely separate sets of palettes. Writing an object palette is exactly the same as writing a background palette, except with different registers.

```
; This should look pretty familiar
ld a, %10000000
ld [$ff6a], a
ld bc, %0000000000000000  ; transparent
ld a, c
ld [$ff6b], a
ld a, b
ld [$ff6b], a
ld bc, %0010110100100101  ; dark
ld a, c
ld [$ff6b], a
ld a, b
ld [$ff6b], a
ld bc, %0100000111001101  ; med
ld a, c
ld [$ff6b], a
ld a, b
ld [$ff6b], a
ld bc, %0100001000010001  ; white
ld a, c
ld [$ff6b], a
ld a, b
ld [$ff6b], a
```

Riveting! I wrote out those colors by hand. The original dark color, for example, was #264a59. That uses eight bits per channel, but the Game Boy Color only supports five (a factor of 8 difference), so first I rounded each channel to the nearest 8 and got #284858. Swap the channels to get 58 48 28 and convert to binary (sans the trailing zeroes) to get 01011 01001 00101. Note to self: probably write a macro or whatever so I can define colors like a goddamn human being. Also why am I not putting the colors in a ROM section too?

Almost there. I still need to write out those four bytes that specify the tile and where it goes. I can’t actually write them to OAM yet, so I need some scratch space in working RAM.

```
SECTION "OAM Buffer", WRAM0[$C100]
oam_buffer: ds 4 * 40
```

The ds notation is another “data” variant, except it can take a size and reserves space for a whole string of data. Note that I didn’t put any actual data here — this section is in RAM, which only exists while the game is running, so there’d be nowhere to put data. Also note that I gave an explicit address this time. The buffer has to start at an address ending in 00, for reasons that will become clear momentarily. The space from $c000 to $dfff is available as working RAM, and I chose $c100 for… reasons that will also become clear momentarily. Now to write four bytes to it at runtime:

```
; Put an object on the screen
ld hl, oam_buffer
; y-coord
ld a, 64
ld [hl+], a
; x-coord
ld [hl+], a
; tile index
ld a, 128
ld [hl+], a
; attributes, including palette, which are all zero
ld a, %00000000
ld [hl+], a
```

(I tried writing directly to OAM on my first attempt. Nothing happened! Very exciting.) But how to get this into OAM so it’ll actually show on-screen? For that, I need to do a DMA transfer.

## DMA

DMA, or direct memory access, is one of those things the Game Boy programming manual seems to think everyone is already familiar with. It refers generally to features that allow some other hardware to access memory, without going through the CPU. In the case of the Game Boy, it’s used to copy data from working RAM to OAM. Only to OAM. It’s very specific. Performing a DMA transfer is super easy! I write the high byte of the source address to the DMA register ($ff46), and then some magic happens, and 160 bytes from the source address appear in OAM. In other words:

```
ld a, $c1      ; copy from $c100
ld [$ff46], a  ; perform DMA transfer
; now $c100 through $c19f have been copied into OAM!
```

It’s almost too good to be true! And it is. There are some wrinkles.
First, the transfer takes some time, during which I almost certainly don’t want to be doing anything else. Second, during the transfer, the CPU can only read from “high RAM” — $ff80 and higher. Wait, uh oh. The usual workaround here is to copy a very short function into high RAM to perform the actual transfer and wait for it to finish, then call that instead of starting a transfer directly. Well, that sounds like a pain, so I break my rule of accounting for every byte and find someone else who’s done it. Conveniently enough, that post is by the author of the small template project I’ve been glancing at. I end up with something like the following.

```
; Copy the little DMA routine into high RAM
ld bc, DMA_BYTECODE
ld hl, $ff80
; DMA routine is 13 bytes long
REPT 13
ld a, [bc]
inc bc
ld [hl+], a
ENDR

; ...

SECTION "DMA Bytecode", ROM0
DMA_BYTECODE:
db $F5, $3E, $C1, $EA, $46, $FF, $3E, $28, $3D, $20, $FD, $F1, $D9
```

That’s compiled assembly, written inline as bytes. Oh boy. The original code looks like:

```
; start the transfer, as shown above
ld a, $c1
ld [$ff46], a

; wait 160 cycles/microseconds, the time it takes for the
; transfer to finish; this works because 'dec' is 1 cycle
; and 'jr' is 3, for 4 cycles done 40 times
ld a, 40
loop:
dec a
jr nz, loop

; return
ret
```

Now you can see why I used $c100 for my OAM buffer: because it’s the address this person used. (Hm, the opcode reference I usually use seems to have all the timings multiplied by a factor of 4 without comment? Odd. The rgbds reference is correct.) (Also, here’s a fun fact: the stack starts at $fffe and grows backwards. If it grows too big, the very first thing it’ll overwrite is this DMA routine! I bet that’ll have some fun effects.)

At this point I have a thought. (Okay, I had the thought a bit later, but it works better narratively if I have it now.) I’ve already demonstrated that the line between code and data is a bit fuzzy here. So why does this code need to be pre-assembled? And a similar thought: why is the length hardcoded? Surely, we can do a little better. What if we shuffle things around a bit…

```
SECTION "init", ROM0[$0100]
nop
; Jump to a named label instead of an address
jp main

SECTION "main", ROM0[$0150]

; DMA copy routine, copied into high RAM at startup.
; Never actually called where it is.
dma_copy:
ld a, $c1
ld [$ff46], a
ld a, 40
.loop:
dec a
jr nz, .loop
ret
dma_copy_end:
nop

main:
; ... all previous code is here now ...

; Copy the little DMA routine into high RAM
ld bc, dma_copy
ld hl, $ff80
; DMA routine is 13 bytes long
REPT dma_copy_end - dma_copy
ld a, [bc]
inc bc
ld [hl+], a
ENDR
```

This is very similar to what I just had, except that the code is left as code, and its length is computed by having another label at the end — so I’m free to edit it later if I want to. It all ends up as bytes in the ROM, so the code ends up exactly the same as writing out the bytes with db. Come to think of it, I don’t even need to hardcode the $c1 there; I could replace it with oam_buffer >> 8 and avoid repeating myself. (I put the code at $0150 because rgbasm is very picky about subtracting labels, and will only do it if they both have fixed positions. These two labels would be the same distance apart no matter where I put the section, but I guess rgbasm isn’t smart enough to realize that.) I’m actually surprised that the author of the above post didn’t think to do this?
Maybe it’s dirty even by assembly standards.

## Timing, vblank, and some cool trickery

Okay, so, as I was writing that last section, I got really curious about whether and when I’m actually allowed to write to OAM. Or tile RAM, for that matter. I found/consulted the Game Boy dev wiki, and the rules match what’s in the manual, albeit with a chart that makes things a little more clear. My understanding is as follows. The LCD draws the screen one row of pixels at a time, and each row has the following steps:

1. Look through OAM to see if any sprites are on this row. OAM is inaccessible to the CPU.
2. Draw the row. OAM, VRAM, and palettes are all inaccessible.
3. Finish the row and continue on to the beginning of the next row. This takes a nonzero amount of time, called the horizontal blanking period, during which the CPU can access everything freely.

Once the LCD reaches the bottom, it continues to “draw” a number of faux rows below the bottom of the visible screen (vertical blanking), and the CPU can again do whatever it wants. Eventually it returns to the top-left corner to draw again, concluding a single frame. The entire process happens 59.7 times per second. There’s one exception: DMA transfers can happen any time, but the LCD will simply not draw sprites during the transfer.

So I probably shouldn’t be writing to tiles and palettes willy-nilly. I suspect I got away with it because it happened in that first OAM-searching stage… and/or because I did it on emulators which are a bit more flexible than the original hardware. In fact… I took this screenshot by loading the ROM I have so far, pausing it, resetting it, and then advancing a single frame. This is the very first frame my game shows. If you look closely at the first row of pixels, you can see they’re actually corrupt — they’re being drawn before I’ve set up the palette! You can even see each palette entry taking effect along the row. This is very cool. It also means my current code would not work at all on actual hardware. I should probably just turn the screen off while I’m doing setup like this.

It’s interesting that only OAM gets a special workaround in the form of a DMA transfer — I imagine because sprites move around much more often than the tileset changes — but having the LCD stop drawing sprites in the meantime is quite a limitation. Surely, you’d only want to do a DMA transfer during vblank anyway? It is much faster than copying by hand, so I’ll still take it. All of this is to say: I’m gonna need to care about vblanks.

Incidentally, the presence of hblank is very cool and can be used for a number of neat effects, especially when combined with the Game Boy’s ability to call back into user code when the LCD reaches a specific row:

• The GBC Zelda games use it for map scrolling. The status bar at the top is in one of the two background maps, and as soon as that finishes drawing, the game switches to the other one, which contains the world.
• Those same games also use it for a horizontal wavy effect, both when warping around and when underwater — all they need to do is change the background layer’s x offset during each hblank! (A rough sketch of this follows below.)
• The wiki points out that OAM could be written to in the middle of a screen update, thus bypassing the 40-object restriction: draw 40 objects on the top half of the screen, swap out OAM midway, and then the LCD will draw a different 40 on the bottom half!
• I imagine you could also change palettes midway through a redraw and exceed the usual limit of 56 colors on screen at a time!
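For flavor, here is roughly what that wavy-scroll trick could look like. This sketch is entirely mine (none of it exists in this project), and it assumes the STAT interrupt has been enabled for hblank (bit 3 of $ff41, plus the STAT bit of the interrupt enable register); wave_table and its eight-entry layout are made up for illustration:

```
; The STAT vector only has 8 bytes of room, so jump out of it.
SECTION "stat vector", ROM0[$0048]
    jp stat_handler

SECTION "stat handler", ROM0
stat_handler:
    push af
    push hl
    ldh a, [$ff44]     ; LY: the row the LCD just finished
    and a, %00000111   ; wrap to the table's 8 entries
    ld hl, wave_table
    add a, l           ; cheap indexing; assumes the table
    ld l, a            ; doesn't cross a 256-byte boundary
    ld a, [hl]
    ldh [$ff43], a     ; SCX: background x scroll
    pop hl
    pop af
    reti

wave_table: db 0, 1, 2, 3, 3, 2, 1, 0
```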
No telling whether this sort of trick would work on an emulator, though. I am very excited at the prospects here. I’m also slightly terrified. I have a fixed amount of time between frames, and with the LCD as separate hardware, there’s no such thing as a slow frame. If I don’t finish, things go bad. And that time is measured in instructions — an ld always takes the same number of cycles! There’s no faster computer or reducing GC pressure. There’s just me. Yikes.

## Back to drawing a sprite

I haven’t had a single new screenshot this entire post! This is ridiculous. All I want is to draw a thing to the screen. I have some data in my OAM buffer. I have DMA set up. All I should need to do now is start a transfer.

```
call $ff80
```

And… nothing. mGBA’s memory viewer confirms everything’s in the right place, but nothing’s on the screen. Whoops! Remember that LCD controller register, and how it defaults to $91? Well, bit 1 is whether to show objects at all, and it defaults to off. So let’s fix that.

```
ld a, %10010011  ; $91 plus bit 1
ld [$ff40], a
```

SUCCESS! It doesn’t look like much, but it took a lot of flailing to get here, and I was overjoyed when I first saw it. The rest should be a breeze! Right?

## To be continued

That doesn’t even get us all the way through commit 1b17c7, but this is already more than enough. Next time: input, and moderately less eye-searing art!

# Cheezball Rising: A new Game Boy Color game

Post Syndicated from Eevee original https://eev.ee/blog/2018/06/19/cheezball-rising-a-new-game-boy-color-game/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

source code • prebuilt ROMs (a week early for $4) • works best with mGBA

In this issue, I figure out how to put literally anything on the goddamn screen, then add a splash of color.

## The plan

I’m making a Game Boy Color game! I have no— okay, not much idea what I’m doing, so I’m going to document my progress as I try to forge a 90s handheld game out of nothing. I do usually try to keep tech stuff accessible, but this is going to get so arcane that that might be a fool’s errand. Think of this as less of an extended tutorial, more of a long-form Twitter. Also, I’ll be posting regular builds on Patreon for $4 supporters, which will be available a week later for everyone else. I imagine they’ll generally stay in lockstep with the posts, unless I fall behind on the writing part. But when has that ever happened? Your very own gamedev legend is about to unfold! A world of dreams and adventures with gbz80 assembly awaits! Let’s go!

## Prerequisites

First things first. I have a teeny bit of experience with Game Boy hacking, so I know I need:

• An emulator. I have no way to run arbitrary code on an actual Game Boy Color, after all. I like mGBA, which strives for accuracy and has some debug tools built in. There’s already a serious pitfall here: emulators are generally designed to run games that would work correctly on the actual hardware, but they won’t necessarily reject games that wouldn’t work on actual hardware. In other words, something that works in an emulator might still not work on a real GBC. I would of course prefer that this game work on the actual console it’s built for, but I’ll worry about that later.
• An assembler, which can build Game Boy assembly code into a ROM.
I pretty much wrote one of these myself already for the Pokémon shenanigans, but let’s go with something a little more robust here. I’m using RGBDS, which has a couple nice features like macros and a separate linking step. It compiles super easily, too. I also hunted down a vim syntax file, uh, somewhere. I can’t remember which one it was now, and it’s kind of glitchy anyway.
• Some documentation. I don’t know exactly how this surfaced, but the actual official Game Boy programming manual is on archive.org. It glosses over some things and assumes some existing low-level knowledge, but for the most part it’s a very solid reference. For everything else, there’s Google, and also the curated awesome-gbdev list of resources.

That list includes several skeleton projects for getting started, but I’m not going to use them. I want to be able to account for every byte of whatever I create. I will, however, refer to them if I get stuck early on. (Spoilers: I get stuck early on.) And that’s it! The rest is up to me.

## Making nothing from nothing

Might as well start with a Makefile. The rgbds root documentation leads me to the following incantation:

```
all:
	rgbasm -o main.o main.rgbasm
	rgblink -o gamegirl.gb main.o
	rgbfix -v -p 0 gamegirl.gb
```

(I, uh, named this project “gamegirl” before I figured out what it was going to be. It’s a sort of witticism, you see.) This works basically like every C compiler under the sun, as you might expect: every source file compiles to an object file, then a linker bundles all the object files into a ROM. If I only change one source file, I only have to rebuild one object file. Of course, this Makefile is terrible garbage and will rebuild the entire project unconditionally every time, but at the moment that takes a fraction of a second so I don’t care. The extra rgbfix step is new, though — it adds the Nintendo logo (the one you see when you start up a Game Boy) to the header at the beginning of the ROM. Without this, the console will assume the cartridge is dirty or missing or otherwise unreadable, and will refuse to do anything at all. (I could also bake the logo into the source itself, but given that it’s just a fixed block of bytes and rgbfix is bundled with the assembler, I see no reason to bother with that.)

All I need now is a source file, main.rgbasm, which I populate with:

Nothing! I don’t know what I expect from this, but I’m curious to see what comes out. And what comes out is a working ROM! Maybe “working” is a strong choice of word, given that it doesn’t actually do anything.

## Doing something

It would be fantastic to put something on the screen. This turned out to be harder than expected. First attempt. I know that the Game Boy starts running code at $0150, immediately after the end of the header. So I’ll put some code there. A brief Game Boy graphics primer: there are two layers, the background and objects. (There’s also a third layer, the window, which I don’t entirely understand yet.) The background is a grid of 8×8 tiles, two bits per pixel, for a total of four shades of gray. Objects can move around freely, but they lose color 0 to transparency, so they can only use three colors. There are lots more interesting details and restrictions, which I will think about more later. Drawing objects is complicated, and all I want to do right now is get something. I’m pretty sure the background defaults to showing all tile 0, so I’ll try replacing tile 0 with a gradient and see what happens.
Tiles are 8×8 and two bits per pixel, which means each row takes two bytes, and the whole tile is 16 bytes. Tiles are defined in one big contiguous block starting at $8000 — or, maybe $8800, sometimes — so all I need to do is:

```
SECTION "main", ROM0[$0150]

ld hl, $8000
ld a, %00011011
REPT 16
ld [hl+], a
ENDR

_halt:
    ; Do nothing, forever
    halt
    nop
    jr _halt
```

If you are not familiar with assembly, this series is going to be a wild ride. But here’s a very very brief primer. Assembly language — really, an assembly language — is little more than a set of human-readable names for the primitive operations a CPU knows how to do. And those operations, by and large, consist of moving bytes around. The names tend to be very short, because you end up typing them a lot. Most of the work is done in registers, which are a handful of spaces for storing bytes right on the CPU. At this level, RAM is relatively slow — it’s further away, outside the chip — so you want to do as much work as possible in registers. Indeed, most operations can only be done on registers, so there’s a lot of fetching stuff from RAM and operating on it and then putting it back in RAM.

The Game Boy CPU, a modified Z80, has eight byte-sized registers. They’re often referred to in pairs, because they can be paired up to make 16-bit values (giving you access to a full 64KB address space). And they are: af, bc, de, hl. The af pair is special. The f register is used for flags, such as whether the last instruction caused an overflow, so it’s not generally touched directly. The a register is called the accumulator and is most commonly used for math operations — in fact, a lot of math operations can only be done on a. The hl register is most often used for addresses, and there are a couple instructions specific to hl that are convenient for memory access. (The h and l even refer to the high and low byte of an address.) The other two pairs aren’t especially noteworthy.

Also! Not every address is actually RAM; the address space ($0000 through $ffff) is carved into several distinct areas, which we will see as I go along. $8000 is the beginning of display RAM, which the screen reads from asynchronously. Also, a lot of addresses above $ff00 (also called “registers”) are special and control hardware in some way, or even perform some action when written to. With that in mind, here’s the above code with explanatory comments:

TODO need to change this to write a single byte

```
; This is a directive for the assembler to put the following
; code at $0150 in the final ROM.
SECTION "main", ROM0[$0150]

; Put the hex value $8000 into registers hl. Really, that
; means put $80 into h and $00 into l.
ld hl, $8000

; Put this binary value into register a.
; It's just 0 1 2 3, a color gradient.
ld a, %00011011

; This is actually a macro this particular assembler
; understands, which will repeat the following code 16
; times, exactly as if I'd copy-pasted it.
REPT 16

; The brackets (sometimes written as parens) mean to use hl
; as a position in RAM, rather than operating on hl itself.
; So this copies a into the position in RAM given by
; hl (initially $8000), and the + adds 1 to hl afterwards.
; This is one reason hl is nice for storing addresses: the +
; variant is handy for writing a sequence of bytes to RAM,
; and it only exists for hl.
ld [hl+], a

; End the REPT block
ENDR

; This is a label, used to refer to some position in the code.
; It only exists in the source file.
_halt:

; Stop all CPU activity until there's an interrupt. I
; haven't turned any interrupts on, so this stops forever.
halt

; The Game Boy hardware has a bug where, under rare and
; unspecified conditions, the instruction after a halt will
; be skipped. So every halt should be followed by a nop,
; "no operation", which does nothing.
nop

; This jumps back up to the label. It's short for "jump
; relative", and will end up as an instruction saying
; something like "jump backwards five bytes", or however far
; back _halt is. (Different instructions can be different
; lengths.)
jr _halt
```

Okay! Glad you’re all caught up. The rgbds documentation includes a list of all the available operations (as well as assembler syntax), and once you get used to the short names, I also like this very compact chart of all the instructions and how they compile to machine code. (Note that that chart spells [hl+] as (HLI), for “increment” — the human-readable names are somewhat arbitrary and can sometimes vary between assemblers.) Now, let’s see what this does! Wow! It’s… still nothing. Hang on. If I open the debugger and hit Break, I find out that the CPU is at address $0120 — before my code — and is on an instruction DD. What’s DD? Well, according to this convenient chart, it’s… nothing. That’s not an instruction. Hmm.

## Problem solving

Maybe it’s time to look at one of those skeleton projects after all. I crack open the smallest one, gb-template, and it seems to be doing the same thing: its code starts at $0150. It takes me a bit to realize my mistake here. Practically every Game Boy game starts its code at $0150, but that’s not what the actual hardware specifies. The real start point is $0100, which is immediately before the header! There are only four bytes before the header, just enough for… a jump instruction. Okay! No problem.

```
SECTION "entry point", ROM0[$0100]
    nop
    jp $0150
```

Why the nop? I have no idea, but all of these boilerplate projects do it. Uhh. Well, that’s weird. Not only is the result black and white when I definitely used all four shades, but the whites aren’t even next to each other. (I also had a strange effect where the screen reverted to all white after a few seconds, but can’t reproduce it now; it was fixed by the same steps, though, so it may have been a quirk of a particular mGBA build.) I’ll save you my head-scratching. I made two mistakes here. Arguably, three!

First: believe it or not, I have to specify the palette. Even in original uncolored Game Boy mode! I can see how that’s nice for doing simple fade effects or flashing colors, but I didn’t suspect it would be necessary. The monochrome palette lives at $ff47 (one of those special high addresses), so I do this before anything else:

```
ld a, %11100100  ; 3 2 1 0
ld [$ff47], a
```

I should really give names to some of these special addresses, but for now I’m more interested in something that works than something that’s nice to read.

Second: I specified the colors wrong. I assumed that eight pixels would fit into two bytes as AaBbCcDd EeFfGgHh, perhaps with some rearrangement, but a closer look at Nintendo’s manual reveals that they need to be ABCDEFGH abcdefgh, with the two bits for each pixel split across each byte! Wild. Handily, rgbds has syntax for writing out pixel values directly: a backtick followed by eight of 0, 1, 2, and 3.
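To make the split concrete, here is a worked example of my own (not from the post), relying on the convention that the first byte of a row holds each pixel's low bit:

```
; The row `01230123 has pixel values 0 1 2 3 0 1 2 3.
;   low bits:  0 1 0 1 0 1 0 1  ->  %01010101
;   high bits: 0 0 1 1 0 0 1 1  ->  %00110011
; so these two lines lay out the same two bytes:
    dw `01230123
    db %01010101, %00110011
```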
I just have to change my code a bit to write two bytes, eight times each. By putting a 16-bit value in a register pair like bc, I can read its high and low bytes out individually via the b and c registers.

```
ld hl, $8000
ld bc, `00112233
REPT 8
ld a, b
ld [hl+], a
ld a, c
ld [hl+], a
ENDR
```

Third: strictly speaking, I don’t think I should be writing to $8000 while the screen is on, because the screen may be trying to read from it at the same time. It does happen to work in this emulator, but I have no idea whether it would work on actual hardware. I’m not going to worry too much about this test code; most likely, tile loading will happen all in one place in the real game, and I can figure out any issues then. This is one of those places where the manual is oddly vague. It dedicates two whole pages to diagrams of how sprites are drawn when they overlap, yet when I can write to display RAM is left implicit. Well, whatever. It works on my machine. Success! I made a thing for the Game Boy. Ah, but what I wanted was a thing for the Game Boy Color. That shouldn’t be too much harder.

## Now in Technicolor

First I update my Makefile to pass the -C flag to rgbfix. That tells it to set a flag in the ROM header to indicate that this game is only intended for the Game Boy Color, and won’t work on the original Game Boy. (In order to pass Nintendo certification, I’ll need an error screen when the game is run on a non-Color Game Boy, but that can come later. Also, I don’t actually know how to do that.) Oh, and I’ll change the file extension from .gb to .gbc. And while I’m in here, I might as well repeat myself slightly less in this bad, bad Makefile.

```
TARGET := gamegirl.gbc

all: $(TARGET)

$(TARGET):
	rgbasm -o main.o main.rgbasm
	rgblink -o $(TARGET) main.o
	rgbfix -C -v -p 0 $(TARGET)
```

I think := is the one I want, right? Christ, who can remember how this syntax works. Next I need to define a palette. Again, everything defaults to palette zero, so I’ll update that and not have to worry about specifying a palette for every tile. This part is a bit weird. Unlike tiles, there’s not a block of addresses somewhere that contains all the palettes. Instead, I have to write the palette to a single address one byte at a time, and the CPU will put it… um… somewhere. (I think this is because the entire address space was already carved up for the original Game Boy, and they just didn’t have room to expose palettes, but they still had a few spare high addresses they could use for new registers.) Two registers are involved here. The first, $ff68, specifies which palette I’m writing to. It has a bunch of parts, but since I’m writing to the first color of palette zero, I can leave it all zeroes. The one exception is the high bit, which I’ll explain in just a moment.

```
ld a, %10000000
ld [$ff68], a
```

The other, $ff69, does the actual writing. Each color in a palette is two bytes, and a palette contains four colors, so I need to write eight bytes to this same address. The high bit in $ff68 is helpful here: it means that every time I write to $ff69, it should increment its internal position by one. This is kind of like the [hl+] I used above: after every write, the address increases, so I can just write all the data in sequence. But first I need some colors! Game Boy Color colors are RGB555, which means each color is five bits (0–31) and a full color fits in two bytes: 0bbbbbgg gggrrrrr. (I got this backwards initially and thought the left bits were red and the right bits were blue.)
Thus, I present, palette loading by hand. Like before, I put the 16-bit color in bc and then write out the contents of b and c. (Before, the backtick syntax put the bytes in the right order; colors are little-endian, hence why I write c before b.)

```
ld bc, %0111110000000000  ; blue
ld a, c
ld [$ff69], a
ld a, b
ld [$ff69], a
ld bc, %0000001111100000  ; green
ld a, c
ld [$ff69], a
ld a, b
ld [$ff69], a
ld bc, %0000000000011111  ; red
ld a, c
ld [$ff69], a
ld a, b
ld [$ff69], a
ld bc, %0111111111111111  ; white
ld a, c
ld [$ff69], a
ld a, b
ld [$ff69], a
```

Rebuild, and: What a glorious eyesore!

## To be continued

That brings us up to commit 212344 and works as a good stopping point. Next time: sprites! Maybe even some real art?

# AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today! Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data
June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.

Compute
June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.
June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS including how to set up masters, networking, security, and add auto-scaling to your cluster.

Databases
June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.

DevOps
June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.
June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS Environments.
June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.
June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.
June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.

IoT
June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

Machine Learning
June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.
June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

Management Tools
June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

Security, Identity & Compliance
June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

Serverless
June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

Storage
June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

# Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories).
Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but the passwords have remained in the git history. Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed upon solution. I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store it somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code. Such a repo would look like this:

```
project
└─── production
|    | application.properties
|    | keystore.jks
└─── staging
|    | application.properties
|    | keystore.jks
└─── on-premise-client1
|    | application.properties
|    | keystore.jks
└─── on-premise-client2
|    | application.properties
|    | keystore.jks
```

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials on a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do it is via git-crypt. It is “transparent” encryption because it supports diff and encryption and decryption on the fly. Once you set it up, you continue working with the repo as if it’s not encrypted. There’s even a fork that works on Windows. You simply run git-crypt init (after you’ve put the git-crypt binary on your OS path), which generates a key. Then you specify your .gitattributes, e.g. like that:

```
secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt
```

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted. You’re almost done. We should somehow share and back up the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to back up your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.
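Putting the pieces together, a first-time setup might look like the following sketch (the repo layout and key path are illustrative, not prescriptive):

```sh
cd environment-configs            # the separate, limited-access repo
git-crypt init                    # generates the symmetric key

cat > .gitattributes <<'EOF'
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt
EOF

git add .gitattributes production/application.properties
git-crypt status -f               # force-encrypt already-staged files
git commit -m "Add encrypted environment configurations"

# back up / share the key (keep it outside the repo!)
git-crypt export-key /secure/location/git-crypt.key
# a teammate with the key can then run:
#   git-crypt unlock /secure/location/git-crypt.key
```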
The git-crypt authors claim it shines when it comes to encrypting just a few files in an otherwise public repo, and recommend looking at git-remote-gcrypt. But as often there are non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it’s perfectly fine to use git-crypt even in a separate repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo. Especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is – how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that could vary across environments, but can have default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values. The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

# Some quick thoughts on the public discussion regarding facial recognition and Amazon Rekognition this past week

We have seen a lot of discussion this past week about the role of Amazon Rekognition in facial recognition, surveillance, and civil liberties, and we wanted to share some thoughts.

Amazon Rekognition is a service we announced in 2016. It makes use of new technologies – such as deep learning – and puts them in the hands of developers in an easy-to-use, low-cost way. Since then, we have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children), and organizations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft). Amazon Web Services (AWS) is not the only provider of services like these, and we remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.

There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future. The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm. The same can be said of thousands of technologies upon which we all rely each day. Through responsible use, the benefits have far outweighed the risks.
Customers are off to a great start with Amazon Rekognition; the evidence of the positive impact this new technology can provide is strong (and growing by the week), and we’re excited to continue to support our customers in its responsible use.

-Dr. Matt Wood, general manager of artificial intelligence at AWS

# Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a highly-motivated, success-oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO

In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data. Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years and our world looks quite different. You’ll have some fantastic assets to work with:

• A brand millions recognize for openness, ease-of-use, and affordability.
• A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
• Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT professionals and developers signed up as well as an ecosystem building up around it.
• A growing, profitable and cash-flow positive company.
• And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

• We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
• We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
• We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
• We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
• We want someone to manage our Customer Success program.
So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as build out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader and compassionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

• Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
• Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
• Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
• Acquire and develop the appropriate sales tools to make the team efficient in their daily workflow.
• Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
• Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
• This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.

Requirements:

• 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
Experience in developing skill sets and providing career growth and opportunities through advancement of team members. • Background in selling SaaS technologies with a strong track record of success. • Strong presentation and communication skills. • Must be able to travel occasionally nationwide. • BA/BS degree required
Research article

# Permutation entropy: Influence of amplitude information on time series classification performance

Technological Institute of Informatics (ITI), Universitat Politècnica de València, Campus Alcoi, Plaza Ferrándiz y Carbonell, 2, 03801, Alcoi, Spain

## Abstract

Permutation Entropy (PE) is a very popular complexity analysis tool for time series. Despite its simplicity, it is very robust and yields good results in applications related to assessing the randomness of a sequence, or as a quantitative feature for signal classification. It is based on computing the Shannon entropy of the relative frequency of all the ordinal patterns found in a time series. However, there is a basic consensus on the fact that only analysing sample order and not amplitude might have a detrimental effect on the performance of PE. As a consequence, a number of methods based on PE have been proposed in recent years to include the possible influence of sample amplitude. These methods claim to outperform PE, but there is no general comparative analysis that confirms such claims independently. Furthermore, other statistics such as Sample Entropy (SampEn) are based solely on amplitude, and it could be argued that tools like this one are better suited to exploit the amplitude differences than PE. The present study quantifies the performance of the standard PE method and other amplitude-included PE methods using a variety of time series to find out if there are really significant performance differences. In addition, the study compares statistics based uniquely on ordinal or amplitude patterns. The objective was to ascertain whether the whole was more than the sum of its parts. The results confirmed that the highest classification accuracy was achieved using both types of patterns simultaneously, instead of using standard PE (ordinal patterns) or SampEn (amplitude patterns) in isolation.
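For reference, the quantity the abstract describes is the standard permutation entropy of Bandt and Pompe: given a series $x_1, \dots, x_N$ and an embedding dimension $m$, each subsequence $(x_j, \dots, x_{j+m-1})$ is mapped to the ordinal pattern $\pi$ given by the ranking of its samples, and

$$ PE(m) = -\sum_{\pi \in S_m} p(\pi)\,\log_2 p(\pi), \qquad p(\pi) = \frac{\#\{\, j \le N-m+1 : (x_j,\dots,x_{j+m-1}) \text{ has pattern } \pi \,\}}{N-m+1} $$

where $S_m$ is the set of the $m!$ possible ordinal patterns. PE uses only these rank-based patterns, which is precisely the amplitude blindness the study investigates.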
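Since everything above turns on counting ordinal patterns, a minimal sketch of standard (Bandt-Pompe) PE may help make the abstract concrete. This is my illustration, not code from the paper; the function name and the defaults (`order=3`, `delay=1`) are assumptions of this sketch. Amplitude-aware variants such as Weighted PE differ mainly in weighting each pattern occurrence (for example, by the variance of its window) instead of counting it once.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Standard (Bandt-Pompe) permutation entropy of a 1-D series.

    Each length-`order` window is mapped to the ordinal pattern given by
    the ranks of its samples; PE is the Shannon entropy of the pattern
    frequencies, optionally normalized by log(order!).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    if n <= 0:
        raise ValueError("series too short for this order/delay")
    # Ordinal pattern of each embedded window (ties broken by argsort order).
    patterns = [tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)]
    counts = Counter(patterns)
    probs = np.array([c / n for c in counts.values()])
    h = -np.sum(probs * np.log(probs))
    if normalize:
        h /= math.log(math.factorial(order))  # scale into [0, 1]
    return h

# A noisy series should score near 1, a monotone ramp exactly 0.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(1000)))  # close to 1
print(permutation_entropy(np.arange(100)))             # 0.0 (single pattern)
```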
{}
# Thread: Ranges without the aid of a calculator Help

1. ## Ranges without the aid of a calculator Help

This is a pre-calc review for Calculus (I think this is the right area). I can do domains completely fine, but I have trouble with ranges. Like, for instance, y=sqrt(x+1), y=1/(sqrt(x+1)) and y=sqrt((1/x)+1). For the first one, I know it's a radical function and y >= 0. But mathematically, how do I calculate this? I heard you put the equation into the inverse function, but I'm not really sure what to do afterward. Thanks!

2. You don't need to calculate it for the first square root. If you need a justification, you can start by looking at $y^2 = x$. This is a parabola with vertex at the origin, opening to the right (towards positive x-values), so x can only take non-negative values. This is equivalent to $y = \pm \sqrt{x}$. In your case you are taking the positive square root, which is the part of the parabola above the x-axis; that is, the square root evaluates only to values greater than or equal to 0. (The same reasoning applies to $\sqrt{x+1}$, which is just this graph shifted one unit to the left.) In your first case, the range is $[0,\infty[$.

In the second expression you are taking the reciprocal of the first function. The range is $]0,\infty[$. Here is how you get it. Take a point $y$ in the first range. To get the range of the second function, we take the reciprocal, i.e. $1/y$. When $y$ is close to 0 we get a big number, so on one side the values can be arbitrarily large. However, when we make $y$ really large, we make $1/y$ really tiny, so the values approach 0 at the other end. The range would seem to be $[0,\infty[$, but since you can never make $1/y$ exactly zero, it must be open at that end: $]0,\infty[$.

3. Originally Posted by Nekuni
"This is a pre-calc review for Calculus ... I heard you put the equation into the inverse function but I'm not really sure what to do afterward."

Personally, I find the study of basic functions and transformations (vertical/horizontal stretch, shift up/down/left/right) to be useful when determining the domain and range of a function. For example, if you know the domain and range of x^2, then you can derive the domain and range of 2x^2, (x-2)^2, and x^2 + 2 by recognizing that these functions are simply transformations of x^2 and applying the effect of each transformation.

Also, here's a trick (although it may be limited; you should be careful with it, as some inverse functions can get quite messy). We know that f: x -> y, where x is the domain and y is the range. Now consider the inverse function of f, call it g: y -> x. The domain of f is the range of g, and the range of f is the domain of g *verify*.

For example, let f(x) = y = x^2. Domain: the real line. Range: y >= 0. Now, $g(x) = \pm\sqrt{x}$. Domain: x >= 0. Range: the real line.

EDIT:// as an afterthought, if all else fails: graph it.
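To make the inverse idea concrete for the first function, here is a short worked computation (my own addition, not from the original posters). Solving $y = \sqrt{x+1}$ for $x$:

$$y = \sqrt{x+1} \ \Rightarrow\ y^2 = x + 1 \ \Rightarrow\ x = y^2 - 1$$

The first equation forces $y \ge 0$ (a square root is never negative), and conversely, for every $y \ge 0$ the input $x = y^2 - 1 \ge -1$ lies in the domain and gives back that $y$. So every $y \ge 0$ is attained and nothing else is, confirming that the range is $[0,\infty[$.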
{}
# Pattern and Sequence Puzzles

One of the harder types of question to answer effectively is a puzzle, which as I define it means that there is no routine way to solve it, so any hint would likely give away the answer. But sometimes these are only "puzzles" to us, because we don't know the context that would have told us what kind of answer to expect. I have collected here several different sorts of sequence or pattern puzzles, showing various ways we can approach them. (These are often done before learning algebra, so that although variables may be used in the answers, no algebra is needed to solve them.)

## Finding a formula for a sequence

Here is a typical question from 2001: Terms and Rules

My stepdaughter is in sixth grade and she has been doing a pattern journal where she has two columns of numbers: the first column is the n, and the second column is the term, and she has to find the rule (e.g. n^2), etc. Is there some place I can find out the basics for this concept?

The one example offered may or may not be typical of the particular kind of "rule" she is to look for, so I started with general suggestions and a request for examples:

I don't know that there is much to say in general about this kind of problem. A lot depends on the level of difficulty; most likely she has been given a set of problems that all have a similar kind of pattern, so that a method can be developed for solving them. The problems can differ in the presentation (whether consecutive terms are given, for example), and in the complexity of the rule (a simple multiplication, a more complicated calculation from n, a recursive rule - based on the previous term - or even a weird trick rule like "the number of letters in the English word for the number n"). For that reason, it would be very helpful if you could send us a couple of sample questions so we could help more specifically.

I started by offering a particularly simple example:

I don't see any good examples in our archives at the elementary level - probably just because we've never felt that any one such answer was useful to help others. Let's try a few samples. Here's an easy one:

     n | term
    ---+------
     1 |   3
     2 |   6
     3 |   9
     4 |  12

Here you may just be able to see that the terms are a column in a multiplication table (or, more simply, that all the terms are multiples of 3); or you might look at the differences between successive terms (a very useful method at a higher level, called "finite differences") and see that the terms are "skip-counting" by 3's, a clue that the rule involves multiplication by 3. However you see it, the rule (using "*" for the multiplication sign) is

    3 * n

At that level, there is no method needed for solving; mere recognition is often enough. Here's a slightly more complicated one:

     n | term
    ---+------
     1 |   6
     2 |  11
     3 |  16
     4 |  21

Here the differences are all 5, suggesting multiplication by 5. You might want to add a new column to the table so you can compare the given term with 5n:

     n | 5n | term
    ---+----+------
     1 |  5 |   6
     2 | 10 |  11
     3 | 15 |  16
     4 | 20 |  21

Now you can see that each term is one more than 5n, so the rule is

    5n + 1

I particularly like this method, which I think of as building up the formula step by step. The "skip-counting" tells us the formula should be related to multiplication by 5, so we can just compare the given sequence to 5n, and observe that we have to add 1. Any linear sequence can be found this way.
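The "compare with 5n" idea amounts to checking first differences and then finding the leftover constant. As a side note (my sketch, not part of the original exchange; the function name is invented), the whole procedure for linear rules fits in a few lines of Python:

```python
def linear_rule(terms, start=1):
    """Guess a rule a*n + b for terms indexed n = start, start+1, ...

    Returns (a, b) if the first differences are constant, else None.
    """
    diffs = [t2 - t1 for t1, t2 in zip(terms, terms[1:])]
    if len(set(diffs)) != 1:
        return None  # not linear; try squares, ratios, ...
    a = diffs[0]            # the constant "skip-counting" step
    b = terms[0] - a * start  # whatever is left over after comparing with a*n
    return a, b

print(linear_rule([3, 6, 9, 12]))    # (3, 0)  ->  3n
print(linear_rule([6, 11, 16, 21]))  # (5, 1)  ->  5n + 1
print(linear_rule([2, 5, 10, 17]))   # None    ->  differences 3, 5, 7
```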
Now they might get still more complicated:

     n | term
    ---+------
     1 |   2
     2 |   5
     3 |  10
     4 |  17

Here the differences aren't all the same (3, 5, 7), so something besides multiplication and addition must be going on. I happen to know that when the differences are themselves increasing regularly (by 2 each time in this case), there is a square involved; try adding a column for the square of n and see if you can figure it out.

This just gives a small taste of what these puzzles might be like. They can get much harder. Actually, in many harder cases I think the problems can be very unfair, because in reality (when you don't know ahead of time what kind of rule to expect) the rule could be absolutely anything, such as "the nth number on a page of random numbers I found in a book"! It's really just guesswork, and sometimes it's really hard to say which of several possible answers is what the author of the problem might have had in mind. But at your daughter's level you don't have to worry about that; it may be mostly just a matter of recognizing familiar patterns such as skip counting or squares.

It's only where I sit, seeing problems completely out of context, and having to make a wild guess as to what sort of rule to look for, that these problems are always challenging.

## Finding a formula for a process

William responded to my request for examples:

Thank you, Dr. Peterson. That is exactly what I was talking about and you gave me great examples. One question I do remember was connected pentagons 1 = 5, sides -2 = 8 - 3 = 11 etc. where n is the number of pentagons and the term is the number of sides exposed, and then we needed to find the rule for 100 pentagons. Your answer definitely helped.

This is really an entirely different type: not just a list of numbers, but numbers generated by a defined process. This is no longer just a puzzle, since we have not just the first few terms, but a way to make all of them. There is definitely one correct answer. He's describing (a little cryptically) a table:

     n | term
    ---+-----
     1 |   5
     2 |   8
     3 |  11

Presumably the problem was to count the "exposed sides" of a sequence of pentagons like these (a row of pentagons, each sharing one side with the next). Even without a formula, by continuing to add more pentagons in a row we could get as many terms as we want. I responded:

My answer is most helpful if you are just given a list of terms, and have to guess the rule. Your example is actually easier in some respects, and harder in others. It's common to just make a table of terms based on the geometry of the problem, and then try to guess a rule from that. But how do you know that the rule is really the right one, when it's not just an arbitrary list of terms, but one generated by a real situation? You really need to find some reason for connecting the rule to the objects you are counting. For that reason, I recommend trying to find a rule in the counting process itself. So you can use the guessing approach if you want, and I suspect many texts and teachers expect that, but it's much more satisfying to skip the tables and really know you have the right answer!

So the rule, from the process I described, is $$a_n = 5 + 3(n - 1)$$, which can be simplified to $$a_n = 3n + 2$$. Using my table method, we compare to 3n because the terms increase by 3, and get

     n | 3n | term = 3n + 2
    ---+----+-----
     1 |  3 |   5
     2 |  6 |   8
     3 |  9 |  11

This is correct; but it does not in itself guarantee that it will work for all n.
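A direct count supplies the guarantee that last sentence asks for, assuming (as the description suggests) that each pentagon in the row shares exactly one full side with the next:

$$\text{exposed sides} = 5n - 2(n-1) = 3n + 2$$

since the $n$ pentagons have $5n$ sides in total and each of the $n-1$ shared edges hides two sides, one from each neighboring pentagon. This reproduces 5, 8, 11 for $n = 1, 2, 3$, and answers the original question: 100 pentagons expose $3 \cdot 100 + 2 = 302$ sides.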
## Tricky sequence puzzles

Here is another question (2002) requesting general guidelines for sequence puzzles, but turning out to need an entirely different type of answer: Finding the Next Number in a Series

I haven't been able to find much information about an approach or method for determining the "next" number in a given series of numbers, e.g., 9, 5, 45, 8, 6, 48, 6, 7... What is the next number? I can usually figure it out, but if there is a formal way that makes it easier I would love to know about it!

In the previous question, we were looking at fairly routine types of sequences. This one looked different. I started with some general comments about non-routine puzzles:

Mathematically speaking, problems like this are impossible. Literally! That's because there is no restriction on what might come next in a sequence; ANY list of numbers, chosen for no reason at all, forms a sequence. So the next number can be anything. A question like this is really not a math question, but a psychology question with a bit of math involved. You are not looking for THE sequence that starts this way, but for the one the asker is MOST LIKELY to have chosen - the most likely one that has a particularly simple RULE. And there is no mathematical definition for that.

I meant that literally: the next number could be anything at all, and we could find a rule to fit the sequence:

If you just wanted _a_ sequence that starts this way, but can be defined by _some_ mathematical rule, there is a technique that lets you find an answer without guessing. This is called "the method of finite differences," and you can find it by searching our site (using the search form at the bottom of most pages) for the phrase. It assumes (as is always possible) that the sequence you want is defined by a polynomial, and finds it. Sometimes this is what the problem is really asking for.

As we explain in detail elsewhere, any list of n numbers can be generated by a polynomial of degree n-1 or less. But the coefficients will often be ugly, and it is very unlikely that it is the intended answer for a puzzle of this type. That, however, is not a mathematical conclusion, but a psychological one!

But often, especially when many terms are given, there is a much simpler rule that is not of polynomial form. Then you are being asked to use your creativity to find a nice rule. Sometimes starting with finite differences gives you a good clue, even if you don't end up with a polynomial; just seeing a pattern in the differences can reveal something about the sequence. Other times it is helpful to factor the numbers, or to look at successive ratios. Here you are doing a more or less orderly search, in order to find something that may not turn out to be orderly.

Some puzzles like this are really just tricks. The "rule" may be that the numbers are in alphabetical order, or that each number somehow "describes" the one before, or even that they are successive digits of pi. In such cases, you have to ignore all thoughts of rules and orderly solutions, and just let your mind wander. This is sometimes called "lateral thinking," and it's entirely incompatible with "formal methods"!

I'll be looking at some of these ideas in a later post. In this case, after suggesting things that might work, I got around to trying the specific sequence we'd been given, and got a surprise:

I first assumed the specific sequence you gave was just a random list of numbers, rather than a real problem, so I shouldn't bother looking for a pattern.
But glancing at it, I see that it is not random:

    9, 5, 45, 8, 6, 48, 6, 7, ...

I see some multiplications here:

    9 * 5 = 45,  8 * 6 = 48,  6 * 7 = __

I can't recall what chain of reasoning my mind went through to see that, but it may have helped that my kids asked me to go through a set of multiplication flash cards an hour or two ago. And focusing on the few larger numbers, thinking about how large numbers might pop up (multiplication makes bigger changes than addition), probably led me in the right direction. I don't recall seeing anything quite like this presented as a sequence problem before, but seeing factors, one of my usual techniques, was the key.

As I had suggested, I just let my mind wander. And once I saw it, I realized I couldn't just give a hint, unless it was as vague as "Think about multiplication". But I also realized something else:

In this case, you were apparently just asked to find the NEXT number, so we're done as soon as you fill in my blank. It may well be that there is no pattern beyond that; the choice of 9, 5, 8, 6, 6, 7 may be random.

That's a good reminder that we have to read the problem carefully and not try to solve more than we were asked. We weren't told that there was any pattern beyond the next number!

## A general procedure

Here's another question that elicited a collection of ideas from Doctor Wilko, in 2004: Thinking about Number Pattern Problems

How do I figure out the next 2 numbers in the pattern 1, 8, 27, 64, ____, ____? Using addition, the pattern doesn't add up. If I use multiplication, I'm stumped. I get 1x1, 4x2, 9x3, 16x4, but I don't understand what the next numbers would be for ___x5 and ___x6.

Teresa was familiar with some particular kind of pattern; this didn't fit. What do you do when you run out of ideas? Doctor Wilko started with something like what Teresa had probably done:

Sequence or pattern problems can be anywhere from very challenging to fairly easy. This is a matter of perspective and depends on how many problems like this you've been exposed to. The more you do and see, the less challenging these types of problems become. I'll tell you how I approach a problem like this. The first thing is that I know there is a pattern, so that is my goal: to find the pattern.

I might see if there is a pattern using addition.

    1,   8,   27,  64,  __, __
      \/   \/    \/
      +7   +19   +37   ?   ?

Hmmm, I don't see any pattern using addition.

He is not necessarily looking for the same number being added, but for some sort of pattern in the differences. This is a classic first step.

Next, I might try to use multiplication. The question is: Is there a number that I can multiply each term by to get the next term? To get from 1 to 8, there is only one number that I can multiply 1 by to get 8 - it's 8. But I also notice that I can't multiply 8 by any whole number to get exactly 27 (no remainder).

    1,   8,   27,  64,  __, __
      \/   \/
      *8   *?

So far, I don't see a pattern using addition from term to term, and I don't see a number that I can multiply each term by to get the next. I would quickly do a similar check for subtraction and division, but I don't find anything that works there, either.

Very often the ratio of one term to the next is not nice, so we can give up this idea quickly.

Now, I'm going to look at the position of each number, as sometimes this will reveal a pattern. 1 is in the 1st position, 8 is in the 2nd position, ... (see below)

    1,  8,  27,  64,  __, __
    |   |    |    |    |   |
    1   2    3    4    5   6

Does 1 have anything to do with 1?
Does 2 have anything to do with 8? Does 3 have anything to do with 27? Does 4 have anything to do with 64? This might go somewhere!

1 is a divisor of 1. 2 is a divisor of 8. 3 is a divisor of 27. 4 is a divisor of 64.

So maybe multiplication will work, but not in the way I tried above! I'm going to factor each term in the sequence to look at the divisors.

     1: 1 = 1
     8: (4 * 2) = (2 * 2 * 2)
    27: (9 * 3) = (3 * 3 * 3)
    64: (16 * 4) = (4 * 4 * 4)

The 4s could be factored further, but I'm starting to see a pattern emerge, so I'll keep it as (4 * 4 * 4). If this pattern continues, can you find the 5th and 6th terms of the sequence? (I rewrote the 1's to match the pattern.)

     1 = (1 * 1 * 1)
     8 = (2 * 2 * 2)
    27 = (3 * 3 * 3)
    64 = (4 * 4 * 4)
     ? = (? * ? * ?)
     ? = (? * ? * ?)

Some people will immediately recognize the numbers in this sequence as cubes; others will need to go through the factoring and notice that the factors come in threes. Your level of experience determines how hard this will be. Notice, though, that Teresa had done almost exactly what Doctor Wilko did. If she had noticed that the multipliers 1, 4, 9, and 16 she found are squares - in fact, the square of n - she might have had the answer. Doctor Wilko concluded:

Does this help? The basic idea in number pattern problems is to keep trying possible patterns. As I said at the top, the more of these you do, the easier it gets to see the patterns. As with most things, experience and practice pay off!

Once you have seen enough patterns, you will start recognizing those you have seen before, and have a longer list of things to try in the future.
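Doctor Wilko's checklist (differences, ratios, then position-based rules) can be partly mechanized. Here is a small sketch of the method of finite differences mentioned earlier; it is an illustration of mine, not code from the original exchange. It predicts the next term whenever some level of differences becomes constant, which covers both the cubes here (constant third differences) and the earlier n^2 + 1 example:

```python
def next_term(terms):
    """Predict the next term by the method of finite differences,
    assuming some level of differences is eventually constant
    (i.e., that the sequence follows a polynomial rule)."""
    rows = [list(terms)]
    while len(set(rows[-1])) > 1:
        rows.append([b - a for a, b in zip(rows[-1], rows[-1][1:])])
    rows[-1].append(rows[-1][-1])            # the constant row stays constant
    for upper, lower in zip(reversed(rows[:-1]), reversed(rows[1:])):
        upper.append(upper[-1] + lower[-1])  # rebuild each row upward
    return rows[0][-1]

print(next_term([1, 8, 27, 64]))  # 125: constant third differences -> cubes
print(next_term([2, 5, 10, 17]))  # 26:  differences 3, 5, 7 -> n^2 + 1
```

As the article warns, this always produces an answer, because some polynomial always fits any finite list; it finds *a* rule, not necessarily the intended one.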
{}
Maryland 6 - 2020 Edition

3.03 Ratio tables

Lesson

A ratio compares the relationship between two values. It compares how much there is of one thing compared to another. For example, if a pie recipe calls for 2 tablespoons of brown sugar per 1 apple, we could write this as a ratio 2:1.

What if we wanted to make more than one pie? Well, we'd need to keep the ingredients in equivalent ratios. For example, to make two pies, we'd need 4 tablespoons of sugar and 2 apples.

An easy way to display information about ratios is using a ratio table. Let's display our information from the above example:

| Sugar  | 2 | 4 | 6 | 8 |
| Apples | 1 | 2 | 3 | 4 |

Ratio tables display the relationship between the two quantities and help us find the constant multiplicative factor between them. This constant multiplicative factor, which is also called the constant of proportionality, is a constant positive multiple between the two variables. As we've already learned, ratios can also be represented as fractions, and by simplifying the fraction we can work out the constant of proportionality. In the table above, $\frac{4}{8}=\frac{3}{6}=\frac{2}{4}=\frac{1}{2}$. In other words, the number of apples will always be half the number of tablespoons of sugar; the constant multiplicative factor is $\frac{1}{2}$.

That means if I put 20 tablespoons of sugar in my recipe, I know I would need 10 apples, because that is half of 20. Similarly, if I used 7 apples, I would need double the amount of sugar, which is 14 tablespoons.

#### Worked examples

##### Question 1

Consider the ratio table below.

a) Fill in the missing values in the table.

| Pens           | 10 | 20    | 30 | 40 | 50    | __    |
| Cost (dollars) | __ | 11.60 | __ | __ | 29.00 | 58.00 |

Think: What is the constant multiplicative factor between the two quantities?

Do: $11.60 \div 20 = 0.58$ and $29.00 \div 50 = 0.58$, so the constant multiplicative factor is 0.58. In other words, one pen costs 58 cents. To work out the cost, we multiply the number of pens by 0.58. For example, the cost of 10 pens is $10 \times 0.58 = \$5.80$. Conversely, to work out the number of pens, we divide the cost by 0.58. For example, \$58 would buy $58 \div 0.58 = 100$ pens.

| Pens           | 10   | 20    | 30    | 40    | 50    | 100   |
| Cost (dollars) | 5.80 | 11.60 | 17.40 | 23.20 | 29.00 | 58.00 |

b) Calculate the cost of buying 90 pens.

Think: If 10 pens cost \$5.80, how much would 90 pens cost?

Do: 90 pens is 9 times as many as 10 pens, so we multiply the cost of 10 pens by 9 as well: $5.80 \times 9 = \$52.20$.

We can also solve this algebraically. Let $x$ be the cost of 90 pens. From $10 : 5.80 = 90 : x$ we get $\frac{5.80}{10} = \frac{x}{90}$, so $x = \frac{5.80 \times 90}{10} = \$52.20$.

c) How much would you expect to pay for 5 pens?

Think: The cost of 5 pens would be half the cost of 10 pens.

Do: $5.80 \div 2 = \$2.90$.

#### Practice questions

##### Question 2

Ryan and Valerie are preparing for a party. Ryan blows up 12 balloons in 15 minutes. Valerie blows up 24 balloons in 28 minutes. Who is blowing up balloons faster?

1. Complete the table for the number of balloons Ryan blows up in each time period. Assume that he keeps blowing up balloons at a constant rate.

   | Time (minutes) | 0  | 15 | __ | 45 | 60 |
   | Balloons       | __ | 12 | __ | __ | __ |

2. Complete the table for the number of balloons Valerie blows up in each time period. Assume that she keeps blowing up balloons at a constant rate.

   | Time (minutes) | 0  | 28 | 56 | 84 | __ |
   | Balloons       | __ | __ | __ | __ | 96 |

3. Using the tables above, who is blowing up balloons the fastest? (A) Valerie (B) Ryan

##### Question 3

Fill in the ratio table below, then use it to answer the following questions.

1. | US dollar ($)    | 1  | 2  | 3      | 4  | 5  |
   | Japanese yen (¥) | __ | __ | 320.40 | __ | __ |

2. How many Japanese yen will you be able to buy for \$20?

3. Beth wants to buy a pair of jeans that costs 4272 Japanese yen. What is this equivalent to in US dollars?

##### Question 4

Kate and Laura are selling cakes at a bake sale. For every 6 cakes that Kate sells, she will make \$15. For every 24 cakes that Laura sells, she will make \$53. Whose cakes are more expensive?

1. Fill in the missing gaps in the table for Kate.

   | Cakes sold         | 6  | __ | 18 | __ | 30 |
   | Earnings (dollars) | __ | 30 | __ | 60 | 75 |

2. Fill in the missing gaps in the table for Laura.

   | Cakes sold         | __ | 48 | 72  | 96  | 120 |
   | Earnings (dollars) | 53 | __ | 159 | 212 | 265 |

3. Whose cakes are more expensive? (A) Laura's (B) Kate's

### Outcomes

#### 6.RP.A.3

Use ratio and rate reasoning to solve real-world and mathematical problems, e.g. by reasoning about tables of equivalent ratios, tape diagrams, double number line diagrams, or equations.

#### 6.RP.A.3a

Make tables of equivalent ratios relating quantities with whole-number measurements, find missing values in the tables, and plot the pairs of values on the coordinate plane. Use tables to compare ratios.
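The constant-of-proportionality idea lends itself to a quick computational check. Below is a minimal sketch (my illustration, not part of the lesson; the function name and its arguments are invented for this example) that completes a proportional table from one known pair:

```python
def fill_ratio_table(known_pair, values):
    """Complete a proportional table given one known (quantity, total) pair."""
    quantity, total = known_pair
    unit_rate = total / quantity  # the constant of proportionality
    return {v: round(v * unit_rate, 2) for v in values}

# One pen costs 11.60 / 20 = 0.58 dollars, as in Worked Example 1.
print(fill_ratio_table((20, 11.60), [10, 30, 40, 50, 90, 100]))
# {10: 5.8, 30: 17.4, 40: 23.2, 50: 29.0, 90: 52.2, 100: 58.0}
```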
{}
# How do you simplify $\sqrt[3]{0.001x^3}$?

The answer is $\frac{x}{10}$.

Rewrite the expression under the root symbol as $0.001{x}^{3} = \frac{{x}^{3}}{1000} = \frac{{x}^{3}}{{10}^{3}}$. Now apply $\sqrt[n]{\frac{a}{b}} = \frac{\sqrt[n]{a}}{\sqrt[n]{b}}$:

$\sqrt[3]{\frac{{x}^{3}}{{10}^{3}}} = \frac{x}{10}$
{}
# Camilo Enrique Argoty lecture series

Camilo Enrique Argoty, from Sergio Arboleda University in Bogotá, Colombia, is visiting the Institute for Research in Fundamental Sciences in Tehran, Iran between May 4 and May 12, 2016, to give a series of lectures on the model theory of Hilbert spaces. This mini course gives a panorama of the model theory of Hilbert spaces in two frameworks: continuous first order logic and abstract elementary classes. The program of the sessions is as follows:

First session: Basic Hilbert space model-theoretic properties: categoricity, stability, characterization of types, quantifier elimination, characterization of non-forking.

Second session: Hilbert spaces with a normal operator: elementary equivalence, $\aleph_0$-categoricity up to perturbations, types as spectral measures, quantifier elimination, non-forking, orthogonality and domination.

Third session: Hilbert spaces with a closed unbounded self-adjoint operator: metric abstract elementary classes (MAECs); an MAEC for a Hilbert space with a closed unbounded self-adjoint operator; continuous first order elementary equivalence; types as spectral measures; non-forking, orthogonality and domination.

Fourth session: Model theory of representations of C*-algebras: elementary equivalence; $\aleph_0$-categoricity up to perturbations; the generic representation of a C*-algebra; homeomorphism of the Stone space and quasi-state space and quantifier elimination; non-forking, orthogonality and domination.

Fifth session: Further work: elementary equivalence in *-representations of *-algebras.
{}
If there are 89 chickens, and half of them are brown, and half of the other half are grey, how many chickens are white?

1. ellllaaaxx says:

22 chickens are white

2. toriabrocks says:

22

Step-by-step explanation: Since 89 is odd, the halves have to be rounded to whole chickens. About half, 45, are brown; half of the remaining 44, that is 22, are grey; that leaves 89 - 45 - 22 = 22 white chickens.
{}
A sinusoidal wave has a frequency of 50 Hz with 30 A rms current. Which of the following equations represents this wave?

This question was previously asked in VSSC (ISRO) Technician B (Electronic Mechanic) Previous Year Paper (Held on 25 Sept 2016)

1. i = 42.42 sin 314t
2. i = 60 sin 25t
3. i = 30 sin 50t
4. i = 84.84 sin 25t

Correct answer: Option 1: i = 42.42 sin 314t

Detailed Solution

Concept: A general sinusoidal wave is represented mathematically as $v(t) = A \sin \omega t$, where A is the maximum amplitude and $\omega = 2\pi f$ is the angular frequency, with $f = \frac{1}{T}$ and T the time period of the sinusoidal wave.

The RMS value, or the effective value, of an alternating quantity is calculated as

$$V_{rms} = \sqrt{\frac{1}{T}\int_0^T v^2(t)\,dt}$$

On solving for a sinusoidal waveform, the RMS value is obtained as $V_{rms} = \frac{V_m}{\sqrt{2}}$.

Calculation: Given $I_{rms} = 30$ A, the maximum amplitude is $I_{max} = I_{rms} \times \sqrt{2} = 30 \times \sqrt{2} = 42.42$ A. Also, with f = 50 Hz, $\omega = 2\pi f = 2 \times 3.14 \times 50 = 314$ rad/s. Hence the wave is $i = 42.42 \sin 314t$, which is option 1.
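As a numerical sanity check (my addition, not part of the original solution), we can verify that a 50 Hz sine with peak $30\sqrt{2}$ A has an rms value of 30 A by evaluating the rms integral discretely:

```python
import numpy as np

f = 50.0                     # frequency in Hz
i_max = 30.0 * np.sqrt(2.0)  # peak of a 30 A rms sine, about 42.42 A
omega = 2.0 * np.pi * f      # angular frequency, about 314 rad/s

t = np.linspace(0.0, 1.0 / f, 100_000)  # one full period
i = i_max * np.sin(omega * t)
i_rms = np.sqrt(np.mean(i ** 2))  # discrete form of sqrt((1/T) * integral of i^2 dt)

print(f"{i_max:.2f} {omega:.0f} {i_rms:.2f}")  # 42.43 314 30.00
```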
{}
What is the radius of a circle inscribed in a triangle with sides of length $5$, $12$ and $13$ units? In $\triangle{ABC}$, segments AB and AC have each been divided into four congruent segments. We must find the fraction of the triangle that is shaded.
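The first problem yields to a standard computation (a sketch of mine, not from the original page). Since $5^2 + 12^2 = 13^2$, the triangle is right-angled, so its area is $\frac{1}{2} \cdot 5 \cdot 12 = 30$ and its semiperimeter is $s = \frac{5+12+13}{2} = 15$. The inradius is therefore

$$r = \frac{\text{Area}}{s} = \frac{30}{15} = 2 \text{ units}$$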
{}