# m32r testcase for mvfc $dr,$scr
# mach(): m32r m32rx
	.include "testutils.inc"

	start

	.global mvfc
mvfc:
	# Read cr1 (the condition bit register) with the condition bit
	# cleared; r4 is pre-loaded with 1 so the result must really be 0.
	mvi_h_condbit 0
	mvi_h_gr r4, 1
	mvfc r4, cr1
	test_h_gr r4, 0

	# Set the condition bit and check that cr1 now reads back as 1.
	mvi_h_condbit 1
	mvfc r4, cr1
	test_h_gr r4, 1

	pass
<!--- Copyright (c) 2020 parasquid. See the file LICENSE for copying permission. -->
INA219 Zero-Drift, Bidirectional Current/Power Monitor With I2C Interface
=====================
<span style="color:red">:warning: **Please view the correctly rendered version of this page at https://www.espruino.com/INA219. Links, lists, videos, search, and other features will not work correctly when viewed on GitHub** :warning:</span>
* KEYWORDS: Module,I2C,INA219,voltage,current,power,watts,amps
The [TI INA219](https://www.ti.com/product/INA219) is a voltage and current
monitor designed for bus voltages up to 26V. In Espruino, the [INA219](/modules/INA219.js) module ([About Modules](/Modules)) can be used to interface with it.
There is also an Espruino module for the [INA226](/INA226).
You can buy (see below) a breakout board containing the INA219 with a shunt resistor pre-wired:

Wiring
------
You can wire this up as follows:
| Device Pin | Espruino |
| ---------- | -------- |
| GND | GND |
| VCC | 3.3 |
| SCL | I2C SCL - connect to an I2C-capable pin on Espruino |
| SDA | I2C SDA - connect to an I2C-capable pin on Espruino |
Usage
-----
Example usage (using a Pixl.js):
```
I2C1.setup({ sda: A4, scl: A5 });
const ina219 = require("INA219").connect(I2C1);
console.log(ina219.initDevice());
setInterval(() => {
  g.clear();
  const volts = ina219.getBusMilliVolts() / 1000 + 'V';
  const milliamps = ina219.getBusMicroAmps() / 1000 + 'mA';
  const milliwatts = ina219.getBusMicroWatts() / 1000 + 'mW';
  console.log(volts);
  console.log(milliamps);
  console.log(milliwatts);
  console.log('-----');
  // show on the Pixl.js built-in LCD
  g.drawString(volts, 30, 20);
  g.drawString(milliamps, 30, 30);
  g.drawString(milliwatts, 30, 40);
  g.flip();
}, 1000);
```
The default maximum expected current is 3.2A and the shunt resistor is 0.1 ohms.
`connect()` also accepts an options object as its second parameter.
This allows you to customize the accuracy of the sensor depending on the shunt
resistor and the maximum current you will be measuring.
For example:
```
I2C1.setup({ sda: A4, scl: A5 });
const options = {
  maximumExpectedCurrent: 1, // in amps
  rShunt: 0.1, // in ohms
};
const ina219 = require("INA219").connect(I2C1, options);
```
This will increase the precision of the measurements in exchange for a lower
maximum current that can be measured.
See page 12 of the [datasheet](https://www.ti.com/lit/gpn/ina219) for more
information.
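To get a feel for that trade-off, here is a minimal sketch of the datasheet's calibration arithmetic (the function name and return shape below are illustrative only, not the module's actual API):
```
// Illustrative sketch of the INA219 datasheet (p.12) calibration maths;
// the module may implement this differently internally.
function ina219CalibrationSketch(maximumExpectedCurrent, rShunt) {
  // Smallest resolvable current step, in amps per bit
  const currentLSB = maximumExpectedCurrent / 32768; // 2^15
  // Calibration register value, per the datasheet formula
  const cal = Math.floor(0.04096 / (currentLSB * rShunt));
  return { currentLSB: currentLSB, cal: cal };
}

ina219CalibrationSketch(3.2, 0.1); // ~98uA per bit (the defaults)
ina219CalibrationSketch(1, 0.1);   // ~31uA per bit: finer steps, but capped at 1A
```
Halving `maximumExpectedCurrent` halves the current LSB, so each reading step becomes twice as fine.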
Reference
---------
* APPEND_JSDOC: INA219.js
Using
-----
* APPEND_USES: INA219
Buying
------
INA219 sensors on breakout boards can be purchased from:
* [eBay](http://www.ebay.com/sch/i.html?_nkw=INA219)
Q:
Reducing the cost of multiplications in numbers
I want to reduce the cost of number multiplication in matrix multiplication. Does multiplying a number by $10^n$ or $2^n$ incur an extra cost in computer operations, or is it just appending zeros to the end of the number? If there is no such cost, then an $O(n^2)$ algorithm for MM would be available.
A:
Let us assume the standard IEEE binary floating point representation, that is a positive (machine) number is given by
$$ x = 2^n \left( 1 + \sum_{i=1}^m x_i 2^{-i} \right) $$
for $x_i\in\{0, 1\}$, $n\in\mathbb Z$ with $|n| \le N$, and some fixed $m, N\in\mathbb N$.
Then, $x$ is represented by $(x_1,\dotsc, x_m, n)$.
Multiplying $x$ by a power of two, say $2^p$, results in a signed addition of $n$ and $p$ (with possible over-/underflow). So it is cheap, but it is not simply appending zeros to the number. Multiplying $x$ by a power of ten results in a full floating-point multiplication.
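Explicitly, under the same representation (and provided $|n+p| \le N$),
$$2^p\,x = 2^{\,n+p}\left(1 + \sum_{i=1}^m x_i 2^{-i}\right) \;\longmapsto\; (x_1, \dotsc, x_m,\, n+p),$$
so the significand bits $x_1, \dotsc, x_m$ are untouched.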
I am not sure whether any CPU checks for powers of two and exploits that. Multiplication with numbers of higher precision is more expensive, but the cost is constant for any fixed precision.
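A quick empirical check, in JavaScript (whose numbers are IEEE 754 binary doubles); the specific values are just illustrative:
```
// Scaling by a power of two only adjusts the exponent field, so it is exact:
console.log(0.07 * 128 / 128 === 0.07); // true: both scalings were lossless
// Scaling by a power of ten re-rounds the significand:
console.log(0.07 * 100);                // 7.000000000000001, not exactly 7
```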
5 Reasons Why Children Need to Meditate
This time, gift your child something that will help keep them happy and healthy, and be successful in life.
Think about it. How often would you tell your children not to study and just watch TV instead? Probably not too many times, right? Why, because you obviously want the best for them. And who other than you would know the best for your child?
You always want to give them the best of everything, be it clothes, education or food. You get them the best gifts on their birthdays and do whatever it takes to keep them happy, healthy and successful in life.
And now it’s time to give them the best gift of their life – something which they will cherish lifelong and be grateful, for it will change their life for the better. A simple yet very effective technique called meditation - one of the most valuable skills we can teach our children.
Did you know that regular practice of meditation has several beneficial effects on our children’s emotional, mental and intellectual development? Yes, it helps children tune into themselves, sleep better and develop better social interactions. Now that’s what you as a parent would always dream and wish for your child, right?
#1 To harness the monkey mind
The nature of the mind, when stressed, is to jump from thought to thought like a monkey. If there is tension, then the mind cannot be calm.
Have you noticed that your children are drawn to gadgets and technology like we never had in our youth? They are challenged to think and respond more quickly than ever before. They have the ability to take information in megabytes, play games of speed and imagination, surf and tweet, and respond to constant online communication!
In addition to these abilities, you would probably also want your child to have the capacity to turn their attention completely to one thing and be able to stick with their studies. You would want them to be capable of solving complex problems and of seeing projects through to completion.
#2 To prepare for the challenges of puberty
If you have an adolescent child, you probably would have seen that they have strong emotions and are easily influenced by the society around them.
Meditation gives teenagers access to a great feeling of inner stability and security. It allows them insight into their inner wisdom, helping them stay centered and strong through the hormonal changes in the body.
#3 To de-stress for academic success
Have you noticed that when you are fully absorbed in doing something, such as playing with a baby or watching a beautiful sunset, the mind is not worried, angry or anxious? When the mind is calm, the body is relaxed and in this way, the body follows the mind. If the mind is free of tension, it would function at its peak for exam performance and the body would be healthy. Isn’t that what you want for your child?
How many children do you see frowning and stressing over studies? Their shoulders get hunched and tight, their eyesight gets strained, and digestive and other health problems begin to develop. Ideally, we all hope that our children will have the ability to progress in life, solving complex problems and thinking creatively.
#4 To support healthy emotional development
Is your child experiencing strong emotions such as frustration and fear? It is common through the developmental stages for children to have tantrums and tears. We want them to easily navigate these phases without too much distress.
Children often feel frustrated or irritable when they don’t get their own way and things get difficult as they haven’t yet learnt the virtue of patience. A toddler would scream and cry for a toy while a school child would resist if they are told to do something they don’t like. Technology has increased the expectation of instant solutions which can increase impatience in children.
Overcoming fear is another challenge for children as they are growing up in this fast-paced world. Fears, such as not being accepted and not having friends as well as the primal fears of death and losing loved ones, can trouble children. Emotional stability is essential for healthy growth. These great leaders of the future will need to have courage and emotional strength.
Meditation allows children to return to their natural rhythm and helps them cope with the emotions of frustration and fear. It helps to balance the whole system by supporting emotional development and gives rest to the mind so that they are not overwhelmed by their strong feelings.
#5 To reach their full potential
Through meditation, your children can discover that there is so much more potential in their life, that the stresses in their life are petty, short-term problems and that they can be successful beyond their dreams.
Your child of today will be the leader of the future, and they will need to be centered, strong and good lateral thinkers. The best we can do to support them is encourage them to practice meditation and access the untapped potential within.
Night Of The Creeps (1986)
The Prologue
Isn't it a shame that a great movie that came out in 1986 didn't get the DVD treatment until the year 2009? I always remember hearing about Night of the Creeps, but I had never seen it; even in the days of renting VHS tapes every weekend I somehow missed it. I had a chance to watch it once on TV a few years back, actually, but I had a date that night so I kinda picked the woman over the movie... can't blame me, right?
It was actually that ordeal that led me to finding out there wasn't a DVD for it. I got back at the end of the weekend, went to look it up, and found out, much to my surprise, that there wasn't any DVD of it! I just didn't get it: how can a movie that got the prime time treatment one night on a movie channel (sorry, but I forgot which one, though I wanna say IFC) actually not have a DVD?
But then came 2009, and with it Night of the Creeps FINALLY on DVD. My wait paid off and I finally got to watch it last night, and I believe it was well worth the wait.
The Movie
Alien brain parasites, entering humans through the mouth, turn their hosts into killing zombies, and suddenly I see where Slither got its plot from! But either way this film is just great, folks. We start in space, we dip down into the '50s, and then we get our stuff rolling in the '80s. So when you think of all that being together in one movie, you just have to wonder: how could it NOT be awesome? Kudos to director Fred Dekker, because not only IS this movie awesome, but I now no longer feel The Monster Squad is his best film.
So in a movie that is loaded with the cheese of the '80s (and that's a good thing), what I feel stands out the most is none other than Detective Ray Cameron, played by one of my all-time favorites, Tom Atkins. Had Tom been THIS man in Halloween 3, I feel a ton of kids would have been saved from having their heads nuked that night.
Also, I must show some love to the fact that once people are turned into zombies in this movie, they still don't run! I mean, there was a logical reason to have running zombies here: it's a comedy, and they're fresh cadavers, but they still stick with the "rule", and you gotta respect that. Also, if you watch this you'll see where Peter Jackson got the idea for the lawnmower-versus-zombies scene in Dead-Alive!
The only complaint I have at all about this one, kids, is the fact that there are some parts that just drag. However, that's understandable, because we are also working a teen-movie-type romance plot in with everything. And that's not terrible, because I enjoyed having Cynthia (played by the lovely Jill Whitlow) on the screen; she looks great holding a flamethrower. However, I do wish more of the middle of the movie was like the final 30 minutes... because to me that was just epic stuff!
The Conclusion
I'd say it was better late than never on the DVD release of this movie. It has all the things I like about horror, mainly '80s horror, and it's just a great movie to watch... especially with a group of friends.
I still believe Tom Atkins steals this show hands down. If the man doesn't make you want to say "Thrill me!" every time you pick up your phone after watching this, then there's something wrong with you.
I just want to know why the old school cover got no love with this new release?
---
abstract: 'A total of 188 high-mass outflows have been identified from a sample of 694 clumps from the Millimetre Astronomy Legacy Team 90 GHz survey, representing a detection rate of approximately 27%. The detection rate of outflows increases from the proto-stellar stage to the H II stage, but decreases again at the photodissociation region (PDR) stage, suggesting that outflows are switched off during the PDR stage. An intimate relationship is found between outflow action and the presence of masers, and water masers appear together with 6.7 GHz methanol masers. Comparing the infall detection rate of clumps with and without outflows, we find that outflow candidates have a lower infall detection rate. Finally, we find that outflow action has some influence on the local environment and the clump itself, and this influence decreases with increasing evolutionary time as the outflow action ceases.'
author:
- |
Qiang Li$^{1,3}$[^1], Jianjun Zhou$^{1,2}$[^2], Jarken Esimbek$^{1,2}$, Yuxin He$^{1,2}$, Willem Baan$^{1,4}$, Dalei Li$^{1,2}$, Gang Wu$^{1,2}$, Xindi Tang$^{1,2}$, Weiguang Ji$^{1}$, Toktarkhan Komesh$^{1,3,5}$, Serikbek Sailanbek$^{1,3,5}$\
$^{1}$Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi 830011, P. R. China\
$^{2}$Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Urumqi 830011, P. R. China\
$^{3}$University of the Chinese Academy of Sciences, Beijing 100080, P. R. China\
$^{4}$Netherlands Institute for Radio Astronomy, 7991 PD Dwingeloo, The Netherlands\
$^{5}$Department of Solid State Physics and Nonlinear Physics, Faculty of Physics and Technology, AL-Farabi Kazakh National University,\
Almaty 050040, Kazakhstan\
date: 'Accepted 2019 July 23. Received 2019 June 21; in original form 2019 March 11'
title: 'Effects of infall and outflow on massive star-forming regions.'
---
stars: formation $-$ stars: massive $-$ stars: statistics $-$ ISM: clouds $-$ ISM: molecules $-$ ISM: jets and outflows
INTRODUCTION
============
Star formation is an intrinsically complex process involving the collapse and accretion of matter onto proto-stellar objects, and infall and outflow motions play an important role in the star formation process. However, a comprehensive understanding of both processes, particularly towards massive star-forming regions, is still lacking. In part, this is because of the larger distances involved and the typically more clustered and complex nature of star-forming regions, making it difficult to disentangle the infall and outflow properties of individual objects in a given cluster [@2018MNRAS.477..2455]. In this paper, we continue to study the dynamical processes in massive star-forming regions to further understand star formation.
The SiO(2-1) (86.847 GHz) emission is found to be an excellent indicator of active outflows from young stellar objects [@2007ApJ...663..1092; @2014MNRAS.440..1213], and SiO(2-1) line emission has been confirmed to be a good indicator of outflows in massive star-forming regions. In the cold diffuse interstellar medium (ISM), shocked dust grains can sublimate, releasing frozen silicon into the gas phase to form SiO; the SiO can either freeze back out onto dust grains or oxidize to form SiO$_{2}$ on a time scale of 10$^{4}$ years [@1997IAUS..182..199P; @2007ApJ...663..1092]. As HCO$^{+}$ emission may potentially trace remnant, momentum-driven, 'fossil' outflows [@2007ApJ...663..1092], SiO may be used as a tracer of active (jet-driven) outflows. The infall and outflow events in massive star-forming regions are closely connected to the turbulence in the ISM, which plays a dominant role in regulating massive star formation [@2013MNRAS.436..1245; @2015MNRAS.453..3245]. Because outflows from massive stars may contribute to driving the turbulence in the ISM (e.g., @2013MNRAS.434..2313 [@2014ApJ...790...128]), their feedback helps to address two main questions: (a) do outflows inject enough momentum to maintain the turbulence, and (b) can outflows couple with the clump gas and drive turbulent motions [@2014prpl.conf..451F]?
In this study, we identified 188 outflow candidates among 694 clumps from a previous infall survey [@2015MNRAS.450..1926; @2016MNRAS.461..2288]. Outflow candidates were identified from HCO$^{+}$(1-0) PV diagrams. The remainder of this paper is organized as follows. A brief introduction to the survey and our sample selection is given in Section \[sec2\]. In Section \[sec3\], we identify 188 high-mass outflows and calculate their outflow parameters. Section \[sec4\] discusses the relationship between outflow and infall, and matches our sample with maser surveys to check the relationship between masers and outflows. Finally, we discuss the contribution of outflows to the turbulence in the ISM and consider the influence of outflows on the velocity dispersion at different evolutionary stages of star-forming regions. We give a summary in Section \[sec5\].
ARCHIVAL DATA and OUR SAMPLE {#sec2}
============================
MALT90 Survey
--------------
The Millimetre Astronomy Legacy Team 90 GHz survey (MALT90) aims to characterise the physical and chemical evolution of high-mass star-forming clumps [@2013PASA...30...57J]. The broad frequency coverage and fast-mapping capability of the Australia Telescope National Facility Mopra 22 m single-dish telescope were exploited to simultaneously map 16 molecular lines near 90 GHz for each target source (each map is 3 arcmin $\times$ 3 arcmin in size). MALT90 contains over 2000 dense cores identified in the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) of 870 $\mu$m continuum emission, covering the Galactic plane in the longitude range $-60^{\circ}$ to $+20^{\circ}$. These dense cores span the complete range of evolutionary stages of high-mass star formation, from pre-stellar to proto-stellar to H II regions, and finally to photodissociation regions (PDRs) [@2013PASA...30...57J]. The spatial and spectral resolution of the MALT90 survey are approximately 36 arcsec and 0.11 km s$^{-1}$, respectively. The typical rms noise of the antenna temperature is $\sigma$ = 0.25 K per channel of 0.11 km s$^{-1}$. We used the HCO$^{+}$(1-0) (89.189 GHz) and SiO(2-1) (86.847 GHz) emission lines of the MALT90 sources to identify outflows, and the N$_{2}$H$^{+}$(1-0) (93.174 GHz) emission lines from the survey to derive the full width at half maximum (FWHM) line width and the systemic velocity of the clumps.
Methanol MultiBeam Survey
-------------------------
Methanol masers are well-known indicators of the early phases of high-mass star formation. @2014MNRAS.444..566D found a total of 58 $^{13}$CO(3-2) (330.588 GHz) emission peaks in the vicinity of such maser positions and found evidence of high-velocity gas in all cases. The Methanol MultiBeam (MMB) survey mapped the Galactic plane for 6.7 GHz Class-II masers using a 7-beam receiver on the Parkes radio telescope with a sensitivity of 0.17 Jy beam$^{-1}$ and a half-power beamwidth of 3.2 arcmin [@2009MNRAS.392...783]. Subsequent Australia Telescope Compact Array (ATCA) observations (in the 6-km configuration) provided high-resolution positions with an accuracy of $\sim$ 0.4 arcsec [@2010MNRAS.404..1029]. The MMB is complete in the range $186^{\circ} < \ell < 20^{\circ}$ and $|b| < 2^{\circ}$ (@2010MNRAS.404..1029 [@2011MNRAS.417..1964; @2010MNRAS.409...913; @2012MNRAS.420..3108]). The MMB catalogue provides the velocity of the peak component and the flux density as measured from both the lower-sensitivity Parkes observations and the high-sensitivity ATCA follow-up observations.
H$_{2}$O Southern Galactic Plane Survey
---------------------------------------
Water masers usually trace outflows, at a generally earlier evolutionary phase than OH masers. Our aim is to test an evolutionary sequence for water and Class II methanol masers. The H$_{2}$O Southern Galactic Plane Survey (HOPS) was carried out at 22 GHz with the Mopra Radio Telescope with a broad-band backend and a beam size of approximately 2 arcmin [@2011MNRAS.416.1764W]. The root mean square (rms) noise levels are typically between 1 and 2 Jy, with 95 per cent under 2 Jy. HOPS found 540 H$_{2}$O masers in the range $290^{\circ} < \ell < 30^{\circ}$ and $|b| < 0.5^{\circ}$. Subsequent high-resolution observations with the ATCA show a large range of noise levels, from 6.5 mJy to 1.7 Jy, but with 90 per cent of noise levels in the range of 15 to 167 mJy [@2014MNRAS.442..2240]. The ATCA observations had a beam size ranging from 0.55 $\times$ 0.35 arcsec to 14.0 $\times$ 10.2 arcsec.
Sample Selection
----------------
@2015MNRAS.450..1926 [@2016MNRAS.461..2288] selected 732 high-mass clumps from the MALT90 survey with N$_{2}$H$^{+}$(1-0), HNC(1-0), and HCO$^{+}$(1-0) emission lines detected at S/N $>$ 3 and with an angular separation between any two sources larger than the Mopra beam size (36 arcsec at 90 GHz). These clumps, with N$_{2}$H$^{+}$, HNC, and HCO$^{+}$ emission lines detected at S/N $>$ 3, are more evolved than the 8016 ATLASGAL clumps at corresponding evolutionary stages. The distance range is from 0.5 kpc to 17 kpc. In Figure \[figa1\], our sources show a slightly different distance distribution from that of all ATLASGAL sources (from 4 kpc to 10 kpc). Thirty-eight clumps were excluded because they are close to the edge of the 3 arcmin $\times$ 3 arcmin maps, which may affect the reliability of the outflow identification. Therefore, our sample included 694 massive star-forming regions, in which we then searched for outflow candidates. Since the noise is higher at the edges of the maps, a cropped 14 $\times$ 14 pixel area (approximately 2 arcmin $\times$ 2 arcmin) centred on the Galactic coordinates was selected for the ATLASGAL sources, which may result in an underestimate of the outflow parameters.
RESULTS {#sec3}
=======
Outflow Identification
----------------------
For each source in the sample, outflows were identified by checking for extended wings representative of outflows in the HCO$^{+}$ PV diagrams, with cuts along Galactic latitude and longitude. We only check PV diagrams in these two orthogonal directions because the resolution of the telescope is too low to adequately determine the outflow axis; we then choose the direction with the more extended wing. When a velocity bulge appears away from the central velocity at a level of at least 3 rms, we consider an outflow to be present, and vice versa. This step was completed by visual inspection, so the reliability of some outflow candidates may be low. High-mass outflow candidates were identified in this manner. An example of a high-mass outflow candidate in an HCO$^{+}$ PV diagram is displayed in Figure \[fig1\]. The contour levels of all clumps start from 3$\sigma$. The PV diagrams of the 188 clumps are shown online. Of the 188 outflow candidates, 43 sources had only one clear lobe, the red or blue lobes of 24 sources were contaminated by gas along the line of sight, 13 sources had overlapping blue and red lobes, and 3 sources had no distance estimate. This leaves a sample of 105 clumps with suitable bipolar outflows and reliable distances for which the outflow parameters may be calculated. In addition, the SiO(2-1) emission in our sample clumps was used as an indicator of active outflows in massive star-forming regions, where the detected SiO emission is caused by outflow-driven shocks. A total of 198 sources were found with SiO emission above 3$\sigma$ in integrated intensity, and 85 of these clumps with stronger SiO emission also show HCO$^{+}$ line wings. The SiO integrated intensity maps of the 85 clumps with stronger SiO emission are presented online, and an example of an SiO detection is shown in Figure \[fig2\].
Outflow Parameters
------------------
We determine the velocity ranges of high-velocity gas emission from the PV diagrams and choose 50% of the peak emission as the beginning of the high-velocity wings, at which a velocity gradient clearly begins and is spatially extended. Assuming that the HCO$^{+}$ emission in the line wings is optically thin and in local thermodynamic equilibrium (LTE), the following values may be adopted for the abundance ratio and the excitation temperature: $[H_{\rm 2}/HCO^{+}] = 10^{8}$ and $T_{ex}$ = 15 K [@1997ApJ...483..235T]. The physical properties of the outflows may then be calculated following the procedure of @1991ApJ...374..540G [@2015ApJS..219...20L]. The total column density of the outflow gas is given by $$N(HCO^{+}) = \frac {3k^2T_{\rm ex}}{4\pi^3\mu^2_{\rm d}h\nu^2\exp(-h\nu/kT_{\rm ex})}\int T_{\rm mb}d\upsilon$$ where the Boltzmann constant $k$ = 1.38 $\times$ 10$^{-16}$ erg K$^{-1}$, the Planck constant $h$ = 6.626 $\times$ 10$^{-27}$ erg s, the dipole moment $\mu_{\rm d}$ = 3.89 $\times$ 10$^{-18}$ esu cm, the transition frequency $\nu$ = 89.188526 GHz, and the velocity $v$ is in km s$^{-1}$. $T_{\rm mb}$ is the brightness temperature, calculated from the antenna temperature by dividing it by a main beam efficiency of 0.49 [@2013PASA...30...57J]. The mass for each pixel in the defined outflow lobe area is computed by $$M_{\rm pixel} = N(HCO^{+})[H_{\rm 2}/HCO^{+}]\mu_{\rm H_{\rm 2}}m_{\rm H}A_{\rm pixel},$$ where the mean molecular weight $\mu_{\rm H_{\rm 2}}$ = 2.72, the mass of the hydrogen atom $m_{\rm H}=1.67 \times$ 10$^{-24}$ g, and the area of each pixel $A_{\rm pixel}$ within the outflow lobe is contained within the 3$\sigma$ contours of the HCO$^{+}$ integrated intensity.
The total mass, momentum (P$_{\rm out}$), and kinetic energy (E$_{\rm out}$) of each outflow are obtained by summing over all spatial pixels defined by the lowest contours. Finally, the outflow mass rate, the mechanical luminosity, and the mechanical force are calculated as $\dot{M}_{\rm out} = M_{\rm out}/t_{\rm dyn}$, $L_{\rm out} = E_{\rm out}/t_{\rm dyn}$, and $F_{\rm out} = P_{\rm out}/t_{\rm dyn}$, respectively; for details, see @li. The dynamical timescale can be calculated by two methods. One is the ratio of the maximum separation of the outflow lobes (the length of the blue and red lobes) to the terminal speed of the outflow measured from the spectral line wings. The other is the ratio of the separation between the peaks of the blue and red lobes (the distance between the lobes) to the mean outflow velocity, defined as $P_{\rm out}/M_{\rm out}$. We adopt the second method because the first requires higher observational sensitivity. @1999ApJ...522..921G indicates that in the optically thin limit, $T_{\rm mb}(4-3)/T_{\rm mb}(1-0)$ yields a good approximation of the excitation temperature when it is less than 15 K, and a lower limit when it is higher than this value. Our choice of $T_{\rm ex}(J=1-0)$ = 15 K implies that the emission in the line wings is optically thin; this excitation temperature is a lower limit when the opacity increases. The outflow mass (and all other properties that depend on mass) reaches its smallest value at an excitation temperature of 5 K; above 5 K, the outflow properties only increase with increasing excitation temperature, so the outflow mass (and all mass-dependent properties) will be a lower limit when the opacity increases. Many of the uncertainties in distance, abundance, and inclination angle [@1990ApJ...348...530] are systematic and have little effect on the overall distribution and correlations between the individual quantities. Consequently, the homogeneity of our sample and the large number of objects ensure robust results from our statistical analysis. The sensitivity is unlikely to significantly affect the total $M_{\rm out}$ because of the steeply declining nature of the outflow mass spectra [@2014ApJ...783....29]. The outflow mass would be overestimated when the low-velocity outflow emission contains some ambient cloud emission. We adopt an average inclination angle of $\theta$ = 57.3$^{\circ}$ to correct the results [@2015ApJS..219...20L; @2015MNRAS.453..3245]; the corresponding correction factors for the momentum and kinetic energy of the outflows are 1.9 and 3.4, respectively. The inclination-corrected physical properties of the outflows are partly listed in Table \[tab1\], and the whole list is available online as Table A1. The ratio $M_{\rm out}/M_{\rm clump}$ has an average of 0.03 with a spread of less than one order of magnitude, similar to the mean ratio of 4% found in previous work and 5% in @2018ApJS..235.....3.
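In compact form, the adopted second method reads
$$t_{\rm dyn} = \frac{d_{\rm lobes}}{\langle v_{\rm out}\rangle}, \qquad \langle v_{\rm out}\rangle = \frac{P_{\rm out}}{M_{\rm out}},$$
where $d_{\rm lobes}$ is the separation between the peaks of the blue and red lobes; the derived rates $\dot{M}_{\rm out}$, $L_{\rm out}$, and $F_{\rm out}$ then follow by dividing $M_{\rm out}$, $E_{\rm out}$, and $P_{\rm out}$ by this timescale.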
---------------- ------- -------- -------- ------ ------- ------- ------- --------------- --------------- ------------
G000.316-0.201 451.1 2272.3 239.53 19.3 17.97 9.611 23.37 \[13.8,16.8\] \[20.3,23.4\] 8.0(0.12)
G000.546-0.852 58.3 414.5 61.26 8.5 7.44 5.582 6.86 \[10.5,13.8\] \[18.5,24.5\] 2.0(0.12)
G005.899-0.429 22.4 87.9 6.93 45.8 0.29 0.117 0.49 \[2.9,4.4\] \[8.4,9.8\] 2.7(0.15)
G005.909-0.544 23.8 101.7 8.69 3.7 4.19 1.819 6.44 \[12.0,13.9\] \[17.0,19.0\] 3.27(0.15)
G006.216-0.609 18.4 78.0 6.38 11.2 1.06 0.441 1.64 \[14.8,16.1\] \[19.5,20.8\] 3.58(0.15)
---------------- ------- -------- -------- ------ ------- ------- ------- --------------- --------------- ------------
\
$^{\rm{a}}$ Sources are named by galactic coordinates of the maximum intensity in the ATLASGAL sources.
DISCUSSION {#sec4}
==========
Detection Statistics of Outflows
--------------------------------
From the 694 sources, a total of 188 outflow candidates were identified, giving a detection rate of 27%. This detection rate is smaller than found in previous studies (66% in @2018ApJS..235.....3, 66% in @2015MNRAS.453...645, 57% in @2001ApJ...552L.167Z [@2005ApJ...625..864Z], and 39%$\sim$50% elsewhere) but is similar to the 20% detection rate found in @li. A lower detection rate is expected for sources located in the inner region of the Galactic plane, with higher interstellar extinction and internal absorption.
Among the 694 clumps in our sample, there are 61 pre-stellar sources, 278 proto-stellar sources, 230 H II regions, 66 photodissociation regions (PDRs), and 59 with an uncertain classification. These classifications were obtained from @2015ApJ...815..130G and @2016MNRAS.461..2288 and are based on 3.6, 4.5, 8.0, and 24 $\mu$m *Spitzer* images (see @2015ApJ...815..130G for details). We note that 3 outflow candidates at the pre-stellar stage may be misclassified, and we reclassify these as proto-stellar, leaving 58 pre-stellar sources and 281 proto-stellar sources. Outflow line wings were detected using HCO$^{+}$ towards 75 proto-stellar sources (75/281 or 27%), 88 H II regions (88/230 or 38%), and 17 PDRs (17/66 or 26%). This indicates that the outflow detection rate increases from the proto-stellar to the H II evolutionary stage. If outflows had merely been switched off, the detection rate at the PDR stage should remain similar to that of the previous stage. The lower detection rate at the PDR stage instead suggests that the previously jet-entrained gas is no longer detected and has likely been blown away, together with the circumstellar envelope material.
Similarly, among the 188 outflow candidates, SiO emission was detected towards 36 proto-stellar sources (36/75 or 48%), 42 H II regions (42/88 or 48%), and 3 PDR regions (3/17 or 18%). We select SiO emission as a tracer of active outflows (not 'fossil' outflows). If there is a stellar wind, both the HCO$^{+}$ and SiO gas would be blown away to a lower column density and would not be detected. The sharply decreased detection rate of HCO$^{+}$ and SiO from the H II to the PDR stage supports the idea that outflows have mostly been switched off, and that the circumstellar envelope material has been blown away, during the PDR stage.
Outflows and Masers
-------------------
A search for masers within a beam of each clump shows that, among the 694 clumps, 123 are associated with maser sources: 82 clumps with methanol masers and 62 clumps with water masers. Sixty-two of these 123 clumps are outflow candidates, giving a high outflow detection rate (50%). This suggests an intimate relationship between outflow action and the presence of masers. Forty clumps with methanol masers (40/82; 49%) and thirty-four with water masers (34/62; 55%) are outflow candidates. Among the 82 clumps associated with methanol masers, there are 1 pre-stellar source, 37 proto-stellar sources, 40 H II regions, 3 PDR sources, and 1 source with an uncertain classification. Outflows are detected towards 13 proto-stellar sources (13/37; 35%), 25 H II regions (25/40; 63%), and 1 PDR source (1/3; 33%). Among the 62 clumps associated with water masers, there are 1 pre-stellar source, 23 proto-stellar sources, 34 H II regions, 1 PDR source, and 3 sources with an uncertain classification. Outflows are detected towards 10 proto-stellar sources (10/23; 43%), 22 H II regions (22/34; 65%), and 0 PDR sources (0/1; 0%). The similar outflow detection rates towards methanol and water masers at the proto-stellar and H II stages possibly mean that water masers appear at nearly the same stage as 6.7 GHz methanol masers. We note that the sample at the PDR stage is small, so its detection rate may not be accurate.
![ The outflow mass rate ($\dot{M}_{\rm out}$) of clumps compared with their infall mass rate from the literature. The uncertainties are based on the distance uncertainty.[]{data-label="fig3"}](figure4.eps){width="50.00000%"}
Outflow and Infall
------------------
@2015MNRAS.450..1926 [@2016MNRAS.461..2288] used the optically thin line of N$_{2}$H$^{+}$ and the optically thick lines of HNC and HCO$^{+}$ to search for infall candidates. By calculating an asymmetry parameter, they determined the red- and blue-skewed profiles of the optically thick lines of each clump. Infall candidates must show a blue-skewed profile in at least one optically thick line, no red-skewed profile in the other optically thick line, and no spatial difference in the mapping results.
Among the 694 clumps, there are 222 infall candidates (following @2015MNRAS.450..1926 [@2016MNRAS.461..2288]). Infall is detected in 35% (178 out of 506) of the clumps without outflows, but in only 23% (44 out of 188) of the outflow candidates. This suggests that the infall detection rate is lower towards outflow candidates. Among the outflow candidates, 31% of proto-stellar clumps (23 out of 75), 23% of H II regions (20 out of 88), and 0% of PDR clumps (0 out of 17) show evidence of infall. Among non-outflow candidates, 40% of proto-stellar clumps (82 out of 206), 37% of H II regions (52 out of 142), and 16% of PDRs (8 out of 49) show evidence of infall. The infall detection rate towards outflow candidates is thus always lower at the corresponding stage, indicating that outflow action decreases the infall detection rate at each evolutionary stage; outflows entraining the surrounding gas may have some effect on the infall process. The infall detection rates towards non-outflow candidates are rather constant from the proto-stellar to the H II stages, whereas those towards outflow candidates change more strongly between these stages. If the outflow effect were constant, the infall detection rate towards outflow candidates should also be constant, but it is not. This suggests that the effect of outflows on their environment becomes more significant with time.
We find that infall is detected in 28% of the 85 sources with both SiO emission (evidence of jets) and outflow detections. Among these 85 sources, evidence of infall is found in 33% of the proto-stellar clumps (12 out of 36), 26% of the H II regions (11 out of 42), and none of the 3 PDRs. There thus seems to be no significant difference in the infall detection rate towards outflow candidates with or without SiO emission at corresponding evolutionary stages. Since candidates are identified through line-of-sight velocities, outflow and infall are detected along the same direction. Therefore, outflow action produces some effect on the infall process for both jet and 'fossil' outflows.
We calculated the outflow parameters of 105 clumps, including 22 infall candidates whose infall mass rates were obtained from @2015MNRAS.450..1926 [@2016MNRAS.461..2288]. Figure \[fig3\] shows the outflow mass rate ($\dot{M}_{\rm out}$) versus the infall mass rate for these 22 sources. There is no clear relationship between the two (Spearman's rank correlation: $\rho$ = 0.3, p-value = 0.18). The basic information for part of the clumps is listed in Table \[tab2\], and the whole list is available online as Table A2.
----------------- ------ ------ ------- --------------- ------
G010.288-00.124 2.76 2.74 4.136 HII i s
G010.299-00.147 3.37 3.38 4.579 HII o s
G010.323-00.161 3.44 2.29 3.027 HII wm
G010.329-00.172 4.07 1.97 2.799 PDR
G010.342-00.142 2.94 2.7 3.837 Proto-stellar owms
----------------- ------ ------ ------- --------------- ------
Clump turbulence {#lab41}
----------------
![ The turbulent energy in clumps compared with the outflow energy. The magenta stars, red circles, blue triangles, and black squares refer to sources with uncertain, proto-stellar, H, and PDR classification, respectively. The solid line shows E$_{out}$ = E$_{turb}$. The uncertainties are based on the distance uncertainty.[]{data-label="fig4"}](figure5.eps){width="50.00000%"}
The presence of outflows in clumps may have a cumulative impact on the level of turbulence in molecular clouds. One way to quantify this effect is to compare the total energy of the outflow with the cloud's total turbulent kinetic energy, which may be estimated as $E_{\rm turb}=(3/16\ln2)\,M_{\rm cloud}\times{\rm FWHM}^{2}$ [@2001ApJ...554...132]. Assuming that the N$_{2}$H$^{+}$ gas temperature is equal to the dust temperature in LTE, we estimate the expected thermal motions of N$_{2}$H$^{+}$ and find them to be insignificant in the FWHM: the average ratio between the thermal velocity dispersion and the total velocity dispersion is 0.12, meaning that thermal motions contribute little to the FWHM. A comparison of the turbulent energy and the outflow energy of our sample sources in Figure \[fig4\] shows that the outflow energy correlates with, and is comparable to, the turbulent energy (Spearman's rank correlation coefficient 0.83 and p-value $\ll$ 0.001). This indicates a strong relationship between the turbulent energy and the outflow energy in these sources.
![ The N$_{2}$H$^{+}$ FWHM distribution of clumps with detected outflows (red histogram) and without detected outflows (grey filled histogram). The dashed vertical black line and the solid vertical black line are the median values of N$_{2}$H$^{+}$ FWHM for clumps without and with outflow, respectively. The FWHM distribution of clumps with detected outflows has been scaled to FWHM distribution of clumps without detected outflows.[]{data-label="fig5"}](figure6.eps){width="50.00000%"}
Because there is no correlation between the outflow and the clump mass, the effect of distance may be excluded, and we need only consider whether the outflow has a significant effect on the clump FWHM and hence on the turbulent energy. As a first test, the FWHM distributions of the N$_{2}$H$^{+}$ lines are compared for clumps with and without outflows in Figure \[fig5\]. The outflow candidates show a slightly higher median FWHM than the clumps without detected outflows, which suggests that outflows do have a relatively significant effect on the N$_{2}$H$^{+}$ FWHM of the clump and hence on the turbulent energy. A Kolmogorov–Smirnov (K-S) test suggests that the two samples are indeed drawn from different parent distributions (statistic = 0.24 and p-value $\ll$ 0.001).
![ The N$_{2}$H$^{+}$ FWHM distributions of clumps with a detected outflow (red histogram) and without detected outflow (grey filled histogram) at each evolutionary stage. The medians for clumps with and without outflow in each stage are indicated by the dashed vertical black line and the solid vertical black line, respectively.[]{data-label="fig6"}](figure7.eps){width="50.00000%"}
Next, clumps with and without outflows can be compared at each corresponding stage to determine whether outflows have a significant effect on the clump FWHM at different evolutionary stages (Figure \[fig6\]). Clumps with outflows have a larger median N$_{2}$H$^{+}$ FWHM than clumps without outflows at each evolutionary stage, suggesting that outflows affect the clump FWHM throughout. K-S tests suggest that clumps with and without outflows are drawn from different parent distributions (statistic = 0.21 and p-value = 0.01 for proto-stellar clumps; statistic = 0.28 and p-value $\ll$ 0.001 for H II regions). To estimate the contribution of the outflow to the FWHM of each clump, we calculate the median ratio of the N$_{2}$H$^{+}$ FWHM between clumps without and with outflows for proto-stellar sources, H II regions, and PDRs as 0.92, 0.87, and 0.92, respectively. The median ratio represents the non-outflow contribution (non-outflow contribution/all contributions; the median N$_{2}$H$^{+}$ FWHM for non-outflow detections divided by that for outflow detections). This ratio decreases slightly between the proto-stellar and H II stages, suggesting that during this interval the outflow contribution to the FWHM increases. However, the ratio increases slightly from the H II to the PDR stage, suggesting that the outflow contribution to the FWHM then decreases. A K-S test for PDRs shows a much smaller difference and a 33% probability that clumps with and without outflows are drawn from the same distribution (statistic = 0.26 and p-value = 0.33), which supports the idea that the outflow contribution may be disappearing. This may be because the outflow-generating mechanism in the PDR clumps of our sample has already shut down and the observed outflow gas represents a 'fossil' outflow from previous accretion episodes. This comparison provides weak evidence that the influence of outflows on the clump FWHM increases from the proto-stellar to the H II stage and decreases with increasing evolutionary time as the outflow action ceases. Note that the number of pre-stellar clumps in our sample is small.
![The N$_{2}$H$^{+}$ FWHM distribution of outflow candidates with and without SiO emission. The red histogram represents clumps with SiO emission and the grey filled histogram represents clumps without SiO emission. The medians for outflow candidates with and without SiO emission are indicated by the dashed vertical black line and the solid vertical black line, respectively. The FWHM distribution of outflow with SiO emission has been scaled to FWHM distribution of outflow without SiO emission.[]{data-label="fig7"}](figure8.eps){width="50.00000%"}
A similar comparison of outflow candidates with and without SiO emission shows that outflow candidates with SiO emission have a larger median FWHM, which suggests that outflows with SiO emission contribute to the FWHM (Figure \[fig7\]). A K-S test suggests that the two samples are drawn from different parent distributions (statistic = 0.30 and p-value $\ll$ 0.001). SiO emission appears to be a good tracer of active outflows (not 'fossil' outflows), since outflow-driven shocks can sublimate dust grains and release the frozen silicon into the gas phase, forming SiO; the SiO can then either freeze back onto the dust grains or oxidize to form SiO$_{2}$ after a few 10$^{4}$ yr [@1997IAUS..182..199P; @2007ApJ...663..1092]. That outflow candidates with SiO emission have a larger median FWHM than those without also indicates that the outflow contribution decreases with time as the outflow action ceases. This means that outflows do not have a significant cumulative impact on the turbulence levels.
![The N$_{2}$H$^{+}$ FWHM distributions of local regions of clumps with and without outflows. The red histogram represents clumps with an outflow and the grey filled histogram represents clumps without an outflow at each stage. The medians for clumps with and without outflows at each stage are indicated by the dashed vertical black line and solid vertical black line, respectively.[]{data-label="fig8"}](figure9.eps){width="50.00000%"}
![The variation of the ratio of the outflow contribution to the FWHM (black solid line) and the turbulent energy (red dashed line). The squares denote results for entire clumps and the circles results for local regions.[]{data-label="fig9"}](figure10.eps){width="50.00000%"}
The influence of outflows on the FWHM of the local region may be determined from the FWHM of the pixel at the peak flux position in the 870 $\mu$m continuum emission (hereafter, the pixel FWHM). Figure \[fig8\] shows the distributions of the pixel FWHM of clumps with and without outflows at each evolutionary stage. Clumps with outflows have larger median pixel FWHM values than clumps without outflows at corresponding evolutionary stages, and the pixel FWHM increases from the proto-stellar to the H II stage regardless of the presence of an outflow. This suggests that outflows contribute to the pixel FWHM. Likewise, the ratios of the pixel FWHM values between clumps without and with outflows for proto-stellar sources, H II regions, and PDRs are 0.84, 0.81, and 0.93, respectively. These ratios are relatively similar from the proto-stellar to the H II stage, which suggests that the outflow contribution may be constant at these stages. K-S tests for the local regions show the same statistical significance as for the entire clumps (statistic = 0.25 and p-value = 0.002 for proto-stellar clumps; statistic = 0.33 and p-value $\ll$ 0.001 for H II regions; statistic = 0.22 and p-value = 0.53 for PDRs). All median values are listed in Table \[tab3\].
In Figure \[fig9\], we plot the variation of the outflow contribution to the FWHM and to the turbulent energy, where the outflow contribution ratio is $1 -$ "non-outflow contribution"/"all contributions". The outflow contributes about 20% of the FWHM in the local region at the H II stage (the non-outflow contribution is about 81%) and about 10% even in the entire clumps. According to $E_{\rm turb}=(3/16\ln2)\,M_{\rm cloud}\times{\rm FWHM}^{2}$, the outflow contributes up to 35% of the turbulent energy in the local region at the H II stage ($1-0.81^{2}$), and at least 15% in the clump at the early stages of massive star formation, which is lower than reported in some previous studies. The outflow contribution decreases with time once the outflow action stops, indicating that outflows do not have a significant cumulative impact on the turbulence levels over several outflow episodes; the outflow energy contribution to the turbulent energy thus increases insignificantly with evolutionary stage. Our results suggest that the outflow energy is large enough to maintain the turbulent energy in the clumps and that the outflow has some (though not significant) effect on the turbulent energy. However, although there is a good correlation between the outflow energy and the turbulent energy (see Figure \[fig4\]), we cannot determine whether the outflow contributes significantly to the turbulent energy in the clumps. This is consistent with @2015MNRAS.453...645, who also reported a good correlation between the outflow and turbulent energies but found that the core turbulence is not driven by the local input from the outflows. By contrast, @2016MNRAS.457L..84D and @2018ApJS..235.....3 reported no correlation between the turbulent and outflow energies, and @2018MNRAS.473..1059 found that the clump mass and evolutionary stage are uncorrelated. For massive clumps of similar mass, an obvious difference in turbulent energy between clumps with and without outflows might be observable; statistically, however, the mass term in the turbulent energy is poorly constrained at each evolutionary stage. All these findings imply that the outflow action has some impact on the local environment and the cloud itself, but the outflow contribution does not primarily drive the turbulence. This is consistent with several other studies suggesting that turbulence is mostly driven by large-scale mechanisms.
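As a concrete check of the arithmetic above, using the H II stage pixel-FWHM ratio of 0.81 from Table \[tab3\] and assuming similar clump masses so that $E_{\rm turb}\propto{\rm FWHM}^{2}$:
$$\frac{E_{\rm turb,\,no\ outflow}}{E_{\rm turb,\,outflow}} = \left(\frac{{\rm FWHM}_{\rm no\ outflow}}{{\rm FWHM}_{\rm outflow}}\right)^{2} = 0.81^{2} \approx 0.66, \qquad 1 - 0.66 \approx 0.34,$$
consistent with the $\sim$35% local contribution quoted above.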
---------------------------- ------------------- ---------------------- ----- ------ -- -- --
FWHM(N$_{2}$H$^{+}$) all clumps 1 506 2.96
(kms$^{-1}$) Proto-stellar 1 206 2.92
2 75 3.18
H 1 142 3.01
2 88 3.45
PDR 1 49 3.24
2 17 3.53
FWHM(N$_{2}$H$^{+}$) outflow candidate without SiO emission 103 3.24
(kms$^{-1}$) with SiO emission 85 3.63
pixel FWHM(N$_{2}$H$^{+}$) Proto-stellar 1 206 2.03
(kms$^{-1}$) 2 75 2.41
H 1 142 2.19
2 88 2.72
PDR 1 49 2.2
2 17 2.36
---------------------------- ------------------- ---------------------- ----- ------ -- -- --
\
Notes. Column 3 notes: (1)–non-outflow candidates, (2)–outflow candidates
SUMMARY {#sec5}
=======
A search for outflows towards 694 star-forming regions identified in previous studies based on the MALT90 survey yielded 188 high-mass outflow candidates, among which 85 clumps also show SiO emission. The outflow properties were calculated for 105 sources with well-defined bipolar outflows and reliable distances. The parameters of these 105 sources may be underestimated owing to the adopted excitation temperature and opacity, but overestimated owing to the inclusion of ambient material; other factors (e.g., abundance and distance) have little effect on the overall distribution and correlations between individual quantities. The main results of this study can be summarized as follows:
1. We identified 188 high-mass outflows from a sample of 694 clumps, a detection rate of approximately 27%. We found that the outflow detection rate increases from the proto-stellar to the H II stage. A decrease in the detection rate at the PDR stage is likely a result of outflows switching off during this stage.
2. We found that there is an intimate relationship between outflow action and the presence of masers and that water masers may appear at a similar stage to 6.7 GHz methanol masers.
3. Outflow action decreases the infall detection rate at each evolutionary stage, and there is no obvious relationship between the infall mass rate and the outflow mass rate of clumps.
4. The outflow action makes a small contribution to the turbulence in the clumps, and this contribution decreases with time as the outflow action ceases. Meanwhile, the outflow contribution to the turbulent energy is similar from the proto-stellar to the H II stages. Therefore, there is no significant cumulative impact on the turbulence levels after repeated outflow action.
Because the MALT90 data have relatively high noise, some outflow candidates were likely missed and the reliability of some candidates may be low, which has some influence on our statistical results. Data with lower noise are thus needed to further examine the accuracy of our conclusions. In addition, in order to exclude the influence of different clump masses, it is necessary to acquire data for clumps of similar mass at the same stage to determine whether there is an obvious difference between clumps with and without outflows.
Acknowledgements {#acknowledgements .unnumbered}
================
This research has made use of the data products from the MALT90 survey and the SIMBAD data base, operated at CDS, Strasbourg, France. This work was funded by the National Natural Science Foundation of China under grants 11433008, 11703073, 11703074 and 11603063, and the Program of the Light in China's Western Region (LCRW) under grant Nos. 2016-QNXZ-B-22 and 2016-QNXZ-B-23. WAB has been supported by the High-end Foreign Experts grants Nos. 20176500001 and 20166500004 of the State Administration of Foreign Experts Affairs (SAFEA) of China and funded by the Chinese Academy of Sciences President's International Fellowship Initiative Grant No. 2019VMA0040.
Arce, G. H., & Goodman, A. A. 2001, ApJ, 554, 132 Arce, G. H., Borkin, M. A., Goodman, A. A., Pineda, J. E., & Halle, M. W. 2010, Apj, 715, 1170 Bally, J. 2016, ARA&A, 54, 491 Brunt, C. M., Heyer, M. H., & Mac Low, M.-M. 2009, A&A, 504, 883 Brunt, C. M. 2010, A&A, 513, A67 Beuther, H., Schilke, P., Sridharan, T. K., et al. 2002, A&A, 383, 892 Cabrit, S., & Bertout, C. 1990, ApJ, 348, 530 Caswell, J. L., et al. 2010, MNRAS, 404, 1029 Caswell, J. L., et al. 2011, MNRAS, 417, 1964 Codella, C., Lorenzani, A., Gallego, A. T., Cesaroni, R., & Moscadelli, L. 2004, A&A, 417, 615 Cunningham, N., Lumsden, S. L., Moore, T. J. T., Maud, L. T., & Mendigut$\acute{i}$a, I. 2018, MNRAS, 477, 2455 Drabek-Maunder, E., Hatchell, J., Buckle, J. V., et al. 2016, MNRAS, 457, 84 Duarte-Cabral, A., Bontemps, S., Motte, F., et al. 2014, A&A, 570, 1D Federrath, C. 2013, MNRAS, 436, 1245 Federrath, C., Schrön, M., Banerjee, R., & Klessen, R. S. 2014, ApJ, 790, 128 Felli, M., Palagi, F., Tofani, G. 1992, A&A, 255, 293 Forster, J. R., & Caswell, J. L. 1989, A&A, 213, 339 Frank, A., Ray, T. P., Cabrit, S. 2014, Protostars and Planets VI, 451 Garden, R. P., Hayashi, M., Hasegawa, T., Gatley, I., & Kaifu, N. 1991, ApJ, 374, 540 Girart, J. M., Ho, P. T. P., Rudolph, A. L., et al. 1999, ApJ, 522, 921 Green, J. A., et al. 2009, MNRAS, 392, 783 Green, J. A., et al. 2010, MNRAS, 409, 913 Green, J. A., et al. 2012, MNRAS, 420, 3108 Guzmán, A. E., Sanhueza, P., Contreras, Y., et al. 2015, ApJ, 815, 130G He, Y. X., Zhou, J. J., Jarken, E., et al. 2015, MNRAS, 450, 1926 He, Y. X., Zhou, J. J., Jarken, E., et al. 2016, MNRAS, 461, 2288 Jackson, J. M., Rathborne, J. M., Foster, J. B., et al. 2013, PASA, 30, 57J Klaassen, P. D., & Wilson, C. D. 2007, ApJ, 663, 1092 Lada , C. J. 1985, ARA&A, 23, 267 Li, Q., Zhou, J. J., Jarken, E., et al. 2018, ApJ, 867, 167L Li, Huixian, Li, Di, Qian, Lei, et al. 2015, ApJS, 219, 20L Lo, N., Wiles, B., Redman, M. P., et al. 2015, MNRAS, 453, 3245 Maud, L. T., Moore, T. J., Lumsden, S. L., et al 2015, MNRAS, 453,645 Menten, K. M. 1991, ApJ, 380, 75 Michael, M. D., Hector, G. A., Diego, M., et al. 2014, ApJ, 783, 29 Miettinen, O. 2014, A&A, 562, A3 Minier, V., Ellingsen, S. P., Norris, R. P., et al. 2003, A&A, 403, 1095 Mottram, J. C., & Brunt, C. M. 2012, MNRAS, 420, 10 Ossenkopf, V., & Mac Low, M.-M. 2002, A&A, 390, 307 Padoan, P., Juvela, M., Kritsuk, A., & Norman, M. L. 2009, ApJ, 707, L153 Pineau des Forêts, G., Flower, D. R., & Chieze, J.-P. 1997, in IAU Symp. 182, Herbig-Haro Flows and the Birth of Stars, ed. B. Reipurth & C. Bertout (Dordrecht: Kluwer), 199 Plunkett, A. L., Arce, H. G., Corder, S. A., et al. 2015, ApJ, 803, 22 Rivilla, V. M., Martín-Pintado, J., Sanz-Forcada, J., et al. 2013, MNRAS, 434, 2313 Schuller, F., et al. 2009, A&A, 504, 415 Turner, B. E., Pirogov, L., & Minh, Y. C. 1997, ApJ, 483, 235 Urquhart, J. S., König, C., Giannetti, A., et al. 2018, MNRAS, 473, 1059 de Villiers, H. M., Chrysostomou, A., Thompson, M. A., et al. 2014, MNRAS, 444, 566 Walsh, A. J., Breen, S. L., Britton, T., et al. 2011, MNRAS, 416, 1764 Walsh, A. J., Purcell, C. R., Longmore, S. N., et al. 2014, MNRAS, 442, 2240 Yang, A. Y., Thompson, M. A., Urquhart, J. S., et al. 2018, ApJS, 235, 3 Yu, N. P., & Wang, Jun-Jie 2014, MNRAS, 440, 1213 Zhang, Q., Hunter, T. R., Brand, J., et al. 2001, ApJ, 552, L167 Zhang, Q., Hunter, T. R., Brand, J., et al. 2005, ApJ, 625, 864
Appendix
========
Table A1: the outflow properties of the blue and red lobes.\
Table A2: The basic information of part clumps.\
Figure B: Position-Velocity diagrams.\
Figure C: The integrated intensity images of the blue and red wing.\
Figure D: The integrated intensity contours of SiO emission.\
[^1]: Email: liqiang@xao.ac.cn
[^2]: Email: zhoujj@xao.ac.cn
I bought some comfy shoes at H&M and decided they were a little too boring. So I added a few rhinestones to them. It was super easy and only took 10 minutes! I bought some rhinestone beads from Joann's and used some E6000 glue.
First I played with the beads and decided how I wanted to place them. I decided to keep it simple and only put three on each shoe. I dabbed glue on the back of the rhinestones, then firmly attached them to the shoe. I think it's better to put the glue on the rhinestones instead of the shoe, so any excess doesn't show on the shoe. Just be careful that you place them exactly where you want them. Very easy, very fast!
I ran across some glitter vinyl at Michael's the other day and decided to experiment with making a tassel. I love how it turned out, and it was super easy to make!
The vinyl I bought was $1 or $2 for a sheet, and they had other colors as well. It's very flexible and the glitter stays on very well.
Supplies: glitter vinyl, scissors, E6000 or other glue, ruler, rubber band (I was originally going to use a binder clip, as you can see in the picture, but a rubber band worked much better), pencil. Optional: key ring or ribbon for attaching the tassel.
Cut a piece of the vinyl 6" x 3". Measure 3/4" from one edge, and draw a line across. Then mark 1/8" increments along the line. This is where you will cut the fringe.
You can adjust the measurements and cuts however you want, this is just how I did mine.
Cut a small piece (mine is 2 1/2" x 3/8") for the tag (is that the right word?). Glue the ends of the tag together, and then spread glue along the top edge above the line.
Start folding the tassel over, starting with the tag edge. At first it's very rectangular, but it will round itself out.
Wrap a rubber band around the top of the tassel to hold it tight while it's drying. And then voila! You are done.
In the past six months or so I've been knitting a lot of baby hats for a local hospital. They are collecting purple hats to give away during March, for Child Abuse Prevention Month. I've knit quite a few hats, and each one is slightly different. This is a great pattern for beginners, because many of them are just plain stockinette, and when I do use more than one color it's very basic. Here's my basic pattern, and a version with a bow. I also experimented with simple cables, stripes, and different colored poms and bands.
Basic Baby Hat Pattern
Materials: sport weight yarn in main color; size 6 double pointed needles. You could also use circular needles and switch to dpns at the end.
Tiny Bow Baby Hat Pattern
Hat:
In MC, CO 60.
K2, P2 to end, repeat for 6 rows.
K 3 rows.
Change to CC, knit 4 rows.
Change back to MC. From this point, it is the same as the basic hat pattern. Continue knitting in stockinette until the whole hat measures 5 inches long.
Row 1: (k3, k2tog) to end (48 sts)
Row 2 and all even rows: Knit
Row 3: (k2, k2tog) to end (36 sts)
Row 5: (k1, k2tog) to end (24 sts)
Row 7: (k2tog) to end (12 sts)
Row 9: (k2tog) to end (6 sts)
After Row 9, cut yarn and thread through the 6 remaining stitches. Weave in ends.
Bow:
With CC, CO 6 sts.
Knit flat in stockinette until the piece measures 5" long, then cast off.
Stitch the short ends of the rectangle together. You should now have a circle. Fold in half so that the seamed part is at the center back of the circle.
Take a long piece of yarn and start winding it around the middle of the circle. Do this until you are happy with how it looks, then tie both ends in a knot at the back of the bow. Use those ends to seam the bow to the hat on the white stripe.
I am a huge Harry Potter fan, and for an upcoming project I wanted to knit Snape's doe Patronus, but couldn't find a pattern. So I wrote my own! The pattern uses a lot of short rows and Judy's Magic Cast On. Also, it has only been test knit by me. If you have questions or find any errors, leave a comment and I will be happy to answer.
Doe Patronus Knitting Pattern
I used sport weight yarn with size 2 needles. You will also need a small amount of fiberfill or other stuffing, and two beads for the eyes (optional).You can use any yarn and needles, just be sure to use needles one or two sizes smaller than you would typically use with the yarn weight, so the stuffing doesn't show through.Click through for the pattern!
Since it's January and the start of a new year, I have a printable calendar to share. I do not have a color printer, so I wanted to make something that anyone with only a black and white printer could use. I love the geometric shapes of gemstones, so I decided to use those in my calendar. For my second calendar, I got crafty with some watercolors. |
Election: Requests for IDs bring voter complaints
Voters in the Lehigh Valley encounter demands from poll workers to show IDs before voting for president, causing delays and confusion.
November 06, 2012|The Morning Call staff
Voters in the Lehigh Valley and across the state are running head-first into the remnants of a storm, and it's not Hurricane Sandy. It's a storm of confusion created by Pennsylvania's Voter ID law, which was put on hold last month after a court challenge.
While voters may be asked to show ID, they are allowed to vote without it. But that message seems to have escaped some poll workers, according to people around the Lehigh Valley who said they were subjected to repeated requests for ID.
In Catasauqua, Sean Redding, 29, said he and his wife encountered a rude worker who kept saying, "I am required by law to ask you for your ID."
Redding objected.
"You could see it in her eyes that she knows darn well she's wrong," said Redding. "They're doing everything in their power to not let you in to vote if you don't show them ID. They're very nasty about it too."
The law, signed in March by Gov. Tom Corbett, required every voter to show a photo ID at the polls. Supporters said it would help prevent voter fraud.
Opponents contend fraud is virtually nonexistent in the state and said the law — which passed in a party-line vote in the Republican-majority Legislature — was meant to disenfranchise the young, poor and elderly, who tend to vote Democratic.
Commonwealth Court Judge Robert Simpson upheld the measure in August, saying it was not overly burdensome. Democrats appealed to the state Supreme Court, which sent the case back to Simpson to determine whether it could be implemented in time for the November election.
In October, Simpson told state officials to hold off enforcing the law in this election, so voters could have more time to obtain photo IDs.
Even so, the Pennsylvania Department of State instructed poll workers to ask for ID as a "test run" to see what would happen if the ID law were in effect, according to Lehigh County Elections Board Chief Clerk Tim Benyo.
Benyo said the request for identification is supposed to occur when voters sign the poll books, not before. However, he also noted that each polling station is headed by a judge of elections, whom the voters elected. As an elected official, the judge has discretion on how to operate their polling place.
"Every polling place is different, and every judge is different," Benyo said. "How they do that job is not necessarily spelled out for them… We suggest how they do it… (But) it's up to them."
In Alburtis, Phil DePietro was surprised when an election official exited the building and asked everyone in line to "get their IDs ready."
The announcement was made again when DePietro entered the church.
When he got to the front of the line and objected to the identification request, he was told poll workers were making sure all voters were prepared for the spring when identification will be needed to vote. DePietro said he was concerned because the message was conveyed in a manner that those standing in line could think they needed identification to participate, which was not the case.
Barbara Arnwine of the Election Protection Coalition told reporters on a national conference call that the organization is fielding many calls of confusion over voter ID across Pennsylvania.
"This is the fault of the Pennsylvania state government," she said. "Signs are posted outside polling places incorrectly saying ID is required… Poll workers have been poorly and wrongfully trained."
By mid-morning, Benyo said he had not heard of anyone being denied the right to vote, though he had received numerous complaints about identification. Some people complained about being asked for identification while others complained because they weren't asked, he said.
Department of State spokesman Ron Ruman said he heard of only about a half-dozen cases of voters across the state being improperly denied their right to vote. He said the department sent instructions to poll workers that they were to ask for identification, but to allow people to vote if they didn't have it.
Jim Brosnan, a Lehigh Township resident who casts his ballots in the township's Pennsville district, said an election official asked him and his wife for identification. He provided it, but she declined. "You have to have ID," he said they were told. "It's the law."
Despite the confusion, the election worker allowed the Brosnans to vote, he said. He said being told to produce identification could be intimidating or embarrassing to senior citizens. |
Q:
How many amps should an iPhone 5s car charger output?
When looking on Amazon for a car charger I noticed that different items have different amp values, 2.1 and 3.1 being the most frequent. What is the recommended value?
Does it even matter? Is there more to it than the fact that more amps mean faster charging?
A:
More amps will not mean faster charging. The iPhone (and any electronic device) will only take as much current as it requires, and no more. The iPhone will take 1A to charge, and an iPad will take 2.1A.
There's no harm in using a charger that is capable of providing more current than a device requires, but there's no benefit either.
Providing less current than the device requires will lead to longer charging times, or no charging at all.
A:
I disagree with parts of Nathaniel's answer. The normal iPhone charger is 5W and just over an amp. Using an iPad charger, either the 10W or newer 12W, will absolutely charge your iPhone faster. In fact, almost twice as fast. However there's some speculation that it may shorten the battery's lifespan. I've had no problems using an iPad charger on my 5S and it's charged in no time. So, you take a chance I guess, but yes, the 2.1 amp iPad charger will cut charging time down considerably.
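For what it's worth, the arithmetic behind those figures is just power = voltage × current: a 5W charger at the USB-standard 5V supplies 5W ÷ 5V = 1A, a 10W charger about 2A, and a 12W charger about 2.4A (assuming the adapter actually outputs roughly 5V; the exact rating printed on your adapter may differ slightly).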
|
Introduction {#s1}
============
Cytomegalovirus (CMV) is the most common infectious cause of developmental disorders of the central nervous system (CNS) in humans and the predominant cause of developmental neurological disabilities in the United States [@pone.0016211-Cheeran1]. Each year, approximately 1% of all newborns have congenital CMV infection. Approximately 5 to 10% of these infected infants manifest signs of serious neurological defects at birth, including deafness, mental retardation, blindness, microencephaly, hydrocephalus, and cerebral calcification [@pone.0016211-Bale1], [@pone.0016211-Becroft1], [@pone.0016211-Stagno1]. Thus, it seems likely that CMV infection of the fetus alters the "normal blueprint" of the developing brain, resulting in long-term neurological sequelae.
Using a murine infection model, we have previously shown that NSCs in the adult brain appear to be the predominant cell type affected by murine cytomegalovirus (MCMV) [@pone.0016211-Cheeran2]. There is an abundance of NSCs in the fetal brain; in this study, we use the term neural stem cells to refer to all classes of immature and proliferating cells that react with CD133 and nestin. The susceptibility of these cells to viral infection could provide insights into the neuropathogenesis of CMV during brain development [@pone.0016211-Tsutsui1]. Previous studies have shown that MCMV can infect a wide variety of brain cell types including neurons and astrocytes [@pone.0016211-Tsutsui2]. These studies used immunohistochemical staining to demonstrate co-localization of viral antigens and cell type-specific markers. However, there is a paucity of data quantifying the effect of MCMV infection on the developing brain and identifying which cell types are involved.
Recent advances in the identification of specific neural cell types based on cell surface and intracellular markers, using flow cytometry, have led to detailed characterization of neural stem and progenitor cells, as well as their down-stream progeny. Cell surface markers such as CD133, CD15, CD24, and CD29 have been used in a number of recently published studies [@pone.0016211-Peh1], [@pone.0016211-Pruszak1], [@pone.0016211-Panchision1]. These studies indicate that human CNS precursor cells expressing high levels of the surface antigen CD133 (CD133+/hi), with little or no CD24 (CD24−/lo), have the highest frequency of initiating clones as measured by neurosphere formation [@pone.0016211-Barraud1], [@pone.0016211-Uchida1]. Evidence suggests that these markers are also useful for characterizing similar subpopulations from the rodent CNS [@pone.0016211-Murayama1], [@pone.0016211-Rietze1]. In fact, high CD24 expression has been used to identify transit-amplifying cells [@pone.0016211-Doetsch1], as well as differentiated neurons [@pone.0016211-Calaora1], and CD24 is required for terminal differentiation of neuronal progenitors [@pone.0016211-Nieoullon1]. Another commonly used marker is CD29, a member of the integrin family. These integrins play an important role in neural development [@pone.0016211-GrausPorta1], and CD29 specifically has been observed on human NSCs obtained from fetal tissue [@pone.0016211-Hall1]. In addition, integrin signaling has been shown to be of functional relevance for both neural crest [@pone.0016211-Breau1] and mesenchymal development [@pone.0016211-Fuchs1], [@pone.0016211-Takashima1]. Finally, the antigen CD15, also known as LeX or stage-specific embryonic antigen 1 (SSEA1), has been identified as a positive selectable marker for rodent multipotent NSCs [@pone.0016211-Capela1].
In this study we used our MCMV infection model and a multi-color flow cytometry approach to quantify the effect of MCMV on the developing brain, identifying specific target cells for viral infection and its effect on subsequent brain development. Our findings indicate that NSCs expressing CD133 and nestin are the prime target cells, along with CD24(hi) neuroblasts. We also show that infection of the developing brain, which is rich in NSCs, results in reduced expression of doublecortin (DCX), a marker that identifies young/immature neurons, while expression of the glial precursor and mature astrocyte marker, glial fibrillary acidic protein (GFAP), remained unaltered. Reduced DCX expression was also associated with decreased neurotrophin expression. Taken together, these results demonstrate markedly abnormal neuronal development following MCMV brain infection.
Results {#s2}
=======
CD133(+) cells are infected with MCMV in vivo {#s2a}
---------------------------------------------
We have previously shown that MCMV can establish productive infection in cells which express nestin [@pone.0016211-Cheeran2]. In the present study, MCMV infection of NSCs was further characterized *in vivo*. CD133(+) cells have previously been shown to have the highest frequency of initiating clones as measured by neurosphere formation [@pone.0016211-Barraud1]. We first examined whether CD133(+) cells were targets for viral infection by using a recombinant MCMV expressing GFP. One-day-old neonates were infected with MCMV and control littermates were mock-infected. Brain tissues from infected and mock-infected animals were harvested at 7 d p.i. for analysis by flow cytometry. Harvested brain tissue samples were digested into a single cell suspension using papain as described in the methods. One million cells from infected or mock-infected mice were incubated with APC conjugated CD133 MAbs and analyzed by flow cytometry for CD133(+) cells that expressed GFP, indicating virus-infected cells. Flow cytometry analysis ([Fig 1A](#pone-0016211-g001){ref-type="fig"}) showed the proportion of CD133(+) cells in a P7 (post-natal day 7) brain (3.93±2.13%). We then identified GFP(+) cells that demonstrated viral infection within this CD133(+) population. We observed that 65.97±3.1% of CD133(+) cells were positive for GFP at 7 d p.i. ([Fig. 1B](#pone-0016211-g001){ref-type="fig"}). Approximately 5--6% of the total brain cells were positive for GFP (indicative of infected cells) at 7 d p.i.
Figure 1. {#pone-0016211-g001}
NSCs expressing nestin are targets for MCMV {#s2b}
-------------------------------------------
To reinforce our previous finding that MCMV preferentially infects stem cells [@pone.0016211-Cheeran2], we next investigated MCMV infection in the neonatal brain, which has previously been shown to be rich in NSCs. We identified virus-infected cells using intracellular staining for the neural stem cell marker, nestin. Cells prepared from the brains of infected and control animals were stained for nestin and analyzed by flow cytometry. Flow cytometry analysis at 7 d p.i. demonstrated the presence of nestin(+) and nestin(−) cells from mock-infected mice ([Fig 2](#pone-0016211-g002){ref-type="fig"}, lower panel). There was no significant difference in the proportion of nestin(−) cells from the mock- and MCMV-infected brains. The nestin(−) and nestin(+) populations were then examined for the presence of GFP, indicative of viral infection ([Fig 2](#pone-0016211-g002){ref-type="fig"}, upper panel). These data show that a significantly higher proportion of nestin(+) cells was infected with MCMV compared to nestin(−) cells (26.12±2.3% versus 8.14±1.5%).
Figure 2. {#pone-0016211-g002}
MCMV brain infection reduces CD133(+) cell number and alters nestin expression {#s2c}
------------------------------------------------------------------------------
Because NSCs were found to be infected with MCMV, we then assessed whether viral infection had any effect on their number. Brain tissues were harvested from MCMV-infected and non-infected control neonates at 7 d p.i. and were stained for CD133. Flow cytometric analysis showed that numbers of CD133(+) cells were significantly reduced in the infected brains (1.41±0.80%) when compared to controls (5.35±2.0%) ([Fig 3A](#pone-0016211-g003){ref-type="fig"}). Absolute numbers of CD133(+) cells in control and MCMV-infected mice were 3.09×10^5^±4.1×10^4^ and 3.45×10^4^±2.1×10^4^, *p* = 0.01 ([Fig 3 B](#pone-0016211-g003){ref-type="fig"}). Relatively low levels of nestin expression were detected in infected brains by flow cytometry ([Fig 3C](#pone-0016211-g003){ref-type="fig"}). Mean fluorescence intensity of nestin expression was found to be significantly lower among cells purified from virus-infected neonates (57.00±2.2% versus 143.00±2.84% in controls, *p*\<0.01 Student\'s *t* test) ([Fig. 3D](#pone-0016211-g003){ref-type="fig"}).
Figure 3. {#pone-0016211-g003}
CD24(hi)-expressing neuronal precursor cells were targets for MCMV {#s2d}
------------------------------------------------------------------
To further characterize cell types infected in the developing brain, we identified cellular subsets based on their surface cluster of differentiation (CD). Previously described studies that have used CD15, CD24, and CD29 to identify various neural progenitors derived from embryonic stem cells formed the basis for our study [@pone.0016211-Pruszak2]. High CD24 expression has been used to identify transit-amplifying cells [@pone.0016211-Doetsch1], as well as differentiated neurons [@pone.0016211-Calaora1], and CD24 is required for terminal differentiation of neuronal progenitors [@pone.0016211-Nieoullon1]. The antigen CD15, also known as LeX or stage-specific embryonic antigen 1 (SSEA1), has also been identified as a positive selectable marker for rodent multipotent NSCs [@pone.0016211-Capela1]. We went on to prepare cells from brains of virus-infected and control animals, incubated them with CD15, CD24, CD29 and CD45 MAbs, and analyzed them by flow cytometry. The bottom panel of [Fig. 4](#pone-0016211-g004){ref-type="fig"} shows a representative contour plot prepared from control mice, depicting three distinct cellular subsets, CD24(hi)CD29(−), CD24(hi)CD29(+) and CD24(lo)CD29(−). Gates were drawn based on the individual fluorochromes and isotype control analyses. All of these subsets were negative for CD15 and CD45, indicating these subpopulations were devoid of NSCs and infiltrating immune cells, respectively. The top panels display histogram overlays from control and infected mice, showing respective GFP positive cells from each group. Approximately 38.13±3.4% of CD24(hi)CD29(−) cells were positive for GFP signal at 7 d p.i., while 8.1±2.2% of the CD24(hi)CD29(+) cells were positive for virus. The CD24(lo)CD29(−) population did not show any detectable virus-infected cells; hence, it was excluded from further analysis.
Figure 4. {#pone-0016211-g004}
Reduced numbers of proliferative CD24(hi) neuronal precursor cells in infected brains {#s2e}
-------------------------------------------------------------------------------------
CD24 is expressed by precursors of neuronal cells in the developing brain [@pone.0016211-Calaora1] and by neuroblasts from two adult neurogenic zones: the subventricular zone (SVZ) bordering the lateral ventricle, and the dentate gyrus of the hippocampal formation [@pone.0016211-Belvindrah1]. We investigated the effect of virus infection on proliferation of CD24(hi) cells. Single cells prepared from brains harvested from bromodeoxyuridine (BrdU)-treated mice were first stained for surface markers such as CD24, CD29, and CD45. Intranuclear BrdU staining was done using a BrdU flow kit as described in the methods, and cells were analyzed by flow cytometry. The individual subsets of cells that were previously defined were analyzed for BrdU incorporation ([Fig. 5A](#pone-0016211-g005){ref-type="fig"}). In these studies, we observed a significant decrease in the numbers of CD24(hi)CD29(−)BrdU(+) and CD24(hi)CD29(+)BrdU(+) cells in infected mice compared to control mice (2.81×10^4^±6.2×10^3^ and 1.73×10^4^±5.3×10^3^ versus 1.68×10^5^±6.9×10^4^ and 2.7×10^5^±9.3×10^4^ in control animals, respectively, *p*\<0.01 Student\'s *t* test) ([Fig. 5B](#pone-0016211-g005){ref-type="fig"}).
Figure 5. {#pone-0016211-g005}
Altered expression of the Oct4 transcription factor {#s2f}
---------------------------------------------------
Oct4 is critically involved in self-renewal of embryonic stem cells, so it is frequently used as a marker for undifferentiated cells. Oct4 expression must be closely regulated; too much or too little will actually induce differentiation [@pone.0016211-Niwa1]. The transcription factors Oct4, Sox2, and Nanog are capable of inducing the expression of each other, and are essential for maintaining the self-renewing undifferentiated state of the inner cell mass of the blastocyst, as well as in embryonic stem cells [@pone.0016211-Rodda1]. It has been previously shown that CD24(+) cells express Sox2, indicating that they still retain stemness [@pone.0016211-Brazel1]. To determine if CD24(hi) cells from the developing brain expressed Oct4, we performed intracellular staining for Oct4 as well as surface staining for CD15, CD24 and CD29, and analyzed these cell populations using flow cytometry. In this study, we demonstrated that CD24(hi)CD29(−) and CD24(hi)CD29(+) cells, isolated from uninfected control brains, expressed Oct4 ([Fig. 6](#pone-0016211-g006){ref-type="fig"}, upper panel, histogram with blue line). Although CD24(hi)CD29(−) cells were found to express Oct4, it appeared that once CD29 expression occurred on these CD24(hi) cells the number of Oct4-expressing cells was reduced ([Fig. 6](#pone-0016211-g006){ref-type="fig"}, upper panel, histogram on the right). We then went on to determine if MCMV infection had any effect on Oct4 expression among this defined subset of cells. The upper panel shows histogram overlays for Oct4 staining from brain cells isolated from control and infected brains, as well as isotype control staining. Data obtained from these experiments demonstrated that Oct4 expression was reduced in cells obtained from virus-infected brains compared to the cells from the brains of uninfected animals (upper panel, histogram overlays).
Figure 6. {#pone-0016211-g006}
Expression of doublecortin is decreased following MCMV brain infection {#s2g}
----------------------------------------------------------------------
We next examined whether viral infection is associated with abnormal expression of structural proteins such as doublecortin (DCX) and GFAP. DCX is a microtubule-associated protein expressed by neuroblasts and is accepted as an effective read-out for neurogenesis. GFAP is an intermediate filament protein that is thought to be specific for astrocytes. Using an intracellular staining technique and flow cytometry, we found that expression of DCX was altered in the virus-infected brain ([Fig. 7A](#pone-0016211-g007){ref-type="fig"}), while GFAP expression remained unaltered ([Fig. 7B](#pone-0016211-g007){ref-type="fig"}). Histogram overlays are shown for both DCX and GFAP expression and were prepared from isotype, control, and virus-infected peaks. Mean fluorescence intensity (MFI) of DCX and GFAP expression was also compared between the groups. Expression levels of DCX in virus-infected brains were significantly lower than in uninfected control animals (511±140% versus 1178±161%, respectively, *p*\<0.05 Student\'s *t* test), and there was no significant difference in the MFI of GFAP expression among the groups studied ([Fig. 7C](#pone-0016211-g007){ref-type="fig"}).
Figure 7. {#pone-0016211-g007}
Viral brain infection down-regulates BDNF and NT3 levels {#s2h}
--------------------------------------------------------
Brain-derived neurotrophic factor (BDNF) and neurotrophin 3 (NT3) have been shown to play a role in the development of the CNS. These two neurotrophin molecules are also known to be important in post-natal cerebellar development [@pone.0016211-Schwartz1], [@pone.0016211-Bates1]. We went on to determine mRNA levels for these two neurotrophins in control and virus-infected brains at 7 d p.i. using quantitative real time PCR. In these experiments, mRNA levels of both BDNF and NT3 were markedly down-regulated in virus-infected neonatal brains when compared to control mice ([Fig. 8](#pone-0016211-g008){ref-type="fig"}).
Figure 8. {#pone-0016211-g008}
Discussion {#s3}
==========
In this study we found that NSCs and neuronal precursor cells are the principal target cells for MCMV within the developing brain. Additionally, viral infection caused a marked loss of NSCs expressing CD133 and nestin. We also showed that infection of the neonatal brain leads to abnormal development as indicated by loss of CD24(hi) cells that incorporated BrdU. Infection of the neonatal brain was also associated with altered expression of the neurotrophins BDNF and NT3, which are essential for normal brain development [@pone.0016211-Schwartz1], [@pone.0016211-Bates1]. Finally, we found decreased expression of doublecortin, a marker of young neurons, following viral brain infection.
NSCs are abundant in the fetal brain and the increased susceptibility of the fetus to viral infection may explain the predominance of neurological damage associated with congenital CMV infection. Previous studies have shown that intracranial inoculation of MCMV results in widespread brain infection [@pone.0016211-vandenPol1], [@pone.0016211-Shinmura1]. In our previous report [@pone.0016211-Cheeran2], we showed widespread MCMV brain infection in adult mice, particularly in cells of the periventricular zones as well as regions of the brain in direct contact with cerebrospinal fluid, strikingly similar to descriptions of human CMV (HCMV) ventriculoencephalitis in AIDS patients and infants with severe CNS manifestations of congenital HCMV infection. In this study, intracranial inoculation resulted in widespread infection within the brain. GFP signal was detected in various regions including cerebral cortex, olfactory bulb, cerebellum and brain stem; however, the infection was most profound around the ventricles ([Fig. S1](#pone.0016211.s001){ref-type="supplementary-material"}). In contrast to immunocompetent adult mice, neonates infected intracranially failed to control the infection and succumbed to infection starting at day 12 p.i. with 200 TCID~50~ ([Fig. S2](#pone.0016211.s002){ref-type="supplementary-material"}).
NSCs exhibit extensive self-renewal and multipotency (i.e., the ability to generate neurons and glial cells). Neurogenesis continues beyond embryonic life, and postnatal and adult neurogenesis have been postulated to have critical roles in learning, memory, and cognitive development [@pone.0016211-Abrous1], [@pone.0016211-Lledo1]. In mice, much of the brain development takes place postnatally. Interestingly, we observed that there was no significant difference in expression levels of CD24(hi) and nestin between embryonic day (ED) 14.5 and post-natal day 7 ([Fig. S3](#pone.0016211.s003){ref-type="supplementary-material"}). These newborn mice also failed to mount an effective adaptive immune response, and the immune infiltrate predominantly consisted of macrophages with activated resident microglia ([Fig. S4](#pone.0016211.s004){ref-type="supplementary-material"}).
Neurotropic viruses (e.g., HIV) disturb the normal adult neurogenesis pattern, a possible cause for development of dementia in these patients. More importantly, it has been demonstrated that HIV infects NPCs and leads to quiescence in these cells [@pone.0016211-Krathwohl1], [@pone.0016211-Lawrence1]. Another neurotropic virus, Japanese encephalitis virus (JE virus), which also targets the CNS, infects embryonic NPCs, replicates in these cells, inhibits their growth, and decreases their proliferation [@pone.0016211-Das1]. Closely resembling these findings, we report here that MCMV infects NSCs in the developing brain and reduces their number *in vivo*. Using flow cytometry, along with a recombinant GFP-expressing MCMV, it was possible to quantify this viral brain infection and determine the number of highly susceptible NSCs *in vivo*. Our findings also indicated that CD133+ cells were the major target cells for MCMV, compared to other cell types that were studied.
There is considerable analytical value in the identification and isolation of multiple neural subsets by their expression of surface antigens using flow cytometry. Fluorescence-activated cell sorting (FACS) has been successfully utilized in sorting cells based on these surface markers and it has high scientific value for the fields of regenerative medicine and stem cell biology [@pone.0016211-Pruszak1], [@pone.0016211-Carson1], [@pone.0016211-Li1]. The combinatorial detection of surface markers by multicolor flow cytometry has been widely applied in the fields of hematology and immunology [@pone.0016211-Herzenberg1], [@pone.0016211-Horan1], but has up to now been only marginally exploited in neurobiology [@pone.0016211-Uchida1], [@pone.0016211-Maric1]. Surface markers such as CD15, CD24, and CD29 have been described to study neural lineage cells derived from pluripotent stem cells [@pone.0016211-Pruszak2]. Based on this finding, we utilized the same markers to identify cells that were obtained from digestion of mouse brain tissue. Using these methods, we were able to identify 3 distinct cellular populations: CD24(hi)CD29(−), CD24(hi)CD29(+) and CD24(lo)CD29(−). Within these subsets we identified virus-infected cells and observed that the majority of CD24(hi) cells were infected. High CD24 expression has been used to identify transit-amplifying cells [@pone.0016211-Doetsch1], as well as differentiated neurons [@pone.0016211-Calaora1], and CD24 is required for terminal differentiation of neuronal progenitors [@pone.0016211-Nieoullon1]. Infection of CD24(hi) cells demonstrates that neuronal precursor cells are highly susceptible to infection with MCMV. Interestingly, we were unable to detect viral infection in cells that expressed CD24 at low levels. It is well documented in the literature that high CD24 expression is associated with neuronal precursor cells or neuroblasts. Based on the available literature, it is believed that CD24(hi)CD29(−), CD24(hi)CD29(+), and CD24(lo)CD29(−) cells would give rise to neurons, give rise to a mixture of neurons and glia, and identify mature neurons, respectively [@pone.0016211-Pruszak2]. Our findings indicated that neuronal precursors expressing high CD24 are the predominant target cells for the virus. In certain cancer cell lines, CD133+ cells have been identified as cancer stem cells that also react with CD29 [@pone.0016211-Tirino1]; however, in our study we did not find any such correlation between CD133 and CD29 among the cells that we studied.
Sustained proliferation is a key feature of neural precursor cells, and it has been shown that viruses like JE virus and HIV can induce quiescence in these cells by impairing their proliferative ability [@pone.0016211-Krathwohl1], [@pone.0016211-Das1]. Consistent with these findings, we observed that CD24(hi)CD29(−) and CD24(hi)CD29(+) cells from control mice incorporated BrdU, with the latter subset being more efficient. The ability of CD24(hi) cells to incorporate BrdU was markedly decreased in virus-infected brains, suggesting that MCMV inhibits DNA synthesis in these CD24(hi) proliferative cells. In this study, we also report down-regulation of the multipotency marker Oct4 within the subsets of cells defined above. This may have deleterious effects on normal brain development.
Having observed that CD24(hi) cells were extensively infected and had decreased ability to proliferate, we then sought to identify the effect of viral infection on cells that are down-stream of CD24-expressing neuronal precursors. Our analysis showed that intracellular DCX, a marker for young neurons, was significantly down-regulated in virus-infected brains. DCX plays an important role in signaling for neuronal migration during brain development and is a marker of early migratory neuroblasts. DCX haploinsufficiency can lead to various degrees of mental retardation; the extent of retardation is linked to the quantity of arrested neurons in the white matter [@pone.0016211-Gleeson1], [@pone.0016211-SosseyAlaoui1]. Similar findings have been previously reported in *in vitro* studies using human neural stem/precursor cells infected with CMV, in which DCX expression was down-regulated at both the mRNA and protein levels [@pone.0016211-Luo1]. On the other hand, we did not observe any change in the expression levels of intracellular GFAP, a marker for astrocytes. HCMV infection of neural progenitor cells in the undifferentiated stage down-regulated GFAP expression while it remained unaltered following infection after differentiation [@pone.0016211-Luo1]. Further validation of proliferation arrest in neuronal precursor cells and reduced DCX expression with progressive infection was obtained from decreased expression of neurotrophins, such as BDNF and NT3. The activities of BDNF are pleiotropic and include protection from neural apoptosis, enhanced neuronal proliferation, increased granular neuron migration, and long-term potentiation [@pone.0016211-Carter1], [@pone.0016211-Ji1]. Down-regulation of the gene encoding the neurotrophin NT3 was also associated with viral brain infection.
Infection of different resident cells of the CNS and loss of neuroepithelial cells secondary to lytic infection have also been reported [@pone.0016211-Kosugi1], [@pone.0016211-vanDenPol1]. Although MCMV targets several other cell types in the developing brain, NSCs in particular bear the brunt of virus infection. In conclusion, we found that MCMV brain infection of newborn mice causes significant loss of NSCs, decreased proliferation of neuronal precursor cells and loss of young neurons expressing DCX. This neuronal loss was associated with down-regulation of the multipotency marker Oct4 and of neurotrophins, indicating abnormal brain development.
Materials and Methods {#s4}
=====================
Ethical statement {#s4a}
-----------------
The animal use protocols used were approved by the University of Minnesota Institutional Animal Care and Use Committee (Protocol Number: 0807A40181).
Virus and animals {#s4b}
-----------------
A recombinant MCMV that expresses green fluorescent protein (GFP) under control of the human elongation factor-1a promoter, inserted at the immediate-early gene (IE2) site (strain K181 MC.55 \[ie2^−^ GFP^+^\]) was kindly provided by Jon Reuter [@pone.0016211-vanDenPol1], [@pone.0016211-Reuter1]. 200 TCID~50~ of virulent, salivary gland--passaged, sucrose gradient--purified virus was used for all intracerebral (i.c.) infections. The GFP-expressing virus was expanded on NIH 3T3 mouse fibroblasts and purified by centrifugation over a sucrose cushion. Mice were purchased from The Jackson Laboratory or Charles River Corporation. Mating pairs were set up and carefully monitored each day until they gave birth. All animals were maintained in a specific pathogen--free facility.
Intracerebral infection of neonatal mice {#s4c}
----------------------------------------
Intracerebral infection of neonatal mice was performed as previously described [@pone.0016211-Wiesner1]. Briefly, neonatal mice were placed on ice for 3 min to induce anesthesia before being secured in a cooled, stereotaxic frame (Stoelting) maintained at 4°C to 8°C by a dry ice/ethanol reservoir. A 10 µL syringe (Hamilton Company) fitted with a 30 gauge hypodermic needle (Hamilton Company) was used to inject virus (200 TCID~50~ in 2 µl) or saline into the right lateral cerebral lobe. No incision was made for injection. The neonatal skull was penetrated with the needle for all injections.
Preparation of single cell preparation from neonatal brains and flow cytometry {#s4d}
------------------------------------------------------------------------------
Entire brain tissues obtained from control and infected neonates at 7 d p.i. were dissociated into a single cell suspension using previously described methods [@pone.0016211-Panchision1]. Briefly, tissue samples were gently minced using a scalpel and were resuspended in a purified trypsin-like replacement in Dulbecco\'s PBS/EDTA (TrypLE Select; Invitrogen) containing 200 units/mL DNase I (Roche) and 1 mM MgCl2. We also used the enzyme papain (12 units/mL; Worthington, Lakewood, NJ) in experiments that involved staining for CD133. The enzyme papain was preactivated in 1.1 mM EDTA, 0.067 mM mercaptoethanol, and 5.5 mM cysteine-HCl for 30 min before addition. The absence of bicarbonate in the HBSS allowed dissections and incubations to be performed in a room atmosphere. Samples were placed in a 37°C water bath for 30 min during digestion. Samples were then spun at 200×g for 5 min, resuspended in fresh HBSS/DNase/MgCl2 without enzyme, and filtered using cell strainers (40 µm). Cells were counted by the trypan blue dye exclusion method, and 1×10^6^ cells from both control and infected brains were used for flow cytometric analysis.
For flow cytometry, cells were resuspended in flow cytometry buffer, consisting of 1× PBS, pH 7.2, containing 2% fetal bovine serum. Cells were counted and diluted to a density of 10^6^ cells per milliliter. For surface marker analysis, cells were stained with anti-mouse cell surface markers for 30 min at 4°C. Antibodies used were CD133-APC or PE (Miltenyi Biotec Inc, Auburn, CA), CD15-APC or PE (eBioscience, San Diego, CA), CD24-PE-Cy7, CD45-PE-Cy5, CD29-PE (BD Biosciences, San Jose, CA). For staining intracellular markers, cells were surface stained prior to fixation and permeabilization using Cytofix/Cytoperm (BD Biosciences, San Jose, CA). Intracellular markers were selected based on the experiment being performed: nestin-PE or APC (R&D Systems, Minneapolis, MN and BD Biosciences, San Jose, CA), DCX, GFAP-PE (Santa Cruz Biotechnology, Santa Cruz, CA) and Oct4-APC (R&D Systems, Minneapolis, MN). Cells were washed in staining buffer, and then secondary fluorescent-conjugated antibody (if needed) was added at the appropriate dilution and incubated on ice for 30 min. Cells were analyzed on a FACSCanto flow cytometer (BD Biosciences). Background fluorescence was measured using unlabeled cells and cells labeled with isotype control or secondary antibody alone, and was used to set gating parameters between positive and negative cell populations. Cell aggregates and small debris were excluded from analysis or isolation on the basis of side scatter and forward scatter; dead cells (7 AAD+) and CD45-PE-Cy5+ immune cells were excluded from analysis. The data collected were analyzed using FlowJo software (TreeStar).
BrdU labeling and detection of BrdU+ cells by flow cytometry {#s4e}
------------------------------------------------------------
BrdU labeling of neonatal mice was done as described previously [@pone.0016211-Koontz1]. MCMV-infected or control newborn mice were injected intraperitoneally with BrdU (50 µg/g body weight) 7 d after infection. Mice were killed at 24 h after BrdU administration. Single cell suspensions were prepared from control and infected brains as described earlier. Cells were surface stained with CD24-PE-Cy7 and CD29-PE, and with CD133-APC. Intranuclear BrdU was stained using the FITC BrdU Flow Kit (BD Biosciences San Jose, CA). Interference of the MCMV-GFP signal with FITC BrdU was ruled out, as repeated fixing and permeabilization during the staining protocol dampened the GFP signal from virus-infected cells; this was confirmed on simultaneously treated cells by fluorescent microscopy for GFP signal prior to staining with BrdU-FITC. The cells were fixed and permeabilized by resuspending in 100 µL of Cytofix/Cytoperm (BD Biosciences) buffer at room temperature for 30 min, followed by the addition of 1 mL of wash buffer (BD Biosciences). The samples were spun at 300×*g* for 5 min and the supernatant was aspirated. The cells were then resuspended in 100 µl of Cytoperm Plus buffer (BD Biosciences) on ice for 10 min. After washing and centrifuging, the cells were resuspended in 100 µl of Cytofix/Cytoperm buffer at room temperature for 5 min. The cells were then resuspended in 100 µL of DNAse (30 µg; stock from kit was diluted in PBS (Ca^2+^/Mg^2+^ free) containing 0.1 mM CaCl~2~ and 10 mM MgCl~2~) in a dry heat block at 37°C for 1 h. Following washing and centrifuging, the cells were resuspended in 50 µl of FITC-conjugated anti-BrdU (1∶50 dilution) in the dark, at room temperature for 20 min. After the samples were washed, they were resuspended with 20 µl of the nuclear marker, 7-AAD, at room temperature in the dark. The cells were then resuspended in 1 mL of staining buffer (PBS, 3% FBS, 0.09% sodium azide). Prior to analysis, cells were filtered through a cell strainer cap (30 µm) to remove debris. The data were collected the same day on a BD FACSCanto system and analyzed using FlowJo software (TreeStar).
Real-time PCR for BDNF and NT3 {#s4f}
------------------------------
Total RNA was extracted from brain tissue homogenates with Trizol reagent (Invitrogen, Carlsbad, CA). One µg RNA was DNase (Ambion, Applied Biosystems, Austin, TX) treated and reverse transcribed to cDNA with SuperScript™ III (Invitrogen), dNTP (GE Healthcare, Piscataway, NJ) and oligo (dT)~12--18~ (Promega, Madison, WI). Real-time PCR was performed in Mx3000p (Stratagene, La Jolla, CA) with SYBR Advantage qPCR Premix (Clontech, Mountain View, CA), primers and cDNA according to the manufacturer\'s protocol. Reaction conditions for qPCR were as follows: initial denaturation at 95°C for 15 sec, amplification for 40 cycles at 95°C for 10 sec, 60°C for 10 sec and 72°C for 10 sec followed by dissociation curve analysis (1 cycle at 95°C for 60 sec, 55°C for 30 sec and 95°C for 30 sec) to verify PCR product specificity. Primer sequences used were: sense 5′-GGTATCCAAAGGCCAACTGA-3′ and antisense 5′-CTTATGAATCGCCAGCCAAT-3′ for BDNF; sense 5′- CCAGGCGGATATCTTGAAAA-3′ and antisense 5′-AGCGTCTCTGTTGCCGTAGT --3′ for NT3; sense 5′-TGCTCGAGATGTCATGAAGG-3′ and antisense 5′-AATCCAGCAGGTCAGCAAAG-3′, for hypoxanthine guanine phosphoribosyl transferase (HPRT)-1 as housekeeping gene. After normalizing to HPRT-1 expression (ΔCt = target gene Ct−HPRT Ct) and then to the control group (ΔΔCt = treatment ΔCt − control ΔCt), relative quantification using 2^−ΔΔCt^ was calculated as fold change of target mRNA expression vs. control.
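As a worked illustration with hypothetical Ct values (these are not data from this study): if BDNF gave Ct = 25.0 and HPRT-1 gave Ct = 20.0 in an infected sample (ΔCt = 5.0), while the control sample gave BDNF Ct = 23.0 and HPRT-1 Ct = 20.0 (ΔCt = 3.0), then ΔΔCt = 5.0−3.0 = 2.0 and the fold change is 2^−2.0^ = 0.25, i.e., a four-fold down-regulation of BDNF relative to control.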
Supporting Information {#s5}
======================
######
Periventricular cells are preferentially infected. Coronal sections from neonatal brains showing GFP-expressing cells, indicative of viral infection with recombinant MCMV, at 7 d p.i. **A**. Lower magnifications demonstrate that MCMV is localized to cells surrounding the ventricles. **B**. Adjacent serial sections showing GFP+ cells with nuclei counterstained with DAPI.
(TIFF)
######
Neonatal mice fail to control viral brain infection. Day-old littermates were injected intracranially with MCMV (500 or 200 TCID~50~ in 2 µl) or with saline. Infected and control neonates were reared under identical conditions. Data are expressed as percent survival in each group at the indicated time point, followed over the 15 d time-course of the experiment.
(TIFF)
######
Experimental model simulates congenital cytomegalovirus infection. Embryos from timed breeders were collected at ED14.5 and brains were dissected out to prepare single cell suspensions. Cells were also prepared from brains harvested from 7-d-old neonates. Cells were surface stained for CD24 and also stained for intracellular nestin. Representative dotplots and histograms show the proportions of CD24(hi) cells and the percent of maximum for nestin expression in **A**. ED14.5 and **B**. P7 brains. No significant difference in the expression levels of CD24 and nestin was observed between the groups analyzed. Data are derived from two independent experiments, n = 3--5 embryos/neonates.
(TIFF)
######
Immune responses to MCMV brain infection predominantly consist of macrophages. At 7 d p.i., leukocytes were isolated from MCMV-infected neonatal brains. Brain tissues harvested from 4--6 animals were minced finely in RPMI (2 g/L D-glucose and 10 mM HEPES) and mechanically disrupted (in Ca/Mg free HBSS) at room temperature for 20 min. Single cell preparations from infected brains were resuspended in 30% Percoll and banded on a 70% Percoll cushion at 900×g at 15°C. Brain leukocytes obtained from the 30--70% Percoll interface were stained with anti-mouse immune cell surface markers for 45 min at 4°C (CD45-PE-Cy7, CD11b-APC-Cy7, Ly-6G-FITC, MHC Class II-PE, F4/80-APC, CD4-FITC and CD8-PE; BD Biosciences, San Jose, CA; multiple sets of analyses were performed to accommodate different combinations of markers and fluorochromes) and analyzed by flow cytometry using a BD FACSCanto. Live leukocytes were gated using forward scatter and side scatter parameters and analyzed using FlowJo software (TreeStar, Inc.). **A**. We identified four distinct populations as shown. **B**. Histogram showing F4/80+ cells from CD45(hi)CD11b(hi) cells. **C**. Histogram showing MHC class II+ cells, indicating microglial activation, from CD45(int)CD11b(+). **D**. Dotplot showing ratios of CD4 and CD8 from CD45(hi)CD11b(−) and from CD45(hi)CD11b(+).
(TIFF)
**Competing Interests:**The authors have declared that no competing interests exist.
**Funding:**This project was supported by Award Number R01 NS-038836 from the National Institute of Neurological Disorders and Stroke. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
[^1]: Conceived and designed the experiments: MBM JRL. Performed the experiments: MBM SH. Analyzed the data: MBM MC. Wrote the paper: MBM JRL.
|
Blogs
Jeter's Next Big Swing
"I don't miss playings," says the retired Yankee, as the press-shy captain leads website The Players' Tribune, where DeAndre Jordan and Tiger Woods break news (sorry, ESPN) and backers are betting on a media home run
Berlin 2012: 'My Brother the Devil' Wins Europa Cinema Label Prize
The feature debut from U.K. director Sally El Hosaini is a coming-of-age story about two British Arabs.
BERLIN -- The British film My Brother The Devil, written and directed by Sally El Hosaini, has won the prize for Best European Film from the Europa Cinemas Label network.
The feature, which premiered in the Panorama section of the Berlin film festival, is a coming-of-age tale focusing on two British Arabs living in the dismal London borough of Hackney. They confront gang violence, sexual awakening and their own ingrained prejudices as they struggle to find their way in the world.
"My Brother The Devil is a refreshingly subtle film from a very talented new British filmmaker," said the European Cinemas jury in its decision. "It does not deal in the usual stereotypes of drugs and violence expressed in many contemporary films about immigrant family life in European cities. Beautifully shot, the film draws you into the well-rounded characters’ lives and keeps you interested in each throughout."
The Europa Cinemas Label also gave a special mention to the drama Dollhouse from Irish director Kirsten Sheridan, the daughter of six-time Oscar nominee Jim Sheridan, for "its innovative approach, the atmosphere it creates, and the way it grips the audience throughout.”
The Europa Cinemas network, which turns 20 this year, is an association of art house cinemas representing nearly 3,000 screens across Europe, Asia and Latin America which is dedicated to the distribution and promotion of European films. |
Q:
Rendering a nested navigation in React
I have the following data structure for my website’s navigation. This is just a JSON object:
[{
"path": "/",
"name": "Home"
}, {
"path": "/products",
"name": "Products",
"subnav": [{
"path": "/sharing-economy",
"name": "Sharing Economy"
}, {
"path": "/pre-employment-screening",
"name": "Pre-Employment Screening"
}, {
"path": "/kyc-and-aml",
"name": "KYC & AML"
}]
}, {
"path": "/checks",
"name": "Checks"
}, {
"path": "/company",
"name": "Company"
}]
What I’d like to do is to render the following from it, having a nested list inside of the Products list item when the subnav key is present:
<ul>
<li>Home</li>
<li>Products
<ul>
<li>Sharing Economy</li>
<li>Pre-Employment Screening</li>
<li>KYC & AML</li>
</ul>
</li>
<li>Checks</li>
<li>Company</li>
</ul>
Currently, my React code looks like this:
// This is the data structure from above
import navigation from '../data/navigation.json'
const SubNavigation = (props) => {
// Here I’m trying to return if the props are not present
if(!props.subnav) return
props.items.map((item, index) => {
return <Link key={index} to={item.path}>{item.name}</Link>
})
}
class Header extends React.Component {
render() {
return (
<header className='header'>
{navigation.map((item, index) => {
return(
<li key={index}>
<Link to={item.path}>{item.name}</Link>
<SubNavigation items={item.subnav}/>
</li>
)
})}
</header>
)
}
}
export default Header
I’m using a functional stateless component to render the SubNavigation; however, I am running into trouble when item.subnav is undefined.
How would I adapt this code so that I conditionally render the SubNavigation based on the item.subnav key being present/undefined.
A:
Could you try this:
<header className='header'>
{navigation.map((item, index) => {
return(
<li key={index}>
<Link to={item.path}>{item.name}</Link>
{ item.subnav && <SubNavigation items={item.subnav}/> }
</li>
)
})}
</header>
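One more thing worth noting: as written, the SubNavigation component in the question never returns anything from its body (the result of .map is discarded), so even when subnav is present nothing would render. A minimal working sketch (assuming the same Link import from your router) could look like:
const SubNavigation = (props) => {
  // Render nothing if no items were passed in
  if (!props.items) return null
  return (
    <ul>
      {props.items.map((item, index) => (
        <li key={index}>
          <Link to={item.path}>{item.name}</Link>
        </li>
      ))}
    </ul>
  )
}
With the item.subnav && guard in the Header, the early return here is redundant, but returning null (rather than undefined) keeps the component safe if it is ever rendered without items.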
|
[Continuous and bolus thermodilution cardiac output measurement during off-pump coronary artery bypass surgery.].
Off-pump CABG surgery is related to major and abrupt hemodynamic changes that may not be immediately detected by continuous cardiac output measurement (CCO). This study aimed at comparing results of cardiac index measurement with a pulmonary artery catheter (PAC) with thermal filament (Baxter Edwards Critical Care, Irvine, CA) versus the standard bolus thermodilution method during distal coronary anastomosis. Ten patients undergoing off-pump CABG, monitored with a PAC with thermal filament, participated in this study. Measurements of cardiac index were obtained at four moments: at anesthetic induction with the chest still closed (M1), after sternotomy (M2), after heart stabilization with the octopus device (M3), and at distal anastomosis completion (M4). There was a significant decrease in cardiac index (p < 0.05) during coronary anastomosis, detected when measurements were taken with the bolus thermodilution method. Cardiac index varied from 2.8 +/- 0.7 to 2.3 +/- 0.8 L.min-1.m-2 at the beginning and 2.5 +/- 0.8 L.min-1.m-2 at the end of the anastomosis. This variation was not detected by the continuous method (from 3 +/- 0.6 to 3.2 +/- 0.5 and 3.1 +/- 0.6 L.min-1.m-2 during anastomosis). CCO measurement with PAC was late in detecting acute hemodynamic changes due to changes in heart position during off-pump CABG.
1. Field of the Invention
This invention relates to the field of software compilation and, more particularly, to post-optimization of compiled code.
2. Description of the Related Art
Today's compilers include various schemes for code optimization. Generally, compilers produce relocatable object modules that can be linked together and loaded for execution by a link loader. Compilers can generate efficient instruction sets using target-dependent or target-independent machine codes. However, the code generated by a compiler may not be optimized for particular applications. Once an instruction set is generated by a compiler, it can be further optimized using various post-optimization techniques. Post-optimization involves re-visiting the generated code and finding a more efficient way to execute it. Some of the common techniques for post-optimization include instruction scheduling and register allocation.
Instruction scheduling allows a compiler to identify code operations that are independent and can be executed out of sequence. For example, a routine for printing the status of idle peripheral devices can be executed ahead of a routine that is computing a complex mathematical algorithm as long as there are no data, resource or other related dependencies between the two routines.
In a register allocation scheme, a compiler identifies and allocates available machine registers to store intermediate and final results of a computation. The number of actual hardware registers in a given machine is limited by the target machine architecture. A compiler's design may allow the use of software virtual registers, which are allocated memory locations to be used for register operations. Initially, during the code generation process, the compiler may assume an infinite number of available virtual registers and allocate virtual registers to various computations. However, each virtual register is eventually mapped to actual hardware registers for final code generation. Allowing a compiler to use an unlimited number of virtual registers can produce an optimized instruction schedule for a given code generation. Because each virtual register must eventually be mapped to one of a limited number of actual machine registers, instruction scheduling can be constrained.
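To make the virtual-to-physical mapping concrete, the following is a toy, hypothetical sketch (in JavaScript, chosen only for illustration and not tied to any particular compiler described here) of a linear-scan-style allocator: each virtual register carries a live range, physical registers are handed out while available, and anything left over is spilled to memory.
// Toy linear-scan register allocation: each virtual register has a
// live range [start, end]; assign it a physical register if one is
// free when its range begins, otherwise spill it to memory.
function allocate(virtualRegs, physicalCount) {
  const assignment = {}
  const free = []
  for (let i = 0; i < physicalCount; i++) free.push('r' + i)
  const active = [] // currently live assignments: { end, reg }
  const byStart = [...virtualRegs].sort((a, b) => a.start - b.start)
  for (const vr of byStart) {
    // Return registers whose live ranges ended before this one starts
    for (let i = active.length - 1; i >= 0; i--) {
      if (active[i].end < vr.start) {
        free.push(active[i].reg)
        active.splice(i, 1)
      }
    }
    if (free.length > 0) {
      const reg = free.shift()
      assignment[vr.name] = reg
      active.push({ end: vr.end, reg })
    } else {
      assignment[vr.name] = 'spill' // no physical register left
    }
  }
  return assignment
}
// Three virtual registers competing for two physical registers:
console.log(allocate([
  { name: 'v1', start: 0, end: 4 },
  { name: 'v2', start: 1, end: 2 },
  { name: 'v3', start: 3, end: 6 },
], 2))
// => { v1: 'r0', v2: 'r1', v3: 'r1' } (v3 reuses r1 once v2's range ends)
The sketch shows why an unbounded supply of virtual registers simplifies scheduling: the conflict only appears at the final mapping step, where overlapping live ranges must either share the limited physical set or spill.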
The optimization of code generation can be improved by integrating instruction scheduling and register allocation. In integrated optimization, a balance between instruction scheduling and register allocation is achieved by accepting some inefficiency in instruction scheduling and some spillover register allocation of virtual registers. Current integrated optimization techniques still suffer from inefficient instruction scheduling and register allocation.
Q:
Show that the infinite intersection of nested non-empty closed subsets of a compact space is not empty
I'm given the following problem:
Suppose that for every $n\in \mathbb{N}$ $V_n$ is a non-empty, closed subset of a compact space $X$, with $V_n \supseteq V_{n+1}$.
Now I have to show that $V_{\infty}= \bigcap_{n=1}^{\infty} V_n \neq \emptyset$.
How can I do that? I know the nested interval property from real analysis...
The 'answer' should be that the family $\{V_n: n\in \mathbb{N} \}$ has the finite intersection property - the intersection of any finite subfamily $\{V_{n_1}, V_{n_2}, ..., V_{n_r} \}$ is $V_N$, where $N$ is $\max \{n_1,n_2, ..., n_r \}$ and $V_N \neq \emptyset$. Hence since $X$ is compact another exercise $(*)$ says that $\bigcap_{n=1}^{\infty} V_n$ is non-empty.
Exercise $(*)$ was about to prove that for a space $X$ to be compact, it is necessary and sufficient condition that if $\{V_i: i\in I \}$ is any indexed family of closed subsets of $X$ such that $\bigcap_{j\in J}V_j$ is non-empty for any finite subset $J \subseteq I$, then $\bigcap_{i\in I}V_i$ is non-empty.
So I don't understand the proof now.... Can somebody clarify this stuff? :-)
Thanks for your trouble !
Definition of open cover:
Let $A$ be a subset in $X$.
A family $\mathcal{U}=\{U_i: i\in I\}$ of subsets of $X$ is called a cover for $A$ if $A\subseteq \bigcup U_i$.
If each $\{U_k\}$ is open in $X$, then $\mathcal{U}$ is an open cover for $A$
A:
Claim: A topological space $\,X\,$ is compact iff every family of closed subsets of $\,X\,$ with the Finite Intersection Property (=FIP) has non-empty intersection:
Proof: (1) Suppose $\,X\,$ is compact and let $\,\{V_i\}\,$ be a family of closed subsets such that $\,\displaystyle{\bigcap_{i}V_i=\emptyset}\,$. Putting now $\,A_i:=X-V_i\,$, we get that $\,\{A_i\}\,$ is a family of open subsets, and
$$\bigcap_{i}V_i=\emptyset\Longrightarrow \;X=X-\emptyset=X-\left(\bigcap_iV_i\right)=\bigcup_i\left(X-V_i\right)=\bigcup_iA_i\Longrightarrow\;\{A_i\}$$
is an open cover of $\,X\,$ and thus there exists a finite subcover of it:
$$X=\bigcup_{i\in I\,,\,|I|<\aleph_0}A_i=\bigcup_{i\in I\,,\,|I|<\aleph_0}(X-V_i)=X-\left(\bigcap_{i\in I\,,\,|I|<\aleph_0}V_i\right)\Longrightarrow \bigcap_{i\in I\,,\,|I|<\aleph_0}V_i=\emptyset\Longrightarrow$$
the family $\,\{V_i\}\,$ does not have the FIP. Taking the contrapositive: if a family of closed subsets has the FIP, then its total intersection is non-empty.
(2) Suppose now that every family of closed subsets of $\,X\,$ with the FIP has non-empty intersection, and let $\,\{A_i\}\,$ be an open cover of it. Put $\,U_i:=X-A_i\,$, so $\,U_i\,$ is closed for every $\,i\,$:
$$\bigcap_iU_i=\bigcap_i(X-A_i)=X-\bigcup_i A_i=X-X=\emptyset$$
Since this intersection is empty, the contrapositive of our assumption yields a finite set $\,J\,$ such that $\,\displaystyle{\bigcap_{i\in J}U_i=\emptyset}\,$, but then
$$X=X-\emptyset=X-\bigcap_{i\in J}U_i=\bigcup_{i\in J}(X-U_i)=\bigcup_{i\in J}A_i\Longrightarrow$$
$\,\{A_i\}_{i\in J}$ is a finite subcover for $\,X\,$ and thus it is compact....QED.
Please be sure you can follow the above and justify all the steps. Check where we used De Morgan's laws, for example, and note that we used the contrapositive of the FIP statement...
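To tie the claim back to the original question (a one-line application of the above): for any finite subfamily of the nested sets,
$$\bigcap_{k=1}^{r}V_{n_k}=V_{N}\neq\emptyset,\qquad N=\max\{n_1,\dots,n_r\},$$
so the family $\{V_n : n\in\mathbb{N}\}$ has the FIP, and the claim (with $X$ compact) gives $V_{\infty}=\bigcap_{n=1}^{\infty}V_n\neq\emptyset$.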
|
Friday, July 13, 2018
Drought? Really?
You have all heard that the Pacific Northwest is always wet and rainy! Well, not this year! May and June had lower than average rain and July is going that way also. I can't say we have had a lot of hot weather BUT it hasn't rained much. Now we are getting the warm weather, which I love. But I know the forest and brush fires will probably be bad. I can remember one summer when our kids were around five or six when my husband was away fighting forest fires most of the summer and into early fall. I would hardly get his dirty, smelly, smoky clothes clean before he would get called out again. We weren't making tons of money with the USFS but when he fought fire he got hazard pay and overtime so it always helped us out financially! But it was hard on me and the kids. We didn't have a chance that summer to cut wood so we had to have a load delivered and they dumped it in the carport. I couldn't get my car out of the garage until the wood was gone so the kids and I had to get it stacked in the woodshed. That was during the same period of summer when I was doing most of the canning. But I was young and had tons of energy and the kids, even though they were small, were a big help. They were good kids. After we moved up here the kids all picked berries when they were younger and then the boys hayed and ran peapicker machines. Our daughter babysat and took a bus to work at a restaurant when she was old enough. I guess we were not good parents because they worked but they sure turned out to be hard-working, responsible adults that we are very proud of.
We didn't do anything special on the 4th of July this year. I didn't think my knee was up to having the kids over so we sat around outside and read. Then went to DQ for some ice cream and fries! Not very exciting but we are old so it was fine....
I am not enjoying this summer as much as usual because my right knee is so bad. I abused it for so many years running every day, etc. Then crashing during a motocross really did it in and I had to have surgery and wore a cast for a month or more. Then several years later they had to do another surgery but it has never been the same and has really gotten bad this summer for some reason. I am hoping that by wearing my brace and trying to take care of it, it will get better by itself. It always has before...
Well, our "president" is overseas embarrassing us again. I hope the world doesn't judge us by him. Most of us don't believe or "think" like him. Enough said.
It doesn't seem like as many themes are being submitted this summer. I imagine all the younger people are having a life and not working on them. But this old lady is still working away. Have even learned a few new things on my Photoshop program. There is so much to do but at my age, it is not an easy program to learn. And it infuriates me when I do something and then can't remember exactly what brush I used or exactly how I did it. I try to write it down but sometimes I forget to do that!
Apparently there is going to be some big changes for Themes. I don't know how that is going to affect me and my themes. One thing I do know is they are apparently going to do away with the preview of how it will look on your screen. So stay tuned and watch out for changes. I hope it doesn't change the themes themselves but it may. I will just have to wait and see.
The orchid one of our neighbors gave me at Christmas is doing so well. I love flowers but do not have a green thumb so I am amazed my orchid is doing so well. I Googled how to water it and I watched 2 YouTube videos that an expert orchid grower had put on and have followed his instructions and apparently he knew what he was talking about because mine is doing well! So it is time for me to go take care of mine!
(A little about me)
I once said I never wanted a computer, BUT...one of my grandsons is in the IT Department at WWU and installed Firefox when he built my desktop computer. I really like it and won't use any other browser now. It is safe, fast and has tons of extensions to make it easy for your browsing. One of the add-ons they have available is themes, which they used to call Personas. They had been called Personas for several years and became very popular, but someone at Mozilla got the bright idea to change the name. Whatever you want to call them they are still a fun quick way to make your browser personal. You can find my Persona/Theme designs at: https://www.getpersonas.com/en-US/gallery/Designer/MaDonna
I make desktop wallpaper designs to match many of my themes, which you can use as background themes for GMail. You can also use them as a wallpaper for your cell phone. You can find my desktop wallpaper designs at: http://my.desktopnexus.com/MaDonnas/uploads/
I have been making theme designs for my own personal Firefox ever since it was installed. Then it was opened so you could submit designs for approval in 2008. My design called Sunset Over Water had over a million and a half people using it at one time. It is from a photo I took off Chuckanut Drive on our way home from a wedding.
I have always been interested in design. When I was a little girl I designed clothing for my paper dolls. Then designed and made clothes for my regular dolls. I made most of my own clothing in high school. When I was older I designed, sewed and made our 3 kids clothing. When they got married and started having babies I made tons of clothes for our 7 grand kids. I decided to sell some of my designs. Fibromyalgia decided to put a stop to that. Sewing for hours a day was just too much.
I am a bit of an over-achiever and when I start something, I don't know when to quit! I have done a little bit of sewing for our great grand kids, but I know I have a limit and when to stop.
I used to design and make jewelry and sold at bazaars which was a lot of fun.
I work on themes and desktop wallpaper designs almost every day and have enjoyed hearing from people all over the world.
I was asked to design themes for Brand Thunder. They have theme designs for people that use Google Chrome, Internet Explorer, Safari and also Firefox. In 2012 I won a Nexus 7 tablet on a pop culture contest they held. They have changed their format so I don't make designs for them anymore.
I now use Photoshop Elements to make my designs. It is a very complicated program to use, especially at my age of almost 80.
Thank you all for your comments and using my designs. I will keep making them as long as I can and people enjoy using them.
A BIT MORE!
I love nature and the beauty of the Pacific Northwest, the animals that roam our forests and back yards. That is the reason most of my designs are nature based. I love going up into the mountains with my husband of 61 years. He knows the mountains well and can take me places where I can take photos I can use for both Persona designs and wallpapers. It is getting harder and harder for me to take really good scenic shots, though, because of my age!
We have 3 grown children, 6 grandsons, 1 granddaughter, 4 great grandsons and 1 great granddaughter. My family is the most important part of my life. I don't get to see my brother or sister as much as I would like as we all live in different states and I am not a traveler. My family moved a lot when I was growing up and I pretty much like to just stay put now.
I loved riding dirt bikes and ATV's. I used to ride my Yamaha dirt bike in poker runs, enduro's and motocross and have a lot of trophies to show for it. And one bad knee after two surgeries! I rode dirt bikes before they had motocross gear for women. Had to buy gear for boys! But my most favorite thing was riding my Kawasaki 454 LTD road bike. My husband had a 454, too and we would go on cross country trips, especially to Winthrop, WA. My last year to ride was when I was 71 years old and it was hard to give up but I knew my strength wasn't up to it anymore. Summer isn't quite as much fun now because I can't ride, but I have so many wonderful memories.
I was born and raised a farm girl. In the early 1990's my husband and I started a group to protect the 500+ acre ranch close to us. A Japanese company had purchased it and wanted to build a development with over 2000 houses. Over 600 people from the community joined our group and we were successful in stopping the development. But instead of farming it like it should be, they built a golf course! We still do all we can to help preserve the farm land.
I have been doing a bit of freelance work, such as designing book covers, business cards, calendars, etc. It all keeps me very, very busy. Plus, getting older, it doesn't get any easier. |
We have described a prostatic cell-derived growth factor (PRGF) from a human prostatic epithelial cancer cell line JCA-1. PRGF is a protein, constitutively produced by JCA-1 cells and released into the culture medium. PRGF was purified and shown to have a molecular weight of 53,100. A partial amino acid sequence from the N-terminal of PRGF has been obtained. The interaction between stromal fibroblast preparations, from human prostate or nonprostate fibroblasts, and PRGF causes an accelerated growth of the fibroblasts. The mechanism includes an increase in growth rate and a more rapid entry of G1 phase cells into the replicating S-phase of the cell cycle. PRGF may represent a paracrine growth factor and is distinct from the commonly known growth factors. Since prostate growth involves both epithelium and fibromuscular tissue, and prostate hyperplasia and malignancy are sometimes characterized by abnormal fibroblast-epithelial cell interactions, we plan to investigate the nature and biological activity of the PRGF. We will further characterize and identify PRGF from JCA-1 cancer cell culture supernatant. PRGF will be further characterized to determine its full amino acid sequence, and polyclonal and monoclonal antibodies will be produced to investigate its biological functions. Sensitive assay procedures, such as cell cycle analysis, cell size, DNA replication, etc., will be developed to identify PRGF. The biological activity of PRGF will be pursued by identifying the cell types in the prostate, whether stromal or epithelial, responding to PRGF. Growth responses of different cell types will be measured and cell surface receptors, specific for PRGF, will be identified using radiolabeled PRGF. Localization of the PRGF production site in clinical prostate tissues will be pursued using the peroxidase antibody technique. Normal, benign and cancer tissues from the prostate will be analyzed in order to correlate PRGF and prostate cancer grading. Furthermore, the relationship between PRGF production and androgen stimulation will be analyzed using JCA-1 cells. Further structural analysis of PRGF, using isolated JCA-1 cDNA and the PRGF nucleotide sequence, is planned. The cDNA clone will also make it possible to heterologously express the PRGF protein. |
---
abstract: 'It is argued that directed flow $v_1$, the observable introduced for the description of nucleus collisions, can be used for detecting the nature of the state of the matter in the transient state of hadron and nuclei collisions. We consider a possible origin of the directed flow in hadronic reactions as a result of rotation of the transient matter and trace the analogy with nucleus collisions. Our proposal is that the presence of directed flow can serve as a signal that the transient matter is in a liquid state.'
---
**ORBITAL MOMENTUM EFFECTS DUE TO A LIQUID NATURE OF TRANSIENT STATE**
S.M. Troshin and N.E. Tyurin
*Institute for High Energy Physics, 142281, Protvino, Moscow Region, Russia*
Important tools in the studies of the nature of the new form of matter are the anisotropic flows, the quantitative characteristics of the collective motion of the produced hadrons in nuclear interactions. With their measurements one can obtain valuable information on the early stages of reactions and observe signals of QGP formation. The experimental probes of collective dynamics in $AA$ interactions, the momentum anisotropies $v_n$, are defined by means of the Fourier expansion of the transverse momentum spectrum over the momentum azimuthal angle $\phi$. The angle $\phi$ is the angle of the detected particle transverse momentum with respect to the reaction plane spanned by the collision axis $z$ and the impact parameter vector $\mathbf b$ directed along the $x$ axis. Thus, the anisotropic flows are the azimuthal correlations with the reaction plane. In particular, the directed flow is defined as $$\label{dirfl}
v_1(p_\perp)\equiv \langle \cos \phi \rangle_{p_\perp}= \langle {p_x}/{p_\perp}\rangle = \langle {\hat{\mathbf b}\cdot {\mathbf p}_\perp} /{p_\perp}\rangle.$$ From Eq. (\[dirfl\]) it is evident that this observable can be used for studies of multiparticle production dynamics in hadronic collisions, provided that the impact parameter $\mathbf b$ is fixed.
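As a purely illustrative numerical check of this definition (not part of the original analysis), the sketch below draws azimuthal angles from a toy distribution $dN/d\phi\propto 1+2v_1\cos\phi$ and recovers $v_1$ as $\langle\cos\phi\rangle$; the input value of $v_1$ is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy azimuthal distribution dN/dphi ~ 1 + 2*v1_true*cos(phi),
# sampled by acceptance-rejection; v1_true is an assumed input value.
v1_true = 0.05
phi = rng.uniform(-np.pi, np.pi, 2_000_000)
envelope = 1.0 + 2.0 * v1_true
accepted = rng.uniform(0.0, envelope, phi.size) < 1.0 + 2.0 * v1_true * np.cos(phi)
phi = phi[accepted]

# The directed flow is the sample average <cos(phi)>, cf. Eq. (dirfl).
print(f"v1 estimate = {np.cos(phi).mean():.4f} (input {v1_true})")
```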
We assume that the origin of the transient state and its dynamics, along with hadron structure, can be related to the mechanism of spontaneous chiral symmetry breaking ($\chi$SB) in QCD, which leads to the generation of quark masses and the appearance of quark condensates. This mechanism describes the transition of current quarks into constituent quarks. The gluon field is considered to be responsible for providing a quark with its mass and internal structure through the instanton mechanism of the spontaneous chiral symmetry breaking. Massive constituent quarks appear as quasiparticles, i.e. current quarks surrounded by clouds of quark–antiquark pairs which consist of a mixture of quarks of different flavors. Quark radii are determined by the radii of the surrounding clouds. The quantum numbers of the constituent quarks are the same as those of the current quarks, due to conservation of the corresponding currents in QCD.
Collective excitations of the condensate are the Goldstone bosons, and the constituent quarks interact via exchange of the Goldstone bosons; this interaction is mainly due to the pion field. Pions themselves are the bound states of massive quarks. The Lagrangian responsible for the quark-pion interaction can be written in the form [@diak]: $${\cal{L}}_I=\bar Q[i\partial\hspace{-2.5mm}/-M\exp(i\gamma_5\pi^A\lambda^A/F_\pi)]Q,\quad \pi^A=\pi,K,\eta.$$ The interaction is strong; the corresponding coupling constant is about 4. The general form of the total effective Lagrangian (${\cal{L}}_{QCD}\rightarrow {\cal{L}}_{eff}$) relevant for the description of the non–perturbative phase of QCD includes three terms [@gold]: $${\cal{L}}_{eff}={\cal{L}}_\chi +{\cal{L}}_I+{\cal{L}}_C.\label{ef}$$ Here ${\cal{L}}_\chi$ is responsible for the spontaneous chiral symmetry breaking and turns on first. The picture of a hadron consisting of constituent quarks embedded into a quark condensate implies that overlapping and interaction of the peripheral clouds occur at the first stage of hadron interaction. The interaction of the condensate clouds is assumed to be of the shock-wave type; it generates the quark-pion transient state. This mechanism is inspired by the shock-wave production process proposed by Heisenberg a long time ago. At this stage, the part ${\cal{L}}_C$ of the effective Lagrangian is turned off (it is turned on again in the final stage of the reaction). Nonlinear field couplings then transform the kinetic energy into internal energy. As a result, massive virtual quarks appear in the overlap region and a transient state of matter is generated. This state consists of $\bar{Q}Q$ pairs and pions strongly interacting with the quarks. This picture of quark-pion interaction can be considered as an origin of the percolation mechanism of deconfinement, resulting in the liquid nature of the transient matter [@jtt].
Part of the hadron energy carried by the outer condensate clouds is released in the overlap region and goes to the generation of massive quarks interacting by pion exchange; their number was estimated as follows: $$\tilde{N}(s,b)\,\propto \,\frac{(1-\langle k_Q\rangle)\sqrt{s}}{m_Q}\;D^{h_1}_c\otimes D^{h_2}_c \equiv N_0(s)D_C(b), \label{Nsbt}$$ where $m_Q$ is the constituent quark mass and $\langle k_Q\rangle$ is the average fraction of the hadron energy carried by the constituent valence quarks. The function $D^h_c$ describes the condensate distribution inside the hadron $h$, and $b$ is the impact parameter of the colliding hadrons. Thus, $\tilde{N}(s,b)$ quarks appear in addition to the $N=n_{h_1}+n_{h_2}$ valence quarks.
The generation time of the transient state, $\Delta t_{tsg}$, in this picture obeys the inequality $$\Delta t_{tsg}\ll \Delta t_{int},$$ where $\Delta t_{int}$ is the total interaction time. The newly generated massive virtual quarks play the role of scatterers for the valence quarks in elastic scattering; those quarks are transient ones in this process: they are transformed back into the condensates of the final hadrons.
In constructing the model for elastic scattering it was assumed that the valence quarks located in the central part of a hadron are scattered in a quasi-independent way off the transient state, with the interaction radius of a valence quark determined by its inverse mass: $$\label{rq}
R_Q=\kappa/m_Q.$$ The elastic scattering $S$-matrix in the impact parameter representation is written in the model in the form of a linear fractional transform: $$S(s,b)=\frac{1+iU(s,b)}{1-iU(s,b)}, \label{um}$$ where $U(s,b)$ is the generalized reaction matrix, which is considered to be an input dynamical quantity similar to an input Born amplitude and related to the elastic scattering amplitude through an algebraic equation which enables one to restore unitarity. The function $U(s,b)$ is chosen in the model as a product of the averaged quark amplitudes $$U(s,b) = \prod^{N}_{Q=1} \langle f_Q(s,b)\rangle$$ in accordance with the assumed quasi-independent nature of the valence quark scattering. The essential point here is the rise with energy of the number of scatterers, like $\sqrt{s}$. The $b$-dependence of the function $\langle f_Q \rangle$ has a simple form: $\langle f_Q(b)\rangle\propto\exp(-m_Qb/\xi )$.
These notions can be extended to particle production, with account of the geometry of the overlap region and the properties of the liquid transient state. Valence constituent quarks would excite a part of the cloud of the virtual massive quarks, and those quark droplets subsequently hadronize and form the multiparticle final state. This mechanism can be relevant for the region of moderate transverse momenta, while the region of high transverse momenta should be described by the excitation of the constituent quarks themselves and the application of perturbative QCD to the parton structure of the constituent quark. The model allows one to describe elastic scattering and the main features of multiparticle production. In particular, it leads to the asymptotical dependencies $$\label{tota}
\sigma_{tot,el}\sim \ln^2 s,\;\; \sigma_{inel}\sim \ln s, \;\; \bar{n}\sim s^\delta.$$ The geometrical picture of hadron collisions at non-zero impact parameters described above implies that the massive virtual quarks generated in the overlap region will obtain a large initial orbital angular momentum at high energies. The total orbital angular momentum can be estimated as follows: $$\label{l}
L(s,b) \simeq \alpha b \frac{\sqrt{s}}{2}D_C(b).$$ The parameter $\alpha$ is related to the fraction of the initial energy carried by the condensate clouds which goes into rotation of the quark system; the overlap region, described by the function $D_C(b)$, has an ellipsoidal form. It should be noted that $L\to 0$ at $b\to\infty$ and $L=0$ at $b=0$. At this point we would like to stress again the liquid nature of the transient state. Namely, due to the strong interaction between quarks in the transient state, it can be described as a quark-pion liquid. Therefore, the orbital angular momentum $L$ should be realized as a coherent rotation of the quark-pion liquid as a whole in the $xz$-plane (due to the mentioned strong correlations between the particles present in the liquid). It should be noted that, for a given value of the orbital angular momentum $L$, the kinetic energy has a minimal value if all parts of the liquid rotate with the same angular velocity. We assume therefore that the different parts of the quark-pion liquid in the overlap region indeed have the same angular velocity $\omega$. In such a picture, the spin of polarized hadrons has its origin in the rotation of the matter the hadrons consist of; here, in contrast, we assume rotation of the matter during the intermediate, transient state of the hadronic interaction. Collective rotation of the strongly interacting system of massive constituent quarks and pions is the main point of the proposed mechanism of directed flow generation in hadronic and nuclear collisions. We concentrate on the effects of this rotation and consider the directed flow of the constituent quarks, supposing that the directed flow of hadrons is, at least qualitatively, close to that of the constituent quarks. The assumed particle production mechanism at moderate transverse momenta is the excitation of a part of the rotating transient state of massive constituent quarks (interacting by pion exchanges) by one of the valence constituent quarks, with subsequent hadronization of the quark-pion liquid droplets. Because the transient matter is strongly interacting, the excited parts should be located close to the periphery of the rotating transient state; otherwise absorption would not allow the quarks and pions to leave the region (quenching). The mechanism is sensitive to the particular rotation direction, and the directed flow should have opposite signs for particles in the fragmentation regions of the projectile and the target, respectively. It is evident that the effect of rotation (the shift in the $p_x$ value) is most significant in the peripheral part of the rotating quark-pion liquid and weaker in the less peripheral regions (rotation with the same angular velocity $\omega$); i.e., the directed flow $v_1$ (averaged over all transverse momenta) should be proportional to the inverse depth $\Delta l$ at which the excitation of the rotating quark-pion liquid takes place. The geometrical picture of hadron collisions has an apparent analogy with collisions of nuclei, and it should be noted that the appearance of a large orbital angular momentum should be expected in the overlap region in non-central nuclei collisions. Then, due to the strongly interacting nature of the transient matter, we assume that this orbital angular momentum is realized as a coherent rotation of the liquid. Thus, the underlying dynamics could be similar to the dynamics of the directed flow in hadron collisions.
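For orientation only, here is a numeric sketch of Eq. (\[l\]); the Gaussian form chosen for $D_C(b)$, the value of $\alpha$, and the scales are all assumptions made for illustration, not outputs of the model:

```python
import numpy as np

def orbital_momentum(sqrt_s, b, alpha=0.1, b0=1.0):
    """Toy estimate L(s,b) ~ alpha * b * sqrt(s)/2 * D_C(b), with an
    assumed Gaussian overlap D_C(b) = exp(-b^2/b0^2); units illustrative."""
    return alpha * b * (sqrt_s / 2.0) * np.exp(-(b / b0) ** 2)

# L vanishes at b = 0 and as b grows large, peaking at intermediate b.
for b in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"b = {b:3.1f}  ->  L ~ {orbital_momentum(200.0, b):8.3f}")
```

The output reproduces the stated limits, $L=0$ at $b=0$ and $L\to 0$ as $b\to\infty$.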
We can go further and extend the production mechanism from the hadron to the nucleus case as well. This extension cannot be straightforward. First, there will be no unitarity corrections for the anisotropic flows, and instead of valence constituent quarks we should consider nucleons as the projectiles which excite the rotating quark liquid. Of course, those differences will result in significantly higher values of the directed flow, but the general trends in its dependence on the collision energy, the rapidity of the detected particle, and the transverse momentum should be the same. In particular, the directed flow in nuclei collisions, as well as in hadron reactions, will depend on the rapidity difference $y-y_{beam}$ and not on the incident energy. The mechanism can therefore provide a qualitative explanation of the incident-energy scaling of $v_1$ observed at RHIC [@jmpe].
[9]{} D. Diakonov, V. Petrov, Phys. Lett. B **147** (1984) 351. T. Goldman, R.W. Haymaker, Phys. Rev. D **24** (1981) 724. L.L. Jenkovszky, S.M. Troshin, N.E. Tyurin, arXiv:0910.0796. S.M. Troshin, N.E. Tyurin. Int. J. Mod. Phys.E **17** (2008) 1619.
|
Age at onset and pattern of neuropsychological impairment in mild early-stage Alzheimer disease. A study of a community-based population.
To examine the effects of age at onset on neuropsychological functioning in a group of patients with probable Alzheimer disease (AD) and, within this group, to scrutinize further those patients with mild early-onset disease, as it was hypothesized that within this group specific patterns of cognitive impairment could be identified that correlated with neuropathological staging of the disease. Each patient underwent an extensive neuropsychological test battery to examine a wide range of cognitive processes to provide information to identify subtypes of dementia. The Memory Clinic in the Department of Geriatric Medicine, Concord Hospital, Concord, New South Wales, Australia. One hundred forty-five community-residing case patients with probable AD were studied; within this group, 51 case patients with mild AD and a Mini-Mental State Examination score greater than 19 were further examined; 36 similarly aged control patients who were part of a larger case-control study of AD in an urban population were also examined. A diagnosis of probable and possible AD was made if the case patient had evidence of memory impairment and met criteria according to the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association. Individual neuropsychological test scores were compared. The tests were then grouped into 7 cognitive domains. Patterns of early cognitive impairment were derived from these comparisons. With an earlier age at onset, significantly more impairment on tests of digit span and praxis was seen, while the duration of disease had no independent effect once the age at onset was fixed. Patients with mild early-onset dementia and a Mini-Mental State Examination score greater than 19 showed significant impairment in tests of attention, memory, frontal/executive functions, visuospatial ability, praxis, and visual agnosia compared with that shown by control patients. In this group, further analyses revealed that impairments in memory and frontal/executive functions were the earliest signs of cognitive impairment. These data showed that when the duration of disease was adjusted for, case patients with an earlier age at onset of AD demonstrated significantly more impairment on tests of attention span and working memory (digit span), graphomotor function (copy loops), and apraxia than those with an older age at onset. Our findings support the view that the hippocampus and its connections are affected in the early stages of AD. The deficits in the frontal/executive functions also suggest that a disruption of cortical pathways to the frontal lobes and the pathological changes in this region occur early in the disease. |
Bay State Roots, Hollywood Fame
Steven Van Zandt of "The Sopranos" and "The E Street Band" was born in Winthrop, Mass. (Photo by Michael Loccisano/Getty Images)
Topher Grace
Topher Grace of "That 70s Show" attended the Fay School in Southboro, Mass. He was discovered while attending Brewster Academy in Wolfeboro, N.H. - by a producer whose daughter also attended the school. (Photo credit should read MAX NASH/AFP/Getty Images)
Actor Denis Leary was born and raised in Worcester. After his cousin and childhood friend were killed in the 1999 Worcester warehouse blaze, he launched the Leary Firefighters Foundation, raising money for firefighters in Worcester, Boston and New York City. (Photo by Bryan Bedder/Getty Images)
Shemar Moore
Shemar Moore of CBS' Criminal Minds lived in Boston as a child. (Photo by Michael Caufield/Getty Images for PCA)
Leonard Nimoy
Leonard Nimoy, seen here as Mr. Spock in the television series "Star Trek," was born and raised in Boston. (Getty Images)
Debra Messing
Debra Messing of "Will and Grace" fame graduated from Brandeis University in Waltham. (Photo by Jason Merritt/Getty Images)
Kate Bosworth
Actress Kate Bosworth moved to Cohasset when she was 14, and was living on the South Shore when she landed a role in "The Horse Whisperer." (Photo by Michael Buckner/Getty Images)
Geena Davis
Geena Davis was born and raised in Wareham, Mass. She went on to attend Boston University. (Photo by Frederick M. Brown/Getty Images)
Jessica Biel
Jessica Biel enrolled in Tufts University in 2000, but left school after three semesters to focus on her career. (Photo by Steffen Kugler/Getty Images)
Matt LeBlanc
... And his co-star Matt LeBlanc was born in Newton. (Photo by Frederick M. Brown/Getty Images)
Matthew Perry
Matthew Perry of "Friends" fame was born in Williamstown, Mass. (Photo by Frederick M. Brown/Getty Images)
Conan O'Brien
Conan O'Brien was born and raised in Brookline and then attended Harvard University. (Photo by Jason Merritt/Getty Images)
Amy Poehler
Amy Poehler grew up in Burlington and attended Boston College. (Photo by David Livingston/Getty Images)
John Cena
Wrestler John Cena of the WWE is from West Newbury. (Photo by Bryan Bedder/Getty Images)
Edward Norton
Actor Edward Norton was born in Boston. (Photo by Jason Merritt/Getty Images)
Marcia Cross
Actress Marcia Cross may be a "Desperate Housewife" living on Wisteria Lane now, but she was born in Marlboro, Mass. (Photo by Adam Rose/ABC via Getty Images)
Henry Winkler
(credit: Alberto E. Rodriguez/Getty Images)
Henry Winkler, or 'The Fonz', graduated from Emerson College.
Rachael Ray
Talk show host and 30 Minute Meal extraordinaire Rachael Ray lived on Cape Cod as a child, where her parents operated several restaurants. (Photo by Taylor Hill/Getty Images)
Uma Thurman
Uma Thurman was born in Boston, grew up in Amherst, Mass. and attended Northfield Mount Hermon boarding school. (Photo by Stephen Lovekin/Getty Images)
Bobby Brown
Boston native Bobby Brown saw huge success early in life as a member of "New Edition" and then in a solo career. His marriage and divorce from Whitney Houston has been the subject of tabloid fodder. Brown has landed in Massachusetts courtrooms several times in recent years, for failing to pay child support for two children he had with a Massachusetts woman. (Photo by Frazer Harrison/Getty Images)
Sam Waterston
Sam Waterston of "Law & Order" was born in Cambridge and attended the Brooks School in North Andover. (Photo by Kris Connor/Getty Images)
Ed McMahon
Ed McMahon of late night and 'Star Search' fame grew up in Lowell and attended Boston College. His first broadcasting job was at WLLH-AM in Lowell. (Photo by Frederick M. Brown/Getty Images)
Jeff Corwin
Jeff Corwin of Animal Planet grew up in Norwell and later attended both Bridgewater State College and UMass Amherst. (Photo by Peter Kramer/Getty Images)
Ellen Pompeo of "Grey's Anatomy" was born in Everett, Mass. - the youngest of 6 children. (Photo by Jason Merritt/Getty Images)
Traci Bingham
Traci Bingham from Baywatch was born in Cambridge. (Photo Credit: Getty Images/Handout)
Michael Chiklis
Michael Chiklis of "The Shield" was born in Lowell, grew up in Andover, and later attended Boston University.
Michael Chiklis performs during the Boston Pops 4th of July concert rehearsal at the Hatch Shell on the Esplanade in Boston, Sunday, July 3, 2011. (AP Photo/Michael Dwyer)
Jennifer Coolidge
Comic actress Jennifer Coolidge may be best known as "Stifler's Mom" in the "American Pie" movie series. She was born in Boston, attended Norwell High School and Emerson College. (Photo By Getty Images)
Ginnifer Goodwin
Actress Ginnifer Goodwin appears as Margene on HBO's series 'Big Love'and was in the movie 'Walk the Line.' Goodwin graduated from Boston University and is said to be an avid Red Sox fan. (Photo by Alberto E. Rodriguez/Getty Images)
Jenny Slate became an overnight YouTube sensation, when the Milton native let the F-bomb slip during her Saturday Night Live debut in September of 2009. (Photo by Charles Eshelman/Getty Images)
Agnes Moorehead
Agnes Moorehead (R), one of the stars of the 1960s television show "Bewitched," was from Clinton. (Photo credit: FILES/AFP/Getty Images)
Carroll Spinney
Carroll Spinney, who has spent more than 35 years playing Big Bird and Oscar the Grouch on 'Sesame Street,' was born in Waltham and attended Acton-Boxboro Regional High School. (Photo by Noel Vasquez/Getty Images)
Jeffrey Donovan
Jeffrey Donovan of USA network's series 'Burn Notice' was born in Amesbury. (Photo by John Parra/Getty Images for Custo Barcelona)
Football commentator Howie Long was born in Somerville and attended Milford High School, before going on to be an NFL Hall of Famer. (Photo by Frank Micelotta/Getty Images)
Jason Alexander
Jason Alexander of "Seinfeld" fame attended Boston University, but left school before graduation. (Photo by Kevork Djansezian/Getty Images for VH1)
Madeline Kahn
The late actress Madeline Kahn of 'Young Frankenstein' and 'Cosby' was born in Boston. (Photo credit: TIMOTHY CLARY/AFP/Getty Images)
John O'Hurley
Perhaps best know for his role as J.Peterman in "Seinfeld", O'Hurley lived in Natick for a time during childhood. (Photo by Andy Kropa/Getty Images)
Al Pacino
Al Pacino was a longtime member of David Wheeler's Theatre Company of Boston, performing on Boston stages in the 1970s. (Photo by Kevin Winter/Getty Images)
Paget Brewster
Paget Brewster of CBS's 'Criminal Minds' was born in Concord. (Photo by Frazer Harrison/Getty Images)
Erik Per Sullivan
Actor Erik Per Sullivan, best known as little brother Dewey on 'Malcolm in the Middle', was born in Worcester. His family owns a restaurant in Milford. (Photo by Stephen Shugerman/Getty Images)
George Carlin
Comedian George Carlin worked at Boston radio station WEZE for three months in the late 1950s. (Photo by Neilson Barnard/Getty Images)
Kristian Alfonso
Soap star Kristian Alfonso of "Days of Our Lives" was born and raised in Brockton and graduated from Brockton High. (Photo by Michael Loccisano/Getty Images)
Elizabeth Banks
Elizabeth Banks of the Spider-Man films, Seabiscuit, and 40 Year Old Virgin, is from Pittsfield. (Photo by David Livingston/Getty Images)
Bill Macy
Actor Bill Macy (L) pictured with son William H. Macy (R). The elder Macy is from Revere. (Photo by George McGinn/Getty Images)
Ray Bolger
Ray Bolger was best known as the Scarecrow in 'The Wizard of Oz.' The Tony Award winning dancer was born in Dorchester. (AP Photo/MGM)
Elizabeth Perkins
Elizabeth Perkins lived in Bernardston for a time as a child. She is best known for her role in the Showtime hit series 'Weeds' and her movie roles in 'The Flintstones' and 'Big.' (Photo by Jason Merritt/Getty Images)
Eliza Dushku has found success in movies and television. Perhaps best known for her roles on "Buffy the Vampire Slayer" and in the cheerleading movie "Bring It On," Dushku was born in Watertown. (Photo by Jason Merritt/Getty Images)
Jasmine Guy
Actress, singer, and dancer Jasmine Guy was born in Boston (Photo by Jemal Countess/Getty Images)
Jack Haley
Jack Haley, who played the Tin Man in "The Wizard of Oz" was from Newton Highlands. (AP Photo/MGM)
Jane Alexander
Actress Jane Alexander was born and grew up in Boston. (Photo by Frazer Harrison/Getty Images)
These days Callie Thorne stars alongside Denis Leary in "Rescue Me." Her roots are in Massachusetts - growing up in Lincoln and attending Wheaton College. Thorne also starred in shows like "Prison Break" and "Homicide." (Photo by Astrid Stawiarz/Getty Images)
Alex Rocco
Actor Alex Rocco of 'The Godfather' and 'The Wedding Planner' was born in Boston. (Photo by Vince Bucci/Newsmakers)
Steve and Nancy Carell
Husband and wife Steve and Nancy Carell both have roots here. Steve was born in Concord, Mass. Nancy was born in Cohasset. The two now spend their summers vacationing in Marshfield, where they own a general store. (Photo by Jason Kempin/Getty Images)
Chris Evans
Actor Chris Evans, known for his roles in “Captain America” and "The Fantastic Four" movies, grew up in Framingham and Sudbury. (Photo by Kevin Winter/Getty Images)
James Spader
James Spader was born in Boston and attended school at the Pike School in Andover, Brooks School in North Andover, and Phillips Academy. (Photo by Neilson Barnard/Getty Images)
Joe McIntyre
Singer turned actor Joe McIntyre grew up in Needham and Jamaica Plain. He got his start as the youngest member of Boston boy band New Kids on the Block, then graduated to TV, Broadway and made an appearance as a contestant on Dancing with the Stars. (Photo by Jemal Countess/Getty Images)
Jean Louisa Kelly
Jean Louisa Kelly of "Yes, Dear" was born in Worcester and raised in Boylston. She also appeared in "Mr. Holland's Opus" and "Uncle Buck." (Photo by Alberto E. Rodriguez/Getty Images)
Lenny Clarke
Comedian Lenny Clarke was born in Cambridge and is good friends with fellow Massachusetts native Denis Leary. He has a successful comic career and also a recurring role on Leary's series "Rescue Me." (Photo by Bryan Bedder/Getty Images)
Max Casella
Max Casella, the baby-faced actor known for his roles on "The Sopranos" and "Doogie Howser, MD," was raised in the Boston area and attended Cambridge Rindge & Latin. (Photo by Henry S. Dziekan III/Getty Images)
Bridget Moynahan
Actress Bridget Moynahan was raised in Longmeadow, Mass. While she has appeared in several movies and television shows, she may be best known as Tom Brady's ex-girlfriend and mother of his first child. (Photo by Jemal Countess/Getty Images)
Mike O'Malley
Mike O'Malley of Glee and CBS' "Yes, Dear" was born in Boston, but raised in New Hampshire. (Photo by Jason Merritt/Getty Images)
Parker Stevenson
Remember Parker Stevenson from "The Hardy Boys"? He graduated from Brooks School in North Andover. (Photo Courtesy of USA Network/Getty Images)
Rachel Dratch
Comedian/Actress Rachel Dratch of Saturday Night Live was born and raised in Lexington. (Photo by Jason Kempin/Getty Images)
Liza Snyder
Liza Snyder of "Yes, Dear" was born in Northampton. (Photo by Frederick M. Brown/Getty Images)
Tony Shalhoub
Tony Shalhoub of "Monk" fame spent four seasons with the American Reperatory Theatre in Cambridge, Mass. (Photo by Frazer Harrison/Getty Images)
Ben and Casey Affleck
Brothers Ben and Casey Affleck were raised in Cambridge and have both found success in Hollywood. (Photo by Gareth Cattermole/Getty Images)
B.J. Novak
B. J. Novak, also of "The Office" was born in Newton and went to high school with his co-star John Krasinski at Newton South. (Photo by Jason Merritt/Getty Images)
Barbara Walters
Barbara Walters was born in Boston. (Photo by Donna Ward/Getty Images)
Elisabeth Hasselbeck
The world was first introduced to Elisabeth Hasselbeck when she was a Boston College student competing on "Survivor." She turned that experience into a successful career as co-host of "The View." (Photo by Jemal Countess/Getty Images for Scholastic)
John Krasinski of "The Office" and "Dreamgirls" was born and raised in Newton, graduating from Newton South. (Photo by Michael Buckner/Getty Images)
Julia Child
Famed TV chef Julia Child got her start in Boston. (AP File Photo)
Lee Remick
The late actress Lee Remick was born in Quincy. (Photo by George De Sota/Getty Images)
Matt Damon
Matt Damon grew up just a couple blocks away from the Affleck brothers in Cambridge. He and Ben were childhood friends who rose to fame together with their film "Good Will Hunting." (Photo by Gary Gershoff/Getty Images)
Matthew Fox
Matthew Fox of "Lost" and "Party of Five" fame attended Deerfield Academy for a year after high school. (Photo by Andreas Rentz/Getty Images)
Mindy Kaling
Mindy Kaling, also of "The Office" was born and raised in Cambridge and graduated from Buckingham, Browne & Nichols. (Photo by Michael Loccisano/Getty Images for TIME)
Nancy Travis
Actress Nancy Travis from 'Becker' and the cult hit 'So I Married an Ax Murderer', grew up in Framingham. (Photo by Jennifer Polixenni Brankin/Getty Images)
Paul Michael Glaser from the "Starsky and Hutch" TV series was born in Cambridge, grew up in Brookline and Newton and attended BU. (Photo by Frazer Harrison/Getty Images)
Peter Gallagher
Actor Peter Gallagher graduated from Tufts in Medford, before going on to star in several movies and the television series "The O.C." (Photo by Frazer Harrison/Getty Images for AFI)
Rainn Wilson
Rainn Wilson of 'The Office' and 'Juno' went to Tufts. (Photo by Mike Coppola/Getty Images for Tribeca Film Festival)
Scott Wolf
Actor Scott Wolf, who played Matthew Fox's brother on "Party of Five," was born in Boston. He later starred in "Everwood" and the short-lived series "The Nine." (Photo by Moses Robinson/Getty Images for Usher's New Look Foundation)
Tom Everett Scott
Tom Everett Scott from the movie 'That Thing You Do" among other movies, is from East Bridgewater. (Photo by Frederick M. Brown/Getty Images)
Tom Bergeron
TV host Tom Bergeron is an alum of WBZ-TV and Radio. He rose to local fame on Boston TV before making the successful move to the national scene. Most recently, he can be seen as host of "Dancing Wtih the Stars." (Photo by Alberto E. Rodriguez/Getty Images for Reality Rocks)
Jonathon Togo of "CSI: Miami" was born and raised in Rockland. (Photo by Charley Gallay/Getty Images)
Julianne Nicholson
Julianne Nicholson of "Law & Order Criminal Intent" was born and raised in Medford. (Photo by Jason Kempin/Getty Images for Tribeca Film Festival)
Karen Allen
Karen Allen of "Animal House" and "Raiders of the Lost Ark" founded the Berkshire Mountain Yoga in 1995, and later started her own knitwear design studio in Great Barrington. (Photo by Stephen Lovekin/Getty Images)
Kathryn Erbe
Kathryn Erbe or Law & Order Criminal Intent is from Newton. (Photo by Andrew H. Walker/Getty Images)
Louis C.K.
Comedian Louis C.K. of HBO's "Lucky Louie" was raised in Massachusetts and got his start in the Boston stand-up comedy scene. (Photo by Katy Winn/Getty Images)
Mark Wahlberg
Mark Wahlberg rose to fame under the name "Marky Mark," posing for underwear ads. He later found success as an adult in TV and movies. (Photo by Frazer Harrison/Getty Images)
Michael Weatherly
Michael Weatherly of "NCIS" attended Brooks School in North Andover and Boston University for a time. (Photo credit: CHRISTIAN ALMINANA/AFP/Getty Images)
Pop singer and actress JoJo was raised in Foxboro, Mass. Her full name is Joanna Noëlle Levesque. (Photo by David Livingston/Getty Images)
Kurt Russell
Kurt Russell was born in Springfield, Mass. (Photo by Frazer Harrison/Getty Images)
Maria Menounos
Actress Maria Menounos was born in Medford. She was Miss Teen Massachusetts in 1996 and went to Emerson College. (Photo by Frederick M. Brown/Getty Images)
Mike Wallace
Mike Wallace of CBS' "60 Minutes" was born in Brookline. (Photo by Scott Gries/Getty Images)
Olympia Dukakis
Academy Award winning actress Olympia Dukakis was born in Lowell and is a cousin of former governor Michael Dukakis. (Photo by Andy Kropa/Getty Images)
Peter Guber
Hollywood producer Peter Guber was born in Newton. He is CEO of Mandalay Entertainment Group and served as Executive Producer on such projects as "Rain Man" and the Batman movies. (Photo by Frederick M. Brown/Getty Images)
Oscar-winning actor Chris Cooper, who has appeared in dozens of movies including "Adaptation," "The Bourne Identity" and "American Beauty," lives in Kingston, Mass. (Photo by Stephen Lovekin/Getty Images)
David Chokachi of "Baywatch" fame was born in Plymouth and attended Tabor Academy in Marion. He is shown here with actress Annabeth Gish. (Photo by Matthew Simmons/Getty Images for GQ)
David Morse
David Morse of "St. Elsewhere" and "The Green Mile" was born and raised in Hamilton, Mass. He began his career as a player for the Boston Repertory Theatre. (Photo credit: MICHAL CIZEK/AFP/Getty Images)
Donna Summer
5-time Grammy winner Donna Summer was born in Boston, one of seven children. (Photo by Frederick M. Brown/Getty Images)
Jo Dee Messina
Country singer Jo Dee Messina was born and raised in Holliston. (Photo by A. Messerschmidt/Getty Images)
Joe Rogan
Joe Rogan, who starred in the sitcom "NewsRadio" in his early career, was raised in Revere, Mass. He found his niche in Hollywood as host of "Fear Factor" and "The Man Show." (Photo by Ethan Miller/Getty Images for MGM Resorts International)
Broadway and television actor Robert Morse was born in Newton. (Photo by Frazer Harrison/Getty Images)
Scott Grimes
Scott Grimes, who is known these days for his role on "E.R." was born in Lowell. Early in his career, Grimes had a guest role as Alyssa Milano's boyfriend on "Who's The Boss." (Photo by Frederick M. Brown/Getty Images) |
Q:
Finding the length and width of a house that maximize its area
A house is built in the shape of a rectangle, with $3$ rectangular interior sections separated by parallel walls, using fencing. The owner has $900$ feet of fencing, and he wants to enclose the largest possible area. What should the length, width, and area be?
Please help, I'm lost.
A:
Let the two inside parallel walls each have length $x$. Let the sides of the rectangle perpendicular to these each have length $y$.
Then the total area enclosed is $xy$. The amount of fencing used is $4x+2y$. This is to be $900$, since it is clear that it is best to use up all the fencing.
So we want to maximize $xy$, under the constraint $4x+2y=900$.
Thus $y=450-2x$, and we want to maximize $x(450-2x)$.
Because of the physical situation, we need $x\ge 0$ and $y\ge 0$. This means $x\le 225$.
So mathematically, we want to maximize $f(x)=450x-2x^2$, where $0\le x\le 225$.
This can be done by standard tools, such as calculus or completing the square.
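For completeness, here is the completing-the-square step carried out explicitly (a routine calculation added for clarity, not part of the original answer):
$$f(x)=450x-2x^2=-2\left(x-\frac{225}{2}\right)^2+\frac{50625}{2},$$
so the maximum is attained at $x=112.5$, where $y=450-2\cdot 112.5=225$ and the enclosed area is $f(112.5)=25312.5$ square feet.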
|
Frank John Powell
Frank John Powell (15 March 1891 – 31 October 1971), was a British Liberal Party politician and magistrate.
Personal life
He was the son of Francis Cox Powell and was educated at Rutlish School and Inns of Court. He married Irene Hesse Wyatt in 1915 and they had two sons and one daughter. His wife died in 1955. He married Joan Selley, who died in 1965. He married Betty Edelson who died in 1971.
Career
He was with the Queen's Westminster Rifles from 1910–14 and then a Captain in the King's Own Yorkshire Light Infantry from 1914–18; he was gassed at the Battle of Loos in 1915. He was called to the Bar, Middle Temple, in 1921. He practised in London and on the South East circuit. He was then a Metropolitan Police Magistrate from 1936–63: Greenwich and Woolwich 1936–40, Tower Bridge 1940–42 and Clerkenwell 1942–63. He was appointed a Justice of the Peace for Surrey in 1937. He was Hon. Legal Adviser to the New Malden Citizens Advice Bureau from 1939–46 and was a Member of Council of the Magistrates Association from 1942–60. He was a Member of the Chairmen's Panel of the Metropolitan Juvenile Courts, 1946–52.
Political career
He was on the executive committee of the National Young Life Campaign. He was Liberal candidate for the Kingston-upon-Thames Division of Surrey at the 1929 General Election. Kingston was a safe Conservative seat that they had won at every election since it was created in 1885. Along with the national trend, Powell was able to increase the Liberal vote share.
Following the formation of the National Government in 1931, there was another General Election. As a consequence, the Liberal Party did not run a candidate in Kingston against the Conservative who was the sitting National government candidate. Powell was Chairman of the Malden branch of the League of Nations Union. He was Liberal candidate again for the Kingston-upon-Thames Division of Surrey at the 1935 General Election. By then, the electoral fortunes of the party were in decline and he came a poor third.
An opportunity came to contest the seat again at the Kingston upon Thames by-election, 1937, but the Liberal party did not field a candidate. He was Chairman of the National Association of Homes and Hostels, 1955–60. He was President of the Probation Officers Christian Fellowship, 1954–66.
External links
Photographs of Powell at the National Portrait Gallery: http://www.npg.org.uk/collections/search/person/mp68942/frank-john-powell
References
Category:1891 births
Category:1971 deaths |
President Barack Obama's administration is quietly offering a quasi-amnesty for hundreds of thousands of illegal immigrants, while aiming to win reelection by mobilizing a wave of new Hispanic voters, say supporters of stronger immigration law enforcement.
The new rules were quietly announced Friday with a new memo from top officials at the US Immigration and Customs Enforcement (ICE) agency. The prosecutorial discretion memo says officials need not enforce immigration laws if illegal immigrants are enrolled in an education center or if their relatives have volunteered for the US military.
They're pushing the [immigration] agents to be even more lax, to go further in not enforcing the law, said Kris Kobach, Kansas secretary of state. At a time when millions of Americans are unemployed and looking for work, this is more bad news coming from the Obama administration [if the administration] really cared about putting Americans back to work, it would be vigorously enforcing the law, said Kobach, who has helped legislators in several states draft local immigration-related laws.
We think it is an excellent step, said Laura Vasquez, at the Hispanic-advocacy group, La Raza, which pushed for the policies, and which is working with other groups to register Hispanics to vote in 2012. What's very important is how the prosecutorial discretion memo is implemented on the streets, she said.
The Hispanic vote could be crucial in the 2012 election, because the Obama campaign hopes to offset its declining poll ratings by registering new Hispanic voters in crucial swing states, such as Virginia and North Carolina.
To boost the Hispanic vote, the administration has enlisted support from Hispanic media figures, appointed an experienced Hispanic political operative to run the political side of the Obama reelection campaign, and has maintained close ties to Hispanic advocacy groups, including La Raza. For example, La Raza's former senior vice president and lobbyist, Cecilia Munoz, was hired by the Obama administration as director of intergovernmental affairs in 2009.
On Friday, officials at ICE announced several new administrative changes to immigration enforcement.
The primary document was the six-page prosecutorial discretion memo, which provided new reasons for officials to not deport illegal immigrants.
When weighing whether an exercise of prosecutorial discretion may be warranted for a given alien, ICE officials, agents and attorneys should consider all relevant factors, including, but not limited to, the circumstances of the person's arrival in the United States, particularly if the alien came to the United States as a young child; the person's pursuit of education ... whether the person, or the person's immediate relative, has served in the U.S. military, said the memo.
The factors are extremely broad and very troubling [it] looks like a stealth DREAM Act: enforcement through non-enforcement, said Kobach.
The Development, Relief and Education for Alien Minors Act has been repeatedly rejected by Congress from 2001 to 2010. The deliberate non-enforcement of our immigration laws in this administration certainly seems politically motivated, said Kobach, adding how exactly they expect to win votes by doing this is beyond me.
In practice, the new memo won't make much of a difference because ICE isn't deporting people now, said Jessica Vaughan, an analyst at the Center for Immigration Studies. While pleading limited resources, they only [deport] individuals with criminal charges, such as felonies or several misdemeanors, she said.
There are roughly 10 million illegal immigrants in the United States, of which roughly 7 million are working. Business and Democrat-allied advocacy groups have stoutly opposed federal and state efforts to identify and deport the immigrants, but public opposition has repeatedly stopped proposals including the Obama-backed DREAM Act to provide the illegal immigrants with amnesties and residency permits.
On Friday, officials also announced a new advisory panel intended to implement policies stopping the [deportation and] removal of individuals charged with, but not convicted of, minor traffic offenses who have no other criminal history or egregious immigration violations.
Advocates for illegal immigrants have long argued that police should not deport illegal immigrants who are identified following a traffic violation. It is not a crime to be here illegally, claimed B. Loewe, a spokesman for the National Day Laborers Organizing Network. Local law-enforcement enforcing immigration laws is a bad idea.
It is a misperception that local police are going out to pull over people who look like immigrants on trumped-up traffic violations, countered Vaughan. They're not removing people who made a right turn on a red light without stopping, because you don't get arrested for that.
The agency also announced new training policies for immigration officials, a new policy to shield illegal immigrants from deportation if they seek police protection during a domestic violence episode, and a new form to be given to detained immigrants which tells them they can't be detained for more than 48 hours by state officials.
These announcements are solutions in search of a problem, said Vaughan. For example, illegal immigrants who successfully show they are domestic violence victims already can get a U Visa under an established law, she said. This is absolutely unheard of for a law enforcement agency to be told to practically apologize for doing its job of enforcing the law, she said.
Immigrant advocacy groups said they want to get more from the administration. We're continuing to work with the administration for them to show strong leadership and advancing immigration reform, said Vasquez. We think there are further steps the administration can take. For example, the administration should allow people to stay in the United States while their immigration cases are settled, she said.
The administration should end the Secure Communities program, which allows state and local police to detain illegals for subsequent deportation by federal authorities, said Loewe. Secure Communities is an experiment they unleashed on the public without any safeguards or regulations; local law-enforcement of immigration laws is a bad idea, he said.
Tougher enforcement of immigration laws shouldn't be used to combat high unemployment among poor Americans, he said. Instead, the government should start a major spending program to build schools and libraries, and levy taxes on major corporations. When we're talking about a drain on the economy, he said, we should look at the corporations that refuse to pay back their due.
But the overall goal of the new memos is victory in the 2012 election, not law enforcement, said Vaughan. It is kabuki theater designed [by the administration] to send a signal to these groups that they are taking their concerns very seriously.
Latino voters are very engaged and watching carefully what is happening with immigration policies, said Vasquez, because they're deeply affected by it, either because they know someone impacted by it or are themselves impacted by it.
| Willfully destructive. You can't be this wrong without trying to be wrong.
3
posted on 06/21/2011 3:34:26 AM PDT
by ClearCase_guy
(The USSR spent itself into bankruptcy and collapsed -- and aren't we on the same path now?)
At some point it must occur to the smart people in the Hispanic community that this would accelerate the decline of the US into just another third world dump of the sort they or their ancestors left behind. Either that or we’ll see some balkanization by and by.
Not enforcing the law to win votes? So the people who would support this policy and benefit are illegal aliens. So the votes they are trying to win are from illegal aliens! I would like to know why non-citizens who sneak into the country are voting and having their asses kissed for their votes, and what's being done to stop it!
Obama's stupidity never ends.....like when he told latinos to vote against "their enemies," as Midterm elections turned into a Democrat bloodbath.
Illegals are well-briefed on US law WRT getting on the gravy train----most of the US tax dollars they score are sent back home to Third World hellholes where they intend to retire.......as the US govt checks keep rolling in.
RIDING THE US GRAVY TRAIN An illegal alien w/ wife and five children violates our borders. He gets a job mowing lawns for $5.00 or 6.00/hour. At that low wage, with six dependents, he pays no income tax, so each year, he files an Income Tax Return to get "EITC---earned income credit" of up to $3,200 scot-free. He qualifies for Section 8 housing and subsidized rent. He qualifies for food stamps and no deductible, no co-pay free health care. His children get free school breakfasts and lunches. The kids qualify for monthly SSI checks, faking ADD; the illegal and his wife get SSI if they fake being aged, blind or disabled; SSI qualifies them for Medicare. Plus illegals don't worry about pricey car insurance, life insurance, or homeowners insurance, and qualify for relief from high energy bills. All that is collected with one identity.
========================================
REFERENCE NOTE Earned Income Tax Credit is available only to immigrant workers who obtain legal work status. But the law allows immigrants to claim EITC for up to three years prior to obtaining that status. Workers simply file a tax return for the years in which they were not legally eligible to work in the US. The most widespread abuse stems from the requirement that children live with the worker for more than six months of the year. IRS does little to verify the claim. Many immigrants claim non-existent children, or claim children whom they've left behind with relatives. Those with two or more children and income below $32,121 could get as much as $4,008. It is estimated that illegals received $22 billion in EITC.
==================================================
Illegals know their strength is in forming voting blocs to put pressure on lawmakers. They establish multiple identities.......AND ALL OF THEM VOTE.
Illegals establish several identities with phony SS nos and fake documents these "impoverished immigrants" buy from itinerant document brokers for several thousand dollars.
THE LUCRATIVE PHONY ID BUSINESS----NORTH BERGEN, NJ -- July 21, 2006 -- Pelcastre brothers, Angel and Jorge, Dallas, Texas, were a walking threat to US national security, expert document forgers who, for a few thousand dollars, could give anyone a new identity, L/E said. The Texas brothers turned a NJ hotel room into a business office and were readying a massive cache of fake Social Security cards for delivery to a local NJ identity broker.
The Texas brothers were a "one-stop shop" for a myriad of fake US documents, including birth certificates, Social Security cards, driver's licenses for any state in the US, passports and resident alien cards, said state police. Officers happened upon two cars bearing Texas plates in a NJ hotel parking lot. Authorities wouldn't identify the NJ hotel by name for fear it would spark retribution from savage drug cartels operating in the US.
The Texas brothers were followed to a NJ office supply store nearby where they purchased computer supplies. Officers then followed the Texans to a NJ storage facility in Secaucus, NJ, where the Texans loaded several boxes into a car. One of them stood lookout. L/E approached the Texas brothers when they returned to the NJ hotel and questioned them separately. The Texas brothers consented to a search.
All told, the haul was worth about $500,000 on the street. Police also recovered $6,000 in cash, which was the first payment from a NJ fake document broker for a shipment of 500 fake Social Security cards. ####
HAYWARD, CALIFORNIA -- A Hayward woman has been charged with numerous felony counts for allegedly running an identity-theft operation that created fake Social Security and California identification cards, checks and credit cards. Mishel Caviness, 40, was arrested after an investigation by Oakland police and the U.S. Secret Service.
The probe began when an Oakland city employee reported in January that someone was fraudulently cashing her checks, according to police Officer Ryan Goodfellow and court records. Caviness was identified with the help of surveillance-camera footage from Bay Area stores, police said.
A search of her apartment on the 21000 block of Foothill Boulevard in Hayward last week uncovered a printing operation capable of making fake checks and credit cards, police said. Also found were 900 blank credit cards, personal information belonging to as many as 1,000 people, blank checks and computers, police said.
Alameda County prosecutors charged Caviness with forgery, identity theft, forgery of a driver's license and grand theft. She has a previous conviction for welfare fraud and told police that she is disabled and unemployed. She is being held at Santa Rita Jail in Dublin in lieu of $325,000 bail.
Nothing good has come from this plague of lawbreakers violating our borders. Karl Rove was WRONG----they came from hellholes whining about a "better life".....but that was a ruse. They DID NOT embrace democracy once they had a taste of it as Rove famously said. On the contrary, they are a national security threat-----and are conspiring to tear down democracy---and their home countries are helping them. Read on:
======================================
News reports say Mexico, Argentina, Brazil, Chile, Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua and Peru are conspiring to collude with The Anti-Defamation League, The American Civil Liberties Union, the Southern Poverty Law Center and several other civil and immigrant rights groups to infringe on the US's sovereign right to make laws as we see fit. The co-conspirators filed a federal class-action lawsuit against Georgia's law and are now asking a judge to halt the measure pending the outcome of their case.
===========================================
They only look like they're mowing our lawns and flipping burgers in fast food joints. The main occupation of these hyphenated-separatists is creating chaos to divert attention from the Marxist Third World they are building on US soil.
These anti-American criminals consider the United States an invader state---a foreign power imposed on Third World hellholes. They arrogantly refuse to refer to the US by its legal name. The US is known by Mexicans as el Norte, (the North), el Otro Lado (the Other Side), or Gringolandia (the Anglo Entity).
Freud would have a field day with these people----that "warrior" mentality evidences a distinct lack of self esteem. They actually think we are "frail white people." That's what the Japanese thought. The same mistake Mexicans made with Texans. The disturbed left's identity politics with its propensity to separate and balkanize America moves ahead apace with the era of Obama. You remember Obama? The post-racial president that was supposed to heal America? Yeah, that one.
=============================================
Phoenix Latinos will protest in Atlanta "We are taking over this land and anybody who doesn't like it should go back to Europe" The Examiner | May 20th, 2011 | Miguel Perez FR Posted Sat, May 21, 2011 by moonshinner_09
Georgia won't be part of the proposed Latino homeland of Aztlán, but Latino civil rights leaders from Phoenix will march and protest in different parts of Atlanta next week against immigration laws they view as unjust. "Latinos won't stay at the back of the bus anymore. We are taking over this land and anybody who doesn't like it should go back to Europe," says Jorge Serrano of Take Back Aztlán. "Racism should not be tolerated anymore in this country. Trying to get rid of Latinos is nothing but racist," says Cecilia Maldonado of Chicanos Unidos Arizona, who will be meeting with Latino civil rights leaders in Atlanta to propose national boycotts. (Excerpt) Read more at examiner.com ...
================================================
Translated from the Espanol------Third World federales are salivating over looting the US treasury. Can't wait to get their hands on the loot.
=============================================
Just here for a better life (sob).
WAITING TO GET THEIR ORDERS FROM MEXICO----Reconquista shock troops at Phoenix Capitol protest, May 29, 2010.
2009---TEXAS DREAM ACT DEMONSTRATION In secret enclaves, they conspire to destroy the USA from within.
13
posted on 06/21/2011 3:52:02 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
FREEPERS GET THIS VOTER FRAUD INITIATIVE STARTED IN YOUR STATE ASAP: Usually each state's Secy of State handles voter irregularities. Send letters to names on the voter rolls---requesting proof of citizenship, place of residence and SS #---advise that those who do not present proof will be stricken from the rolls.
=================================================
ID SUGGESTIONS---what to request---what not to request. Each state might establish different criteria.
<><><><> Primary ID---Registered voters might be required to show at least ONE of the following documents:
Foreign passport with INS or USCIS verification and valid record of arrival/departure (Form I-94)
Foreign passport with INS or USCIS verification and valid Form I-551 stamp
Current alien registration card (new Form I-551) with expiration date and verification from INS or USCIS
Current photo employment authorization card (Form I-688B or I-766). Must be presented with valid Social Security card.
Current alien registration card (old Form I-551) without expiration date and with INS or USCIS verification
Photo temporary resident card (Form I-688)
Civil marriage, domestic partnership or civil union certificate issued by the municipality or state in which the ceremony occurred. Please note: Photocopies are not acceptable. Certificates issued by religious entities are not acceptable
Order or decree of divorce, dissolution or termination
Court order for a legal name change, signed by a judge or court clerk
Current US military dependent card
US military photo retiree card
Valid firearm purchaser card
US college photo ID card with transcript
Valid federal, state or local government employee driver license
Valid federal, state or local government employee photo ID card
US military discharge papers (DD214)
FAA pilot license
Current/expired less than one year non-digital PHOTO driver license
Current PHOTO driver license from any state or the District of Columbia
EXCERPT The Obama admin is quietly offering quasi-amnesty for hundreds of thousands of illegal immigrants, while aiming to win reelection by mobilizing a wave of new Hispanic voters.........new rules were quietly announced Friday with a new memo from top officials at ICE.
The prosecutorial discretion amnesty memo says officials need not enforce immigration laws if illegal immigrants are enrolled in an education center or if their relatives have volunteered for the US military.
"We think it is an excellent step," said Laura Vasquez of La Raza, which pushed for the policies, and which is working with other groups to register Hispanics to vote in 2012. "What's very important is how the prosecutorial discretion memo is implemented on the streets," she said.
The Hispanic vote could be crucial in the 2012 election, because the Obama campaign hopes to offset its declining poll ratings by registering new Hispanic voters in crucial swing states, such as Virginia and North Carolina. --SNIP--
Please see suggested voter fraud initiative (above) that should be implemented in your state.
17
posted on 06/21/2011 4:10:57 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
OBAMA'S NOT BOWING----AN AMERICAN PRESIDENT KNEELS IN OBEISANCE TO FOREIGN COUNTRIES: News reports say Mexico, Argentina, Brazil, Chile, Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua and Peru are conspiring to collude with The Anti-Defamation League, The American Civil Liberties Union, the Southern Poverty Law Center, and several other civil and immigrant rights groups, to infringe on the US's sovereign right to make laws as we see fit. The co-conspirators filed a federal class-action lawsuit against Georgia's anti-illegal law (tougher than Arizona's) and are now asking a US judge to halt the measure pending the outcome of their case.
22
posted on 06/21/2011 4:38:20 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
The “prosecutorial discretion” memo should be the first piece of evidence introduced in the liability lawsuit against Obama, Napolitano and ICE by the families of those murdered by the beneficiaries of this pandering policy. If laws allow for the prosecution/litigation of those who act in collusion with criminals, I fail to see how the Obama administration differs.
Obama is ruling us with regulations and memos and special Executive orders.
We don’t need the Congress to confirm Obama appointments, he just makes a new Czar.
We don’t need Congress to pass Cap & Trade. Obama just uses the EPA.
We don’t need Congress to pass the Dream act, Obama just writes a memo.
We don’t need Congress to pass a Budget, Obama just spends what he likes.
We don’t need Congress to give money to foreign countries, Obama just writes a check for 20 billion to the Arab Spring.
We don’t need Congress to go to war, Obama just bombs anyone he likes or sends in the drones.
What are we paying all those worthless political whores on the Hill for? We have the Emperor Obama.
Tougher enforcement of immigration laws shouldn't be used to combat high unemployment among poor Americans, he said. Instead, the government should start a major spending program to build schools and libraries, and levy taxes on major corporations. When we're talking about a drain on the economy, he said, we should look at the corporations that refuse to pay back their due.
wtf? corporations haven't paid their due? what about illegals that haven't paid anything?? and yet we're supposed to let more in... while pushing up more taxes??
as citizens, we have 3 forms of influence on the government that actually have impact.
1. demonstrations & voting: in a system rife with corruption to the point that IDs cannot be verified to ensure the person voting hasn't voted 30 times that day, voting and demonstrations hold very little sway. blogs hold even less.
2. tax money: in a system where the fedgov prints more physical cash when it wants or just charges up the debt, your tax dollars have little impact. consider going Galt (stop working/paying in). would the fedgov care? no. in fact, it helps the dems as the fedgov receives less tax money and they will NOT slow their spending, driving the debt higher to the point of breaking.
3. the 2nd amendment: it's what the citizenry has left when the fedgov is out of control. how effective it is depends on the hearts & minds of the population and those serving the fedgov.
if there are other alternatives, i'd love to hear them.
25
posted on 06/21/2011 4:58:40 AM PDT
by sten
(fighting tyranny never goes out of style)
The prosecutorial discretion amnesty memo says officials need not enforce immigration laws if illegal immigrants are enrolled in an education center or if their relatives have volunteered for the US military.
At some point it must occur to the smart people in the Hispanic community that this would accelerate the decline of the US into just another third world dump of the sort they or their ancestors left behind
They don't care. What's important to them is that they control the territory and they don't have to obey gringo laws.
Get it? They consider themselves a nation - unlike the now lost and nationless white Anglo-Saxons, who think that group identification (um, "nationality") is thought-crime.
Vote-crazed Obama's back-door amnesty: "US officials need NOT enforce immigration laws IF illegal immigrants are enrolled in an education center or if their relatives have volunteered for the US military....."
So how will sap-happy Obama know which of their multiple identities they used to enroll and volunteer? And which of their identities will they use to obtain amnesty?
There is overwhelming proof illegals use multiple identities. Illegal Jose Madrigal, the Washington state rapist, had some 30 identities.
One thing 2008 candidates Obama, Hillary Clinton and John McCain had in common is that they voted to give retroactive Social Security benefits to illegal aliens who committed document fraud.
Illegal immigrants generally have very low education levels. 61 percent of illegal immigrant adults lack a high school diploma. Illegal immigrants have a poverty level that is roughly twice that of native-born Americans. The Senate's bill would offer amnesty and a path to citizenship to 12 to 12.5 million illegals currently in the U.S. In addition, its lax evidentiary standards would encourage millions more to apply for amnesty fraudulently. Because there is no numeric limit on the number of amnesties that could be granted under the bill, the actual numbers who would receive amnesty under the bill could be far higher. Eligibility for government benefits means that the former illegal immigrant or his family members obtains the same benefits as a U.S. citizen would have.
Children born within the United States to illegal immigrants, including Z visa holders, are potentially eligible for all welfare benefits from the moment of birth through the rest of their lives. In addition, adult Z visa holders and their foreign-born children will be eligible for medical care under the Medicaid Disproportionate Share Program. Z visa holders will be given lawful Social Security numbers which makes them eligible for two refundable tax credits: the Earned Income Tax Credit and the Additional Child Tax Credit. These credits provide cash welfare assistance...
Irrespective of employment history, amnesty recipients will become eligible for 60 different federal welfare programs five years after receiving legal permanent residence. When the amnesty recipients reach retirement age, total benefits received will OUTSTRIP TAXES PAID BY ROUGHLY SEVEN TO ONE. (Excerpt) Read more at heritage.org
33
posted on 06/21/2011 11:49:07 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
To remain here illegally is a felony
Person must have means to sustain themselves economically
Must not be destined to be a burden on society
Must be of economic and social benefit to society
Must be of good character and have no criminal record
Must have immigration authority and must have a record
Foreign visitors violating terms of their entry are imprisoned or deported
Those who aid illegal immigration will be sent to prison
These laws are rigorously enforced, but not by the cruel Americanos. These are the immigration laws of our southern neighbor, Mexico.
Note that matricula consular cards used in the US are NOT accepted in Mexico---b/c they are easily falsified.
34
posted on 06/21/2011 11:52:29 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
ITEM A federal grand jury Friday indicted four Vallejo women, including a county employee, on charges of bank fraud and identity theft. Among the accused is Jennifer Miller, 43, an accounting supervisor in the Solano County Health and Human Services Department. Miller abused her position to allegedly obtain names, dates of birth, Social Security numbers and driver's license numbers of Solano County health and social services clients, including food stamp recipients, federal prosecutors said. Miller allegedly provided the stolen identities to accomplices who opened bank accounts in the names of the victims, prosecutors said. (Excerpt) Read more at timesheraldonline.com ...
ITEM They are taking over inner city councils---a wave of latino councilmen earmarked tax funds for "immigrant" organizations----organizations that do not exist.
ITEM In another case, latinos were asked to pay up to $2000 to get approved for Section 8 housing subsidies---govt pays 75% of the rent---but the govt does not require payment for this.
ITEM A latino councilman (whose father is a state Senator) got huge amounts of city and state funds for an organization run by the councilman's wife. She said she provided services to another organization---trouble is the second organization said they never heard of her.
35
posted on 06/21/2011 11:56:45 AM PDT
by Liz
( A taxpayer voting for Obama is like a chicken voting for Col Sanders.)
Well, the white Anglo-Saxons may be the first people who ever thunk themselves into extinction, cultural or otherwise. Will they ever shake off this reasoning or is the condition too strong? We’ll see.
|
Indian Railways Gets New Modern Diesel Locomotive; Check Out Cool Facts
In bright hues of red and yellow, this is Indian Railways’ new diesel locomotive – it’s the first in a set of 1,000 engines – that will power trains for many years to come. Manufactured by GE Transportation, the new locomotive was unveiled in June this year. After two months of testing and validation, the locomotive has now been painted. This particular locomotive has been made in the US, but eventually, GE will make the new engines in its factory in Bihar under the government’s ‘Make in India’ initiative. We take a look at the significance of its colour scheme and other salient facts:
The bright colours are said to hold a “special meaning”, with yellow representing freshness and red signifying energy. The new colour scheme for Indian Railways was completed using approximately 50 gallons of paint, says GE, adding that it will provide the locomotive protection in the harsh environments where Indian Railways operates. |
After yesterday’s “shot of poop” revelation regarding a young Sheev Palpatine played by Matt Smith, another leak has decided to show its spoiler-y head. Turn back now if you don’t want to be spoiled for Star Wars Episode IX, the final entry in the Star Wars sequel trilogy and the Skywalker saga, directed by JJ Abrams. Be wary! This leak is a hilarious doozy.
SPOILERS!! SPOILERS!! SPOILERS!!
6 years after TLJ, the Galaxy is in ruins because of the First Order. Several systems have been Annihilated. The Resistance is a thing of the past. Maz Kanata, Rey, and Finn discover a strange crystal on Ahch-To. They think they may have found a way to change everything.
Poe and Leia (using unused footage from TFA and TLJ) team up with Lando Calrissian.
Rey, Finn, and Maz use the crystal to develop a time travel device. They hatch a plan to go back in time and prevent Snoke from creating the First Order, and to make sure Ben never becomes evil, which hopefully will save Luke and Han from death.
Poe, Leia, and Lando discover a secret team of Resistance fighters. They plan a major attack on the First Order.
Rey, Finn, and Maz go back in time 40 years before The Phantom Menace. Young Palpatine is with a man that is his brother. There is a huge battle between the First Order and the new Resistance fighters. Poe blows himself up to destroy the main ship.
Finn, Rey, and Maz travel through time to try and figure out Snoke’s origin. He is Palpatine’s brother. An incident that took place shortly after Palpatine’s death horribly disfigured him into what he was in TFA and TLJ.
The three travel to right before this happens. They fight him, he kills Maz and injures Finn. Rey manages to kill him. Out of nowhere, a seismic charge goes off from none other than Boba Fett. It leaves both Rey and Finn disfigured in the exact same way that Snoke was.
Snoke’s death drastically alters the timeline. Rey and Finn lose the capability to time travel and become leaders of the First Order instead of Snoke.
The final battle involves a restored Luke, Han, and a non-evil Ben against disfigured Rey and Finn. Luke kills them, and destroys the alternative First Order.
The ending features Luke, Han, and Leia (using a double) together watching a peaceful Galaxy.
What do you think of this leak? Is the leaker trolling us? Probably. This is totally fake and made the FSW staff laugh, but it’s our job to report on all leaks.
As always, stay tuned to FakingStarWars.net for all the finest Star Wars comedy, parody, and satire in the galaxy. Don’t forget to subscribe to our podcast on iTunes or Google Play for even more unbelievable news from a galaxy far, far away. Also, consider joining our Discord to talk Star Wars and other nonsense as well as supporting us on Patreon… for as little as a buck a month, you can help us fake harder, better, faster, stronger.
— Link Voximilian |
Early Changes in eDiary COPD Symptoms Predict Clinically Relevant Treatment Response at 12 Weeks: Analysis from the CRYSTAL Study.
Early detection of treatment response is important in the long-term treatment and management of patients with chronic obstructive pulmonary disease (COPD). This analysis evaluated whether early improvement in symptoms, recorded in the first 7 or 14 days via an electronic diary (eDiary) compared with baseline, can predict clinically meaningful treatment responders at 12 weeks. CRYSTAL was a 12-week, randomized, open-label study that demonstrated the increased effectiveness of indacaterol/glycopyrronium (IND/GLY) or glycopyrronium (GLY), after a direct switch from on-going baseline therapies, in patients with symptomatic COPD and moderate airflow obstruction. The co-primary endpoints were trough forced expiratory volume in 1 second (FEV1) and transition dyspnea index (TDI) at Week 12. Patients' symptom status was recorded daily in an eDiary. Of 4,389 patients randomized, 3,936 and 3,855 reported symptoms on Days 7 and 14, respectively. Patients who reported an early decrease in symptoms on Day 7 or 14 were more likely to achieve the minimal clinically important difference of ≥100 mL in trough FEV1 or ≥1 point in TDI at Week 12. Using stepwise multivariate regression models, we identified the decrease in wheeze on Day 7, and in nighttime symptoms and wheeze on Day 14, as the best predictors of FEV1 response; the best predictors of TDI response were decreases in nighttime symptoms and wheeze on Day 7, and in nighttime symptoms, sputum and wheeze on Day 14. Early symptom improvement at Day 7 or 14, especially wheeze and nighttime symptoms, may identify patients with clinically important improvement in lung function and dyspnea at Week 12.
Q:
what is the best way to find substring in a string with its frequency using c#?
I am trying to find a word in a string, which means
if "abcdef ghijk" is a string, then the substring "ghijk" should give us 1 as its frequency,
but if the substring is "cde" then 0 should be returned.
I wish this was possible with Regex but I don't know if it is. Or maybe there is a method in the string class.
A:
If you want word frequencies, you can create a word-frequency dictionary like this:
s.Split().GroupBy(x => x).ToDictionary(x => x.Key, x => x.Count())
And then check if the word is in this dictionary.
var s = "abcdef ghijk abcdef";
var d = s.Split().GroupBy(x => x).ToDictionary(x => x.Key, x => x.Count());
// Dictionary<string, int>(2) { { "abcdef", 2 }, { "ghijk", 1 } }
A:
It sounds like you want to find only whole words within the string. This can be accomplished with a regular expression using \b, which means 'word boundary'. For example:
var input = "abcdef ghijk";
var freq1 = Regex.Matches(input, @"\bghijk\b").Count;
Console.WriteLine(freq1); // 1
var freq2 = Regex.Matches(input, @"\bcde\b").Count;
Console.WriteLine(freq2); // 0
This will also take into account punctuation like commas and periods.
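If the search word can itself contain regex metacharacters (a "." or "+", say), it is safer to escape it first. Below is a minimal self-contained sketch of that pattern; the CountWholeWord name is just for illustration:
using System;
using System.Text.RegularExpressions;

class WordFrequency
{
    // Counts whole-word occurrences of 'word' in 'text'.
    // Regex.Escape neutralizes metacharacters such as '.' or '+'.
    static int CountWholeWord(string text, string word)
    {
        return Regex.Matches(text, $@"\b{Regex.Escape(word)}\b").Count;
    }

    static void Main()
    {
        Console.WriteLine(CountWholeWord("abcdef ghijk abcdef", "ghijk")); // 1
        Console.WriteLine(CountWholeWord("abcdef ghijk", "cde"));          // 0
    }
}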
|
The United States has approved flights on six U.S. airlines to Cuban cities other than Havana, linking the former Cold War foes closer together, the U.S. Transportation Department said in a statement on Friday.
U.S. President Donald Trump's plan to roll back his predecessor's opening toward Cuba will spare airlines and cruise operators betting on a new revenue source but the rollback could affect them by weakening demand.
Packed into a remote corner of a pavilion, just 13 U.S. companies took stands at Cuba's sprawling trade fair this year, in a sign of how firms' interest in doing business on the island has dwindled in the first year of Donald Trump's presidency.
At a time when Venezuela faces an economic crisis, a spiraling crime epidemic and political unrest, Mary Anastasia O'Grady, editor at the Wall Street Journal, says Cuba’s firm grip on power is the reason why the country hasn’t collapsed.
The United States on Friday abruptly warned Americans not to visit Cuba and ordered more than half its Havana embassy personnel to leave the island in a dramatic response to mysterious recent "specific attacks" harming the health of U.S. diplomats.
The United States is crafting a plan for a drawdown of staff from the U.S. embassy in Havana in response to still-unexplained incidents that have harmed the health of some U.S. diplomats there, U.S. and congressional officials said.
|
Q:
Blank canvas HTML, JavaScript
Why am I getting a blank canvas?
When e.target.result is changed to a normal URL of an image somewhere on the web (in img.src = e.target.result), it works perfectly fine. Adding an img tag with src=e.target.result also works.
function handleFileSelect(evt) {
var files = evt.target.files;
for (var i = 0, f; f = files[i]; i++) {
if (!f.type.match('image.*')) {
continue;
}
var reader = new FileReader();
reader.onload = (function(theFile) {
return function(e) {
var span = document.createElement('span');
span.innerHTML = ['<canvas class="thumb" title="', escape(theFile.name), '" id="', escape(theFile.name), '"></canvas>'].join('');
document.getElementById('photo-list').insertBefore(span, null);
var ctx=document.getElementById(escape(theFile.name)).getContext("2d");
var img=document.createElement('img');
img.src = e.target.result;
ctx.drawImage(img,0,0);
};
})(f);
reader.readAsDataURL(f);
}
}
document.getElementById('files').addEventListener('change', handleFileSelect, false);
A:
At the point in your code where you execute
ctx.drawImage(img,0,0);
The image hasn't finished loading (it's client side but loading still takes a bit of time). So wrap that drawImage call in a load handler for the <img> element, like:
img.onload = (function (ctx) {
return function () {
ctx.drawImage(this, 0, 0);
};
})(ctx);
Fiddle
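Since ctx is already captured by the enclosing reader.onload closure, the extra wrapper function isn't strictly required; a minimal sketch of the same fix, assuming it replaces the img lines inside that callback:
var img = document.createElement('img');
img.onload = function () {
    // Draw only once the data URL has been decoded into pixels.
    ctx.drawImage(img, 0, 0);
};
// Assign src after attaching the handler so the load event isn't missed.
img.src = e.target.result;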
|
A teenager was killed in a shootout with Chicago police in the Washington Park neighborhood Friday night, police said.
The boy was identified as 17-year-old Corsean Lewis, authorities said. He died of multiple gunshots, an autopsy determined.
Officers saw a group of men in an alley in the 5800-block of South Wabash Avenue at about 11:10 p.m. when one of them apparently pulled out a gun and fired at officers, hitting the bumper of their unmarked squad car, police said. |
Q:
Supporting quotations and string substitutions together in Scala
I have the following lines in the REPL
scala> val accountID = "123"
accountID: String = 123
scala> s"{\"AccountID\":\$accountID\, \"ProcessMessage\":\"true\", \"Reason\":\"Integration Test Message\"}"
<console>:1: error: ';' expected but string literal found.
s"{\"AccountID\":\"$accountID\", \"ProcessMessage\":\"true\", \"Reason\":\"Integration Test Message\"}"
^
I assume it's some small silly quotations thing, but I still want to understand what I am doing wrong here. If I put the account ID directly it evaluates fine.
A:
Use triple quotes and remove the backslash escapes
scala> s"""{"AccountID":"${accountID}", "ProcessMessage":"true", "Reason":"Integration Test Message"}"""
res6: String = {"AccountID":"123", "ProcessMessage":"true", "Reason":"Integration Test Message"}
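As for what went wrong: \$ is not a valid escape in an interpolated string (a literal $ is written $$), and older Scala compilers (before 2.13.6) also rejected \" inside single-quoted interpolated strings, which is what produces the confusing "';' expected" error. If triple quotes are not an option, one workaround is to splice the quote character in as an expression; a minimal sketch:
scala> val q = s"{${'"'}AccountID${'"'}:${'"'}$accountID${'"'}}"
q: String = {"AccountID":"123"}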
|
ulta stock - Bing News search results:
Ulta Beauty Inc Stock Is Heading to $250! (InvestorPlace, 15 Mar 2018)
Jefferies Adds Red-Hot Retail Stock to Franchise Picks List (24/7 Wall St., 19 Mar 2018)
Every Beauty Item You Need to Snag From Ulta's 21 Days of Beauty Sale (PopSugar, 18 Mar 2018)
Why Ulta Beauty Inc (NASDAQ: ULTA) is going gangbusters today (fxdailyreport.com, 16 Mar 2018)
Ulta's 21 Days Of Beauty Sale Is Finally Here & These Are The Sales You Can't Miss (Bustle, 19 Mar 2018)
Ulta Beauty, Inc. (ULTA) Q4 2017 Earnings Conference Call Transcript (The Motley Fool, 15 Mar 2018)
Ulta Beauty (ULTA) Q4 Earnings Miss, Issues FY18 Guidance (Zacks Investment Research, 16 Mar 2018)
Ulta Beauty Q4 earnings miss; to open 100 stores (Chain Store Age, 16 Mar 2018)
Stocks making the biggest moves after hours: Adobe, Broadcom, Ulta & more (CNBC, 15 Mar 2018)
After-Hours Stock Movers 03/15: (AMRS) (KODK) (ADBE) Higher; (OSTK) (ZUMZ) (ULTA) Lower (StreetInsider, 15 Mar 2018) |
# File produced by Open Asset Import Library (http://www.assimp.sf.net)
# (assimp v3.2.202087883)
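# Key reference (standard MTL semantics): Kd = diffuse color (RGB),
# Ka = ambient color, Ks = specular color, Ke = emissive color,
# d = opacity (1 = fully opaque), Ns = specular exponent,
# illum 2 = lighting model with diffuse and specular highlights.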
newmtl material_0_24
Kd 0.482353 0.05098 0.388235
Ka 0 0 0
Ks 0.4 0.4 0.4
Ke 0 0 0
d 1
Ns 10
illum 2
newmtl material_1_24
Kd 0.666667 0.666667 0.666667
Ka 0 0 0
Ks 0.4 0.4 0.4
Ke 0 0 0
d 1
Ns 10
illum 2
newmtl material_2_24
Kd 0.737255 0.737255 0.737255
Ka 0 0 0
Ks 0.4 0.4 0.4
Ke 0 0 0
d 1
Ns 10
illum 2
|
Guerrilla Food is about being poor and hungry, and the almost nuptial romance that I have with food. It's about being pissed off at what the American home kitchen has become and taking our food culture back from those that have ruined it. I hope to open a discourse about where it all went wrong and how we can fix our tattered cuisine.
Saturday, January 27, 2007
If anyone were to be named the General of the Guerrilla Food Army to take back the American dining experience, it would be Anthony Bourdain. This is a quasi-review of his restaurant, Les Halles.
My wife and I just returned from a three day vacation trip to Washington DC. As soon as she booked the tickets I went to the website of Les Halles in DC and made reservations. I must admit, I waited in such anticipation for weeks leading up to the trip; The White House and Capitol Hill are cool and all, but I couldn't wait for some authentic French Brasserie food.

We arrived ten minutes before our 7:30 reservation, and a rather attractive young lady with what seemed to be an Eastern European accent and a metro-sexual guy asked to take our coats. We were immediately seated next to a couple that would prove to be a ridiculous source of entertainment for me and a major headache for my wife, who hates people who blabber on and on about nothing.

When our waiter appeared with menus I couldn't help but think that foreign accents are definitely to be expected in a restaurant like Les Halles. Considering the fact that Philippe Lajaunie, one of the actual owners of Les Halles (Anthony Bourdain is just the executive chef, not an owner) proudly hails from Portugal, I was not surprised to find such an international treatment in their DC branch.

I ordered the Steak au Poivre ($21) and my wife the Poulet Rôti avec Frites ($16). When my steak arrived I was blown away by the smell. The meat was a wonderful sirloin about as thick as an unabridged copy of War and Peace. It was absolutely encrusted with roughly crushed black pepper corns, and bathed in a cognac and dark veal stock reduction. My wife's chicken was perfectly roasted with a wonderful aroma of herb butter and a jus reduction sauce on the side. Our pommes frites were perfect. Believe me, I feel pretentious calling them pommes frites, but referring to these ideal crisp and perfectly seasoned sticks of potatoes as "French Fries"... I don't know, it just feels wrong.

The most shocking part of the entire meal to me was the small salad of fresh greens that came with both of our meals. It was perfection. There were no tomatoes or cucumber or anything else. It was just greens tossed with a vinaigrette. Simple and understated. But at first taste, I knew that this is how I want every salad I eat for the rest of my life to taste. Even at the risk of overusing this word... it was perfect!

The Steak au Poivre is really very peppery. I never think of French food as being spicy like Mexican, Spanish, or Portuguese food would be. But this stuff will really knock your socks off, in the best of ways. Until Les Halles I had always had Steak au Poivre in a reduced heavy cream and cognac sauce. I think the cream has always toned down the pepper corns' heat. At Les Halles, they really let the black pepper rip into you. It was almost religious. I felt like after many years in the forest, I had found home.

Needless to say, I enjoyed my meal. And with three beers ($4.50 ea.) and water the total was only $55. The atmosphere is inviting and Earthy. Everyone seems to smile, laugh and truly enjoy themselves. Some places just have an energy that invites you in. Les Halles at 1201 Pennsylvania Ave is just such a place.
About Me
I began my adventures into the culinary trenches when I moved to Munich Germany. I was a painfully typical college student, i.e. drank and smoked pot all day instead of going to class. With a disgraceful GPA, I dropped out of school to move to Europe. Five years later I was straightened out enough to come back and finish school.
While in Munich, I became a food junkie. My kitchen was a mini fridge and two stove eyes. It was in the corner of my living room that was also my bed room. That little Küche was like my studio.
I landed a consulting job so I had money to follow my obsession. I begged chefs to let me into their kitchens. I installed software for a Chinese chef in exchange for three months training. I harassed a German chef into letting me cook in his restaurant. I had two French Chef friends who humored my questions. And Tuesday nights were pizza night at an Italian friend’s house who owned a pizzeria in Rome. I absorbed it all. Now back in the states, I have worked my way through college as a cook at a health food restaurant. I am now the culinary specialist there and am still foaming at the mouth to learn more about the foods we eat. |
Q:
Finding $\sin(x/2)$ given a $\tan$ using half-angle identities
How can I find $\sin(x/2)$ given $\tan(x) = -5.099$ and $x$ is in Quadrant IV, assuming that $0 < x < 2\pi$?
I know I have to use half-angle identities in some way, but cannot figure it out.
A:
Your first goal is to find $\sin x$ and $\cos x$. You know the following things:
$\sin^2 x + \cos^2 x = 1$.
$\frac{\sin x}{\cos x} = -5.099$.
Because $x$ is in Quadrant IV, $\cos x > 0$ and $\sin x < 0$.
We will only need $\cos x$ for now, but finding $\sin x$ and finding $\cos x$ go hand in hand.
Once you know that, you want to solve for $$\sin \frac x2 = \pm \sqrt{\frac{1 - \cos x}{2}}.$$
The sign of $\sin \frac x2$, once again, cannot be determined from the value of $\cos x$. You have to ask yourself: if $x$ is in Quadrant IV, what is the range of possible values of $x$ (as an angle)? What, then, is the range of possible values of $\frac x2$? Is $\sin x$ positive or negative for those values?
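For the given numbers, here is a worked check of that plan. Since $\cos x > 0$ in Quadrant IV,
$$\cos x = \frac{1}{\sqrt{1 + \tan^2 x}} = \frac{1}{\sqrt{1 + (-5.099)^2}} \approx \frac{1}{\sqrt{27.0}} \approx 0.1925.$$
And since $\frac{3\pi}{2} < x < 2\pi$ gives $\frac{3\pi}{4} < \frac x2 < \pi$, the half-angle lies in Quadrant II, where sine is positive, so
$$\sin \frac x2 = +\sqrt{\frac{1 - 0.1925}{2}} \approx 0.635.$$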
|
trace element
trace element, defined in 1951
trace element - An element which must be available to an organism for its normal health, though it is necessary only in minute amounts. E.g. higher plants need traces of at least the elements zinc, boron, manganese, molybdenum, and copper; lack of these may produce economically serious disease, such as 'heart-rot' of sugar beet (boron deficiency). In animals such deficiency disease is also known, e.g. 'coast disease' of cattle and sheep in Australia from lack of cobalt. In man, thyroid deficiency (goitre, cretinism) may be due to lack of the semi-trace element iodine. See also: Thyroid. Trace elements are probably constituents of enzyme systems (Compare with: vitamins); and also, in animals, of hormones.
The researchers included 12 relative risk estimates in 10 eligible studies. Based on pooled analysis, the non-O blood group was associated with a statistically significant 14% increase in CAD incidence compared to O blood group (OR/HR, 1.14). No evidence of significant publication bias was seen. When 8 studies reporting data regarding (acute) myocardial infarction (MI) were pooled, similar statistically significant results unfavorable to the non-O blood group were seen (OR/HR, 1.16).
"In conclusion, we found that based on a meta-analysis of 10 studies enrolling a total of 174,945 participants, non-O blood group appears to be an independent risk factor for CAD and MI," the authors write. |
The effect of plain 0.5% 2-chloroprocaine on venous endothelium after intravenous regional anaesthesia in the rabbit.
The possible venous endothelial toxicity of 0.5% 2-chloroprocaine without additives in intravenous regional anaesthesia (IVRA) was evaluated in rabbits. After exsanguination of a hind limb with an Esmarch's bandage a neonatal blood pressure cuff around the thigh was inflated (250 mmHg). For IVRA 4 ml of either plain 0.5% 2-chloroprocaine (pH 3.7), 0.9% NaCl (pH 6.0) or acidified NaCl (pH 3.7) was injected i.v. to the exsanguinated limb in a randomized, double-blind fashion. Each group comprised 15 rabbits. Eleven rabbits received 4 ml of 0.5 M or 1.0 M KCl, for the production of positive controls. Two hours after injection of the test solution the tourniquet was deflated and venous biopsies were taken one and 24 hours later for histological and immunocytochemical examination. Five to eight 24-hour samples from each group were also processed for electron microscopy. A macroscopic thrombus formation was observed in four rabbits after KCl and in two after acidified NaCl administration. No inflammatory changes were observed at histologic and immunocytochemical examination of any of the vein samples. Electron microscopy revealed that KCl had caused severe damage to the venous endothelium of four out of five samples and acidified NaCl had caused moderate damage to the endothelium of two out of seven samples. 2-chloroprocaine had caused moderate damage in four and severe damage in two of the vein samples; two samples were normal. No thrombus formation occurred. It is concluded that additive-free 2-chloroprocaine caused damage to the venous endothelium in rabbits when used for IVRA. |
But the implications go beyond the impeachment trial itself. While we’ve all considered it inevitable that Trump would be acquitted, the manner in which the trial has proceeded is going to reverberate through the presidential election. Trump may now feel he has legal and constitutional permission to do literally anything to win in November.
On Tuesday, Dershowitz made the preposterous claim that you can’t impeach a president for abusing his power, a position supported by no historical or legal record and viewed by every historian and legal scholar as not just obviously wrong but utterly bizarre. But Republican senators seized gleefully on the argument that even if Trump did everything he’s accused of, he still must be acquitted.
“Let’s say it’s true, okay?” said Indiana Sen. Mike Braun. “Dershowitz last night explained that if you’re looking at it from a constitutional point of view, that that is not something that is impeachable.”
Frank O. Bowman, a law professor and author of a recent book on the history of impeachment, called Dershowitz’s argument “complete nonsense that’s totally unsupported by any scholarship, anywhere.”
But Dershowitz was just getting started. Returning to the Senate on Wednesday, Dershowitz made an argument so insane that not even Republican senators desperate to find any grounds to justify their acquittal vote could abide it.
Now, Dershowitz argued, if the president believes that his own reelection is good for the country, as every president does, then he can do literally whatever he wants to advance that goal, including marshaling the resources of the U.S. government, and by definition, it cannot be impeachable.
“If a president does something which he believes will help him get elected in the public interest, that cannot be the kind of quid pro quo that results in impeachment,” Dershowitz said.
Now imagine Trump sitting in the White House residence watching this on TV. He already believes his powers are virtually unlimited (“I have an Article II, where I have to the right to do whatever I want as president,” he has said). Now here’s a famous law professor telling him that, because his reelection is in the national interest, anything he does to make it happen is acceptable.
Who do you think Trump is going to believe: the guy telling him what he wants to hear, or a bunch of naysayers saying it’s not true?
Keep in mind that some time ago Trump made clear that he is not just willing but eager to get assistance from foreign countries in his reelection campaign. While some of his defenders have tentatively allowed that it might not be a great thing to solicit (or coerce) foreign assistance for his campaign, Trump himself has never said that. To the contrary, he has publicly invited that assistance.
And there are probably countries that won't need to be coerced, like Trump tried to do to Ukraine. There's Russia, of course, whose help for Trump's campaign is a near-certainty. How about North Korea, or Turkey, or Hungary, or the Philippines, or any of the other countries ruled by authoritarians with whom Trump is so simpatico?
Might they see it in their own interest to give him a hand? Under Trump’s new rules they won’t even have to be sneaky about it; they can just call up the Oval Office and say “What do want us to do?”
So imagine it's October, and we learn that, say, North Korea has mounted an effort to help the Trump campaign. Are Republicans going to condemn it? Demand an investigation? Call for retaliatory measures? Of course not.
But soliciting foreign help is just the beginning. If after he’s acquitted Trump truly believes he has permission to do anything he wants because his reelection is in the national interest, the ways he could abuse his powers in the service of his campaign are limited only by his imagination.
How about ordering the attorney general to announce a criminal investigation into the Democratic nominee? How about having the Internal Revenue Service seize the homes of all Democratic elected officials? How about announcing that should anyone assassinate his opponent, he’d pardon the killer? How about ordering the Air Force to bomb Milwaukee so its residents couldn’t vote for his opponent?
Well, he wouldn’t go that far, you might say. And maybe bombing Milwaukee might be going a little far. But do we really know how far Trump will go once he’s convinced himself there are no legal constraints on his actions?
Before impeachment began, some Democrats argued that it was unwise because Trump would take his acquittal by cowed and cowardly Republicans in the Senate as a vindication. Others said that wasn’t a good-enough reason to ignore the responsibility to at least attempt to hold him accountable for his misdeeds. We now face the possibility that Trump will feel not just vindicated but utterly unleashed.
And the only constraint on him will be if the people around him can muster the courage to say, “Um, sir? Maybe that’s not such a good idea.” How reassured does that make you feel? |
Open world games are hard to make, but it's even harder to make them about something. When a game's scope spreads across tens, maybe hundreds of virtual square miles, it's not surprising that developers can struggle to fill that space. Who can forget collecting feathers in the first Assassin's Creed, or Unity's unique approach of pouring every kind of content imaginable into Revolutionary Paris, as if Ubisoft was making virtual foie gras?
When you've got such a broad canvas, the temptation is to go wild with all the paints on your palette. The problem with this is when you mix every colour, you inevitably end up with brown. This is why so many open-world games end up stuffed with racing mini-games or mediocre crafting systems. You've got to chuck a lot of stuff in there before they feel full, and it takes enormous talent and teamwork to make the resulting experience feel like anything other than a random assortment of activities and filler.
This is why I have such a fondness for Red Faction: Guerrilla. It's an open-world game driven by a singular purpose. Granted, that purpose can be summarised as "smashing stuff to bits", but I never said the goal of an open world had to be noble or high-minded. It just has to somehow unify its components, and Guerrilla does this extremely well. It's a prime example of a developer figuring out what their open world is about first, then building the rest of the game around that idea.
If you've not got id, you're not coming in!
In fairness, Volition Studios had something of an advantage over other open-world developers, namely the two Red Faction games that preceded Guerrilla. It knew that a Red Faction game would have to focus on destruction of some variety. It is the defining characteristic of the series, after all. Yet even here Volition showed an unusual level of focus and restraint. Any other developer would have said "Let's make everything destructible" and happily let the player spend 30 hours shooting rockets at brown hilltops. Volition wisely realised that this would have been phenomenally boring, and instead restricted the destructible environment to manmade structures only.
It's worth noting that this was a controversial decision at the time, which, looking back, seems like madness, as it's exactly what makes Guerrilla work as an open world game. By switching the destructive focus from the terrain to buildings, Volition instantly provided themselves with a blueprint for the layout of their open world. Wherever there were buildings in the world, there was potential for action. All the studio needed to do was figure out the reasons behind that action.
Again, the series' legacy helps here. Red Faction's overarching narrative is essentially the Russian Revolution displaced into a sci-fi setting. At least, the bits before the whole oppressive, genocidal dictatorship thing kicked in. Naturally Guerrilla followed this format, casting players as Alec Mason, a pretty nondescript white male protagonist who becomes a major player in the Red Faction after his brother is executed by the ruthless Earth Defence Force.
The story as a whole is about as substantial as the skin on a rice-pudding, with little in the way of thematic nuance or compelling character work. That said, it certainly has pace on its side. For an open-world game, Guerrilla plays fast, keeping you constantly on the move, and minimising elements like cutscenes or inventory management that might otherwise slow the game down. And frankly, that's exactly what you want from an action game.
Mason's erection problems differed from most men his age.
Ultimately, the story only exists to provide the action with some context. The real motivation for the player occurs at a systems level. Salvage is the game's currency - used to buy and upgrade new weapons and equipment, it's obtained primarily by smashing stuff. The other primary system that guides the player's actions is, fittingly, control, which represents the EDF's grip on Mars' various regions. Mason has to liberate these regions from EDF control in order to progress the story. Control can be depleted by smashing stuff, or by completing side missions, many of which involve - you guessed it - smashing stuff.
It helps that the act of smashing stuff feels great. Just spiffing, in fact. The risk with constructing your world to be destroyed is that it could end up feeling flimsy and insubstantial. But Guerrilla's buildings have a tangible weight and presence to them. Communicated through a wizardly blend of sound, physics and particle effects, Guerrilla sells the act of destruction to the player brilliantly. Even simple actions such as knocking a hole in a wall with your hammer have a satisfying crunch to them, the masonry cracking and crumbling in thick, rebar-strewn chunks.
When you break out the remote charges and level an entire building, the edifice collapses in a thick cloud of dust, resulting in a jumble of steel beams and metal panels. As the game progresses, it encourages more surgical demolition, introducing the matter-eating nano-rifle that lets you target specific areas of buildings to dissolve. Often you're doing this demolition work while embroiled in combat, fending off soldiers and armoured vehicles. Indeed, there's no sweeter moment in Guerrilla than triggering a charge that collapses a building and scatters a squad of EDF.
Combat is the one area of the game that falls a bit flat. Apart from all the buildings Mason destroys, anyway.
As someone who has played a lot of open world games, I also appreciate that Guerrilla is filled with plenty of smaller, smart ideas. Some missions, for example, instruct you to tail a particular vehicle, requiring you to stick close, but not too close, to your target. Missions like these are often fiddly and annoying because it's unclear what the right distance is. But Guerrilla uses the minimap to show clearly the zone behind the target vehicle you need to stay in to maintain the tail. The navigational aids are also smartly designed. Blow up a bridge, for example, and your virtual satnav will adjust to accommodate this new obstacle in future.
I have a real fondness for games with a simple surface that demonstrate sleek and brainy design underneath, and Guerrilla's surface is certainly simple. Admittedly, Volition made life difficult for themselves by setting the game on Mars, which isn't exactly known for its terrestrial diversity, though there is some attempt at variety through districts like the lush Oasis and the more urbanised Eos. Compared to, say, the beautifully desolate world of Avalanche's Mad Max, however, it's plain to see that Guerrilla's environments haven't aged well. The combat is also pretty creaky by today's standards, a flaw that persists through much of Volition's work.
I've played enough blandly gorgeous open-world games in my time, however, so I'll happily take Guerrilla as it is over the alternative. Its destruction mechanic remains state-of-the-art, and the way Volition make it the focal point helps provide Guerrilla with a sense of identity and direction that shines through its dreary landscapes. There may have been larger and prettier open world games developed since Guerrilla, but few of them let you shape that world as comprehensively as this Martian revolution. |
Financial Obligations
Tuition
School Tuition and Fees (2018 – 19)
Every family is assessed a non-refundable application fee of $150.00 per year. All students apply/reapply each year.
Tuition for the 2018-2019 school year:
Pre-School:
3 Half Days $2035.00
5 Half Days $2530.00
5 Full Days $4998.00
Parishioners – Grades K through 8*:
Number of Children/Tuition
$4,980.00 One child
$8,500.00 Two children
$11,650.00 Three children
$14,250.00 Four children
Non-parishioner tuition: $8,100.
All students in Kindergarten – Grade 8 have a $250 book/supply fee and a Technology Fee of $100.00 per child in grades K-8, with a $300.00 cap per family.
Every family (K-8) will be required to purchase 4 – $50 Grand Raffle Tickets (only additional tickets sold above these 4 will qualify for Tuition Credit).
Gr. 2 students will be assessed a Sacramental Fee of $60. Gr. 5 students are assessed a Bible Fee of $30.
H.S.A. Fees are $10 per child in all grades.
Each individual classroom teacher will outline required supplies for all grades.
*Your family must be a registered Parishioner Family to receive the Parishioner Tuition Rate.
If you have not registered with the Parish, please contact the Parish Office directly.
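As an illustrative example only (assuming a registered parishioner family with two children in grades K-8 and no sacramental, Bible, or late fees): application fee $150.00 + tuition for two children $8,500.00 + book/supply fees (2 x $250.00) $500.00 + Technology Fees (2 x $100.00, under the $300.00 family cap) $200.00 + Grand Raffle Tickets (4 x $50.00) $200.00 + H.S.A. Fees (2 x $10.00) $20.00 = $9,570.00 for the year.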
Financial Obligations
Parish Support and Sunday Envelopes:
It is expected that all families will regularly attend Sunday Mass and contribute to the best of their ability to the parish. Lack of school family support of the parish will directly impact tuition. School parents are expected to support the Parish through contributions and stewardship. Children learn about stewardship, Christian service and parish support from their parents. We appreciate your support. St. Isidore Parish supports St. Isidore School by subsidizing 22% of the cost to educate each K-8 grade student.
Payment plan options:
Annually – Full amount due by Aug. 20 (3% discount given for this option)
Semi-Annually – Tuition paid in two equal payments
Quarterly payments – Tuition paid in four equal payments
Monthly – Tuition paid in 10 equal payments
Scrip and Gala Raffle Ticket credits will be applied directly to tuition. Gala Raffle Ticket credit will not be applied until after the event.
Delinquent Accounts:
All tuition and fees must be kept current. Your child’s continued enrollment/attendance is contingent upon keeping your account current. All accounts are monitored monthly. Students may be excluded from classes if accounts are seriously delinquent. If a hardship should occur, please contact Mrs. Collins as soon as possible. We will try to work with you as best we can. A late fee may be assessed for tuition collected after the 20th of the month. A NSF fee will be assessed on all non-funded payments. Registration for the next school year will not be processed until all accounts are current. Final payments for the school year must be received by April 20th or your child/children’s place may be given to a child on the wait list. |
Sen. Elizabeth Warren (D-Mass.) introduced the Accountable Capitalism Act on Wednesday, saying that it will ease income inequality and help hold large companies accountable to their employees.
The act will require corporations with more than $1 billion in annual revenue to procure a federal corporate charter, which mandates that directors consider “all major corporate stakeholders” in decisionmaking, according to a piece Warren wrote in The Wall Street Journal.
Corporations would also be required to have at least 40 percent of their directors elected by employees, and at least 75 percent of directors and shareholders would have to approve any “political expenditures.”
Directors and officers would also be barred from selling company shares within five years of gaining them or three years of a company stock buyback.
In the bill itself, Warren writes that the wealthiest 10 percent of American households own 84 percent of all American-held stocks. This, she argues, means that only the rich benefit from corporations’ interest in maximizing shareholder value.
“There’s a fundamental problem with our economy. For decades, American workers have helped create record corporate profits but have seen their wages hardly budge,” Warren said in an announcement of the bill. “My bill will help the American economy return to the era when American companies and American workers did well together.”
While writers on the left praise Warren’s proposition as a solution to income inequality in a capitalist framework, conservatives have opened fire on the bill as being ignorant of economic realities. |
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<script>
// Minimal test harness: it(msg, callback) renders msg as a <div>; the callback
// receives ok and fail functions that color the line green or red respectively.
var frag = document.createDocumentFragment();
window.it = function(msg, callback) {
var div = document.createElement('div');
div.textContent = msg;
frag.appendChild(div);
callback(function() {
div.style.color = 'green';
}, function() {
div.style.color = 'red';
});
}
window.onload = function() {
document.body.appendChild(frag);
};
</script>
<script src="bala.min.js"></script>
<script src="https://cdn.jsdelivr.net/requirejs/2.1.22/require.min.js"></script>
</head>
<body>
<div class="test">
<div class="test-1"></div>
</div>
<script>
it('Converts array-like', function(ok, fail) {
var all = document.querySelectorAll('*'),
$all = $(all);
$all.length == all.length && all[0] == $all[0] ? ok() : fail();
});
it('Converts one element', function(ok, fail) {
var element = document.querySelector('*'),
$element = $(element);
$element[0] == element && $element.length == 1 ? ok() : fail();
});
it('Uses context', function(ok, fail) {
$('.test-1', '.test').length && !$('.test-xx', '.test').length ? ok() : fail();
});
it('Doesn\'t use wrong context', function(ok, fail) {
!$('.test', '.test-undefined').length ? ok() : fail();
});
it('Allows to use window', function(ok, fail) {
$(window)[0] === window ? ok() : fail();
});
it('Allows to use undefined and null', function(ok, fail) {
!$(null).length && !$().length ? ok() : fail();
});
it('Allows to use document', function(ok, fail) {
$(document)[0] === document ? ok() : fail();
});
it('Parses HTML', function(ok, fail) {
var $el = $('<div></div><span></span>');
$el.length == 2 && $el[0].tagName == 'DIV' && $el[1].tagName == 'SPAN' ? ok() : fail();
});
it('Parses contextual HTML', function(ok, fail) {
var $el = $('<td></td><td></td>', 'tr');
$el.length == 2 && $el[0].tagName == 'TD' && $el[1].tagName == 'TD' ? ok() : fail();
});
it('Allows to create plugins', function(ok, fail) {
var $tgt = $('*');
$.fn.plugin = function() {
$tgt === this ? ok() : fail();
};
$tgt.plugin();
});
it('Little test for $.one', function(ok, fail) {
$.one('.test').className == 'test' ? ok() : fail();
});
it('Works with AMD', function(ok, fail) {
var timeout = setTimeout(fail, 1000);
require(['./bala.umd.js'], function($) {
if($.one) {
ok();
clearTimeout(timeout);
}
});
});
</script>
</body>
</html>
|
ADDITIONAL SERVICES: free laundry service at the owner's place, baby bed on request - free, free use of barbecue, possibility of booking a berth for a boat with surcharge (only with obligatory previous notice).
BASIC FEATURES: Apartment type: A1. 4 bed/s for adults. Apartment capacity (adults): (4). Category of apartment is 2 stars. Apartment size is 50 m2. The apartment is on the first floor. Number of bedrooms in the apartment: 2. Number of bathrooms in the apartment: 1. Number of balconies in the apartment: 1.
LIVING ROOMS: kitchen and dining room in the same room, the living room has an exit to the balcony/terrace. Flooring in the apartment: tiles.
FlipKey is a vacation rental marketplace with more than 300,000 rentals around the world. Find the perfect place to stay for your trip, and get great value along with the space, privacy and amenities of home. |
package org.sonar.plugins.findbugs.resource;
import org.junit.Before;
import org.junit.Test;
import org.sonar.api.batch.fs.FilePredicate;
import org.sonar.api.batch.fs.FilePredicates;
import org.sonar.api.batch.fs.FileSystem;
import org.sonar.api.batch.fs.InputFile;
import org.sonar.api.batch.fs.internal.DefaultInputFile;
import org.sonar.api.batch.fs.internal.TestInputFileBuilder;
import org.sonar.api.internal.google.common.collect.ImmutableList;
import java.util.ArrayList;
import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
// Tests for ByteCodeResourceLocator: verifies that compiled class names and JSP/template
// class names are mapped back to the expected source file paths on the file system.
public class ByteCodeResourceLocatorTest {
//File system that return mock input files
// FileSystem fs;
// FilePredicates predicates;
//File system that return no Input files
FileSystem fsEmpty;
FilePredicates predicatesEmpty;
@Before
public void setUp() {
//Not used for the moment
// fs = mock(FileSystem.class);
// predicates = mock(FilePredicates.class);
// when(fs.predicates()).thenReturn(predicates);
fsEmpty = mock(FileSystem.class);
predicatesEmpty = mock(FilePredicates.class);
when(fsEmpty.predicates()).thenReturn(predicatesEmpty);
when(fsEmpty.inputFiles(any(FilePredicate.class))).thenReturn(new ArrayList<InputFile>());
}
@Test
public void findJavaClassFile_normalClassName() {
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
locator.findSourceFile("com/helloworld/ThisIsATest.java", fsEmpty);
verify(predicatesEmpty,times(1)).hasRelativePath("src/main/java/com/helloworld/ThisIsATest.java");
}
@Test
public void findScalaClassFileNormalClassName() {
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
locator.findSourceFile("com/helloworld/ThisIsATest.scala", fsEmpty);
verify(predicatesEmpty,times(1)).hasRelativePath("src/main/scala/com/helloworld/ThisIsATest.scala");
}
// @Test
// public void findJavaClassFile_withInnerClass() {
//
// ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
// locator.findJavaClassFile("com.helloworld.ThisIsATest$InnerClass",fsEmpty);
//
// verify(predicatesEmpty,times(1)).hasRelativePath("src/main/java/com/helloworld/ThisIsATest.java");
// }
@Test
public void findTemplateFile_weblogicFileName() {
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
locator.findTemplateFile("jsp_servlet._folder1._folder2.__helloworld", fsEmpty);
verify(predicatesEmpty,times(1)).hasRelativePath("src/main/webapp//folder1/folder2/helloworld.jsp");
}
@Test
public void findTemplateFile_jasperFileName() {
String prefixSource = "src/main/webapp/org/apache/jsp/";
String[] pages = {"WEB-INF/pages/widgets/cookies_and_params.jsp", "lessons/DBCrossSiteScripting/DBCrossSiteScripting.jsp"};
for(String jspPage : pages) {
String name = "org.apache.jsp." + JspUtils.makeJavaPackage(jspPage);
System.out.println("Compiled class name: "+name);
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
locator.findTemplateFile(name, fsEmpty);
System.out.println("Expecting: "+ prefixSource + jspPage);
verify(predicatesEmpty,times(1)).hasRelativePath(prefixSource + jspPage);
}
}
@Test
public void findRegularSourceFile() throws Exception {
DefaultInputFile givenJavaFile = TestInputFileBuilder.create("TestJavaClass", "app/src/main/java/com/helloworld/TestJavaClass.java").build();
when(fsEmpty.inputFiles(any())).thenReturn(ImmutableList.of(givenJavaFile));
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
assertEquals(givenJavaFile, locator.findSourceFile("com/helloworld/TestJavaClass.java", fsEmpty));
}
@Test
public void findSourceFileFromScalaClassName() throws Exception {
DefaultInputFile givenJavaFile = TestInputFileBuilder.create("TestOperationalProfileIccidModel", "src/main/scala/TestOperationalProfileIccidModel.scala").build();
when(fsEmpty.inputFiles(any())).thenReturn(ImmutableList.of(givenJavaFile));
ByteCodeResourceLocator locator = new ByteCodeResourceLocator();
assertEquals(givenJavaFile, locator.findSourceFile("TestOperationalProfileIccidModel$TestOperationalProfileIccid$.class", fsEmpty));
}
}
|
Council welcomes jail for man who trailed puppy like a dead weight
Staff Reporter
Derry City and Strabane District Council says it will continue to clamp down on irresponsible animal owners, following a successful prosecution at the local Magistrate’s Court.
Patrick Collins, of Lower Nassau Street, Derry, was sentenced at Derry Magistrate’s Court after pleading guilty to causing unnecessary suffering to a brown terrier dog in his care in August 2015.
The charges related to findings following an investigation by PSNI and Animal Welfare Officers regarding offences in August 2015.
Mr Collins received a three month jail sentence in respect of each charge to run concurrently. He also received a lifetime ban from keeping animals and was ordered to pay £909 costs.
CCTV footage, previously shown in court on 15 December 2016, showed the terrier dog on a lead being dragged along the ground on its side like a dead weight. The dog was seen on the CCTV footage being trailed out the front door of the shop.
The dog was examined by a vet and found to be unresponsive and motionless, its breathing was slow and heavy and its prognosis was poor. There were cuts to the pads of all four feet which indicated that the skin had been grazed off. The dog was hospitalised and put in an oxygen tent – but made a full recovery before being rehomed by Council.
Commenting after the proceedings, a spokesperson for Derry City and Strabane District Council welcomed the Court’s ruling, saying: “It is upsetting to hear of cases of the mistreatment of animals in our society. However, I am gratified that Derry City and Strabane District Council continues to adhere to a rigorous enforcement policy to ensure full compliance of regulatory requirements.
“I would urge the public to be vigilant and report any suspected cases of mistreatment or cruelty to domestic animals and equines to our Animal Welfare team on 028 82256226.
“Complaints are investigated thoroughly and where necessary formal action is taken, which may include the service of Improvement Notices or, in extreme cases, the seizure of animals.
“The Council may also prosecute for offences, as in this particularly harrowing case, which I hope serves as a warning to anyone who does not take appropriate care of animals.”
Members of Derry City and Strabane District Council’s Health and Communities Committee this week received an update on the ongoing campaign to address animal welfare issues across the district. The latest figures show that from 2nd April 2015, the Western Region alone has received over 995 animal welfare calls and carried out 1390 inspections, 441 of which were carried out in the Derry City and Strabane District. |
KTDO
KTDO, virtual channel 48 (UHF digital channel 26), is a Telemundo owned-and-operated television station serving El Paso, Texas, United States that is licensed to Las Cruces, New Mexico. The station is owned by the Telemundo Station Group subsidiary of NBCUniversal (itself a subsidiary of Comcast, which also owns the local cable system in Las Cruces). KTDO's studios are located on North Mesa Street/Highway 20 in northwest El Paso, and its transmitter is located atop the Franklin Mountains on the El Paso city limits.
History
KASK-TV
The station first signed on the air November 18, 1984 as KASK-TV; it originally operated as an English-language independent station. The TV station was an outgrowth of KASK-FM 103.1.
KZIA
KASK-TV went off the air in October 1987 when it was bought by Bayport Communications. Bayport was approved to relocate the tower to a new site near Anthony, New Mexico, and increase power from 74,000 watts to the maximum 5 million. Channel 48 was sold to Robert Muñoz and reemerged on June 13, 1990 as KZIA. The station was added to the El Paso cable system in 1991. Lee Enterprises bought the station in 1993 for $440,000, after a separate $900,000 sale fell through the year prior. "Z48" became a charter affiliate of the United Paramount Network (UPN) upon the network's launch on January 16, 1995.
Change to Telemundo
In 1997, the station's calls were changed to KMAZ ahead of a January 16, 1998 change to Telemundo and Spanish-language programming. The change was made to improve the station's financial position and because management felt the market was ready for a second Spanish-language station on the United States side of the border.
In 2001, the station's call letters were changed to KTYO. In 2004, the station was purchased by the Arlington, Virginia-based ZGS Group for $11.8 million; ZGS subsequently changed the call letters to KTDO and continued the station as the market's Spanish-language Telemundo outlet. As a result of the switch to Telemundo, UPN (which ceased operations in September 2006 and merged its programming with competing network The WB, as part of a joint venture between CBS Corporation and Time Warner, to form The CW) did not have a full-time affiliate in the El Paso market for the remainder of the network's run; its programming was relegated to a secondary affiliation on KKWB (channel 65, now KTFN) until that station switched to TeleFutura in January 2002.
On December 4, 2017, NBCUniversal's Telemundo Station Group announced its purchase of ZGS' television stations, including KTDO. The sale was completed on February 1, 2018.
Digital television
Digital channels
The station's digital signal is multiplexed:
Analog-to-digital conversion
KTDO shut down its analog signal, over UHF channel 48, on June 12, 2009, the official date in which full-power television stations in the United States transitioned from analog to digital broadcasts under federal mandate. The station's digital signal remained on its pre-transition UHF channel 47. Through the use of PSIP, digital television receivers display the station's virtual channel as its former UHF analog channel 48.
Former translator
KTDO's main channel was also seen in analog in El Paso proper via low-power translator KTDO-LP on channel 48, mainly to allow viewers in El Paso and on the Mexican side of the market to continue to watch the station over-the-air. With the sale to NBCUniversal and Mexico's digital transition having been completed, the license for KTDO-LP was returned to the Federal Communications Commission (FCC) on November 30, 2018, and was formally canceled on December 21.
News operation
KTDO presently broadcasts 12 hours of locally produced newscasts each week (with two hours each weekday and one hour each on Saturdays and Sundays).
On November 16, 2010, KTDO launched a news department, with half-hour Spanish-language newscasts airing at 5:00 and 10:00 p.m., under the title Telenoticias El Paso; with the launch, it became the first Spanish-language television station in the El Paso market to broadcast its local newscasts in high definition.
On June 11, 2018, the station launched newscasts at 4 and 4:30 p.m., adding to the already established 5 p.m. newscast. With this expansion, KTDO has more hours of local news than any other Spanish-language station in El Paso.
Previous local newscasts
KASK-TV had launched with a local news operation, airing a 5-minute news brief at 7 p.m. and a main news at 10 p.m.
As KZIA-TV, the station carried a 9 p.m. local newscast, "Newswatch 48", hosted by former KDBC-TV anchor Bill Mitchell, and a replay of KDBC's 6 p.m. newscast at 9:30 p.m. In 1991, channel 48 began carrying by microwave link all of KOB-TV's local newscasts live from Albuquerque in a first-of-its-kind arrangement. When Lee Enterprises—owner of KRQE-TV—bought the station in 1993, the station shifted to carrying that station's news, though Lee planned to begin local newscasts in Las Cruces.
References
External links
Category:Telemundo network affiliates
Category:Television channels and stations established in 1984
TDO
TDO
Category:1984 establishments in New Mexico
Category:Cozi TV affiliates
Category:TeleXitos affiliates |
At 30, Julie D. learned that she was gifted. Illustration image. (Ben Brain/Future Publis/REX/SIPA)
I turned 30 in October. I live alone with my three children, born of two different fathers; I have no job, only a literary baccalauréat to my name, and I am gifted.
My family and friends think I have wasted my life
When I happen to mention it, everyone reacts differently.
There are those whose eyes say, very loudly, that I simply have "a big head", and there are the circumspect ones, who figure that my life, or what I am, is completely at odds with the image people have of a gifted person. No career, no studies, no money, no husband, no friends…
For everyone, family and acquaintances alike, I have "wasted" my life.
I have spent endless time and energy justifying my "oddities" and my "unconventional" choices; at best I was seen as an eccentric, a capricious or scatterbrained woman, at worst as mad, sick, a danger to others and to myself…
I ended up distrusting other people
Over the years I came to distrust, and even fear, "other people", and I withdrew further and further into myself to take refuge in my "inner kingdom". To stop suffering from being permanently out of step, from this very particular relationship to the world, I raised mountains between it and me.
Luckily, despite the hard patches and a chronic suffering, I was able for a few years to lead a life that could be described as fairly "stable", with no real need for a steady, well-paid job (I thank those who gave me that chance).
I worked here and there, sometimes as a bar waitress, a school secretary or a web writer, sometimes as a local press correspondent, an actress or a drama teacher at a conservatory. I worked when I wanted to, when it made sense, when I enjoyed it. A real luxury. I even worked for "nothing", for free, just for the pleasure of doing something that satisfied me, and others too.
I am what is known as a "scanner"
Of course I have had my share of disappointments. Like the time I wanted to work in the funeral business, with no motivation other than to support grieving families as well and as decently as possible, only to be told coldly that my profile was not "commercial" enough…
That left me plenty of free time to tend my "inner kingdom" and adorn it with all the intellectual and human riches I could gather over time (when my suffering was too great but I did not feel like destroying everything).
As soon as a subject interests me, I pursue it passionately up to a certain level of knowledge, then drop it just as quickly to move on to something else. In the jargon of the gifted, I am what is called a "scanner". The opposite of a specialist. I am incapable of choosing one stable career or a single passion. I want to learn and experiment with everything that sparks my curiosity. I brim with ideas and projects from morning to night. It is exhausting, but boredom is worse still.
"Ready for anything but good for nothing"
Obviously, on a CV this particularity is no asset, and the more varied jobs I accumulate, the more I come across as a fickle incompetent, "ready for anything but good for nothing".
Even when I try to explain modestly that I am "self-taught", people look at me with distrust, as if I were justifying being "a failure who wants to succeed".
At best, when some employers are nonetheless won over by my CV, they do not understand what I am doing there and imagine there must be something fishy. In short, in every case I have no credibility. And however deep my knowledge of dog breeding, children's literature, real estate, medicine, aromatherapy or Italian, I have nothing to prove it.
The more time passes, the further I drift from the world of work and from society. Yet I want to work. Two months, six months, a year, it doesn't matter…
I want to become a school supervisor… for now
At the moment I am applying to schools (middle and high schools) to become an assistante d'éducation, a school supervisor. A thankless, underpaid job, and yet I would be very happy to land such a position.
Because I unfortunately cannot hope for better, but above all because it has meaning, because it resonates with my deepest values, because it interests me and I like it.
Many will be outraged by my attitude and will think, rightly, that I would no doubt set aside all these existential questions if I were in a truly desperate situation (even if some think it already is).
On the margins, I remain fully happy
But I do not judge myself by the yardstick of my neighbour's distress (or success); that way of seeing things is too simplistic, crudely flattering for some, degrading for others, depending on which end of the telescope you hold.
I simply see what I can do best, with and for others, even holed up in my "inner kingdom". Deep down, that is where my success lies.
I have lived as I saw fit, in tune with who I am, respectful of others and wanting to help build, on my own modest scale, a better world, and I have been happy when everyone else would have thrown my life in the bin. |
State of New York
Supreme Court, Appellate Division
Third Judicial Department
Decided and Entered: November 10, 2016 522507
________________________________
In the Matter of RAYNARD
CARAWAY,
Petitioner,
v
MEMORANDUM AND JUDGMENT
ANTHONY J. ANNUCCI, as Acting
Commissioner of Corrections
and Community Supervision,
Respondent.
________________________________
Calendar Date: September 20, 2016
Before: McCarthy, J.P., Rose, Clark and Mulvey, JJ.
__________
Raynard Caraway, Dannemora, petitioner pro se.
Eric T. Schneiderman, Attorney General, Albany (Marcus J. Mastracco of counsel), for respondent.
__________
Proceeding pursuant to CPLR article 78 (transferred to this Court by order of the Supreme Court, entered in Albany County) to review a determination of respondent finding petitioner guilty of violating certain prison disciplinary rules.
Petitioner was observed fighting with another inmate and disobeyed several commands to cease punching the other inmate, prompting a correction officer to use a baton strike to subdue petitioner. Shortly thereafter, petitioner was observed discarding a 1½-inch by 1½-inch razor into a drainage block. As a result of this incident, petitioner was charged in a misbehavior report with fighting, refusing a direct order, engaging in violent conduct, possessing an altered item, assaulting an inmate and possessing a weapon. At the ensuing tier III disciplinary hearing, petitioner pleaded guilty to fighting and, at the conclusion of the hearing, was found guilty of the remaining charges. The determination was affirmed on administrative appeal. This CPLR article 78 proceeding ensued.
We confirm. Initially, given petitioner's plea of guilty to the charge of fighting, he is precluded from challenging the determination as to that charge (see Matter of Kim v Annucci, 128 AD3d 1196, 1197 [2015]; Matter of Smith v Annucci, 126 AD3d 1198, 1198 [2015]). As to the remaining charges, the misbehavior report, the testimony of several correction officers who were involved and familiar with the incident, the photographic evidence and the confidential documentary evidence provide substantial evidence to support the determination of guilt (see Matter of Thousand v Prack, 139 AD3d 1212, 1212 [2016]; Matter of Ramos v Venettozzi, 131 AD3d 1309, 1310 [2015], lv denied 26 NY3d 913 [2015]; Matter of Quezada v Fischer, 113 AD3d 1004, 1004 [2014]; Matter of Moreno v Fischer, 100 AD3d 1167, 1167 [2012]). Inasmuch as petitioner denied cutting the other inmate with a razor and claimed that he was not the aggressor and was only defending himself, his varying narrative of the incident presented a credibility issue for the Hearing Officer to resolve (see Matter of Hyatt v Annucci, 141 AD3d 977, 978 [2016]; Matter of Ramos v Venettozzi, 131 AD3d at 1310).
Turning to petitioner's remaining contentions, we reject his claim that the misbehavior report did not adequately give him notice of the charges against him. In our view, the misbehavior report was sufficiently specific and provided adequate information to discern petitioner's role in the incident so as to afford him an opportunity to prepare a defense (see 7 NYCRR 251-3.1 [c] [1], [4]; Matter of Pequero v Fischer, 122 AD3d 992, 993 [2014]; Matter of Basbus v Prack, 112 AD3d 1088, 1088 [2013]). The record also establishes that the Hearing Officer afforded petitioner an adequate opportunity at the hearing to review the available documentation that he was permitted to view, including the unusual incident report (see Matter of Martin v Fischer, 109 AD3d 1026, 1027 [2013]; Matter of Chavis v Goord, 58 AD3d 954, 955 [2009]). We further reject petitioner's contention that he was denied adequate employee assistance given that the Hearing Officer remedied any deficiencies, and petitioner has not demonstrated that he was prejudiced thereby (see Matter of McMaster v Annucci, 138 AD3d 1289, 1290 [2016], lv denied 28 NY3d 902 [2016]). To the extent that petitioner argues otherwise, he was not prejudiced by the fact that the misbehavior report was not endorsed by Correction Officer Deblasi given that this officer testified at the hearing (see Matter of Wilson v Annucci, 138 AD3d 1335, 1335 [2016]; Matter of Cane v Fischer, 115 AD3d 1097, 1098 [2014]). In any event, Deblasi testified that he only witnessed a portion of the incident, and the Hearing Officer's disposition did not rely upon Deblasi's testimony. We have considered petitioner's remaining contentions, including his claim that the Hearing Officer exhibited bias, and we find them to be unpersuasive.
McCarthy, J.P., Rose, Clark and Mulvey, JJ., concur.
ADJUDGED that the determination is confirmed, without costs, and petition dismissed.
ENTER:
Robert D. Mayberger
Clerk of the Court
|
By Adair Law
Jewel and Ron Lansing are the second couple to receive the LSA after the late Tom and Caroline Stoel in 2006. This article is based on archives, writings, and conversations with Jewel and Ron Lansing as well as research and contributions by their children Mark, Alyse, and Annette. Through the conduct of their professional and personal lives, Jewel and Ron Lansing have been the source of education and inspiration for thousands with their contributions in teaching, political office, and their respective written works. Jewel Anne Beck was born in May 1930 to Lars and Julia Beck on the Flathead Reservation in western Montana. She joined two brothers…
By Hon. Peter McKittrick
The Hon. Randall L. Dunn will be retiring in January 2017 after 18 years of service as a U.S. Bankruptcy Judge. Judge Dunn is known for his service to his colleagues nationally, his tenure on the 9th Circuit Bankruptcy Appellate Panel (BAP), and his quick wit on the bench. Judge Dunn earned his undergraduate degree from Northwestern University, and received his Juris Doctor from Stanford in 1975. Prior to joining the bench, Judge Dunn was a partner in the Portland-based firm Landye Bennett Blumstein. His practice focused on commercial bankruptcy, business transactions, and securities. Judge Dunn brought a wealth of practical experience in both bankruptcy and…
By Mary Anne Anderson, USDCHS board member
In Spring 2013, Oregon Benchmarks noted that 114 women had served or were serving as state and federal judges in Oregon. Three years later that number has increased to 137 women who have served (or will begin serving in 2017) in 176 different judicial offices in Oregon. Would it surprise you to learn that only six of the 103 judges that have served on the Oregon Supreme Court in its 150+ year history—and only 12 of the 50 judges to serve on the Oregon Court of Appeals—have been women? Over the past 15 years, the face of the Oregon judiciary has slowly shifted. …
By Stephen Raher, USDCHS board member
It was a slowly evolving series of political scandals that captivated the nation. Driven by politics of the post-Reconstruction era and the economics of westward expansion, the star route scandals provided fodder for editorial pages and political wags throughout the early 1880s. Oregon played a small but important role in the narrative of the legal drama that unfolded in Washington, DC. As the United States pursued its policy of “manifest destiny” after the Civil War, one perennial problem in newly occupied territories was reliable mail service. In large swathes of Oregon and other frontier areas, the Post Office Department relied on contractors to transport…
Who We Are
USDCHS is operated by a volunteer group of lawyers, judges, scholars, and lay persons interested in preserving the court’s history. This historical society is a 501(c)(3) not-for-profit organization. Your membership and donations are appreciated. |
Zero Jumper Design
Watch Dog Timer (auto-reset system when it cannot handle overclock configurations)
AGP Protection (The AGP Protection can ensure the AGP card voltage to be 1.5V, to protect the mainboard and the AGP card.)
Certifications
FCC, CE, BSMI
Form Factor
ATX (305mm x 200mm)
We suggest you use the newest driver version.
Before installing this driver, we recommend setting a restore point. This way you can revert the system if the driver is wrong.
System Requirements:
No special requirements.
Program Version Status:
Program Installation Support:
Related software downloads for Driver Albatron PX848PV BIOS 1.10:
Also Download
Driver Albatron P4M890 BIOS 1.01 - 3804 downloads. BIOS Update: Supports Celeron 430/420 CPUs. We suggest you use the newest driver version. Before installing this driver, we recommend setting a restore point so you can revert the system if the driver is wrong.
Most Popular
Driver Albatron P4M890 BIOS 1.01 - 3804 downloads |
Johann Heinrich Boeckler
Johann Heinrich Boeckler (13 December 1611 in Cronheim – 12 September 1672 in Strassburg) was a German polymath.
Born in Cronheim as a son of the Protestant priest Johann Boeckler and Magda Summer, he became a polymath at the University in Strassburg. He was the brother of the architect Georg Andreas Boeckler, who also became famous with his publication Architectura Curiosa Nova. In 1649, Queen Christina of Sweden invited Johann Heinrich to teach at the University in Uppsala. In 1650 he was appointed Swedish state historian. In 1654 he returned as a professor to the University of Strassburg.
Publications (selection)
Orationes duae. I. de C. Taciti Historia, II. de Tiberii Caesaris principatu. Straßburg 1636
Historia schola principum. 1640
In C. Corn. Taciti quinque libros histor. annotatio politica. Straßburg 1648
Disseratio De Notitia Reipublicae, Ad C: Corn. Taciti lib. IV, 33. Uppsala 1649. (Digitalisat in der Digitalen Bibliothek Mecklenburg-Vorpommern)
Nomima tōn Aigyptiōn, sive leges Aegyptiorum., 1657
Iosephus Philonis, sive bios politiku, vita viri civilis., 1660
In Hugonis Grotii Ius Belli Et Pacis, Ad Illustrißimum Baronem Boineburgium Commentatio Jo. Henrici Boecleri. Straßburg 1663/1664
Elogium Christophori Forstneri. 1669
Collegium politicae posthumum. Oder polit. Discourse von 1. Verbesserung Land und Leuth, 2. Anrichtung guter Policey, 3. Erledigung grosser Ausgaaben, und 4. eines jeden Regenten jährlichen Gefäll und Einkommen. Anno (editori Magisteriali) 1669. zu Strassburg von dem weitberühmten JCto, und der Rechten Professore, Hn. J. Heinr. Böcklern, nun aber zu geminem Besten publicè andas Liecht gebracht, und zum Druck befördert. o.O., o.J. [wohl Straßburg 1670]
Bibliographia historico-politico-philologica curiosa. Leipzig 1677
Joh. Henrici Boecleri kurtze Anweisung, wie man die Authores classicos bey und mit der Jugend tractiren soll. So auch desselben dissertatio epistolica postrema de Studio politico bene instituendo. Straßburg 1680
Institutiones politicae. 1704
Joh. Heinrici Boecleri Viri Celeberrimi Libellus Memorialis Ethicus, 1712
Theses Juridicae de testamentis solemnibus et minus solemnibus. 1720
External links
Entry at Kalliope
Category:German historians
Category:17th-century jurists
Category:German literary historians
Category:1611 births
Category:1672 deaths
Category:University of Strasbourg alumni
Category:University of Strasbourg faculty
Category:Uppsala University faculty |
Q:
Toggle inside v-for items affects the entire list, how can I make the each toggle affect only the containing list item?
I'm making a list of items with a v-for loop. Inside each item of the loop there is a button with a click-event method that shows description text.
When I click on the button, it should toggle only inside its own item, but it affects all elements in the v-for list.
So, how do I make a toggle method that will affect only its own item?
<template>
<div>
<div v-for="item in items" :class="{ activeclass: isActive }">
<div class="item-text">
{{item.text}}
</div>
<button @click="toggle()">show</button>
<div v-show="isActive" class="item-desc">
{{item.desc}}
</div>
</div>
</div>
</template>
<script>
export default {
data () {
return {
items: [
{
text: 'Foo',
desc: 'The Array.from() method creates a new Array instance from an array-like or iterable object.',
},
{
text: 'Bar',
desc: 'The Array.from() method creates a new Array instance from an array-like or iterable object.',
}
],
isActive: false
}
},
methods: {
toggle: function () {
this.isActive = !this.isActive;
}
},
}
</script>
A:
You can add a property on each item in your list to track whether its description should be shown:
<template>
<ul>
<li v-for="item in items" :class="{ activeclass: item.isActive }">
<div class="item-text">
{{ item.text }}
</div>
<button @click="toggle(item)">show</button>
<div v-show="item.isActive" class="item-desc">
{{ item.desc }}
</div>
</li>
</ul>
</template>
<script>
export default {
data () {
return {
items: [
{
isActive: false,
text: 'Foo',
desc: 'The Array.from() method creates a new Array instance from an array-like or iterable object.',
},
{
isActive: false,
text: 'Bar',
desc: 'The Array.from() method creates a new Array instance from an array-like or iterable object.',
}
],
}
},
methods: {
toggle: function (item) {
item.isActive = !item.isActive;
}
},
}
</script>
Alternatively, you can extract the li into a separate component.
A:
As @Nora said you can (and probably should) create a separate component for each list item, so you would have a component that accepts an item as a prop; then each component can have its own isActive flag, which keeps the markup nice and clean:
Component:
Vue.component('toggle-list-item', {
template: '#list-item',
props: ['item'],
methods: {
toggle() {
this.isActive = !this.isActive;
}
},
data() {
return {
isActive: false
}
},
})
Markup
Now you can simply place the component inside your v-for:
<div id="app">
<div v-for="item in items">
<toggle-list-item :item="item"></toggle-list-item>
</div>
</div>
Here's the JSFiddle: https://jsfiddle.net/w10qx0dv/
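For reference, the #list-item template itself isn't shown above. A minimal sketch of what it could look like, mirroring the markup from the first answer (the exact version lives in the linked JSFiddle), is:
<script type="text/x-template" id="list-item">
<li :class="{ activeclass: isActive }">
<div class="item-text">{{ item.text }}</div>
<button @click="toggle">show</button>
<div v-show="isActive" class="item-desc">{{ item.desc }}</div>
</li>
</script>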
|
Use Innovative Measures to Dramatically Improve Efficiency of Buildings: Buildings account for nearly 40 percent of carbon emissions in the United States today and carbon emissions from buildings are expected to grow faster than emissions from other major parts of our economy. It is expected that 15 million new buildings will be constructed between today and 2015. President Obama and Vice President Biden will work with cities so that we make our new and existing buildings more efficient consumers of electricity.
It’s interesting that one of the most significant parts of his energy plans (buildings account for nearly 40% of carbon emissions in the U.S., so this could have a giant effect on our country’s energy use) is in this Urban Policy section. I assume this is because of the “green jobs” aspect of building green. Tradespeople jobs are a great way out of poverty, and we’re going to need a lot of new tradespeople, with new skills, to transform the housing and commercial building stock to be highly efficient. |
There are a variety of uses for dies that punch shaped holes and cut or form sheets from materials such as metal, cardboard and other stock. Dieboards which are slotted to receive and rigidly retain steel rule are in particular demand.
Dies have been described in previously issued U.S. patents. For example, U.S. Pat. No. 3,863,550 describes a die having a superimposed pair of metal or rigid plastic plates separated by an intermediate semirigid plastic material. The plates are coated with a light-sensitive compound to form a "resist layer" that is resistant to chemical etching materials such as nitric acid. A solubilizing agent removes the resist layer in appropriate areas indicated by a slotting template which is transferred onto the resist layer using photographic negatives. The dieboard is then slotted to receive rule by applying a chemical etching material. The superimposed metal or plastic plates of the dieboard described in this patent are expensive and chemical etching is relatively slow and costly.
Dieboards with a white finish, such as acrylic are known to facilitate projection or drawing of slotting templates onto the dieboard.
Because slotting can be accomplished more rapidly, lasers appear to be the fastest growing method of cutting dieboards. Lasers are currently being used to slot hardwood dieboards, such as of maple or birch. Due to their lack of dimensional stability, hardwood dieboards constitute the low performance and low cost end of the commercial market for dieboards. Lasers have also been used to cut plywood dieboards; however, the heat generated by the lasers often results in warping such dieboards.
Lasers have also been used to cut "PERMAPLEX" dieboards, which are made from a polyester-cellulose blend by EHV Weidmann of St. Johnsbury, Vt. PERMAPLEX boards are expensive. Also, these dieboards can only be slotted at a relatively slow rate using lasers.
Polyurea-cellulose composites are known in the art, but have not heretofore been recognized as being suitable for use in a dieboard structure. For example, U.S. Pat. No. 5,008,359, issued Apr. 16, 1991 to Frank Hunter and owned by the assignee of the present application describes such a polyurea composite material. This patent is incorporated herein by reference in its entirety and describes a polyurea-cellulose composite formed by impregnating cellulose sheet material with from about 8% to about 20% of a substantially uncatalyzed, polyisocyanate resin and thereafter curing this material under suitable conditions of moisture content, pressure and temperature.
The polyurea-cellulose composite described in the above patent has been used to form a sheathing panel as described in pending U.S. patent application entitled "A Multi-Functional Exterior Structural Foam Sheathing Panel," Ser. No. 07/680,810, filed Mar. 22, 1991. The sheathing panel comprises a foam core sheet of from one to four pounds per cubic foot density laminated with the polyurea-cellulose composite sheets. The sheathing panel application specifically calls for composite sheets of from about 8×10⁻³ to about 0.1 inches in thickness. This structure, because of its low density core, would not be suitable for a dieboard.
There is therefore a need for dimensionally stable, high-performance dieboards produced from inexpensive materials such as polyurea-cellulose composites. There is also a need for dieboards that can be slotted at a rapid rate, such as using lasers. |
MODULE ObxFact;
(**
project = "BlackBox"
organization = "www.oberon.ch"
contributors = "Oberon microsystems"
version = "System/Rsrc/About"
copyright = "System/Rsrc/About"
license = "Docu/BB-License"
changes = ""
issues = ""
**)
IMPORT
Stores, Models, TextModels, TextControllers, Integers;
(* Read parses the next integer literal from the text reader into x. *)
PROCEDURE Read(r: TextModels.Reader; VAR x: Integers.Integer);
VAR i, len, beg: INTEGER; ch: CHAR; buf: POINTER TO ARRAY OF CHAR;
BEGIN
r.ReadChar(ch);
WHILE ~r.eot & (ch <= " ") DO r.ReadChar(ch) END;
ASSERT(~r.eot & (((ch >= "0") & (ch <= "9")) OR (ch = "-")));
beg := r.Pos() - 1; len := 0;
REPEAT INC(len); r.ReadChar(ch) UNTIL r.eot OR (ch < "0") OR (ch > "9");
NEW(buf, len + 1);
i := 0; r.SetPos(beg);
REPEAT r.ReadChar(buf[i]); INC(i) UNTIL i = len;
buf[i] := 0X;
Integers.ConvertFromString(buf^, x)
END Read;
(* Write outputs the arbitrary-precision integer x through the text writer. *)
PROCEDURE Write(w: TextModels.Writer; x: Integers.Integer);
VAR i: INTEGER;
BEGIN
IF Integers.Sign(x) < 0 THEN w.WriteChar("-") END;
i := Integers.Digits10Of(x);
IF i # 0 THEN
REPEAT DEC(i); w.WriteChar(Integers.ThisDigit10(x, i)) UNTIL i = 0
ELSE w.WriteChar("0")
END
END Write;
(* Compute replaces the selected integer n in the focused text with n factorial. *)
PROCEDURE Compute*;
VAR beg, end, i, n: INTEGER; ch: CHAR;
s: Stores.Operation;
r: TextModels.Reader; w: TextModels.Writer; attr: TextModels.Attributes;
c: TextControllers.Controller;
x: Integers.Integer;
BEGIN
c := TextControllers.Focus();
IF (c # NIL) & c.HasSelection() THEN
c.GetSelection(beg, end);
r := c.text.NewReader(NIL); r.SetPos(beg); r.ReadChar(ch);
WHILE ~r.eot & (beg < end) & (ch <= " ") DO r.ReadChar(ch); INC(beg) END;
IF ~r.eot & (beg < end) THEN
r.ReadPrev; Read(r, x);
end := r.Pos(); r.ReadPrev; attr := r.attr;
IF (Integers.Sign(x) > 0) & (Integers.Compare(x, Integers.Long(MAX(LONGINT))) <= 0) THEN
n := SHORT(Integers.Short(x)); i := 2; x := Integers.Long(1);
WHILE i <= n DO x := Integers.Product(x, Integers.Long(i)); INC(i) END;
Models.BeginScript(c.text, "computation", s);
c.text.Delete(beg, end);
w := c.text.NewWriter(NIL); w.SetPos(beg); w.SetAttr(attr);
Write(w, x);
Models.EndScript(c.text, s)
END
END
END
END Compute;
END ObxFact. |
Q:
How can I access the weather description in the json object
I have the JSON object below and I want to display the weather object's description. When I try to display it, it gives undefined. Can anyone tell me how I can access it?
{
"coord": {
"lon": 80.28,
"lat": 13.09
},
"weather": [
{
"id": 800,
"main": "Clear",
"description": "clear sky",
"icon": "01n"
}
],
"base": "stations",
"main": {
"temp": 299.15,
"pressure": 1015,
"humidity": 74,
"temp_min": 299.15,
"temp_max": 299.15
},
"visibility": 6000,
"wind": {
"speed": 3.1,
"deg": 60
},
"clouds": {
"all": 0
},
"dt": 1519491600,
"sys": {
"type": 1,
"id": 7834,
"message": 0.0057,
"country": "IN",
"sunrise": 1519433836,
"sunset": 1519476414
},
"id": 1264527,
"name": "Chennai",
"cod": 200
}
A:
The weather property is an array, so you need to iterate through its objects.
You can use the function forEach.
var obj = { "coord": { "lon": 80.28, "lat": 13.09 }, "weather": [{ "id": 800, "main": "Clear", "description": "clear sky", "icon": "01n" }], "base": "stations", "main": { "temp": 299.15, "pressure": 1015, "humidity": 74, "temp_min": 299.15, "temp_max": 299.15 }, "visibility": 6000, "wind": { "speed": 3.1, "deg": 60 }, "clouds": { "all": 0 }, "dt": 1519491600, "sys": { "type": 1, "id": 7834, "message": 0.0057, "country": "IN", "sunrise": 1519433836, "sunset": 1519476414 }, "id": 1264527, "name": "Chennai", "cod": 200};
obj.weather.forEach(w => console.log(w.description));
.as-console-wrapper { max-height: 100% !important; top: 0; }
Resource
Array.prototype.forEach()
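Alternatively, if you only need the first (and here only) element, you can index the array directly. A minimal sketch, with the object trimmed to the relevant field:
var obj = { weather: [{ id: 800, main: "Clear", description: "clear sky", icon: "01n" }] }; // trimmed copy of the question's JSON
if (obj.weather && obj.weather.length) {
console.log(obj.weather[0].description); // "clear sky"
}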
|
Lichtsteiner wants Juventus stay
By Football Italia staff
Stephan Lichtsteiner will reportedly stay with Juventus for one more season, before leaving on a free transfer.
The Swiss international looked set to leave last summer, but eventually signed a one-year contract extension which ties him to the club until 2018.
It was thought the 33-year-old might be moved on this year, but with Dani Alves set to terminate his contract, Tuttosport reports that Lichtsteiner will stay.
The right-back will look to win a seventh Scudetto in a row, before leaving Turin on a free transfer next summer.
|
DMC & ME
Originally intended to document my experience of DeLorean ownership, focus is often radical and strange, boring and obtuse.
Monday, April 28, 2008
Stag & Doe #2
What happens when you have 21 hours of stuff to do, but there are only 24 hours in a day? You sleep for 3 hours.
Saturday was Stag & Doe #2, a money-making event for Suz's cousin Vicki, and her man Ryan, who both share a love of karaoke, alcoholic beverages and camping.
Because Vicki & Ryan are so super-great, and really spice up our spicy Halloween parties, I was feeling generous and therefore purchased a crapload of raffle tickets. I was so proud of the staggering amount of tickets I had that I wore them on my wrists all night, showing them off. Like Wonder Woman.
Suz, using her tried and true method, dispersed her tickets evenly over all the prizes. I, on the other hand, slammed 70 raffle tickets into the one item I really wanted, a tasty Jelly Belly margarita mix set, with a really nice juice jug and glasses.
After dropping ticket #70 into the overflowing container, I figured overkill was good enough. I then dropped my last fifteen tickets into the prize I knew Suz wanted - a nice lotiony Avon gift basket.
With my last few bucks, I paid a nice bouncer to put naughty Vicki in jail - and laughed heartily while her poor dad had to combine all the cash he had left with somebody else's money just to bail her out so she could enjoy her own party.
If only it were like that in real life; I'd be broke with all the people I'd throw in the slammer.
As midnight drew near, the raffle prizes were drawn. My Avon effort paid off, as one of my tickets won it for Suz. However, to my astonishment, I didn't win the Jelly Belly drink set. I could feel a temper tantrum coming on, so I sat quietly, telling Dead Baby jokes in my own head until I felt relaxed enough to behave normally.
Amazingly, I drove home from the hall with a happy wife, her happy sister, and thanks to Vicki's super-awesome mom, the Jelly Belly drink set, which we put to use the very next day despite severe sleep deprivation.
Thursday, April 24, 2008
Your Smoke Alarm Can't Save You
This is a DMC & ME Public Service Announcement. The kind that the fire department doesn't want you to hear.
The time change, which comes around every spring, is synonymous with safety. Every spring local fire departments, perhaps even your city, remind denizens to change the batteries in their smoke detectors.
With decades of mistakes to learn from, advances in fire-retardant materials, and more public awareness about smoke alarms and fire hazards, you'd think we could avoid devastating infernos that claim an average of 2,930 lives a year in the U.S. There were 524,000 building fires in 2006 alone. The numbers are huge. The stats are here.
But there's more to it than simply replacing your smoke detector's batteries every spring. Much more. And it's scary. Not scary like your phone ringing immediately after reading one of those chain letters that says your phone will ring, and the person calling you is actually hiding upstairs in your closet waiting to "get" you, whatever that means. Just what does that mean? "Get" you? Was it scary when your uncle Leonard, with outstretched arms, chased you around the house when you were six yelling, "I'm gonna get you!"? Sure it was, but you didn't really know why. That is, until about ten years later when uncle Leonard was arrested for being a pedophile.
I'm digressing. What the bulging, rippled firemen want you to know is that their calendars are on sale now. What they don't want you to know is that your smoke detector is unreliable. And it doesn't matter a darn tootin' about the condition of your battery.
Smoke detectors can fail. They can malfunction at any time, like mine did last week. But I didn't know it. Not until I tried a number of new batteries in it, only to discover none of them worked.
When things are working properly you take them for granted. But be careful. The fire department will not tell you that smoke detectors are unreliable. But they are. How will you know yours is going to work properly, and wake you if a fire starts while you're sleeping?
Sunday, April 20, 2008
Halloween Panic in April
It's amazing how one package, a simple brown box, can cause emotional extremes between different people. One extreme contains the child-like emotions of the person who wanted the contents, who giddily looked forward to the day it arrived, and who couldn't wipe that stupid grin off his stupid face until the next stupid day. The other extreme is, well, Suz.
Fright Catalog, selling Halloween wares online longer than almost any other site, is a Halloweener's dream. Thousands of Halloween items fill their web pages selling from a buck, up to about 30 grand. Their one and only downfall is their high prices, which can be combatted by purchasing during the off-season.
Like April for example.
During the off-season there are promotions a-plenty, such as the 50% invite I received a little while ago. Immediately I began scouring the familiar pages for severed heads and the like.
Unfortunately, blood-soaked items which were not in stock could not be purchased at the super awesome low prices. But I was still able to find some deliciously creeptacular elements with which to spice up the house in October.
Mere days after I placed my order, Mr. DHL showed up with my big brown box resulting in my stupid stubborn smile; a happiness rivalled only by single-digitly aged children at 7 a.m. on December 25th. In order to compare my freaky new toys to my older freaky toys, I spent the night unpacking my 4x5 ft. Halloween closet, an event that takes hours to complete.
And that spectacularly messy "event" is exactly what causes the extreme on the other end of the emotional spectrum, namely, Suz's panic attack.
Tuesday, April 15, 2008
Lighter Roulette
My eyes instinctively shut and I jerked backwards as I was overwhelmed by the repugnant & revolting stench of burned human hair choking me.
Momentarily stunned, it took me a second to retrace all the events that led up to this horrific moment, starting with my discovery of two mud-covered tea lights lying in my garden. They were backup tea lights, placed inside my Jack-o-lanterns on Halloween.
They were quite unpleasant to look at, so I figured I'd see if they still worked. If they did, I'd light them and get them out of the way. "Besides," I thought, "I like candles."
I picked up my dollar store BBQ lighter, identical to the kind you can buy at Canadian Tire or Home Depot for three times what I paid. It was yellow.
The first pull of the trigger was much like the 10th, and the 15th... and the 20th. I aimed the BBQ lighter at the tea lights and pulled the trigger over and over again. Each time I was greeted with the same empty 'click'.
I figured something had to be wrong with the lighter. I shook it, I pointed it up in the air, I tried everything. Nothing. I looked for the tiny window that indicated how much butane remained. The level was low, but it still should have lit.
I shook the lighter again and again, and continued pulling the trigger, each time getting more and more frustrated. Click. Click. Click. Nothing. Nothing. Nothing.
The next moment reminded me exactly of the morons you read about in the Darwin Awards, and of Yosemite Sam, Wile E. Coyote, or any of the less intelligent Looney Tunes characters who looked down the barrel of their gun to see what was wrong with it.
That moment, which my brain was finally able to piece together from all the fragmented memory bits, was when I tried to smell whether or not gas was actually coming out of the lighter.... as I shoved it up my right nostril, and clicked the trigger.
Sunday, April 13, 2008
Stag & Doe #1
Ah, spring. The rainy season. April showers not only bring May flowers and flooded basements, but an abundance of weddings and wedding related events. Events such as the ever popular Stag & Doe.
Stag & Doe parties are designed to raise funds for the poor, struggling bride and groom who've not a penny to their name and have no hopes of paying for their extravagant wedding unless they can squeeze a few bucks out of every friend, distant relative or acquaintance. In fact, if a blood-soaked stranger grinning from ear to ear and mumbling about his 'sweet revenge' walked in off the street with cash in hand he probably wouldn't be turned away.
Suz and I spent Saturday night at a Stag & Doe for our co-worker James whom I first met in the parking lot at work when he challenged me to a race. The race was immediately called off upon James' discovery that the ol' Talon was putting out nearly triple the horsepower of his Toyota Celica.
The Stag & Doe was a well-organized blast, with cheap drinks, great food and even greater prizes to be won. Suz and I bought twenty raffle tickets and distributed them among the prizes we'd hoped to win.
Suz, abiding by the rule of "don't put all your eggs in one basket" applied her tickets towards various prizes, hoping to win just one of them. I, on the other hand, decided to put statistics into my favour, and plopped all of my tickets into the one prize I wanted the most.
At the end of the night, both our tactics worked as Suz won a 'Fancy Cut n' Hairstylin' Certificate and I won the 'Gourmet Gift Basket.' The basket's awesome international contents are as follows:
President's Choice White Chocolate Chunk & Raspberry cookies
Bahlsen Truffet Meringue/cocoa/chocolate biscuits
Vicenzi Grisbi Classic Lemon & Ginseng biscuits
Lindt Lindor milk chocolate bar
Lindt Lindor milk chocolate balls
Werther's Original caramels
St. Dalfour Wild Blueberry Deluxe Spread, or 'Jam' to us reg'lar people
Carr's Poppy & Sesame Thin Savoury Crackers
Starbuck's Latin America Medium House Blend coffee, and finally...
a box of 8 massive Mrs. Fields Semi-Sweet Chocolate Chip cookies
I suspect a tummy ache of Snuffalupagus proportions is right around the corner.
Thursday, April 10, 2008
Cat-lateral Damage
I don't know how people do it. I don't know how crazy old cat ladies can have dozens of insanity-inducing cats, except maybe for the fact that they're, well, insane.
To have multiple cats is expensive and exhausting. We learned this with our recent experience with two thirds of a half dozen cats. That's four cats for the mathematically challenged. (That's "4" for the alphabetically challenged.) Our two girls, plus my parents' two boys.
Every few days our house began to smell. The smell was like cats. Their food, their pee, their crap, their little pink buttholes, their litter. So every few days we had to clean out two litter boxes; one for the upstairs cats, and one for the downstairs cats.
The winter seemed to fly by as we spent every spare minute cleaning. If it weren't for Roomba, our house would be condemned right now due to uninhabitable conditions as a result of an unacceptable build-up of unsanitary elements that would pose a health risk to anyone walking within 30 feet of our home.
But even Roomba couldn't fully compete with our kitties. Their fur wasn't the problem. It was the litter that got out of hand. Digging-litterbox action fired the tiny clumping granules all over the floor. And despite our best efforts, we couldn't stop it from getting underfoot.
The damage is done. The granulated bentonite clay particles, which are normally used for absorbing our cats' excrement, have devalued our home by 0.1% as they've scraped our hardwood floors with their crunchy, sand-flavoured edges. Dang. Maybe I should've titled this post 'Collitteral Damage.'
Thursday, April 03, 2008
Sign Of Spring: Construction
The signs of spring are here. And depending on your personality, they are either good or bad, because a sign of spring is also a sign of things to come. Namely, work.
As spring rolls around, things start needing to be done. On the upside, BBQs, decks and patio furniture all need building. On the downside, silly little girly flowers need to be planted. Planted everywhere. Planted in obvious places, where your neighbours can see them and then make fun of you.
I, for one, am glad it's still a little too cold for gardening right now. That means I can spend my time indoors getting better acquainted with my drill.
Upon completing the renovation of our sunroom, Suz and I decided it would be nice to be able to actually use it the way it was meant, rather than the catch-all it's currently, and inappropriately, designated as.
So, in hopes of spring weather filling our sunroom with sun and happiness, I spent the night building our new La-Z-Boy faux wicker furniture from the ooh-la-de-da Jameson collection that will fill this room and allow us to relax with or without a tasty alcoholic beverage in our hand.... but mostly with.
Two relaxed Martini feet up for La-Z-Boy's line of spiffy waterproof outdoor furniture that doesn't feel like it was manufactured with good ol' fashioned monkey power in some oppressive facility run by greasy teenagers who don't give a sh*t about anything except whether or not that boil on their neck is getting pussier. Or hairier. Or both. |
1. Technical Field
The present invention relates to a switch for mounting on a portion of a vehicle as part of a vehicle safety apparatus. In particular, the present invention relates to a horn switch which is part of an air bag module mounted on a vehicle steering wheel.
2. Description of the Prior Art
It is known to mount an air bag module on a steering wheel of a vehicle to help protect the driver of the vehicle. The air bag module includes an air bag and an inflator. In the event of sudden vehicle deceleration of a magnitude which requires protection of the driver, the inflator is actuated to inflate the air bag into a position to help protect the driver of the vehicle.
It is known to provide a horn switch which is operable by pressing on a cover of an air bag module mounted on a vehicle steering wheel. U.S. Pat. No. 5,309,135 discloses a horn switch which includes a variable resistance conductor adhered to a flexible substrate and attached to an air bag module cover. |
Q:
How to set upper and lower bounds for each element in a set?
I am creating a GAMS model to solve a simple maximization problem. I have a set J with 3 elements (1,2,3) and a variable x(J) that encompasses all the elements.
I am wondering if there is a way in GAMS to set a lower bound of 0 and upper bound of 3 to each element in the set without having to set each element bound individually and without using the positive variable keyword for the lower bound.
I have tried using x.lo =e= 0 and x.up =e= 3, but neither of these works. I am guessing I am not using the correct syntax, but for the life of me I cannot find anything in the official documentation about doing this specifically for sets.
What is the correct way of doing this?
A:
Try assigning the bounds directly over the set. Note that these are assignments (=), not equations (=e=):
Set J / 1*3 /;
Variable x(J);
x.lo(J) = 0;
x.up(J) = 3;
See also here: https://www.gams.com/26/docs/UG_Variables.html#UG_Variables_AssigningValuesToVariableAttributes
|
Q:
Intercepting @Input changes with getter and setter
I tried to use TypeScript accessors on an @Input property to intercept changes from the parent to the child. I slightly changed this example from the docs.
I change two parent properties in the parent's methods, expecting the child's bound Input properties to follow. I found that when the property is changed in the parent as follows, the setter fires only once:
this.name="John"; //but this.name = this.name + "r" will work
while for this one it always works:
this.age++; // or this.age = this.age + 1;
To fix the first one I need to 'notify' the parent with an EventEmitter in an Output (which I already tried, and it works), but why doesn't the second one need it? Could someone explain the real difference between the two: why doesn't it work for the first one (or why does it work for the second one)?
DEMO
Parent class:
...
name = 'Jane';
age = 10;
changeName(){
this.name= "John";
}
changeAge(){
this.age++;
}
Parent view:
<my-name [name]="name"></my-name>
<button (click)="changeName()">Click me to change the name</button>
<my-age [age]="age"></my-age>
<button (click)="changeAge()">Click me to change the age</button>
Child1 Class:
private _name = '';
@Input()
set name(name: string) {
this._name = (name && name.trim()) || '<no name set>';
console.log(name);
}
get name(): string { return this._name; }
Child1 view:
My name is {{name}}.
Child2 class:
private _age = 0;
@Input()
set age(age: number) {
this._age = age || 0;
console.log(age);
}
get age(): number { return this._age; }
Child2 view
I am {{age}} years old
A:
After some research and reflection, I found/remembered that bindings are only updated when change detection runs.
So when the name property is set as follows:
this.name= "John";
the first run changes the initial value of the property, so the accessor fires. For the second and subsequent times, the same statement will not trigger change detection, as the value did not change (it was already "John"), and consequently the accessor will not fire.
While for the age property, it's always a value change as it increments:
this.age++;
so the change detection and accessor are always fired.
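One common workaround, sketched here with a hypothetical nameBox wrapper property (not part of the demo above), is to pass a new object reference on every change, so change detection always sees a new input value and the setter always fires:
// Parent class: create a fresh object identity on every click.
nameBox = { value: 'Jane' };
changeName() {
  this.nameBox = { value: 'John' }; // new reference, even if the string repeats
}
// Parent view:
// <my-name [nameBox]="nameBox"></my-name>
// Child class: the setter now fires on every click.
private _name = '';
@Input()
set nameBox(box: { value: string }) {
  this._name = (box && box.value && box.value.trim()) || '<no name set>';
}
get name(): string { return this._name; }
The trade-off is a slightly more verbose binding, but no Output/EventEmitter round-trip is needed.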
|
Q:
Merge statement and after triggers on target table
I have two AFTER triggers on the target table (one for insert and one for update).
Now if I execute a MERGE on the target table, each trigger is executed only once, although the merge statement performs around 300 updates and 200 inserts.
I checked this with print statements in each trigger, placed right after copying data from the deleted and inserted records into variables.
How come? Is this a bug?
I have SQL Server 2008 sp1 std (part of the SBS2k8).
A:
A trigger runs once per statement, not "per row".
You have one insert of 200 rows and one update of 300 rows.
So a trigger covering both actions runs once for the insert and once for the update; with separate insert and update triggers, each runs once.
Edit:
From MSDN, "Multirow Considerations for DML Triggers"
From Brent Ozar, "Triggers Need to Handle Multiple Records"
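A minimal set-based sketch of the pattern those references describe (table and column names here are hypothetical): the trigger reads all affected rows from the inserted pseudo-table in one statement, so a single MERGE that inserts 200 rows is fully captured even though the trigger body runs only once:
CREATE TRIGGER trg_Target_AfterInsert
ON dbo.TargetTable
AFTER INSERT
AS
BEGIN
    -- inserted may hold many rows after one MERGE/INSERT statement;
    -- process it as a set instead of copying a single row into variables.
    INSERT INTO dbo.AuditLog (TargetId, ActionType)
    SELECT i.Id, 'INSERT'
    FROM inserted AS i;
END;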
|
Deception: The Glow in the Darkness Part II :: By Jerry McDermott
America – the Land of the Free and the Home of the Cool – Confusion & Deception
The title stems from my research finding that sixty-four percent of Americans are for legalizing marijuana. In fact, thirty states already approve of medical marijuana, and nine states plus D.C. approve recreational marijuana. Therefore, narcotic usage is not growing; it is exploding. There is even a free guide now available listing the best marijuana stocks to become a new millionaire. Medical marijuana does not contain the high-inducing element and is useful for certain medical conditions.
Scripturally, there are numerous references to sorcery or sorcerers as a translation of the Greek word pharmakeia (from which we get the English word pharmacy). The meaning is the usage of drugs to induce hallucinations. God is very firm in his admonition about the practice of swaying people to such actions. You cannot mistake His command, “Do not allow a sorceress to live” (Exodus 22:18). The main problem with drugs is the possibility of association with evil spirits. However, there are other methods Satan uses to influence people.
Just Games
The first such approach is the seemingly innocent parlor game with the Ouija Board. The name is a compound word from the French (oui) and the German (ja), both of which mean “yes.” This so-called game has intrigued all ages. The younger ones are curious if it works; the teens want a date partner; and the adults are more serious and want to reach a dead family member or have a question answered.
Regardless of the intent, by playing with the board, you can touch the occult. You are literally playing with fire as you let your hand move unguided across the board. It is not harmless fun, and therefore should be avoided. There is the true God and Savior, and there is the deceitful Satan. Seek God directly, not the confusion of an evil spirit through a game.
Tarot Cards are another apparent demonic personal attraction. The origin of the word Tarot stems from taroch which was a synonym for “foolish” in the fifteenth and sixteenth centuries. In Europe, they are used for card games. In English-speaking countries, the manipulation of the cards is used for divination purposes such as psychic predictions and to provide answers for your future life. The bottom line is that the cards are used in fortune-telling, psychic readings, and probing the past, present, and future events.
Astrology
Here is another area of confusion, as astrology could be mistaken for astronomy, which is a natural science dealing with the study of celestial objects. By contrast, astrology is the study of the movements and relative positions of celestial objects as a means of divination for human affairs. The confusion is deepened by the daily horoscope readings that folks believe. Supposedly, the Zodiac signs affect your future. The official astrology site provides free horoscope readings, tarot readings, psychic readings, different types of astrology, numerology, as well as Zodiac and sun sign readings.
The problem with any of these occult games, habits, New Ageisms or other isms is seeking answers rather than the true God. Our Lord once let me know this: Place your trust in me always for nothing is too big nor too small. Consider what I can do with a few fish or a stormy sea. I can feed you and calm your fears, but you must trust me as your Lord.
Newer Concerns
There are many other areas of confusion. The first is artificial intelligence, a science that is expanding into every field of endeavor, especially the military. There is a report that recently two robots communicated with each other in a new language that the programmers could not understand. AI seems to be a rather new science. However, many of us have already experienced artificial intelligence, especially in grammar school. Remember taking a peek at a nearby kid's test papers? Voila! That was artificial intelligence.
A related area of confusion is the reality of UFOs. Yes, they are real. In fact, the Canadian Defense Minister confirmed that by saying, "The UFOs are as real as the commercial airliners overhead." The confusion comes as some folks believe that they are extraterrestrials or alien beings. Hogwash! They are the disembodied spirits from the giants. We know that the unholy ones occupy the second heaven. This explains how the UFOs can appear and disappear from the first heaven, our abode. We also believe that the pilots are demons, which explains the violent maneuvers that no human body could stand.
Ephesians 6:12 seems to confirm this belief as it says that our struggles are not against flesh and blood creatures but against the powers of this world of darkness, the evil spirits in regions above. The concern is that when the rapture occurs, the report could be that we are with the extra-terrestrials, not with our Lord.
One final thought is that the disembodied spirits could take refuge in robots just as they did in human beings and pigs as referenced in scriptures. That would really cause confusion. Regardless of any of these possibilities from the evil world, God knew the future and provided the Armor of God so that, when the day of evil comes, we can be shielded from any attack (Ephesians 6:10-18).
The Clergy
We look to ministers and priests for guidance. As a result, it is a real blow to our understanding when they falter. Of course, we believe they are fellow human beings, but inwardly we probably expect more. With homosexual, pedophile Catholic priests and homosexual Protestant ministers either being revealed or openly approved, there is great confusion. The result can be a clear-cut belief that the supposed tolerance in their lives has been compromised. This has caused some folks to abstain from church attendance. I do not know whether their belief in God was affected, but have personal knowledge that some were totally bewildered and quit their church. Is a feigned repentance or “sorry” enough for the average person?
Me First
The new marvel, the cell phone, has given rise to the "Me first" phenomenon and a new action stressing the individual. The "selfie" is well-named, as they always need a photo of themselves. It is an addiction regardless of age or gender. I must be informed every second of everything in every day. What is the confusion here? It is twofold: God cannot get through the busy cell phone, and it is the new American family problem, the forgotten art of conversation.
We have all seen a family with a few children sitting in a restaurant with heads bowed only just viewing their cell phone. We have noticed kids together texting each other, but without the old-fashioned verbal method. I have personally viewed this when they were sitting next to each other and texting each other. Could this be a future job hunter on the internet?
Are you looking for a job? AAMOF (as a matter of fact)
AAF (Yes, I am)
AMA (ASK ME ANYTHING)
ARE YOU QUALIFIED NP (NO PROBLEM)
YOU DON’T SEEM THAT INTERESTED W (WHAT)
I’M AFRAID WE ARE NOT INTERESTED NW (NO WAY)
BYE WAJ (WHAT A JERK)
I Was High & Mighty
This was the opening song line from a 1954 movie, “The High and the Mighty.” Despite pot, weed, Mary Jane, cannabis, hemp or any other name for marijuana, it is an illegal drug. Incredibly, 600,000 Americans are arrested each year for possession of the drug. Evidently, a lot of Americans can paraphrase that song into “I was Mighty High.” The reason for the continued usage is that high feeling. Our major concern should be for the high teenage usage as statistics show that most users started below age 18, and drugs do affect the brain.
The initial draw for teens is peer pressure or indeed to be cool, completely oblivious that drugs are harmful chemicals. The belief is, “Nothing will happen to me; I can control it.” Meanwhile the smoked, eaten, inhaled, or injected drug thinks otherwise. This is due to the drug influencing the reward part of the brain. After repeated usage, the drugs cause deviated dopamine which reduces the high. This results in more drug usage which is necessary to repeat the previous euphoria. It also shows why drug addiction is hard to quit. The desired high usually leads to heroin, a more expensive drug, but just as lethal.
Consider the growing list of entertainers and Hollywood celebrities who have overdosed as well as the everyday folks overdosing on opiates.
Satan provides the confusion in drug usage to mimic the peaceful action of the Holy Spirit in the Christian lifestyle. If you have ever experienced the presence of the Holy Spirit in worship or at a conference, it is just a little tickle from the Lord of what it means to abide with Him with such happiness. It clearly influences our lifestyle to be closer to Him.
All these items we have examined are about escapism, the need to get away from the world’s problems.
The bickering among political parties is a living example of what is sin. Fake news is well named, but lying and being a false witness is cursed by God, so much so that it is even one of the commandments. However, the parties involved couldn't care less about the stain on their soul.
Another confusion is the trend toward socialism. You do not need a degree in civics to realize socialism was a failure in the former Soviet Union, Cuba, and Venezuela. Capitalism continually fills the pot; socialism empties the pot.
People are indeed seeking answers, but they are looking in the wrong places. As Christians, we need to tell folks about salvation. I remember a pastor explaining salvation to a clergyman. His comment was that it seemed too simple; and he left, still a quasi-Christian. I believe each of us has met someone like this person.
How many people have you met that absolutely do not know whether they are going to heaven or not? They are the people for us to connect with 1 John 5:13, where he says, “These things I have written to you who believe in the name of the Son of God, that you may know that you have eternal life.” Our actions could be a life-changing event for them which would bring such peace.
While these topics border on escapism, at their core they are attempts to know the future. Such people are completely unaware that God knows our interests and the Bible knows the answers to all our questions, including our eternal future. We need to connect with them.
A Few Scriptures to Meditate Upon
Leviticus 19:26b “Do not practice divination or sorcery” (i.e. the act of predicting the future or taking or distributing drugs).
Leviticus 19:31 “Do not go to mediums or consult fortune tellers for you will be defiled by them.”
We can also add channelers, or wiccans. The reason is that these actions can lead to being influenced by demons, by actually contacting them or ingesting them. There is only one God as He has stated: “I am God; there is no other. I am God; and there is none like me” (Isaiah 46:9-10).
“I am the first and I am the last; there is no God but me. Who is like me? Let him stand up and speak, make it evident, and confront me with it” (Isaiah 44:6b).
“But they do not realize that I remember all their evil deeds, their sins engulf them; they are always before me” (Hosea 7:2).
“Any kingdom divided against itself will be ruined and a house divided against itself will fall” (Luke 11:17).
The following is a Pauline epistle to Timothy written about 65 A.D. It is also a prophecy of the culture of the last days and a mirror image of 2018. We are reading today’s media written almost two thousand years ago.
“But mark this: There will be terrible times in the last days. People will be lovers of themselves; lovers of money; boastful; proud; abusive; disobedient to their parents; ungrateful; unholy; without love; unforgiving; slanderous; without self control; brutal; not lovers of the good; treacherous; rash; conceited; lovers of pleasure rather than lovers of God, having a form of godliness but denying its power. Have nothing to do with them” (2 Timothy 3:1-5). |
16 F.3d 1291
305 U.S.App.D.C. 80
The HELEN MINING COMPANY, Petitioner, v. FEDERAL MINE SAFETY AND HEALTH REVIEW COMMISSION, and Secretary of Labor, Mine Safety and Health Administration, on behalf of Joseph A. Smith, Respondents.
No. 92-1599.
United States Court of Appeals,District of Columbia Circuit.
Argued Feb. 15, 1994. Decided March 4, 1994.
Petition for Review of an Order of the Federal Mine Safety and Health Review Commission.
J. Michael Klutch, Pittsburgh, PA, argued the cause and filed the briefs for petitioner. David J. Laurent and Thomas A. Smock, Pittsburgh, PA, entered an appearance.
Tana M. Adde, Atty., U.S. Dept. of Labor, Washington, DC, argued the cause for respondents. With her on the brief was W. Christian Schumann, Counsel, U.S. Dept. of Labor, Washington, DC. L. Joseph Ferrara, Washington, DC, entered an appearance.
Before: MIKVA, Chief Judge, EDWARDS, and SILBERMAN, Circuit Judges.
Opinion PER CURIAM.
PER CURIAM:
1
Petitioner seeks review of a Commission decision that it had twice illegally discharged Joseph Smith in retaliation for exercising his statutory right to file mine safety complaints. Since ample evidence in the record supports the ALJ's factual findings--which depended on a number of credibility judgments--we deny the petition.
I.
2
Joseph Smith is chairman of the union safety committee at petitioner's Homer City Mine, a position that requires him to file grievances continually on behalf of his co-workers. Although employed as a longwall shearer operator, Smith is also a state-certified mine inspector ("fireboss" in the mining vernacular). Although firebossing is generally performed by managerial supervisors, qualified employees are often asked to fireboss on an as-needed basis since the mine cannot operate without an inspector.
3
On December 19, 1990, the mine management asked Smith to fireboss his shift. After some discussion regarding two hours of overtime Smith had expected that day, he began firebossing. During the shift, Smith called the mine foreman to say that he would have to shut down one of the mining belts because of a harmful accumulation of combustible coal dust. This is a drastic measure; since all the belts ran in tandem, turning one off means shutting down the mine. The shift foreman said that Smith should not shut down the belt, an instruction that Smith ignored. Smith turned off the belt and began shoveling coal to clean up the affected area--a remedy that may have been counterproductive since shoveling creates more coal dust. The next day Smith was fired for insubordination. An arbitrator, acting pursuant to the collective bargaining agreement, later agreed with the company that Smith had ignored a direct order, but reduced Smith's punishment from discharge to a 60-day suspension.
4
Smith came back to the mine after the suspension and continued his duties as safety committee chairman. In June 1991, Smith filed three safety grievances, each resulting in citations against the mine from federal inspectors. The last of the violations reported by Smith, on June 27, 1991, was serious enough to compel inspectors to issue an Imminent Danger Withdrawal Order for sections of the mine. On July 2, 1991, mine management again asked Smith to fireboss the shift. Smith declined to fireboss because he had the flu. After some discussion, Smith asked whether his supervisor was issuing a direct order, to which came the reply that if Smith was still at the mine at 12:01 A.M. when the shift started, then the assignment would be a direct order. Smith then said he would take an "illegal" day and walked out of the mine before the shift started. The next day, Smith obtained a statement from his doctor verifying his illness, but was informed that he had been fired for insubordination. Another arbitrator determined that the discharge decision was not a violation of the collective bargaining agreement.
5
The Department of Labor filed retaliatory discharge complaints under Federal Mine Safety and Health Act section 105(c), 30 U.S.C. Sec. 815(c), on behalf of Smith based on the two incidents. The ALJ held that Smith had established the prima facie case that he was discharged on both occasions for safety-related activities protected by the Act. The ALJ also rejected petitioner's affirmative defense that Smith was fired for insubordination--a justification unrelated to his protected conduct--because Smith had not been issued a direct order on either occasion. The ALJ found that on December 20, 1990, the mine supervisor had merely discussed possible abatement actions with Smith and had not issued a "direct work order" using those magic words, which in the mine's practice would have alerted Smith to a possible insubordination charge. And Smith was not given a direct work order to fireboss on July 2, 1991, only a conditional one that would become operative at the start of the shift. The ALJ noted that much of his decision rested on a credibility judgment between Smith and the mine management, and that he did not defer to the arbitrators' decisions since they operated under the collective bargaining agreement, not according to the Act's antidiscrimination mandate. The Commission adopted the ALJ's decision without comment.
II.
6
Petitioner does not claim that the ALJ gave inadequate weight to the arbitrators' interpretation of the collective bargaining agreement, an issue that the National Labor Relations Board has long grappled with in analogous contexts. See Plumbers & Pipefitters Union Local No. 520 v. NLRB, 955 F.2d 744, 755-56 (D.C.Cir.), cert. denied --- U.S. ----, 113 S.Ct. 61, 121 L.Ed.2d 29 (1992); Darr v. NLRB, 801 F.2d 1404, 1408-09 (D.C.Cir.1986). Petitioner, instead, argues only that substantial evidence in the record does not support the ALJ's rejection of the affirmative defense that Smith was fired for insubordination. We have little trouble dismissing this argument given the high level of deference we accord to such factual determinations. See Chaney Creek Coal Corp. v. Federal Mine Safety & Health Review Comm'n, 866 F.2d 1424, 1431 (D.C.Cir.1989); Donovan ex rel. Chacon v. Phelps Dodge Corp., 709 F.2d 86, 90-96 (D.C.Cir.1983).
7
Petitioner's challenge with respect to the December 20, 1990, discharge is rather facile. It is urged that Smith was fired not for protected activity but because he ignored a direct order from his supervisor not to turn off the conveyor belt. Even leaving aside the fact that the supervisor never issued a direct order (which the ALJ found important), the supervisor did not have authority to issue such an order. As the mine inspector, Smith was the person responsible for identifying hazards and responding to them--a conclusion supported by the state safety commission's report on the incident, which the ALJ noticed in his opinion.1
8
Petitioner's argument concerning the July 2, 1991, discharge is equally unavailing. We are asked to accept the shift foreman's testimony that he had ordered Smith to fireboss, and thus to ignore the ALJ's factual finding that the foreman had not issued a direct order--only a conditional work assignment that would become an order at 12:01 A.M., when the shift started. However one characterizes the instruction, petitioner's argument fails because Smith had the right--under standard mine practice, as the ALJ found--to abstain from work by taking a so-called "illegal" absence. Petitioner inexplicably argues that Smith's insubordinate act (ignoring a direct order) is the only relevant issue, not whether his absence was permissible. But Smith was not insubordinate if he could nevertheless ignore the work order, conditional or otherwise, by taking the day off.
9
Perhaps recognizing the logical flaw, petitioner argues further that insufficient evidence exists to support the ALJ's finding that it was standard mine practice to take an "illegal" day off, claiming that the collective bargaining agreement did not permit such absences. Petitioner offers no textual or logical support for its argument, asserting only that the interpretation of the agreement would lead to a "rather strange practice." Strange or not, Smith and several co-workers testified that they had previously taken unexcused absences from the mine, which are sanctionable under the collective bargaining agreement only if they occur on two consecutive days. The ALJ thus reasonably concluded that Smith had acted according to accepted procedures when he declined his work assignment.
10
Petitioner's case essentially reduces to its complaint that the ALJ rejected the testimony of management witnesses as to the facts in question and their version of the work rules at the mine. As the ALJ noted, the case turns on the relative credibility of the employees and their supervisors, and we are not in a position to question his judgment of who is more credible.
11
* * * * * *
12
We therefore deny the petition for review.
13
So Ordered.
1
Petitioner's argument that the ALJ erred in giving preclusive effect to the state report is specious, since the ALJ referred to the report only to support his own conclusion that Smith had acted reasonably.
|
Our racer for the little dudes, suggested age 4-7 years old with a weight limit of 75 lbs. For 2012 the Mini got new cassette hubs, new frame top tube shape with a shorter rear end requested by our team riders and a really cool new CNC machined FB stem. |
The highest quality limited edition motorcycle prints.
My aim is to create the most beautiful motorcycle art in the world. To buy a limited edition print, contact me - gastonv@dcsi.net.au
Monday, December 21, 2009
The most beautiful TRITON.
Geoff from The Old Classic Motorcycle Warehouse says he has built many Tritons for customers over the years, and here he has put everything he has learnt into one motorcycle. This Triton has an 800cc six-speed Triumph motor in a Norton featherbed frame. |
New silica nanostructure for the improved delivery of topical antibiotics used in the treatment of staphylococcal cutaneous infections.
In this paper, we report the synthesis, characterization (FT-IR, XRD, BET, HR-TEM) and bioevaluation of a novel γ-aminobutyric acid/silica (noted GABA-SiO₂ or γ-SiO₂) hybrid nanostructure for the improved release of topical antibiotics used in the treatment of Staphylococcus aureus infections. GABA-SiO₂ showed IR bands assigned to Si-O-Si (stretch mode). The XRD pattern showed a broad peak in the range of 18-30° (2θ), indicating an amorphous structure. Based on the BET analysis, the surface area (438.14 m²/g) and pore diameter (4.76 nm) were estimated. TEM observation revealed that the prepared structure was homogeneous, with an average particle size not exceeding 10 nm. The prepared nanostructure significantly improved the anti-staphylococcal activity of bacitracin and kanamycin sulfate, as demonstrated by the drastic decrease of the minimal inhibitory concentration of the respective antibiotics loaded in the GABA-SiO₂ nanostructure. These results, correlated with the high biocompatibility of this porous structure, highlight the possibility of using this carrier for the local delivery of antimicrobial substances in lower active doses, thus reducing their cytotoxicity and side-effects. |
[Radiation diagnosis of pelvic cancer].
Pelvic cancer morbidity is comparable with that of renal cancer in general, which amounted to 15,000 patients nationwide (2003). We present our data on 24 cases of pelvic cancer. The complex diagnostic examination included X-ray, ultrasound, urography and CT. Sensitivity of complex preoperative staging was 72.2%, specificity--66.7%, accuracy--69.7%. |
Q:
How do I step into one of my included libraries in Android Studio?
When I step into a routine that is contained in my library module, I see the following:
public void ProcessClick(android.widget.Button Key, boolean bAutoProcess) { /* compiled code */ }
How do I configure my module to not only include the library, but its source as well so I can step into it?
A:
Per SD's comment:
Attach the source zip/folder to the dependency as well.
|
In December, Jared Kushner asked Russian Ambassador Sergey Kislyak about using Russian diplomatic facilities to establish a secret and secure communications line between the Trump transition team and the Kremlin, the Washington Post reported in an explosive new story Friday.
According to U.S. officials who intercepted and reviewed Russian communications regarding the proposal, Kislyak told his superiors that Kushner, Trump’s son-in-law and senior adviser, suggested the communications line during a meeting at Trump Tower on Dec. 1 or 2.
While the Post was notified of the incident via an anonymous letter in December and the White House disclosed and downplayed the meeting in March, U.S. officials have verified that talks of the secret channel are consistent with their understanding of events.
Setting up such a private channel between a foreign leader and a transition team would be unusual, according to the Post, though the State Department, the White House National Security Council, and the U.S. intelligence agencies all have that ability. Obama administration officials say Trump transition team members never approached them about arranging a secure channel, possibly because of leak concerns.
Kislyak was reportedly taken aback at the question of allowing an American to use Russian communications gear at their embassy or consulate. The Post indicates that such usage would have been a security issue and would require Moscow to expose its sophisticated communications capabilities.
The request appeared “extremely naive or absolutely crazy,” to one former senior intelligence official who spoke to the Post. The FBI closely monitors communications of Russian officials in the U.S., and it consistently surveils Russia’s diplomatic facilities. A Trump transition official coming and going to the embassy would have been extremely concerning. Additionally, the official asked how Kushner would be sure the Russians wouldn’t leak the communications themselves.
On Friday, the Democratic National Committee (DNC) called for Trump to fire Kushner after the Post‘s report on the secure communication channel proposal, according to the Hill.
“Trump has no choice but to immediately fire Kushner, whose failure to report this episode on his security clearance is reason enough for a criminal investigation,” DNC deputy communications director Adrienne Watson said in a statement. “The next question is whether the president authorized this, because no one stands between Trump and Kushner on the chain of command.”
H/T the Hill |
(CNN) A New Jersey fire department's pit bull just became the first of its breed to become an arson detection K9 officer.
Hansel, a 4-year-old pup known for his cheerful energy and constant kisses, graduated from training on Friday, officially becoming a member of the Millville Fire Department.
"He's extremely excited," Tyler Van Leer, a Millville firefighter and Hansel's handler, told CNN. "Whenever I ask him, 'Are you ready to go to work?' and bring out the harness, he starts doing laps around the crate."
Rescued from a dogfighting ring
Hansel was rescued from a dogfighting ring in Ontario, Canada, when he was only 7 weeks old.
A global campaign called #Savethe21 was created to fight against the euthanization of the 21 dog fighters, including Hansel's mom, who were rescued from that ring.
Five of the rescued dogs, including Hansel and his sister Gretel, were later taken to Throw Away Dogs Project, a nonprofit organization in Philadelphia that rescues "unique" dogs and trains them to become K9s all over the country.
If Hansel hadn't been rescued, he and his sister would have also become dog fighters, Carol Skaziak, the founder of Throw Away Dogs, told CNN.
Hansel can now sniff out 14 different ignitable odors.
Hansel trained with Throw Away Dogs for a year before enrolling in a 16-week K9 academy with his handler to become a certified arson detection K9 officer. Hansel was given to the Millville Fire Department, who was in need of an arson detection dog, at no cost.
"He was trained or imprinted on 14 different odors and once he was imprinted on all the odors, he was eligible to graduate," Van Leer said.
Hansel is a single purpose arson detection K9, meaning he is specifically trained to identify ignitable liquids, such as kerosene, gasoline and diesel.
While the future hero will begin taking on jobs immediately, Hansel will also be available to aid other police and fire departments outside of Millville.
Part of his mission includes education to help the fire department teach students about fire prevention around the area.
Hansel is paving the way for other pit bulls
In addition to being a very good boy, Hansel is making history, according to Skaziak.
"I am 100% sure Hansel is the first pit bull arson detection dog in New Jersey," Skaziak told CNN. "I have done so much research and I don't believe there are any other pit bull arson detection dogs in the entire country. I have not found any others."
CNN could not independently confirm that Hansel is the first pit bull to hold the position.
Hansel and his handler, Tyler Van Leer.
Van Leer and Skaziak believe that Hansel is paving the way for a brighter future for pit bulls as a breed.
Other departments that were in attendance of Hansel's graduation and witnessed his progression over the past year have already expressed interest in bringing in other pit bulls as arson detection dogs, Van Leer said.
"We need police chiefs and fire chiefs around the country to want to do this too. This is the first step that could make a huge statement for this breed that has been so misunderstood," Skaziak told CNN.
While Hansel is ready to help the Millville Fire Department save lives, the sweet pup is also busy bonding with his handler, now his best friend.
"We are just inseparable," Van Leer said. "Everyone here are at the firehouse loves him. He is just an awesome dog. I wouldn't ask for any other dog." |
In the United States Court of Federal Claims
OFFICE OF SPECIAL MASTERS
No. 17-0164V
Filed: January 19, 2018
UNPUBLISHED
BETTY JENKINS,
Petitioner,
v.
SECRETARY OF HEALTH AND HUMAN SERVICES,
Respondent.
Special Processing Unit (SPU); Attorneys’ Fees and Costs
Amy A. Senerth, Muller Brazil, LLP, Dresher, PA, for petitioner.
Traci R. Patton, U.S. Department of Justice, Washington, DC, for respondent.
DECISION ON ATTORNEYS’ FEES AND COSTS 1
Dorsey, Chief Special Master:
On February 3, 2017, petitioner filed a petition for compensation under the National Vaccine Injury Compensation Program, 42 U.S.C. §300aa-10, et seq., 2 (the “Vaccine Act”). Petitioner alleged that she suffered left shoulder injuries as a result of her September 25, 2014 influenza (“flu”) vaccination. Petition at 1. On September 25, 2017, the undersigned issued a decision awarding compensation to petitioner based on the respondent’s proffer. (ECF No. 25.)
On December 7, 2017, petitioner filed a motion for attorneys’ fees and costs. (ECF No. 32.) Petitioner requests attorneys’ fees in the amount of $12,646.50 and attorneys’ costs in the amount of $554.37. (Id. at ¶ 4.) In accordance with General Order #9, petitioner's counsel represents that petitioner incurred no out-of-pocket expenses. (Id.) Thus, the total amount requested is $13,200.87.

1 Because this unpublished decision contains a reasoned explanation for the action in this case, the undersigned intends to post it on the United States Court of Federal Claims' website, in accordance with the E-Government Act of 2002. 44 U.S.C. § 3501 note (2012) (Federal Management and Promotion of Electronic Government Services). In accordance with Vaccine Rule 18(b), petitioner has 14 days to identify and move to redact medical or other information, the disclosure of which would constitute an unwarranted invasion of privacy. If, upon review, the undersigned agrees that the identified material fits within this definition, the undersigned will redact such material from public access.

2 National Childhood Vaccine Injury Act of 1986, Pub. L. No. 99-660, 100 Stat. 3755. Hereinafter, for ease of citation, all “§” references to the Vaccine Act will be to the pertinent subparagraph of 42 U.S.C. § 300aa (2012).
Respondent has not filed a response but later indicated by email that respondent had no objection to the overall amount sought by petitioner.3

The undersigned has reviewed the billing records submitted with petitioner’s request. In the undersigned’s experience, the request appears reasonable, and the undersigned finds no cause to reduce the requested hours or rates.

The Vaccine Act permits an award of reasonable attorneys’ fees and costs. § 15(e). Based on the reasonableness of petitioner’s request, the undersigned GRANTS petitioner’s motion for attorneys’ fees and costs.

Accordingly, the undersigned awards the total of $13,200.87 4 as a lump sum in the form of a check jointly payable to petitioner and petitioner’s counsel Amy A. Senerth.
The clerk of the court shall enter judgment in accordance herewith.5
IT IS SO ORDERED.
s/Nora Beth Dorsey
Nora Beth Dorsey
Chief Special Master
3 Respondent’s response was due by January 4, 2018. See ECF No. 32. On January 10, 2018, respondent’s counsel confirmed via email to all parties that they do not object to the overall amount sought by petitioner’s counsel. Respondent’s counsel additionally noted that “respondent’s lack of objection to the amount sought in this case should not be construed as admission, concession, or waiver as to the hourly rates requested, the number of hours billed, or the other litigation related costs.”

4 This amount is intended to cover all legal expenses incurred in this matter. This award encompasses all charges by the attorney against a client, “advanced costs” as well as fees for legal services rendered. Furthermore, § 15(e)(3) prevents an attorney from charging or collecting fees (including costs) that would be in addition to the amount awarded herein. See generally Beck v. Sec’y of Health & Human Servs., 924 F.2d 1029 (Fed. Cir. 1991).

5 Pursuant to Vaccine Rule 11(a), entry of judgment can be expedited by the parties’ joint filing of notice renouncing the right to seek review.
|
Based upon the Theory of Interpersonal Behavior, this study was aimed at assessing the predictors of physicians’ intention to use telemedicine in their clinical practice. Physicians were mailed a questionnaire to identify the psychosocial determinants of their intention to adopt telemedicine. Structural equation modelling was applied to test the theoretical model. The adapted theoretical model explained 81% (p < .001) of the variance in physicians’ intention. The main predictors of intention were a composite normative factor, comprising personal as well as social norms (β = 1.08; p < .001), and self identity (β = −.33; p < .001). Thus, physicians who perceived professional and social responsibilities regarding the adoption of telemedicine in their clinical practice had a stronger intention to use this technology. However, the suppression effect of self identity in the regression equation indicates that physicians’ intention to use telemedicine was better predicted when their self-perception as telemedicine users was considered.
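Schematically, and with notation assumed here for illustration rather than taken from the paper, the reported coefficients correspond to a structural regression of the form
$$ \text{Intention} = 1.08 \cdot \text{Norms} \,-\, 0.33 \cdot \text{SelfIdentity} \,+\, \varepsilon, \qquad R^2 = .81 $$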
Over the past years, the adoption of information and communication technologies (ICT) in the health care sector has been the focus of many studies. Telemedicine, defined as the use of information technologies to exchange health information and provide health care services across geographical, time, social, and cultural barriers (Reid, 1996), has the potential to increase quality and access to health care and to lower health expenditures. This technology is considered as a major innovation at the technological, social, and cultural levels. Thus, the introduction of telemedicine as a tool to support the delivery of health care induces numerous changes for professionals, institutions, and for the health care system as a whole that must be accounted for during the implementation process (Hu, Chau & Sheng, 2000). Telemedicine is expected to impact all levels of health care organisations.
Physicians represent one of the principal groups of telemedicine users, and their acceptance of this technology constitutes one of the prerequisites to the emergence and sustainability of telemedicine networks (Hu, Chau & Sheng, 2000). However, the decision of physicians to adopt a new technology such as telemedicine can be challenged by their relatively low computer literacy, the possible alteration of their traditional routines, and their high professional autonomy. Many studies have investigated physicians’ acceptance of various telemedicine applications over the last ten years (Hu et al., 1999a). These studies were of an exploratory nature and were often limited to measures of attitudes and perceived barriers. Furthermore, most of these studies were based on small samples and did not use an explicit theoretical foundation to test their hypotheses (Hu et al., 1999).
Theoretical background
Among the studies of telemedicine adoption that were based on a theoretical framework, most employed the Theory of Planned Behaviour (TPB) (Ajzen, 1991) or the Technology Acceptance Model (TAM) (Davis, 1989). The validity of the TPB was demonstrated in a study of telemedicine adoption by physicians (Hu & Chau, 1999). This study reported that attitude was the principal determinant of physicians’ intention to use telemedicine, while perceived behavioural control also had a lesser but significant effect on intention. However, social norms were not found to significantly influence intention.
Derived from the TPB, the TAM was specifically designed to study the adoption of technology. In its original version, the model considers intention as the direct determinant of behaviour, while attitude and social norms are the predictors of intention (Davis, 1989). However, the TAM decomposes the attitude construct into two distinct factors: perceived ease of use and perceived usefulness. Many studies have empirically tested the TAM for the prediction of adoption behaviours for various technologies (Hu et al., 1999; Croteau & Vieru, 2002).
An investigation of telemedicine adoption among physicians in Hong Kong (Davis, 1989) found reasonable support for the TAM. The model explained 44% of the variance in physicians’ intention to use telemedicine. This study also demonstrated that intention was mostly determined by perceived usefulness. By contrast, perceived ease of use of the technology did not significantly influence its adoption. These authors argued that other constructs should be added to the TAM for the study of technology adoption by physicians in order to enhance its explanatory power and its applicability in the healthcare sector.
Croteau and Vieru (2002) used an adaptation of the TAM to explore the factors affecting telemedicine adoption by two groups of physicians in Canada. Perceived usefulness was the main predictor of adoption in both groups, while perceived ease of use was significantly associated to adoption in only one group. The concepts of image and perceived voluntariness of use were also added to the original TAM in this study. Image, defined as the perceived impact of technology adoption on one’s social status, was not significant, while perceived voluntariness of use was negatively correlated to adoption (contrary to their hypothesis), but only in one group.
The influence of social factors has not been significant in most of the studies of telemedicine adoption by physicians (Hu, Chau & Sheng, 1999; Hu & Chau, 1999; Croteau & Vieru, 2002). It has been recognised that the medical profession is characterised by the relative autonomy of physicians and their independence in decision-making (Tanriverdi & Venkatraman, 1999). However, a technology that could interfere with physicians’ traditional practice could affect their perception of their professional role. Furthermore, as other professionals, physicians are committed to their profession and look to their peers for acceptable standards of performance (Tanriverdi & Venkatraman, 1999). As suggested by Succi and Walter (1999), the addition of specific determinants to psychosocial models, such as the perceived impact of using the technology on professional status, should be tested in further studies of telemedicine adoption by physicians.
In response to some of these concerns, the purpose of this study is to propose and test a model of the determinants of telemedicine acceptance among physicians, using a conceptual framework specifically adapted to the particular characteristics of the medical profession and the healthcare sector.
Conceptual model and research hypotheses
Among the psychosocial theories developed to understand the adoption of behaviours, Triandis' Theory of Interpersonal Behaviour (TIB) (Triandis, 1980) encompasses many of the behavioural determinants found in other models such as the TPB and the TAM. However, the TIB has a wider scope, in the sense that it also considers cultural, social, and moral factors that are not accounted for in other theories (Facione, 1993).
According to Triandis (1980), behaviour is determined by three dimensions: intention, facilitating conditions, and habit. Intention refers to the individual's motivation regarding the performance of a given behaviour. Facilitating conditions represent objective factors that can make the realisation of a given behaviour easy; conversely, barriers are factors that can impede or constrain the realisation of the behaviour. Habit constitutes the level of routinisation of a behaviour, i.e. the frequency of its occurrence. As suggested by Triandis, habit can also exert an influence on the emotive component of attitude (affect).
In the TIB, intention is formed by attitudinal, normative, and identity beliefs. Affect represents the emotional state that the performance of a given behaviour evokes in an individual; it can be considered the affective counterpart of perceived consequences, which refer to the cognitive evaluation of the probable consequences of the behaviour. Perceived consequences encompass the perceived usefulness construct found in the TAM. The TIB incorporates two distinct normative dimensions: social and personal norms. Perceived social norms are formed by normative and role beliefs. Normative beliefs consist of the internalisation by an individual of referent people's or groups' opinions about the realisation of the behaviour, whereas role beliefs reflect the extent to which an individual thinks someone of his or her age, gender, and social position should or should not behave in a given way. The other normative component of the TIB is the personal normative belief, which represents the feeling of personal obligation regarding the performance of a given behaviour. Finally, self identity refers to the degree of congruence between individuals' perception of themselves and the characteristics they associate with the realisation of the behaviour.
To the best of our knowledge, this model has not previously been applied to the study of telemedicine adoption by physicians. However, the TIB has been used in studies of information technology adoption by other groups of workers (Bergeron et al., 1995; Thompson, Higgins & Howell, 1991). For instance, Thompson et al. (1991) tested the TIB in relation to personal computer use; their model explained 40% of the variance in the behaviour. Paré and Elam (1995) employed a subset of Triandis' model to explore the determinants of computer use among knowledge workers. They found limited support for the TIB, with less than 30% of the variance in behaviour explained; the main predictors of computer use were beliefs, affect, social norms, facilitating conditions, and habit. Finally, a study by Bergeron et al. (1995) found that knowledge workers' internalisation of an information system was predicted by their affect towards the system and the perceived consequences of using it (R2 = .52), although the TIB variables could not significantly explain information system utilisation. Although these results only moderately support the TIB, the model was adopted in the present study because the target population (physicians) differs in many respects from those of earlier studies. This choice is further supported by the observation that the role beliefs, self identity, and personal normative belief constructs found in the original TIB were excluded from all of the reported studies.
Telemedicine adoption refers to a physician's psychological state with regard to his or her intention to use telemedicine in practice (Croteau & Vieru, 2002). Telemedicine acceptance can be defined in different ways, and adoption (or utilisation) is a common indicator of the degree of acceptance. The dependent variable measured in the present study is therefore intention to use telemedicine. An individual's intention to use telemedicine is considered an appropriate proxy for his or her actual use of the technology (Hu, Chau & Sheng, 1999). Moreover, a meta-analysis of the use of psychosocial models in the study of health behaviours found a high correlation between the intention to perform a given behaviour and the actual behaviour (Godin & Kok, 1996).
Figure 6.1. Conceptual model (adapted from Triandis, 1980)
For the purpose of this study, the behavioural determinants of intention were adapted from Triandis' original model with minor modifications. Firstly, two constructs, habit and facilitating conditions, were hypothesised to be linked directly to intention in our model, whereas they are conceptualised as direct antecedents of behaviour in the original model. This was done because previous studies employing Triandis' theory had found that facilitating conditions and habit were important predictors of intention (Boots & Treloar, 2000). Secondly, a mediating effect of affect on the association between habit and intention was tested in our model, as suggested by Triandis.
The following hypotheses were tested:
Affect is a predictor of physicians’ intention to use telemedicine
Perceived consequences are predictors of physicians’ intention to use telemedicine
Perceived social norms are predictors of physicians’ intention to use telemedicine
Personal normative belief is a predictor of physicians’ intention to use telemedicine
Self identity is a predictor of physicians’ intention to use telemedicine
Facilitating conditions are predictors of physicians’ intention to use telemedicine
Habit is a predictor of physicians’ intention to use telemedicine
Affect has a mediating effect on the relation between habit and intention
Methods
Instrument development and validation
As recommended by Davidson et al. (1976), an etic-emic approach, inspired by the field of anthropology, was used to develop the research instrument according to the TIB's constructs. Firstly, a survey was conducted among a convenience sample of physicians attending a conference on telehealth. An open-ended questionnaire comprising ten questions was distributed to a total of 60 physicians. The questions dealt with: a) physicians' perceived pros and cons of telemedicine; b) barriers and facilitating conditions affecting telemedicine use; c) emotions related to telemedicine utilisation; d) individuals or groups favourable or unfavourable to one's utilisation of telemedicine; e) characteristics of telemedicine users; f) personal values related to telemedicine; and g) information and communication technologies used in practice. Forty-two completed questionnaires were returned (70%). A content analysis was performed to extract the salient modal beliefs among physicians, i.e. the beliefs that are common in this subgroup; this step constituted the emic component. The responses given by more than 25% of physicians were retained to form the items for each of the theoretical constructs, which represent the etic component. The content analysis was performed independently by two researchers, who had to reach agreement on the classification and labelling of the themes extracted. Thus, the number of items composing each construct varied according to the number of frequent responses given by physicians. The questions were formulated following the consensus among social psychology theorists for the development of questionnaires (Ajzen, 1991).
Secondly, a test-retest was performed to assess the reliability of the questionnaire with a sample representative of the studied population. A total of 20 physicians completed the same version of the questionnaire at a two-week interval. Results indicated good construct reliability, with Cronbach alphas varying from .71 to .90 for the theoretical variables, which is considered satisfactory for an exploratory study. Temporal stability was assessed by calculating intra-class correlation coefficients for each theoretical construct; results varied from .46 to .98, representing moderate to very good coefficients of agreement. Minor modifications were made to the final version of the questionnaire, following respondents' comments.
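For reference, the internal-consistency coefficient reported throughout this paper is the standard Cronbach's alpha which, for a construct of $k$ items with item variances $\sigma_i^2$ and total-score variance $\sigma_T^2$, is

$$ \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_T^2} \right). $$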
Variables measured
In this research, the targeted behaviour was the intention of physicians to use telemedicine in their practice. The following definitions were printed on the questionnaires:
Telemedicine refers to any medical service provided at a distance via electronic communication.
In your practice refers to any act of consultation, diagnosis, treatment, or follow-up provided to a patient on site or at a distance.
Intention to use telemedicine (α = .84) was measured by means of three items: “I estimate that my chances of using telemedicine in my practice are...” (7-point scale: very high, 7; very low, 1); “If I have the opportunity, I will use telemedicine in my practice” (7-point scale: strongly agree, 7; strongly disagree, 1); and “I intend to use telemedicine in my practice” (7-point scale: strongly agree, 7; strongly disagree, 1).
The affective dimension of attitude (affect) was measured using a 7-point semantic differential scale made up of two pairs of adjectives, following the sentence: “For me, using telemedicine in my practice would be...”. The bipolar adjectives proposed were stressful-relaxing and satisfying-dissatisfying. The Spearman correlation coefficient for this construct was .49 (p < .001).
For the cognitive component of attitude, the perceived consequences (PC), only one arm of the belief-based measure was obtained, namely belief strength (b). As suggested by Gagné and Godin (2000), this method yields high coefficients of correlation with the direct determinant. It is also consistent with other studies based on the TIB that used a direct measure of the perceived consequences associated with the realisation of the behaviour (Boots & Treloar, 2000). Thus, seven items were used to assess the perceived consequences of using telemedicine (α = .72). Five items were worded as follows: “Using telemedicine in my practice would...” 1) facilitate access to expertise; 2) necessitate more time; 3) allow me to update my knowledge; 4) reduce patient transfers; and 5) help my decision-making. The other two items were: 6) “The definition of my professional roles and responsibilities would not be clear if I were using telemedicine in my practice”; and 7) “My relationships with patients would be less human if I were using telemedicine in my practice”. Each item was measured on a 7-point scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Reverse scores were computed for the negatively worded items (items 2, 6, and 7).
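In the standard expectancy-value formulation, the belief-based attitude measure combines belief strength $b_i$ with outcome evaluation $e_i$; here only the first arm was measured, so that, schematically,

$$ A \propto \sum_i b_i e_i \quad \text{(full measure)}, \qquad PC \propto \sum_i b_i \quad \text{(measure used here, } b \text{ only)}. $$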
The normative beliefs (NB) (α = .76) were assessed by asking the respondents to indicate their level of agreement, on 7-point scales, with the following four statements: 1) “If I were using telemedicine in my practice, my patients would...” (strongly approve, 7; strongly disapprove, 1); 2) “My colleagues would recommend that I use telemedicine in my practice” (strongly agree, 7; strongly disagree, 1); 3) “The consulting specialists would recommend that I use telemedicine in my practice” (strongly agree, 7; strongly disagree, 1); and 4) “The hospital managers would encourage me to use telemedicine in my practice” (strongly agree, 7; strongly disagree, 1).
The measure of role beliefs (RB) (α = .85) was obtained using three items, worded as follows: “I consider that using telemedicine is correct for a physician of...” 1) my speciality; 2) my region; and 3) my age. All three items were measured on a 7-point scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Consistent with Triandis' theory, the items measuring role beliefs and normative beliefs were aggregated into a single construct, perceived social norms (SN) (α = .85).
Personal normative belief (PNB) (α = .75) was measured by means of three items. Respondents were asked to evaluate, on 7-point scales, the following statements: 1) “I would feel guilty if I was not using telemedicine in my practice” (strongly agree, 7; strongly disagree, 1); 2) “Using telemedicine would be in my principles” (strongly agree, 7; strongly disagree, 1); and 3) “It would be unacceptable to not use telemedicine in my practice” (strongly agree, 7; strongly disagree, 1).
The measure of self identity (SI) (α = .66) was obtained by calculating the difference between physicians' beliefs regarding the characteristics of telemedicine users and their self-evaluation on these same characteristics. Firstly, the three characteristics assessed were: 1) “A physician who uses telemedicine shows an innovative mind”; 2) “Using telemedicine is a proof of a physician's competence”; and 3) “A physician who uses telemedicine is concerned by the quality of patient care”. Respondents' level of agreement with each item was assessed on 7-point scales (strongly agree, 7; strongly disagree, 1). Secondly, respondents rated themselves, on the same 7-point scales, on the statements: 1) “I consider myself as someone with an innovative mind”; 2) “I consider myself as competent”; and 3) “I am concerned by the quality of patient care”. Finally, the absolute value of the difference between the two scores was calculated to form the self identity construct. The possible scores thus vary from 0, indicating high agreement between the characteristics of telemedicine users and physicians' self-evaluation, to 6, indicating poor agreement.
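In symbols, writing $u_i$ for the agreement that telemedicine users have characteristic $i$ and $s_i$ for the physician's self-rating on that same characteristic (both on 1-7 scales), each self identity item is

$$ SI_i = \lvert u_i - s_i \rvert \in \{0, 1, \dots, 6\}, $$

so that lower scores indicate closer congruence between the respondent's self-image and his or her image of telemedicine users.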
Facilitating conditions (FC) (α = .77) were assessed by asking the respondents to indicate to what extent the following elements could impede telemedicine utilisation in their practice: time, technology quality, clinician resistance, consultant availability, lack of qualified personnel, technology availability, remuneration, costs, and clinical complexity. All items were rated on a 7-point scale, from extremely likely (1) to extremely unlikely (7). For the purpose of structural equation modelling, the facilitating conditions construct was decomposed into two variables. This was done firstly because a principal component analysis indicated that the construct was formed by two distinct factors, and secondly because Dwyer et al. (1998) have established a distinction between control factors that depend on the individual's resources and skills (internal barriers) and those that are external to the individual. Thus, two constructs were created by dividing the nine facilitating conditions items into two categories: external factors (α = .79) and internal factors (α = .67).
Finally, habit (H) was measured by asking the respondents whether they had used telemedicine in the past, as well as their frequency of use. The respondents' scores were grouped into the following categories: 0 (never); 1 (once); 2 (two to four times); and 3 (five times or more). Since habit was assessed by a single item, its error variance was estimated by multiplying the construct's measurement error found in the test-retest (.22) by its variance (1.28), as suggested by Kline (1998). Thus, the error variance parameter for habit was fixed to .28.
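This is the usual single-indicator correction; assuming the reported .22 is the test-retest unreliability $(1 - r_{tt})$, the fixed parameter follows as

$$ \theta_\varepsilon = (1 - r_{tt})\, s_H^2 = .22 \times 1.28 \approx .28. $$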
Studied population and sample
The survey questionnaire was distributed to all general practitioners (GPs) and specialists of the 32 hospitals involved as requesters of telemedicine services in the RQTE (the extended provincial telemedicine network of Quebec). This network was created in 1998 to provide specialised consultations in paediatric cardiology to hospitals across the Province of Quebec. Given the expected diffusion of telemedicine technology to other medical specialities within this provincial network, the study sample included all active physicians (physicians in administrative positions or in public health were excluded). Data from the “Régie de l'assurance maladie du Québec” (RAMQ), the official government agency responsible for the administration of healthcare services in Quebec, were obtained to estimate the number of full-time equivalent physicians practising in the 32 hospitals of the RQTE. Furthermore, the total number of physicians (part- and full-time) was validated with the hospitals. Data compiled from each of the 32 hospitals indicated that the total number of active physicians was 3,832. This number, however, overestimates the actual number of physicians targeted in the present study, because several physicians have more than one hospital affiliation.
Out of the 3,832 mailed questionnaires, 538 were returned. Among these, seven were returned uncompleted, six came from physicians in community health or in administrative positions, three physicians refused to complete the questionnaire, two were returned by a physician who had received three copies of the questionnaire, and one came from a dentist. Thus, 519 questionnaires were satisfactorily completed, a crude response rate of about 14% (an underestimate of the true rate, given that the denominator of 3,832 counts some physicians more than once). The variation in response rate between hospitals was considerable, with proportions of respondents varying from 7% to 50%; hospitals in remote areas had the highest response rates, while urban hospitals had the lowest participation.
As the study questionnaires were entirely anonymous, it was not possible to identify the physicians who did not return their questionnaire. However, the possibility of non-response bias was assessed by comparing the respondents with the population of Quebec physicians. As presented below, physicians in the sample had characteristics similar to those of the general population of physicians in Quebec on most of the control variables measured (age, gender, and speciality). The only exception pertains to physicians' region of practice: outlying and remote regions were over-represented in the sample.
Data collection procedure
Contact with the CMDP (Council of Physicians, Dentists and Pharmacists) or the DSP (Professional Services Direction) had been made beforehand to identify a local contact person in every hospital. This contact person was responsible for promoting the study in the hospital, distributing the questionnaires, and following up.
The contact person in each of the 32 hospitals was mailed a number of packets corresponding to the total number of physicians practising in that hospital. The packets contained a letter explaining the purpose of the study, a consent form, a questionnaire, and a pre-stamped envelope. The pre-stamped envelope was to be mailed directly to the researchers with the completed questionnaire and the signed consent form. The contact persons were responsible for distributing the packets to the physicians by internal mail. Two weeks later, the contact person in each hospital distributed reminders to all physicians. Another reminder was sent by the same procedure three weeks after the first; this last letter indicated that another copy of the questionnaire could be obtained, if needed, from the local contact person.
A unique identification number, encoding the hospital and the respondent, was printed on each questionnaire. The questionnaires nonetheless remained completely anonymous, since physicians' names were never linked to their identification numbers; none of the material sent to the physicians was personalised. This study was approved by the ethics committee of the local university.
Statistical analyses
A structural equation modelling (SEM) approach was applied using EQS version 5.7. Analyses were conducted in two major stages. The first step consisted of a confirmatory factor analysis (CFA) to assess the measurement model, testing the correspondence between the observed variables and the hypothesised latent constructs. In the second step, the adequacy of the TIB in explaining physicians' intention to use telemedicine was tested. An analysis was also performed to test whether affect had a mediating effect on the relationship between habit and intention; this mediation effect was then incorporated into the global model.
The maximum likelihood method was used to estimate the parameters of the model. As recommended by Byrne (1994), the following statistics were considered to assess the fit of the model: the chi-square value (χ2), the Satorra-Bentler scaled statistic (S-Bχ2), the corrected Comparative Fit Index (*CFI), the corrected Non-normed Fit Index (NNFI*), and the corrected Root Mean Square Error of Approximation (RMSEA*). The chi-square statistic is sensitive to sample size and is not recommended for data with non-normal distributions; the Satorra-Bentler scaled statistic corrects the χ2 when the assumption of normality of the data distribution is violated (Byrne, 1994). The corrected values of the *CFI, NNFI*, and RMSEA* indexes were computed from the S-Bχ2. Following the recommendation of Hu & Bentler (1995), cut-off values close to .95 for the NNFI* and *CFI and close to .06 for the RMSEA* were used to assess the goodness of fit of the model to the data. A raw data file was submitted to EQS in order to obtain the Satorra-Bentler scaled statistic; the program then created the covariance matrix used for the analyses.
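For reference, the (uncorrected) RMSEA is computed from the chi-square statistic, its degrees of freedom, and the sample size; the corrected version reported here substitutes the Satorra-Bentler scaled statistic for χ2:

$$ \text{RMSEA} = \sqrt{ \max\left( \frac{\chi^2 - df}{df\,(N-1)},\ 0 \right) }. $$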
Missing values
To ensure the construct validity of the measures, data were retained only for subjects who had answered at least 70% of the items of each construct. Since the EQS program requires complete data, mean imputation was applied for respondents with 30% or fewer missing item scores for a given construct; in such cases, the missing values were replaced by the total-sample mean for that item. The final sample size for structural equation modelling was 506.
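As an illustration only (toy data and hypothetical item names; the study's data preparation was done for EQS, not in Python), the 70% completeness screen and the per-item mean imputation could be sketched as follows:

```python
# Sketch of the missing-data rule described above (toy data, hypothetical names).
import numpy as np
import pandas as pd

items = [f"pc{i}" for i in range(1, 8)]  # e.g. the seven perceived-consequences items

df = pd.DataFrame(
    [
        [7, 6, 5, 7, 6, 6, 5],                 # complete case
        [6, np.nan, 5, 6, np.nan, 6, 5],       # 5/7 answered -> kept and imputed
        [np.nan, np.nan, np.nan, 4, 5, 4, 5],  # 4/7 answered -> dropped
    ],
    columns=items,
)

# Keep respondents who answered at least 70% of the construct's items...
kept = df[df[items].notna().mean(axis=1) >= 0.70].copy()

# ...and replace their remaining missing scores with the total-sample item means.
kept[items] = kept[items].fillna(df[items].mean())
print(kept)
```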
Results
Descriptive statistics of the sample are presented in Table 6.1. The mean age of physicians in the sample was 43.9 years. More males than females participated in the study, and specialists accounted for 57% of the sample. These proportions are similar to the “Collège des médecins” (College of Physicians of Quebec) data on the profile of Quebec physicians. However, the practice location of physicians in the sample differed from the provincial average: a majority of respondents were practising in hospitals located in suburbs or small towns. These proportions are consistent with the characteristics of the telemedicine network under study, which mainly includes local or regional hospitals from the different health regions of the Province of Quebec.
Table 6.1. Demographic characteristics of respondents (n = 506)

| Characteristic | Frequency |
| -------------- | --------- |
| Gender*: Male | 311 (62.2%) |
| Gender*: Female | 189 (37.8%) |
| Type: GP | 220 (43.5%) |
| Type: Specialist | 286 (56.5%) |
| Region: Urban | 63 (12.5%) |
| Region: Suburban | 281 (55.5%) |
| Region: Remote | 162 (32.2%) |
| Mean age (sd) | 43.9 (±9.9) |
| Mean years of practice (sd) | 16.2 (±10.6) |

GP: general practitioner. * n = 500
Measurement model
The first step of the data analysis was to assess the adequacy of the measurement model through a confirmatory factor analysis (CFA). The model tested included the original theoretical constructs (intention, affect, perceived consequences, self identity, habit) and a composite normative construct (personal normative belief + perceived social norms). This normative construct was created to deal with the multicollinearity between the perceived social norms and personal normative belief constructs; according to Kline (1998), multicollinearity is present when the correlation between two independent variables is greater than .85. After this adjustment, the coefficients of correlation between the independent variables of the model were all satisfactory. A Cronbach alpha of .86 indicated adequate internal consistency for the composite normative construct. However, the facilitating conditions construct had to be excluded because of its poor fit in the measurement model; the hypothesis concerning facilitating conditions was therefore rejected. The CFA performed on this modified model indicated a relatively good fit to the data, with satisfactory values of the corrected fit indexes: *CFI = .93; NNFI* = .91; and RMSEA* = .06.
Structural model
During the second step of the data analysis, various structural models were compared. Firstly, the mediating effect of affect on the relationship between habit and intention was assessed following Baron and Kenny's (1986) procedure. As hypothesised, affect had a mediating effect on the relationship between habit and intention. However, this effect was only partial, since a significant association between habit and intention remained after taking affect into account. Consequently, both the direct and indirect effects of habit on intention were tested in the structural model. Secondly, a complete model, including the partial mediation effect of affect, was tested in order to assess the validity of the TIB for predicting physicians' intention to use telemedicine. This model is shown in Figure 6.2.
Figure 6.2. Complete TIB structural model
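As an aside, Baron and Kenny's procedure amounts to three regressions; a minimal sketch on synthetic data is given below (OLS on toy variables, purely illustrative; the study itself estimated these paths within the EQS structural model):

```python
# Minimal sketch of Baron and Kenny's (1986) mediation steps on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
habit = rng.normal(size=n)                                    # predictor (X)
affect = 0.5 * habit + rng.normal(size=n)                     # mediator (M)
intention = 0.3 * habit + 0.4 * affect + rng.normal(size=n)   # outcome (Y)

# Step 1: X must predict Y.
step1 = sm.OLS(intention, sm.add_constant(habit)).fit()
# Step 2: X must predict M.
step2 = sm.OLS(affect, sm.add_constant(habit)).fit()
# Step 3: M must predict Y controlling for X. Partial mediation is the pattern
# reported above: the coefficient on X shrinks but remains significant.
step3 = sm.OLS(intention, sm.add_constant(np.column_stack([habit, affect]))).fit()

print(step1.params[1], step2.params[1], step3.params[1:])
```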
The scaled χ2 value was significant (S-Bχ2 = 325.23; df = 120; N = 506), but lower than the null-model χ2 (3858.54; df = 153; N = 506). The fit indexes indicated a relatively good fit for this model, with values of .93 for the *CFI and the NNFI* and of .06 for the RMSEA*. Overall, the TIB proved to be an acceptable model to explain intention to use telemedicine. However, some of the structural coefficients were not significant in this model (Figure 6.2): although the direct effect of habit on affect was significant, neither affect nor habit was a significant predictor of intention. Since three of the theoretical constructs (affect, habit, and perceived consequences) were not significant predictors of intention, another model was tested that retained only the significant predictors. In structural equation modelling, it is suggested to re-estimate a model after removing the nonsignificant parameters of the original model by fixing them to zero (Hays, 1989). The final model, shown in Figure 6.3, therefore kept only the significant predictors of intention. This approach did not change the initial findings of the study, but the values of the fit indexes improved for the parsimonious model: the scaled χ2 was nonsignificant (S-Bχ2 = 44.25; df = 17; N = 506), the *CFI was .98, and the NNFI* and RMSEA* were also satisfactory, with respective values of .97 and .05.
Figure 6.3. Final structural model
The standardised structural coefficients for the final model, as well as the variance in intention to use telemedicine explained by this model, are presented in Table 6.2. The strongest predictor of intention is the normative factors construct (β = 1.08), which encompasses social as well as personal norms. Self identity is also a significant predictor of intention (β = −.33), but in the opposite direction to what was hypothesised. Surprisingly, self identity has a negative weight in the prediction of intention. Given the positive correlation between the two predictors (r = .64), a net suppression effect (Cohen & Cohen, 1983) of the self identity construct was suspected. This indicates that a part of the variance in the normative factors that is not relevant to the prediction of intention was removed by including the self identity construct in the equation. Together, these two constructs explain 81% of the variance in physicians' intention to use telemedicine.
Table 6.2. Standardised structural model coefficients (final model)

| Parameter | Value (corrected standard error) |
| --------- | -------------------------------- |
| Factor correlations | |
| NF – Int | .88* |
| SI – Int | .37* |
| NF – SI | .64* (.03) |
| Path coefficients | |
| NF – Int | 1.08* (.18) |
| SI – Int | −.33* (.24) |
| Explained variance | |
| Int | 81% |

NF: normative factors; SI: self identity; Int: intention. * p < .001
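As a check on the suppression interpretation, the reported path coefficients can be reproduced from the factor correlations in Table 6.2 with the standard two-predictor formula for standardised regression weights:

$$ \beta_{NF} = \frac{r_{NF,Int} - r_{SI,Int}\, r_{NF,SI}}{1 - r_{NF,SI}^2} = \frac{.88 - (.37)(.64)}{1 - .64^2} \approx 1.09, \qquad \beta_{SI} = \frac{.37 - (.88)(.64)}{1 - .64^2} \approx -.33. $$

The implied manifest-variable $R^2 = \beta_{NF}\, r_{NF,Int} + \beta_{SI}\, r_{SI,Int} \approx .84$ is close to the reported 81%; the small gap presumably reflects the latent-variable estimation.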
Discussion
In this study, the TIB was adopted as a basis for examining the predictors of physicians' intention to use telemedicine in their practice. Normative factors, comprising social as well as personal norms, were the best predictors of intention. In addition, self identity was found to have a suppression effect in the prediction of physicians' intention to use telemedicine. These findings have several implications, at the theoretical as well as the practical level. Each of these aspects is discussed below, followed by a discussion of the study's limitations.
Theoretical implications
Overall, the results suggest that the TIB is an appropriate model for predicting physicians' intention to use telemedicine in their practice, considering the high proportion of variance explained by the structural model. Relative to other studies that have explored telemedicine acceptance among physicians with structural equation modelling (Hu et al., 1999; Hu & Chau, 1999; Croteau & Vieru, 2002), the 81% of variance explained by the structural model is noteworthy. A confirmatory factor analysis performed to test the measurement model indicated strong relationships between the latent constructs and their corresponding measurement items. However, multicollinearity was present between some of the theoretical constructs proposed in the original TIB model; the social and personal normative factors were therefore aggregated to form a single normative indicator. This is consistent with the suggestion of some authors that a general normative construct, comprising social and personal dimensions, influences individuals' intention to perform a given behaviour (Fishbein, 1967).
Furthermore, several of the constructs proposed in Triandis' original model did not significantly predict intention in our study. In previous studies based on the TIB (Bergeron et al., 1995; Thompson, Higgins & Howell, 1991), perceived consequences and affect were strong predictors of technology acceptance. In contrast, the present study showed that these attitudinal components did not significantly influence telemedicine acceptance by physicians. The fact that telemedicine is a different technology from those analysed in other studies (i.e. personal computers or the Internet) could explain this finding. Similarly, the target populations in those studies were knowledge workers or students and thus differ from the population studied here. Indeed, the decision to use telemedicine implies not only a personal evaluation of its benefits by physicians, but also depends highly upon the context in which the clinical act is performed, where physicians must comply with the expectations of hospital managers, colleagues, and patients. Moreover, the feeling of professional responsibility is central to physicians' decision-making (Tanriverdi & Venkatraman, 1999) and therefore influences their acceptance of telemedicine technology.
In their study of telemedicine acceptance, Hu and collaborators (1999) found that physicians' perceived control over telemedicine utilisation, as measured by proper training, technology access, and in-house technology expertise, was positively associated with intention (Hu & Chau, 1999). Unfortunately, the impact of facilitating conditions (FC) on intention could not be tested in the present study, since this construct was removed from the structural model. The CFA indicated a poor fit of the FC items with the measurement model; the effect of facilitating conditions and barriers may thus not have been adequately captured by the items in the questionnaire. A plausible explanation is that the FC items were derived from a survey of physicians attending a telemedicine conference, who were more familiar with this technology; those items may therefore not have had the same meaning for all physicians in the sample. The limited penetration of telemedicine technology in most of the surveyed hospitals could make it difficult for physicians to anticipate potential barriers to, or facilitating conditions for, its utilisation. Consequently, special attention should be given to selecting facilitating conditions items in studies of telemedicine adoption, in order to take the degree of exposure to the technology into account. Furthermore, in Triandis' original model, facilitating conditions are hypothesised as direct determinants of behaviour, not as predictors of intention. Future studies should therefore investigate the impact of facilitating conditions on telemedicine utilisation by physicians in the context of a larger diffusion of this technology.
Habit, measured by the frequency of telemedicine utilisation in the past, did not appear to be a strong predictor of future utilisation. This is consistent with Bergeron et al. (1995), who found that neither frequency of use nor internalisation of information systems was predicted by past experience. As Thompson et al. (1991) stated, measuring habit as the frequency of a behaviour's occurrence may not be appropriate, since the frequency of technology utilisation is essentially identical to the measure of utilisation itself (the behaviour). In the present study, habit was assessed by a single item, which may have been insufficient to capture the effect of physicians' past experiences with information and communication technologies. Paré and Elam (1995) employed a multidimensional measure of habit and found a significant relationship between this variable and personal computer utilisation.
Contrary to our findings, previous studies of the determinants of telemedicine acceptance by physicians found limited support for the impact of social factors on intention to use this technology (Hu et al., 1999; Hu & Chau, 1999; Croteau & Vieru, 2002). As suggested by Succi and Walter (1999), the measure of social norms in psychosocial models may not be appropriate for assessing the normative dimension of telemedicine acceptance by physicians. Integrating physicians' perceived impacts of telemedicine utilisation on their professional status, and their beliefs regarding a moral responsibility to use this technology, could thus improve the measure of the normative construct.
In other respects, the unexpected relationship between the self identity construct and intention to use telemedicine deserves attention. As some researchers have suggested (Courville & Thompson, 2001), it is important to interpret the correlation coefficient between a given predictor and the criterion variable in conjunction with the standardised beta weights. In the measurement model, there was a positive correlation of .37 between self identity and intention, while the beta weight was −.33. As the self identity and normative factors constructs were positively correlated (r = .64), a net suppression effect (Cohen & Cohen, 1983) was detected. As Maassen & Bakker (2001) indicate, it is important to acknowledge the occurrence of a suppression situation in structural models and to take it into account when interpreting the results. A suppressor variable increases the predictive validity of another variable through its inclusion in the regression equation (Maassen & Bakker, 2001). In fact, including the self identity score with a negative weight suppressed irrelevant variance in the latent normative construct, thus enhancing the prediction of intention by that construct. The beta weight of 1.08 for the normative factors is explained by the presence of this suppressor variable (Deegan, 1978).
In this study, self identity refers to the degree of correspondence between physicians' perception of the characteristics of telemedicine users and their self-evaluation on these same characteristics. When included in the regression equation, this construct clears out the variance reflecting the self-identity concept from the variables measuring professional as well as social normative beliefs. In social psychology, a distinction is made between the private, public, and collective self (Triandis, 1989; Ybarra & Trafimow, 1998). The private self represents the assessment of the self by the self (e.g. “I am competent”); the public self is an assessment of the self by the generalised other (e.g. “People think I am competent”); and the collective self corresponds to the assessment of the self by a specific reference group (e.g. “My co-workers think I am competent”) (Triandis, 1989, p. 509). These three facets may influence individual behaviours in different ways depending on the context. As noted by Triandis (1989), the cultural context influences which normative cognitions are “sampled” by individuals in the formation of salient beliefs. Thus, intention to use telemedicine is principally influenced by public and collective normative factors, and removing the effect of physicians' self-perception as telemedicine users (the private self) could increase the predictive validity of the normative construct.
Implications for telemedicine diffusion
The normative factors involved in physicians' intention to use telemedicine are both personal and social, but are primarily professional in nature. The “significant others” who could influence physicians' decision to use telemedicine are colleagues, consulting specialists, hospital managers, and patients. Similarly, the way physicians perceive their social role as professionals influences their acceptance of telemedicine: for instance, those who believe that using telemedicine is normal for physicians of their region would be more likely to use this technology. Thus, in their decision to use telemedicine, physicians seem to be mostly influenced by cognitions from the “collective self”, i.e. their perception of what the social groups to which they belong expect from them. Hence, to promote the diffusion of telemedicine, campaigns should include messages from peers, patients, and telemedicine specialists, and insist on the relevance of telemedicine for physicians of different regions and specialities.
The feeling of professional responsibility also exerts a strong influence on physicians' intention to use telemedicine in their practice. The promotion of telemedicine in the healthcare system should therefore target the benefits for patients with respect to equity of access to specialised medical services, and to quality and continuity of care. Physicians would consequently be more likely to perceive the use of telemedicine as a professional obligation towards the well-being of their patients.
However, since self identity (or the private self) plays a suppression role in the relationship between normative factors and intention to use telemedicine, its influence must be taken into account. Practically, physicians who do not perceive themselves as telemedicine users are more likely to be responsive to messages addressing normative beliefs about telemedicine use. For physicians who consider that they have the attributes of telemedicine users, i.e. those who sample cognitions from the private self in their decision-making (Triandis, 1989), messages focusing on collective norms would be less effective. Messages promoting the use of telemedicine in medical practice should therefore be selected with care and tailored to the characteristics of the physicians targeted.
Limits of the study
This study has some limitations. First, in spite of a strategy involving local contact persons in each hospital, the response rate was low and varied considerably between hospitals. Low participation has frequently been reported in previous studies of telemedicine acceptance by physicians (Hu et al., 1999; Hu & Chau, 1999). Modest financial incentives have proved effective in increasing physicians' participation in mail surveys (Donaldson et al., 1999). Other strategies could also be explored to ensure better response rates in subsequent studies, such as involving department chiefs of service (Hu & Chau, 1999) or promoting the study during CMDP meetings.
A second limitation pertains to the generalisability of the results. The population under study comprised all physicians practising in hospitals of the RQTE; the sample is composed of volunteers from this population and is therefore subject to self-selection bias. Globally, the characteristics of physicians in the sample correspond to those of the whole population of Quebec physicians, with the exception of the over-representation of physicians from remote and outlying regions. Since telemedicine is primarily aimed at improving access to specialised healthcare services in remote regions, physicians from these regions are generally more aware of the different applications of this technology and could hold a more favourable opinion of it. Although comparisons of physicians' responses across regions did not indicate significant differences in intention to use telemedicine, more research is needed to explore the contextual factors that affect technology acceptance by healthcare providers.
Thirdly, although the TIB predicted a large proportion of the variance in physicians' intention to use telemedicine, some of the theoretical hypotheses were rejected. A global normative construct had to be created because of the multicollinearity between the social and personal normative components, and facilitating conditions were deleted from the structural model since the corresponding measurement model was unsatisfactory. These theoretical limitations call for the use of structural equation modelling in prospective empirical studies based on the TIB, in order to further validate the model.
Despite these limitations, the present study contributes considerably to the understanding of telemedicine acceptance by physicians. To the best of our knowledge, it was the first to employ Triandis' Theory of Interpersonal Behaviour to investigate the determinants of physicians' intention to use telemedicine. This model has the advantage of considering cultural variations in the formation of behaviours (Facione, 1993). Physicians represent a particular group of professionals, and the items measuring the theoretical constructs must be adapted to their reality; this was done by applying a qualitative emic-etic approach (Davidson et al., 1976) to the development of the research instrument. Moreover, the structural equation modelling approach made it possible to assess the validity of the TIB for predicting telemedicine acceptance by physicians, and the measurement model was satisfactorily tested by a confirmatory factor analysis. This research thus responds to calls for additional theory-testing efforts that extend the results of prior studies by proposing a conceptual framework adapted to the particular characteristics of the medical profession. Finally, the present study suggests avenues for promoting telemedicine acceptance among physicians and thus for supporting the diffusion of this technology in the healthcare system.
Conclusion
The rapid advancement of information and communication technologies in recent years has spurred the development of various telemedicine experiments in Canada. However, the diffusion of this technology across the whole healthcare system remains a major challenge. As a professional group, physicians have an important influence on the integration of telemedicine applications in different clinical settings. In the past, models such as the TPB and the TAM have been applied to the study of telemedicine acceptance by physicians with limited support. The TIB appears to be a more comprehensive model, since it integrates many of the psychosocial dimensions involved in the formation of individuals' behavioural intentions.
From a practical standpoint, this study has indicated some avenues for the diffusion of telemedicine in the healthcare system: communicating positive opinions about telemedicine from groups such as colleagues and patients, demonstrating the relevance of the technology in a variety of clinical contexts, and highlighting the benefits of telemedicine for improving patient care could all serve as strategies to promote physicians' acceptance of telemedicine.
From a theoretical standpoint, the findings of this study call for the development of alternative measures of the normative factors that influence physicians' decision to use telemedicine. Theory refinement is still needed, and the integration of constructs from different models represents a promising approach. Qualitative research could also be conducted to explore more extensively the formation of physicians' cognitions with respect to telemedicine acceptance. Finally, further studies should compare the determinants of physicians' acceptance of telemedicine for different clinical or educational purposes, and investigate potential variations across cultural settings, in order to gain a broader understanding of the conditions under which this technology could be implemented on a large scale.
Acknowledgements
The study on which this paper is based was substantially supported by a grant from the Canadian Institutes of Health Research (Project No. 49452). The realisation of this research was also made possible by the support of a doctoral scholarship from the FCAR/FRSQ to Marie-Pierre Gagnon.
**Building Django 2.0 Web Applications**
Create enterprise-grade, scalable Python web applications easily with Django 2.0
Tom Aratyn
**BIRMINGHAM - MUMBAI**
# Building Django 2.0 Web Applications
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
**Commissioning Editor:** Amarabha Banerjee
**Acquisition Editor:** Noyonika Das
**Content Development Editor:** Gauri Pradhan
**Technical Editor:** Rutuja Vaze
**Copy Editor:** Dhanya Baburaj
**Project Coordinator:** Sheejal Shah
**Proofreader:** Safis Editing
**Indexer:** Aishwarya Gangawane
**Graphics:** Jason Monteiro
**Production Coordinator:** Shraddha Falebhai
First published: April 2018
Production reference: 1250418
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78728-621-4
www.packtpub.com
mapt.io
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
# Why subscribe?
* Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
* Improve your learning with Skill Plans built especially for you
* Get a free eBook or video every month
* Mapt is fully searchable
* Copy and paste, print, and bookmark content
# PacktPub.com
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at `service@packtpub.com` for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
# Contributors
# About the author
**Tom Aratyn** is a software developer and the founder of the Boulevard Platform. He has a decade of experience developing web apps for companies of all sizes (from boutiques to large start-ups, such as Snapchat). He loves solving problems using his server-side and client-side development skills and helping other developers grow.
I want to thank the many people who made this book possible. Thanks mom! Thanks to my friends who helped keep me grounded through it all. Particular thanks to my editor, Gauri Pradhan, who helped me so much over the many months. Thank you to the reviewers Andrei Kulakov and Dan Noble for helping improve the book. My thanks also to the many other folks on the Packt team, including Dhanya Baburaj, Rutuja Vaze, Noyonika Das, and everyone else!
# About the reviewers
**Andrei Kulakov** lives in New York and has worked in the software industry for 10 years, including many projects in genetic research, linguistics, hardware systems, healthcare, and machine learning. In his spare time, Andrei can often be found practicing hand-balancing in one of the city parks.
**Dan Noble** is an accomplished full-stack web developer, data engineer, and author with more than 10 years of development experience. He enjoys working with a variety of programming languages and software frameworks, particularly Python, Elasticsearch, and JavaScript.
Dan currently works on geospatial web applications and data processing systems. He has been a user and an advocate of Django and Elasticsearch since 2009. He is the author of the book _Monitoring Elasticsearch_ and a technical reviewer for _Elasticsearch Cookbook, Second Edition_, by Alberto Paro, published by Packt Publishing.
# Packt is searching for authors like you
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
# Table of Contents
1. Title Page
2. Copyright and Credits
1. Building Django 2.0 Web Applications
3. www.packtpub.com
1. Why subscribe?
2. PacktPub.com
4. Contributors
1. About the author
2. About the reviewers
3. Packt is searching for authors like you
5. Preface
1. Who this book is for
2. What this book covers
3. To get the most out of this book
1. Download the example code files
2. Conventions used
4. Get in touch
1. Reviews
6. Starting MyMDB
1. Starting My Movie Database (MyMDB)
1. Starting the project
1. Installing Django
2. Creating the project
3. Configuring database settings
2. The core app
1. Making the core app
2. Installing our app
3. Adding our first model – Movie
4. Migrating the database
5. Creating our first movie
6. Creating movie admin
7. Creating MovieList view
8. Adding our first template – movie_list.html
9. Routing requests to our view with URLConf
10. Running the development server
3. Individual movie pages
1. Creating the MovieDetail view
2. Creating the movie_detail.html template
3. Adding MovieDetail to core.urls.py
4. A quick review of the section
4. Pagination and linking movie list to movie details
1. Updating MovieList.html to extend base.html
2. Setting the order
3. Adding pagination
4. 404 – for when things go missing
5. Testing our view and template
5. Adding Person and model relationships
1. Adding a model with relationships
2. Different types of relationship fields
1. Director – ForeignKey
2. Writers – ManyToManyField
3. Role – ManyToManyField with a through class
3. Adding the migration
4. Creating a PersonView and updating MovieList
1. Creating a custom manager – PersonManager
2. Creating a PersonDetail view and template
3. Creating MovieManager
5. A quick review of the section
6. Summary
7. Adding Users to MyMDB
1. Creating the user app
1. Creating a new Django app
2. Creating a user registration view
3. Creating the RegisterView template
4. Adding a path to RegisterView
5. Logging in and out
1. Updating user URLConf
2. Creating a LoginView template
3. A successful login redirect
4. Creating a LogoutView template
6. A quick review of the section
2. Letting users vote on movies
1. Creating the Vote model
2. Creating VoteForm
3. Creating voting views
1. Adding VoteForm to MovieDetail
2. Creating the CreateVote view
3. Creating the UpdateVote view
4. Adding views to core/urls.py
5. A quick review of the section
3. Calculating Movie score
1. Using MovieManager to calculate Movie score
2. Updating MovieDetail and template
4. Summary
8. Posters, Headshots, and Security
1. Uploading files to our app
1. Configuring file upload settings
2. Creating the MovieImage model
3. Creating and using the MovieImageForm
4. Updating movie_detail.html to show and upload images
5. Writing the MovieImageUpload view
6. Routing requests to views and files
2. OWASP Top 10
1. A1 Injection
2. A2 Broken Authentication and Session Management
3. A3 Cross-Site Scripting
4. A4 Insecure Direct Object References
5. A5 Security Misconfiguration
6. A6 Sensitive Data Exposure
7. A7 Missing Function Level Access Control
8. A8 Cross-Site Request Forgery (CSRF)
9. A9 Using Components with Known Vulnerabilities
10. A10 Unvalidated Redirects and Forwards
3. Summary
9. Caching in on the Top 10
1. Creating a top 10 movies list
1. Creating MovieManager.top_movies()
2. Creating the TopMovies view
3. Creating the top_movies_list.html template
4. Adding a path to TopMovies
2. Optimizing Django projects
1. Using the Django Debug Toolbar
2. Using Logging
3. Application Performance Management
4. A quick review of the section
3. Using Django's cache API
1. Examining the trade-offs between Django cache backends
1. Examining Memcached trade-offs
2. Examining dummy cache trade-offs
3. Examining local memory cache trade-offs
4. Examining file-based cache trade-offs
5. Examining database cache trade-offs
2. Configuring a local memory cache
3. Caching the movie list page
1. Creating our first mixin – CachePageVaryOnCookieMixin
2. Using CachePageVaryOnCookieMixin with MovieList
4. Caching a template fragment with {% cache %}
5. Using the cache API with objects
4. Summary
10. Deploying with Docker
1. Organizing configuration for production and development
1. Splitting requirements files
2. Splitting settings file
1. Creating common_settings.py
2. Creating dev_settings.py
3. Creating production_settings.py
2. Creating the MyMDB Dockerfile
1. Starting our Dockerfile
2. Installing packages in Dockerfile
3. Collecting static files in Dockerfile
4. Adding Nginx to Dockerfile
1. Configuring Nginx
2. Creating Nginx runit service
5. Adding uWSGI to the Dockerfile
6. Configuring uWSGI to run MyMDB
1. Creating the uWSGI runit service
7. Finishing our Dockerfile
3. Creating a database container
4. Storing uploaded files on AWS S3
1. Signing up for AWS
2. Setting up the AWS environment
3. Creating the file upload bucket
5. Using Docker Compose
1. Tracing environment variables
2. Running Docker Compose locally
1. Installing Docker
2. Using Docker Compose
6. Sharing your container via a container registry
7. Launching containers on a Linux server in the cloud
1. Starting the Docker EC2 VM
2. Shutting down the Docker EC2 VM
8. Summary
11. Starting Answerly
1. Creating the Answerly Django project
2. Creating the Answerly models
1. Creating the Question model
2. Creating the Answer model
3. Creating migrations
3. Adding a base template
1. Creating base.html
4. Configuring static files
5. Letting users post questions
1. Ask question form
2. Creating AskQuestionView
3. Creating ask.html
4. Installing and configuring Markdownify
5. Installing and configuring Django Crispy Forms
6. Routing requests to AskQuestionView
7. A quick review of the section
6. Creating QuestionDetailView
1. Creating Answer forms
1. Creating AnswerForm
2. Creating AnswerAcceptanceForm
2. Creating QuestionDetailView
3. Creating question_detail.html
1. Creating the display_question.html common template
2. Creating list_answers.html
3. Creating the post_answer.html template
4. Routing requests to the QuestionDetail view
7. Creating the CreateAnswerView
1. Creating create_answer.html
2. Routing requests to CreateAnswerView
8. Creating UpdateAnswerAcceptanceView
9. Creating the daily questions page
1. Creating DailyQuestionList view
2. Creating the daily question list template
3. Routing requests to DailyQuestionList
10. Getting today's question list
11. Creating the user app
1. Using Django's LoginView and LogoutView
2. Creating RegisterView
12. Updating base.html navigation
13. Running the development server
14. Summary
12. Searching for Questions with Elasticsearch
1. Starting with Elasticsearch
1. Starting an Elasticsearch server with docker
2. Configuring Answerly to use Elasticsearch
3. Creating the Answerly index
2. Loading existing Questions into Elasticsearch
1. Creating the Elasticsearch service
2. Creating a manage.py command
3. Creating a search view
1. Creating a search function
2. Creating the SearchView
3. Creating the search template
4. Updating the base template
4. Adding Questions into Elasticsearch on save()
1. Upserting into Elasticsearch
5. Summary
13. Testing Answerly
1. Installing Coverage.py
2. Measuring code coverage
3. Creating a unit test for Question.save()
4. Creating models for tests with Factory Boy
1. Creating a UserFactory
2. Creating the QuestionFactory
5. Creating a unit test for a view
6. Creating a view integration test
7. Creating a live server integration test
1. Setting up Selenium
2. Testing with a live Django server and Selenium
8. Summary
14. Deploying Answerly
1. Organizing configuration for production and development
1. Splitting our requirements file
2. Splitting our settings file
1. Creating common_settings.py
2. Creating dev_settings.py
3. Creating production_settings.py
2. Preparing our server
1. Installing required packages
2. Configuring Elasticsearch
1. Installing Elasticsearch
2. Running Elasticsearch
3. Creating the database
3. Deploying Answerly with Apache
1. Creating the virtual host config
2. Updating wsgi.py to set environment variables
3. Creating the environment config file
4. Migrating the database
5. Collecting static files
6. Enabling the Answerly virtual host
7. A quick review of the section
4. Deploying Django projects as twelve-factor apps
1. Factor 1 – Code base
2. Factor 2 – Dependencies
3. Factor 3 – Config
4. Factor 4 – Backing services
5. Factor 5 – Build, release, and run
6. Factor 6 – Processes
7. Factor 7 – Port binding
8. Factor 8 – Concurrency
9. Factor 9 – Disposability
10. Factor 10 – Dev/prod parity
11. Factor 11 – Logs
12. Factor 12 – Admin processes
13. A quick review of the section
5. Summary
15. Starting Mail Ape
1. Creating the Mail Ape project
1. Listing our Python dependencies
2. Creating our Django project and apps
3. Creating our app's URLConfs
4. Installing our project's apps
2. Creating the mailinglist models
1. Creating the MailingList model
2. Creating the Subscriber model
3. Creating the Message model
3. Using database migrations
1. Configuring the database
2. Creating database migrations
3. Running database migrations
4. MailingList forms
1. Creating the Subscriber form
2. Creating the Message Form
3. Creating the MailingList form
5. Creating MailingList views and templates
1. Common resources
1. Creating a base template
2. Configuring Django Crispy Forms to use Bootstrap 4
3. Creating a mixin to check whether a user can use the mailing list
4. Creating MailingList views and templates
1. Creating the MailingListListView view
2. Creating the CreateMailingListView and template
3. Creating the DeleteMailingListView view
4. Creating MailingListDetailView
5. Creating Subscriber views and templates
1. Creating SubscribeToMailingListView and template
2. Creating a thank you for subscribing view
3. Creating a subscription confirmation view
4. Creating UnsubscribeView
6. Creating Message Views
1. Creating CreateMessageView
2. Creating the Message DetailView
6. Creating the user app
1. Creating the login template
2. Creating the user registration view
7. Running Mail Ape locally
8. Summary
16. The Task of Sending Emails
1. Creating common resources for emails
1. Creating the base HTML email template
2. Creating EmailTemplateContext
2. Sending confirmation emails
1. Configuring email settings
2. Creating the send email confirmation function
3. Creating the HTML confirmation email template
4. Creating the text confirmation email template
5. Sending on new Subscriber creation
6. A quick review of the section
3. Using Celery to send emails
1. Installing celery
2. Configuring Celery settings
3. Creating a task to send confirmation emails
4. Sending emails to new subscribers
5. Starting a Celery worker
6. A quick review of the section
4. Sending messages to subscribers
1. Getting confirmed subscribers
2. Creating the SubscriberMessage model
3. Creating SubscriberMessages when a message is created
4. Sending emails to subscribers
5. Testing code that uses Celery tasks
1. Using a TestCase mixin to patch tasks
2. Using patch with factories
3. Choosing between patching strategies
6. Summary
17. Building an API
1. Starting with the Django REST framework
1. Installing the Django REST framework
2. Configuring the Django REST Framework
2. Creating the Django REST Framework Serializers
3. API permissions
4. Creating our API views
1. Creating MailingList API views
1. Listing MailingLists by API
2. Editing a mailing list via an API
2. Creating a Subscriber API
1. Listing and Creating Subscribers API
2. Updating subscribers via an API
5. Running our API
6. Testing your API
7. Summary
18. Deploying Mail Ape
1. Separating development and production
1. Separating our requirements files
2. Creating common, development, and production settings
2. Creating an infrastructure stack in AWS
1. Accepting parameters in a CloudFormation template
2. Listing resources in our infrastructure
1. Adding Security Groups
2. Adding a Database Server
3. Adding a Queue for Celery
4. Creating a Role for Queue access
3. Outputting our resource information
4. Executing our template to create our resources
3. Building an Amazon Machine Image with Packer
1. Installing Packer
2. Creating a script to create our directory structure
3. Creating a script to install all our packages
4. Configuring Apache
5. Configuring Celery
6. Creating the environment configuration files
7. Making a Packer template
8. Running Packer to build an Amazon Machine Image
4. Deploying a scalable self-healing web app on AWS
1. Creating an SSH key pair
2. Creating the web servers CloudFormation template
1. Accepting parameters in the web worker CloudFormation template
2. Creating Resources in our web worker CloudFormation template
3. Outputting resource names
3. Creating the Mail Ape 1.0 release stack
4. SSHing into a Mail Ape EC2 Instance
5. Creating and migrating our database
6. Releasing Mail Ape 1.0
5. Scaling up and down with update-stack
6. Summary
19. Other Books You May Enjoy
1. Leave a review - let other readers know what you think
# Preface
Who doesn't have an idea for the next great app or service they want to launch? However, most apps, services, and websites ultimately rely on a server being able to accept requests and then create, read, update, and delete records based on those requests. Django makes it easy to build and launch websites, services, and backends for your great idea. However, despite the history of being used at large-scale successful start-ups and enterprises, it can be difficult to gather all the resources necessary to actually take an idea from empty directory to running production server.
Over the course of three projects, _Building Django Web Applications_ guides you from an empty directory to creating full-fledged apps that replicate the core functionality of some of the web's most popular web apps. In Part 1, you'll make your own online movie database. In Part 2, you'll make a website letting users ask and answer questions. In Part 3, you'll make a web app to manage mailing lists and send emails. All three projects culminate in your deploying the project to a server so that you can see your ideas come to life. Between starting each project and deploying it, we'll cover important practical concepts such as how to build APIs, secure your project, add search using Elasticsearch, use caching, and offload tasks to worker processes to help your project scale.
_Building Django Web Applications_ is for developers who already know some basics of Python, but want to take their skills to the next level. Basic understanding of HTML and CSS is also recommended, as these languages will be mentioned but are not the focus of the book.
After reading this book, you'll be familiar with everything it takes to launch an amazing web app using Django.
# Who this book is for
This book is for developers who are familiar with Python. Readers should know how to run commands in a Bash shell. Some basic HTML and CSS knowledge is assumed. Finally, readers should be able to connect to a PostgreSQL database on their own.
# What this book covers
Chapter 1, _Building MyMDB_ , covers starting a Django project and the core MyMDB Django app. You will create the core models, views, and templates. You will create URLConfs to help Django route requests to your views. By the end of this chapter, you will have a tested Django project that you can access using your web browser.
Chapter 2, _Adding Users to MyMDB_ , covers adding user registration and authentication. With users being able to register, log in, and log out, you will accept and display votes on movies. Finally, you'll write aggregate queries using Django's QuerySet API to score each movie.
Chapter 3, _Posters, Headshots, and Security_ , covers securely accepting and storing files from your users. You'll learn about the top web application security issues, as listed in the OWASP Top Ten, and how Django mitigates those issues.
Chapter 4, _Caching in on the Top 10_ , covers how to optimize Django projects. You'll learn how to measure what needs optimization. Finally, you'll learn about the different caching strategies Django makes available and when to use them.
Chapter 5, _Deploying with Docker_ , covers how to deploy Django using Nginx and uWSGI in a Docker container. You'll learn how to store uploaded files in S3 to protect the user. Finally, you'll run your Docker container on a Linux server in the Amazon Web Services cloud.
Chapter 6, _Starting Answerly_ , covers creating the models, views, templates, and apps for the Answerly project. You'll learn how to use Django's built-in date views to show a list of questions asked each day. You'll also learn how to split large templates into more manageable components.
Chapter 7, _Searching for Questions with Elasticsearch_ , covers working with Elasticsearch to let users search our questions. You will learn how to create a service that avoids coupling external services to your models or views. You will also learn how to automatically load and update model data in an external service.
Chapter 8, _Testing Answerly_ , covers how to test a Django project. You will learn how to measure code coverage in a Django project and how to easily generate test data. You will also learn how to write different types of tests from unit tests to live server tests with a working browser.
Chapter 9, _Deploying Answerly_ , covers how to deploy a Django project on a Linux server with Apache and mod_wsgi. You'll also learn how to treat your Django project like a twelve-factor app to keep it easy to scale.
Chapter 10, _Starting Mail Ape_ , covers creating the models, views, templates, and apps for the Mail Ape project. You'll learn how to use alternate fields for non-sequential primary keys.
Chapter 11, _Sending Emails_ , covers how to use Django's email functionality. You'll also learn how to use Celery to process tasks outside of the Django request/response cycle and how to test code that relies on Celery tasks.
Chapter 12, _Building an API_ , covers how to create an API using the **Django REST Framework** ( **DRF** ). You'll learn how DRF lets you quickly build an API from your Django models without repeating a lot of unnecessary code. You will also learn how to access and test your API.
Chapter 13, _Deploying Mail Ape_ , covers how to deploy a Django app into the Amazon Web Services cloud. You'll learn how to make an **Amazon Machine Image** ( **AMI** ) a part of a release. Then, you'll create a CloudFormation template to declare your infrastructure and servers as code. You'll take a look at how to use AWS to horizontally scale your system to run multiple web workers. Finally, you'll bring it all online using the AWS command-line interface.
# To get the most out of this book
To get the most out of this book, you should:
1. Have some familiarity with Python and have Python 3.6+ installed
2. Be able to install Docker or other new software on your computer
3. Know how to connect to a Postgres server from your computer
4. Have access to a Bash shell
# Download the example code files
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
* WinRAR/7-Zip for Windows
* Zipeg/iZip/UnRarX for Mac
* 7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at <https://github.com/PacktPublishing/Building-Django-2.0-Web-Applications>. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at **<https://github.com/PacktPublishing/>**. Check them out!
# Conventions used
There are a number of text conventions used throughout this book.
`CodeInText`: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "It also offers a `create()` method for creating and saving an instance."
A block of code is set as follows:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
DATABASES = {
    'default': {
        **'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),**
    }
}
Any command-line input or output is written as follows:
**$ pip install -r requirements.dev.txt**
**Bold** : Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Clicking on MOVIES will show us a list of movies."
Warnings or important notes appear like this.
Tips and tricks appear like this.
# Get in touch
Feedback from our readers is always welcome.
**General feedback** : Email `feedback@packtpub.com` and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at `questions@packtpub.com`.
**Errata** : Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
**Piracy** : If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at `copyright@packtpub.com` with a link to the material.
**If you are interested in becoming an author** : If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
# Reviews
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
# Starting MyMDB
The first project we will build is a basic **Internet Movie Database** ( **IMDB** ) clone called **My Movie Database (MyMDB)** written in Django 2.0 that we will deploy using Docker. Our IMDB clone will have two types of users: regular users and administrators. The users will be able to rate movies, add images from movies, and view movies and cast. The administrators will be able to add movies, actors, writers, and directors.
In this chapter, we'll do the following things:
* Create our new Django project MyMDB, an IMDB clone
* Make a Django app and create our first models, views, and templates
* Learn about and use a variety of fields in our models and create relationships across models
The code for this project is available online at <https://github.com/tomaratyn/MyMDB>.
By the end, we'll be able to add movies, people, and roles into our project and let users view them in easy-to-customize HTML templates.
# Starting My Movie Database (MyMDB)
First, let's make a directory for our project:
**$ mkdir MyMDB
$ cd MyMDB**
All our future commands and paths will be relative to this project directory.
# Starting the project
A Django project is composed of multiple Django apps. A Django app can come from many different places:
* Django itself (for example, `django.contrib.admin`, the admin backend app)
* Installed Python packages (for example, `django-rest-framework`, a framework for creating REST APIs from Django models)
* Written as part of the project (the code we'll be writing)
Usually, a project uses a mix of all of the preceding three options.
# Installing Django
We'll install Django using `pip`, Python's preferred package manager, and track which packages we install in a `requirements.dev.txt` file:
django<2.1
psycopg2<2.8
Now, let's install the packages:
**$ pip install -r requirements.dev.txt**
# Creating the project
With Django installed, we have the `django-admin` command-line tool with which we can generate our project:
**$ django-admin startproject config
$ tree config/
config/
├── config
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── manage.py**
The parent of the `settings.py` file is called `config` because we named our project `config` instead of `mymdb`. However, letting that top-level directory continue to be called `config` is confusing, so let's just rename it `django` (a project may grow to contain lots of different types of code; calling the parent of the Django code `django`, again, makes it clear):
**$ mv config django
$ tree .
.
├── django
│ ├── config
│ │ ├── __init__.py
│ │ ├── settings.py
│ │ ├── urls.py
│ │ └── wsgi.py
│ └── manage.py
└── requirements.dev.txt
2 directories, 6 files**
Let's take a closer look at some of these files:
* `settings.py`: This is where Django stores all the configuration for your app by default. In the absence of a `DJANGO_SETTINGS_MODULE` environment variable, this is where Django looks for settings.
* `urls.py`: This is the root `URLConf` for the entire project. Every request that your web app gets will get routed to the first view that matches a path inside this file (or a file that this `urls.py` references).
* `wsgi.py`: **Web Server Gateway Interface** ( **WSGI** ) is the interface between Python and a web server. You won't touch this file very much, but it's how your web server and your Python code know how to talk to each other. We'll reference it in Chapter 5, _Deploying with Docker_.
* `manage.py`: This is the command center for making non-code changes. Whether it's creating a database migration, running tests, or starting the development server, we will use this file often.
Note what's missing is that the `django` directory is not a Python module. There's no `__init__.py` file in there, and there should _not_ be. If you add one, many things will break because we want the Django apps we add to be top-level Python modules.
# Configuring database settings
By default, Django creates a project that will use SQLite, but that's not usable for production, so we'll follow the best practice of using the same database in development as in production.
Let's open up `django/config/settings.py` and update it to use our Postgres server. Find the line in `settings.py` that starts with `DATABASES`. By default, it will look like this:
DATABASES = {
    'default': {
        **'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),**
    }
}
To use Postgres, change the preceding code to the following one:
DATABASES = {
    'default': {
        **'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mymdb',
        'USER': 'mymdb',
        'PASSWORD': 'development',
        'HOST': '127.0.0.1',
        'PORT': '5432',**
    }
}
Most of this will seem familiar if you've connected to a database before, but let's review:
* `DATABASES = {`: This constant is a dictionary of database connection information and is required by Django. You can have multiple connections to different databases, but, most of the time, you will just need an entry called `default`.
* `'default': {`: This is the default database connection configuration. You should always have a `default` set of connection settings. Unless you specify otherwise (and, in this book, we won't), this is the connection you'll be using.
* `'ENGINE': 'django.db.backends.postgresql'`: This tells Django to use the Postgres backend. This in turn uses `psycopg2`, Python's Postgres library.
* `'NAME': 'mymdb',`: The name of the database you want to connect to.
* `'USER': 'mymdb',`: The username for your connection.
* `'PASSWORD': 'development',`: The password for your database user.
* `'HOST': '127.0.0.1',`: The address of the database server you want to connect to.
* `'PORT': '5432',`: The port you want to connect to.
# The core app
Django apps follow a **Model View Template** ( **MVT** ) pattern; in this pattern, we will note the following things:
* **Models** are responsible for saving and retrieving data from the database
* **Views** are responsible for processing HTTP Requests, initiating operations on Models, and returning HTTP responses
* **Templates** are responsible for the look of the response body
There's no limit on how many apps you can have in your Django project. Ideally, each app should have tightly scoped and self-contained functionality like any other Python module, but at the beginning of a project, it can be hard to know where the complexity will lie. That's why I find it useful to start off with a `core` app. Then, when I notice clusters of complexity around particular topics (let's say, in our project, actors could become unexpectedly complex if we're getting traction there), we can refactor that functionality into its own tightly scoped app. Other times, it's clear that a site has self-contained components (for example, an admin backend), and it's easy to start off with multiple apps.
# Making the core app
To make a new Django app, we first have to use `manage.py` to create the app and then add it to the list of `INSTALLED_APPS`:
**$ cd django
$ python manage.py startapp core
$ ls
config core manage.py
$ tree core
core
├── __init__.py
├── admin.py
├── apps.py
├── migrations
│ └── __init__.py
├── models.py
├── tests.py
└── views.py
1 directory, 7 files**
Let's take a closer look at what's inside of the core:
* `core/__init__.py`: The core is not just a directory, but also a Python module.
* `admin.py`: This is where we will register our models with the built-in admin backend. We'll describe that in the _Movie Admin_ section.
* `apps.py`: Most of the time, you'll leave this alone. This is where you would put any code that needs to run when registering your application, which is useful if you're making a reusable Django app (for example, a package you want to upload to PyPi).
* `migrations`: This is a Python module with database migrations. Database migrations describe how to _migrate_ the database from one known state to another. With Django, if you add a model, you can just generate and run a migration using `manage.py`, which you can see later in this chapter in the _Migrating the database_ section.
* `models.py`: This is for models.
* `tests.py`: This is for tests.
* `views.py`: This is for views.
# Installing our app
Now that our core app exists, let's make Django aware of it by adding it to the list of installed apps in `settings.py` file. Your `settings.py` should have a line that looks like this:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
`INSTALLED_APPS` is a list of Python paths to modules that are Django apps. Thanks to Django's Batteries Included philosophy, we already have apps installed to solve common problems, such as managing static files, sessions, and authentication, as well as an admin backend.
Let's add our `core` app to the top of that list:
INSTALLED_APPS = [
    'core',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
# Adding our first model – Movie
Now we can add our first model, that is, Movie.
A Django model is a class that is derived from `Model` and has one or more `Fields`. In database terms, a `Model` class corresponds to a database table, `Field` classes correspond to columns, and instances of a `Model` correspond to rows. Using an ORM like Django's lets us take advantage of Python to write expressive classes, instead of writing our models once in Python and again in SQL.
Let's edit `django/core/models.py` to add a `Movie` model:
from django.db import models


class Movie(models.Model):
    NOT_RATED = 0
    RATED_G = 1
    RATED_PG = 2
    RATED_R = 3
    RATINGS = (
        (NOT_RATED, 'NR - Not Rated'),
        (RATED_G, 'G - General Audiences'),
        (RATED_PG, 'PG - Parental Guidance Suggested'),
        (RATED_R, 'R - Restricted'),
    )

    title = models.CharField(max_length=140)
    plot = models.TextField()
    year = models.PositiveIntegerField()
    rating = models.IntegerField(
        choices=RATINGS, default=NOT_RATED)
    runtime = models.PositiveIntegerField()
    website = models.URLField(blank=True)

    def __str__(self):
        return '{} ({})'.format(
            self.title, self.year)
`Movie` is derived from `models.Model`, which is the base class for all Django models. Next, there's a series of constants that describe ratings; we'll take a look at that when we look at the `rating` field, but first let's look at the other fields:
* `title = models.CharField(max_length=140)`: This will become a `varchar` column with a length of 140. Databases generally require a maximum size for `varchar` columns, so Django does too.
* `plot = models.TextField()`: This will become a `text` column in our database, which has no maximum length requirement. This makes it more appropriate for a field that can have a paragraph (or even pages) of text.
* `year = models.PositiveIntegerField()`: This will become an `integer` column, and Django will validate the value before saving to ensure that it is `0` or higher.
* `rating = models.IntegerField(choices=RATINGS, default=NOT_RATED)`: This is a more complicated field. Django will know that this is going to be an `integer` column. The optional argument `choices` (which is available for all `Fields`, not just `IntegerField`) takes an iterable (list or tuple) of value/display pairs. The first element in the pair is a valid value that can be stored in the database and the second is a human-friendly version of the value. Django will also add an instance method to our model called `get_rating_display()`, which will return the matching second element for the value stored in our model. Anything that doesn't match one of the values in `choices` will be a `ValidationError` on save. The `default` argument provides a default value if one is not provided when creating the model.
* `runtime = models.PositiveIntegerField()`: This is the same as the `year` field.
* `website = models.URLField(blank=True)`: Most databases don't have a native URL column type, but data-driven web apps often need to store them. A `URLField` is a `varchar(200)` field by default (this can be set by providing a `max_length` argument). `URLField` also comes with validation, checking whether its value is a valid web (`http`/`https`/`ftp`/`ftps`) URL. The `blank` argument is used by the `admin` app to know whether to require a value (it does not affect the database).
Our model also has a `__str__(self)` method; defining one is a best practice that helps Django convert the model to a readable string, which it does in the admin UI and which helps in our own debugging.
Django's ORM automatically adds an autoincrementing `id` column, so we don't have to repeat that on all our models. It's a simple example of Django's **Don't Repeat Yourself** **(DRY)** philosophy. We'll take a look at more examples as we go along.
# Migrating the database
Now that we have a model, we will need to create a table in our database that matches it. We will use Django to generate a migration for us and then run the migration to create a table for our movie model.
While Django can create and run migrations for our Django apps, it will not create the database and database user for our Django project. To create the database and user, we have to connect to the server using an administrator's account. Once we've connected, we can create the database and user by executing the following SQL:
CREATE DATABASE mymdb;
CREATE USER mymdb;
GRANT ALL ON DATABASE mymdb to "mymdb";
ALTER USER mymdb PASSWORD 'development';
ALTER USER mymdb CREATEDB;
The above SQL statements will create the database and user for our Django project. The `GRANT` statement ensures that our mymdb user will have access to the database. Then, we set a password on the `mymdb` user (make sure it's the same as in your `settings.py` file). Finally, we give the `mymdb` user permission to create new databases, which will be used by Django to create a test database when running tests.
To generate a migration for our app, we tell `manage.py` as follows:
**$ cd django
$ python manage.py makemigrations core
Migrations for 'core':
core/migrations/0001_initial.py
- Create model Movie**
A `migration` is a Python file in our Django app that describes how to change the database into a desired state. Django migrations are not tied to a particular database system (the same migrations will work across supported databases, unless _we_ add database-specific code). Django generates migration files that use Django's migrations API, which we won't be looking at in this book, but it's useful to know that it exists.
Remember that it's _apps_ not _projects_ that have migrations (since it's _apps_ that have models).
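Although we won't write migration files by hand, it's worth peeking at what Django generated. The new `core/migrations/0001_initial.py` will look roughly like this (an abridged sketch; the exact contents vary by Django version):
from django.db import migrations, models


class Migration(migrations.Migration):
    initial = True

    dependencies = []

    operations = [
        migrations.CreateModel(
            name='Movie',
            fields=[
                ('id', models.AutoField(
                    auto_created=True, primary_key=True,
                    serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=140)),
                # ... the remaining fields mirror our model definition
            ],
        ),
    ]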
Next, we tell `manage.py` to migrate our app:
**$ python manage.py migrate core
Operations to perform:
Apply all migrations: core
Running migrations:
Applying core.0001_initial... OK**
Now, our table exists in our database:
**$ python manage.py dbshell
psql (9.6.1, server 9.6.3)
Type "help" for help.
mymdb=> \dt
         List of relations
 Schema |       Name        | Type  | Owner
--------+-------------------+-------+-------
 public | core_movie        | table | mymdb
 public | django_migrations | table | mymdb
(2 rows)
mymdb=> \q**
We can see that our database has two tables. The default naming scheme for the tables of Django models is `<app_name>_<model_name>`, so we can tell `core_movie` is the table for the `Movie` model from the `core` app. `django_migrations` is for Django's internal use to track the migrations that have been applied. Altering the `django_migrations` table directly instead of using `manage.py` is a bad idea that will lead to problems when you try to apply or roll back migrations.
The migration commands can also run without specifying an app, in which case they will run for all apps. Let's run the `migrate` command without an app:
**$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, core, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying sessions.0001_initial... OK**
This creates tables to keep track of users, sessions, permissions, and the administrative backend.
# Creating our first movie
Like Python, Django offers an interactive REPL to try things out. The Django shell is fully connected to the database, so we can create, query, update, and delete models from the shell:
**$ cd django
$ python manage.py shell
Python 3.4.6 (default, Aug 4 2017, 15:21:32)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from core.models import Movie
>>> sleuth = Movie.objects.create(
... title='Sleuth',
... plot='A snobbish writer who loves games'
... ' invites his wife\'s lover for a battle of wits.',
... year=1972,
... runtime=138,
... )
>>> sleuth.id
1
>>> sleuth.get_rating_display()
'NR - Not Rated'**
In the preceding Django shell session, note that there are a number of attributes of `Movie` that we didn't create:
* `objects` is the model's default manager. A manager is an interface for querying the model's table; it also offers a `create()` method for creating and saving an instance. Every model must have at least one manager, and Django offers a default one. It's often advisable to create a custom manager; we'll see that later in the _Adding Person and model relationships_ section.
* `id` is the primary key of the row for this instance. As mentioned in the preceding step, Django creates it automatically.
* `get_rating_display()` is a method that Django added because the `rating` field was given a tuple of `choices`. We didn't have to provide `rating` with a value in our `create()` call because the `rating` field has a `default` value (`0`). The `get_rating_display()` method looks up the value and returns the corresponding display value. Django will generate a method like this for each `Field` attribute with a `choices` argument.
Next, let's create a backend for managing movies using the Django Admin app.
# Creating movie admin
Being able to quickly generate a backend UI lets users start building the content of the project while the rest of the project is still in development. It's a nice feature that helps parallelize progress and avoids a repetitious and boring task (read/update views share a lot of functionality). Providing this functionality out of the box is another example of Django's Batteries Included philosophy.
To get Django's admin app working with our models, we will perform the following steps:
1. Register our model
2. Create a super user who can access the backend
3. Run the development server
4. Access the backend in a browser
Let's register our `Movie` model with the admin by editing `django/core/admin.py`, as follows:
from django.contrib import admin
from core.models import Movie
admin.site.register(Movie)
Now our model is registered!
Let's now create a user who can access the backend using `manage.py`:
**$ cd django
$ python manage.py createsuperuser
Username (leave blank to use 'tomaratyn'):
Email address: tom@aratyn.nam
Password:
Password (again):
Superuser created successfully.**
Django ships with a **development server** that can serve our app, but is not appropriate for production:
**$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
September 12, 2017 - 20:31:54
Django version 1.11.5, using settings 'config.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.**
Also, open it in a browser by navigating to `http://localhost:8000/`.
To access the admin backend, go to `http://localhost:8000/admin`.
Once we log in with our credentials, we can manage users and movies.
Clicking on MOVIES will show us a list of movies.
Note that the title of the link is the result of our `Movie.__str__` method. Clicking on it will give you a UI to edit the movie.
On the main admin screen and on the movie list screen, you have links to add a new movie. Let's add a new movie.
Now, our movie list shows both movies.
Now that we have a way of letting our team populate the database with movies, let's start working on the views for our users.
# Creating MovieList view
When Django gets a request, it uses the path of the request and the `URLConf` of the project to match a request to a view, which returns an HTTP response. Django's views can be either functions, often referred to as **Function-Based Views** ( **FBVs** ), or classes, often called **Class-Based Views** ( **CBVs** ). The advantage of CBVs is that Django comes with a rich suite of generic views that you can subclass to easily (almost declaratively) write views to accomplish common tasks.
Let's write a view to list the movies that we have. Open `django/core/views.py` and change it to the following:
from django.views.generic import ListView

from core.models import Movie


class MovieList(ListView):
    model = Movie
`ListView` requires at least a `model` attribute. It will query for all the rows of that model, pass them to the template, and return the rendered template in a response. It also offers a number of hooks that we may use to replace the default behavior, all of which are fully documented.
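For example, one of those hooks is `get_queryset()`. If we ever wanted a view listing only recent movies, a hypothetical subclass might look like this (a sketch, not something MyMDB needs yet):
class RecentMovieList(ListView):
    model = Movie

    def get_queryset(self):
        # Replace the default Movie.objects.all() with a narrower query
        return Movie.objects.filter(year__gte=2000)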
How does `ListView` know how to query all the objects in `Movie`? For that, we will need to discuss manager and `QuerySet` classes. Every model has a default manager. Manager classes are primarily used to query objects by offering methods, such as `all()`, that return a `QuerySet`. A `QuerySet` class is Django's representation of a query to the database. `QuerySet` has a number of methods, including `filter()` (such as a `WHERE` clause in a `SELECT` statement) to limit the results. One of the nice features of the `QuerySet` class is that it is lazy; it is not evaluated until we try to get a model out of the `QuerySet`. Another nice feature is that methods such as `filter()` take _lookup expressions_ , which can be field names or follow relationships across models. We'll be doing this throughout our projects.
All manager classes have an `all()` method that should return an unfiltered `QuerySet`, the equivalent of writing `SELECT * FROM core_movie;`.
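We can see both the laziness and a lookup expression from the Django shell; assuming the Sleuth movie created earlier is the only pre-2000 movie in the database, a session would look something like this:
**>>> from core.models import Movie
>>> qs = Movie.objects.filter(year__lt=2000) # lazy: no SQL has run yet
>>> list(qs) # evaluating the QuerySet runs SELECT ... WHERE year < 2000
[<Movie: Sleuth (1972)>]**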
So, how does `ListView` know that it has to query all the objects in `Movie`? `ListView` checks whether it has a `model` attribute and, if present, knows that `Model` classes have a default manager with an `all()` method, which it calls. `ListView` also gives us a convention for where to put our template: `<app_name>/<model_name>_list.html`.
# Adding our first template – movie_list.html
Django ships with its own template language called the **Django Template language**. Django can also use other template languages (for example, Jinja2), but most Django projects find using the Django Template language to be efficient and convenient.
In the default configuration that is generated in our `settings.py` file, the Django Template language is configured to use `APP_DIRS`, meaning that each Django app can have a `templates` directory, which will be searched to find a template. This can be used to override templates that other apps use without having to modify the third-party apps themselves.
Let's make our first template in `django/core/templates/core/movie_list.html`:
<!DOCTYPE html>
<html>
<body>
  <ul>
    {% for movie in object_list %}
      <li>{{ movie }}</li>
    {% empty %}
      <li>
        No movies yet.
      </li>
    {% endfor %}
  </ul>
  <p>
    Using https?
    {{ request.is_secure|yesno }}
  </p>
</body>
</html>
Django templates are standard HTML (or whatever text format you wish to use) with variables (for example, `object_list` in our example) and tags (for example, `for` in our example). Variables surrounded by `{{ }}` are evaluated and converted to strings. Filters can be used to help format or modify variables before they are printed (for example, `yesno`). We can also create custom tags and filters, as shown after the following note.
A full list of filters and tags is provided in the Django docs (<https://docs.djangoproject.com/en/2.0/ref/templates/builtins/>).
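As a taste of how a custom filter works, here is a minimal sketch, assuming a hypothetical `django/core/templatetags/core_extras.py` module (`templatetags` must be a package containing an `__init__.py` file; the module and filter names are ours, not anything MyMDB requires):
from django import template

register = template.Library()


@register.filter
def hours(minutes):
    # Convert a runtime in minutes to hours, e.g. 138 -> 2.3
    return round(minutes / 60, 1)
After restarting the development server, a template that includes `{% load core_extras %}` could then render `{{ movie.runtime|hours }}`.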
The Django template language is configured in the `TEMPLATES` variable of `settings.py`. The `DjangoTemplates` backend can take a lot of `OPTIONS`. In _development_ , it can be helpful to add `'string_if_invalid': 'INVALID_VALUE',`. Any time Django can't resolve a variable in a template, it will print out `INVALID_VALUE`, which makes it easier to catch typos. Remember that you should not use this setting in _production_. The full list of options is available in Django's documentation (<https://docs.djangoproject.com/en/dev/topics/templates/#django.template.backends.django.DjangoTemplates>).
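For instance, the relevant fragment of a development `settings.py` might look like this (a sketch with other keys omitted):
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # ... DIRS, APP_DIRS, and other keys omitted
        'OPTIONS': {
            # Development only: surface typoed template variables
            'string_if_invalid': 'INVALID_VALUE',
        },
    },
]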
The final step will be to connect our view to a `URLConf`.
# Routing requests to our view with URLConf
Now that we have a model, view, and template, we will need to tell Django which requests it should route to our `MovieList` view using a URLConf. Each new project has a root URLConf created by Django (in our case, it's the `django/config/urls.py` file). Django developers follow the best practice of giving each app its own URLConf. Then, the root URLConf of a project will include each app's URLConf using the `include()` function.
Let's create a URLConf for our `core` app by creating a `django/core/urls.py` file with the following code:
from django.urls import path

from . import views

app_name = 'core'
urlpatterns = [
    path('movies',
         views.MovieList.as_view(),
         name='MovieList'),
]
At its simplest, a URLConf is a module with a `urlpatterns` attribute, which is a list of `path`s. A `path` is composed of a route string describing the path in question and a callable. CBVs are not callable, so the base `View` class has a static `as_view()` method that _returns_ a callable. FBVs can just be passed in as a callback (without the `()` operator, which would execute them).
Each `path()` should be named, which is a helpful best practice for when we have to reference that path in our template. Since a URLConf can be included by another URLConf, we may not know the full path to our view. Django offers a `reverse()` function and `url` template tag to go from a name to the full path to a view.
The `app_name` variable sets the app that this `URLConf` belongs to. This way, we can reference a named `path` without Django getting confused about other apps having a `path` of the same name (for example, `index` is a very common name, so we can say `appA:index` and `appB:index` to distinguish between them).
Finally, let's connect our `URLConf` to the root `URLConf` by changing `django/config/urls.py` to the following:
from django.urls import path, include
from django.contrib import admin

import core.urls

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include(
        core.urls, namespace='core')),
]
This file looks much like our previous `URLConf` file, except that our `path()` object isn't taking a view but, instead, the result of the `include()` function. The `include()` function lets us prefix an entire `URLConf` with a path and give it a custom namespace.
Namespaces let us distinguish between `path` names like the `app_name` attribute does, except without modifying the app (for example, a third-party app).
You might wonder why we're using `include()` while the Django Admin site is using a property. Both `include()` and `admin.site.urls` return a similarly formatted 3-tuple. However, instead of remembering what each portion of the 3-tuple has to contain, you should just use `include()`.
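With our routes wired up, we can confirm the name-to-path mapping from the Django shell; a quick sketch:
**$ python manage.py shell
>>> from django.urls import reverse
>>> reverse('core:MovieList')
'/movies'**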
# Running the development server
Django now knows how to route requests to our View, which knows the Models that need to be shown and which template to render. We can tell `manage.py` to start our development server and view our result:
**$ cd django
$ python manage.py runserver**
In our browser, go to `http://127.0.0.1:8000/movies`.
Good job! We made our first page!
In this section, we created our first model, generated and ran the migration for it, and created a view and template so that users can browse it.
Now, let's add a page for each movie.
# Individual movie pages
Now that we have our project layout, we can move more quickly. We're already tracking information for each movie. Let's create a view that will show that information.
To add movie details, we'll need to do the following things:
1. Create a `MovieDetail` view
2. Create `movie_detail.html` template
3. Reference our `MovieDetail` view in our `URLConf`
# Creating the MovieDetail view
Just like Django provides us with a `ListView` class to do all the common tasks of listing models, Django also provides a `DetailView` class that we can subclass to create a view showing the details of a single `Model`.
Let's create our view in `django/core/views.py`:
from django.views.generic import (
    ListView, DetailView,
)

from core.models import Movie


class MovieDetail(DetailView):
    model = Movie


class MovieList(ListView):
    model = Movie
A `DetailView` requires that a `path()` object include either a `pk` or `slug` in the `path` string so that `DetailView` can pass that value to the `QuerySet` to query for a specific model instance. A **slug** is a short URL-friendly label that is often used in content-heavy sites, as it is SEO friendly.
# Creating the movie_detail.html template
Now that we have the View, let's make our template.
Django's Template language supports template inheritance, which means that you can write a template with all the look and feel for your website and mark the `block` sections that other templates can override. This lets us create the look and feel of the entire website without having to edit each template. Let's use this to create a base template with MyMDB's branding and look and feel, and then add a movie detail template that inherits from the base template.
A base template shouldn't be tied to a particular app, so let's make a general templates directory:
**$ mkdir django/templates**
Django doesn't know to check our `templates` directory yet, so we will need to update the configuration in our `settings.py` file. Find the line that starts with `TEMPLATES` and change the configuration to list our `templates` directory in the `DIRS` list:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(BASE_DIR, 'templates'),
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            # omitted for brevity
        },
    },
]
The only change we've made is that we added our new `templates` directory to the list under the `DIRS` key. We have avoided hardcoding the path to our `templates` directory using Python's `os.path.join()` function and the already configured `BASE_DIR`. `BASE_DIR` is set at runtime to the path of the project. We don't need to add `django/core/templates` because the `APP_DIRS` setting tells Django to check each app for the `templates` directory.
Although it's very convenient that `settings.py` is the Python file where we can use `os.path.join` and all of Python, be careful not to get too clever. `settings.py` needs to be easy to read and understand. There's nothing worse than having to debug your `settings.py`.
Let's create a base template in `django/templates/base.html` that has a main column and sidebar:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta
    name="viewport"
    content="width=device-width, initial-scale=1, shrink-to-fit=no"
  >
  <link
    href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css"
    integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M"
    rel="stylesheet"
    crossorigin="anonymous"
  >
  <title>
    {% block title %}MyMDB{% endblock %}
  </title>
  <style>
    .mymdb-masthead {
      background-color: #EEEEEE;
      margin-bottom: 1em;
    }
  </style>
</head>
<body>
  <div class="mymdb-masthead">
    <div class="container">
      <nav class="nav">
        <div class="navbar-brand">MyMDB</div>
        <a
          class="nav-link"
          href="{% url 'core:MovieList' %}"
        >
          Movies
        </a>
      </nav>
    </div>
  </div>
  <div class="container">
    <div class="row">
      <div class="col-sm-8 mymdb-main">
        {% block main %}{% endblock %}
      </div>
      <div class="col-sm-3 offset-sm-1 mymdb-sidebar">
        {% block sidebar %}{% endblock %}
      </div>
    </div>
  </div>
</body>
</html>
Most of this HTML is actually Bootstrap (an HTML/CSS framework) boilerplate, but we do have a few new Django tags:
* `{% block title %}MyMDB{% endblock %}`: This creates a block that other templates can replace. If the block is not replaced, the contents from the parent template will be used.
* `href="{% url 'core:MovieList' %}"`: The `url` tag will produce a URL path for the named `path`. URL names should be referenced as `<app_namespace>:<name>`; in our case, `core` is the namespace of the core app (per `django/core/urls.py`), and `MovieList` is the name of the `MovieList` view's URL.
This lets us create a simple template in `django/core/templates/core/movie_detail.html`:
{% extends 'base.html' %}

{% block title %}
  {{ object.title }} - {{ block.super }}
{% endblock %}

{% block main %}
  <h1>{{ object }}</h1>
  <p class="lead">
    {{ object.plot }}
  </p>
{% endblock %}

{% block sidebar %}
  <div>
    This movie is rated:
    <span class="badge badge-primary">
      {{ object.get_rating_display }}
    </span>
  </div>
{% endblock %}
This template has a lot less HTML in it because `base.html` already has that. All `movie_detail.html` has to do is provide values for the blocks that `base.html` defines. Let's take a look at some new tags:
* `{% extends 'base.html' %}`: If a template wants to extend another template the first line must be an `extends` tag. Django will look for the base template (which can `extend` another template) and execute it first, then replace the blocks. A template that extends another cannot have content outside of `block`s because it's ambiguous where to put that content.
* `{{ object.title }} - {{ block.super }}`: We reference `block.super` inside the `title` template `block`. `block.super` returns the contents of the `title` template `block` in the base template.
* `{{ object.get_rating_display }}`: The Django Template language doesn't use `()` to execute methods; just referencing a method by name will execute it.
# Adding MovieDetail to core.urls.py
Finally, we add our `MovieDetail` view to `core/urls.py`:
from django.urls import path

from . import views

app_name = 'core'
urlpatterns = [
    path('movies',
         views.MovieList.as_view(),
         name='MovieList'),
    path('movie/<int:pk>',
         views.MovieDetail.as_view(),
         name='MovieDetail'),
]
The `MovieDetail` and `MovieList` `path()` calls look almost the same, except that the `MovieDetail` route string has a named parameter. A `path` route string can include angle brackets to give a parameter a name (for example, `<pk>`) and even define a type that the parameter's content must conform to (for example, `<int:pk>` will only match values that parse as an `int`). These named sections are captured by Django and passed to the view by name. `DetailView` expects a `pk` (or `slug`) argument and uses it to get the correct row from the database.
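Django ships with several built-in path converters; these hypothetical routes illustrate a few of them (the view arguments are elided):
path('movie/<int:pk>', ...)     # matches digits only; passed to the view as an int
path('movie/<slug:slug>', ...)  # matches letters, numbers, hyphens, and underscores
path('movie/<str:title>', ...)  # matches any non-empty string without a '/'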
Let's use `python manage.py runserver` to start the `dev` server and take a look at our new template.
# A quick review of the section
In this section, we've created a new view, `MovieDetail`, learned about template inheritance, and how to pass parameters from a URL path to our view.
Next, we'll add pagination to our `MovieList` view to prevent it from querying the entire database each time.
# Pagination and linking movie list to movie details
In this section, we'll update our movie list to provide a link to each movie and to have pagination to prevent our entire database being dumped into one page.
# Updating movie_list.html to extend base.html
Our original `movie_list.html` was a pretty sparse affair. Let's update it to look nicer using our `base.html` template and the Bootstrap CSS it provides:
{% extends 'base.html' %}

{% block title %}
  All The Movies
{% endblock %}

{% block main %}
  <ul>
    {% for movie in object_list %}
      <li>
        <a href="{% url 'core:MovieDetail' pk=movie.id %}">
          {{ movie }}
        </a>
      </li>
    {% endfor %}
  </ul>
{% endblock %}
We're also seeing the `url` tag being used with a named argument, `pk`, because the `MovieDetail` URL requires a `pk` argument. If no argument were provided, Django would raise a `NoReverseMatch` exception when rendering, resulting in a `500` error.
Let's take a look at the result.
# Setting the order
Another problem with our current view is that it's not ordered. If the database returns an unordered result, then pagination won't help navigation. What's more, there's no guarantee that the content will be consistent each time the user changes pages, as the database may return a differently ordered result set each time. We need our query to be ordered consistently.
Ordering our models also makes our lives as developers easier. Whether we're using a debugger, writing tests, or running a shell, ensuring that our models are returned in a consistent order can make troubleshooting simpler.
A Django model may optionally have an inner class called `Meta`, which lets us specify information about a Model. Let's add a `Meta` class with an `ordering` attribute:
class Movie(models.Model):
    # constants and fields omitted for brevity

    class Meta:
        ordering = ('-year', 'title')

    def __str__(self):
        return '{} ({})'.format(
            self.title, self.year)
`ordering` takes a list or tuple, usually of strings that are field names, optionally prefixed by a `-` character that denotes descending order. `('-year', 'title')` is the equivalent of the SQL clause `ORDER BY year DESC, title`.
Adding `ordering` to a Model's `Meta` class will mean that `QuerySets` from the model's manager will be ordered.
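Note that Django tracks `Meta` options in migrations too. If you're following along, generate and apply the (schema-free) migration that records this change:
**$ python manage.py makemigrations core
$ python manage.py migrate core**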
# Adding pagination
Now that our movies are always ordered the same way, let's add pagination. A Django `ListView` already has built-in support for pagination, so all we need to do is take advantage of it. **Pagination** is controlled by the `page` `GET` parameter, which selects the page to show.
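One detail the template below relies on: the view must declare a page size, or `is_paginated` will never be true. Assuming ten movies per page (which matches the tests at the end of this chapter), let's update `MovieList` in `django/core/views.py`:
class MovieList(ListView):
    model = Movie
    paginate_by = 10  # ListView slices the queryset into pages of 10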
Let's add pagination to the bottom of our `main` template `block`:
{% block main %}
<ul>
  {% for movie in object_list %}
    <li>
      <a href="{% url 'core:MovieDetail' pk=movie.id %}">
        {{ movie }}
      </a>
    </li>
  {% endfor %}
</ul>
{% if is_paginated %}
  <nav>
    <ul class="pagination">
      <li class="page-item">
        <a
          href="{% url 'core:MovieList' %}?page=1"
          class="page-link"
        >
          First
        </a>
      </li>
      {% if page_obj.has_previous %}
        <li class="page-item">
          <a
            href="{% url 'core:MovieList' %}?page={{ page_obj.previous_page_number }}"
            class="page-link"
          >
            {{ page_obj.previous_page_number }}
          </a>
        </li>
      {% endif %}
      <li class="page-item active">
        <a
          href="{% url 'core:MovieList' %}?page={{ page_obj.number }}"
          class="page-link"
        >
          {{ page_obj.number }}
        </a>
      </li>
      {% if page_obj.has_next %}
        <li class="page-item">
          <a
            href="{% url 'core:MovieList' %}?page={{ page_obj.next_page_number }}"
            class="page-link"
          >
            {{ page_obj.next_page_number }}
          </a>
        </li>
      {% endif %}
      <li class="page-item">
        <a
          href="{% url 'core:MovieList' %}?page=last"
          class="page-link"
        >
          Last
        </a>
      </li>
    </ul>
  </nav>
{% endif %}
{% endblock %}
Let's take a look at some important points of our `MovieList` template:
* `page_obj` is of the `Page` type, which knows information about this page of results. We use it to check whether there is a next/previous page using `has_next()`/`has_previous()` (we don't need to write the `()` in the Django template language, but `has_next()` is a method, not a property). We also use it to get `next_page_number()`/`previous_page_number()`. Note that it is important to use the `has_*()` methods to check for the existence of the next/previous page numbers before retrieving them; if they don't exist when retrieved, `Page` throws an `EmptyPage` exception. The same methods can be tried from the shell, as shown after this list.
* `object_list` continues to be available and hold the correct values. Even though `page_obj` encapsulates the results for this page in `page_obj.object_list`, `ListView` does the convenient work of ensuring that we can continue to use `object_list` and our template doesn't break.
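The same `Page` API is available outside of templates; from the Django shell (assuming at least 11 movies in the database), a session might look like this:
**>>> from django.core.paginator import Paginator
>>> from core.models import Movie
>>> page_obj = Paginator(Movie.objects.all(), 10).page(1)
>>> page_obj.has_next()
True
>>> page_obj.next_page_number()
2**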
We now have the pagination working!
# 404 – for when things go missing
We now have a couple of views that can't function if given the wrong value in the URL (the wrong `pk` will break `MovieDetail`; the wrong `page` will break `MovieList`); let's plan for that by handling `404` errors. Django offers a hook in the root URLConf to let us use a custom view for `404` errors (also for `403`, `400`, and `500`—all following the same naming scheme). In your root `urls.py` file, add a variable called `handler404` whose value is a string Python path to your custom view.
However, we can continue to use the default `404` handler view and just write a custom template. Let's add a `404` template in `django/templates/404.html`:
{% extends "base.html" %}
{% block title %}
Not Found
{% endblock %}
{% block main %}
<h1>Not Found</h1>
<p>Sorry that reel has gone missing.</p>
{% endblock %}
Even if another app throws a `404` error, this template will be used.
At the moment, if you visit an unused URL such as `http://localhost:8000/not-a-real-page`, you won't see our custom 404 template because Django's `DEBUG` setting is `True` in `settings.py`. To make our 404 template visible, we will need to change the `DEBUG` and `ALLOWED_HOSTS` settings in `settings.py`:
DEBUG = False
ALLOWED_HOSTS = [
'localhost',
'127.0.0.1'
]
`ALLOWED_HOSTS` is a setting that restricts which `HOST` values in an HTTP request Django will respond to. If `DEBUG` is `False` and a `HOST` does not match an `ALLOWED_HOSTS` value, then Django will return a `400` error (you can customize both the view and template for this error as described in the preceding code). This is a security feature that protects us and will be discussed more in our chapter on security.
Now that our project is configured, let's run the Django development server:
**$ cd django**
**$ python manage.py runserver**
With it running, we can use our web browser to open <http://localhost:8000/not-a-real-page>, where we should now see our custom Not Found page.
# Testing our view and template
Since we now have some logic in our `MovieList` template, let's write some tests. We'll talk a lot more about testing in Chapter 8, _Testing Answerly_. However, the basics are simple and follow the common XUnit pattern of `TestCase` classes holding test methods that make assertions.
For Django's `TestRunner` to find a test, it must be in the `tests` module of an installed app. Right now, that means `tests.py`, but, eventually, you may wish to switch to a `tests` directory (a Python package); in that case, prefix your test filenames with `test` so that the `TestRunner` can find them.
Let's add a test that checks the following:
* If there are more than 10 movies, then pagination controls should be rendered in the template
* If there are more than 10 movies and we don't provide a `page` `GET` parameter, the first page should be shown; specifically:
    * The `is_paginated` context variable should be `True`
    * The first item in the pagination should be marked as active
The following is our `tests.py` file:
from django.test import TestCase
from django.test.client import \
RequestFactory
from django.urls.base import reverse
from core.models import Movie
from core.views import MovieList
class MovieListPaginationTestCase(TestCase):
ACTIVE_PAGINATION_HTML = """
<li class="page-item active">
<a href="{}?page={}" class="page-link">{}</a>
</li>
"""
def setUp(self):
for n in range(15):
Movie.objects.create(
title='Title {}'.format(n),
year=1990 + n,
runtime=100,
)
def testFirstPage(self):
movie_list_path = reverse('core:MovieList')
request = RequestFactory().get(path=movie_list_path)
response = MovieList.as_view()(request)
self.assertEqual(200, response.status_code)
self.assertTrue(response.context_data['is_paginated'])
self.assertInHTML(
self.ACTIVE_PAGINATION_HTML.format(
movie_list_path, 1, 1),
response.rendered_content)
Let's take a look at some interesting points:
* `class MovieListPaginationTestCase(TestCase)`: `TestCase` is the base class for all Django tests. It has a number of conveniences built in, including many helpful assert methods.
* `def setUp(self)`: Like most XUnit testing frameworks, Django's `TestCase` class offers a `setUp()` hook that is run before each test. A `tearDown()` hook is also available if needed. The database is cleaned up between each test, so we don't need to worry about deleting any models we added.
* `def testFirstPage(self):`: A method is a test if its name is prefixed with `test`.
* `movie_list_path = reverse('core:MovieList')`: `reverse()` was mentioned before and is the Python equivalent of the `url` Django template tag. It will resolve the name into a path.
* `request = RequestFactory().get(path=movie_list_path)`: `RequestFactory` is a convenient factory for creating fake HTTP requests. It has methods, named after the HTTP verbs, for creating `GET`, `POST`, and `PUT` requests (for example, `get()` for `GET` requests). In our case, the `path` provided doesn't matter much, but other views may want to inspect the path of the request.
* `self.assertEqual(200, response.status_code)`: This asserts that the two arguments are equal. We check the response's `status_code` for success or failure (`200` being the status code for success—the one you never see displayed while browsing, precisely because it means everything worked).
* `self.assertTrue(response.context_data['is_paginated'])`: This asserts that the argument evaluates to `True`. `response` exposes the context that is used in rendering the template. This makes finding bugs much easier as you can quickly check actual values used in rendering.
* `self.assertInHTML(`: `assertInHTML` is one of the many convenient methods that Django provides as part of its **Batteries Included** philosophy. Given a valid HTML string `needle` and valid HTML string `haystack`, it will assert that `needle` is in `haystack`. The two strings need to be valid HTML because Django will parse them and examine whether one is inside the other. You don't need to worry about spacing or the order of attributes/classes. It's a very convenient assertion when you try to ensure that templates are working right.
To run tests, we can use `manage.py`:
**$ cd django
$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.035s
OK
Destroying test database for alias 'default'...**
Finally, we can be confident that we've got pagination working right.
# Adding Person and model relationships
In this section, we will add relationships between models to our project. People's relationship to movies can create a complex data model. The same person can be the actor, writer, and director (for example, _The Apostle_ (1997) was written and directed by, and starred, Robert Duvall). Even leaving out the crew and production teams and simplifying a bit, the data model will involve a one-to-many relationship using a `ForeignKey` field, a many-to-many relationship using a `ManyToManyField`, and a class that adds extra information about a many-to-many relationship using a `through` class in a `ManyToManyField`.
In this section, we will do the following things step by step:
1. Create a `Person` model
2. Add a `ForeignKey` field from `Movie` to `Person` to track the director
3. Add a `ManyToManyField` from `Movie` to `Person` to track the writers
4. Add a `ManyToManyField` with a `through` class (`Actor`) to track who performed and in what role in a Movie
5. Create the migration
6. Add the director, writer, and actors to the movie details template
7. Add a `PersonDetail` view to the list that indicates what movies a Person has directed, written, and performed in
# Adding a model with relationships
First, we will need a `Person` class to describe and store a person involved in a movie:
class Person(models.Model):
first_name = models.CharField(
max_length=140)
last_name = models.CharField(
max_length=140)
born = models.DateField()
died = models.DateField(null=True,
blank=True)
class Meta:
ordering = (
'last_name', 'first_name')
def __str__(self):
if self.died:
return '{}, {} ({}-{})'.format(
self.last_name,
self.first_name,
self.born,
self.died)
return '{}, {} ({})'.format(
self.last_name,
self.first_name,
self.born)
In `Person`, we also see a new field (`DateField`) and a new parameter for fields (`null`).
`DateField` is used for tracking date-based data, using the appropriate column type on the database (`date` on Postgres) and `datetime.date` in Python. Django also offers a `DateTimeField` to store the date and time.
All fields support the `null` parameter (`False` by default), which indicates whether the column should accept `NULL` SQL values (represented by `None` in Python). We mark `died` as supporting `null` so that we can record people as living or dead. Then, in the `__str__()` method, we print out a different string representation depending on whether the person is alive or dead.
We now have the `Person` model that can have various relationships with `Movies`.
# Different types of relationship fields
Django's ORM has support for fields that map relationships between models, including one-to-many, many-to-many, and many-to-many with an intermediary model.
When two models have a one-to-many relationship, we use a `ForeignKey` field, which will create a column with a **Foreign Key** ( **FK** ) constraint (assuming that there is database support) between the two tables. In the model without the `ForeignKey` field, Django will automatically add a `RelatedManager` object as an instance attribute. The `RelatedManager` class makes it easier to query for objects in a relationship. We'll take a look at examples of this in the following sections.
When two models have a many-to-many relationship, either (but not both) of them can get the `ManyToManyField()`; Django will create a `RelatedManager` on the other side for you. As you may know, relational databases cannot actually have a many-to-many relationship between two tables. Rather, relational databases require a _bridging_ table with foreign keys to each of the related tables. Assuming that we don't want to add any attributes describing the relationship, Django will create and manage this bridging table for us automatically.
Sometimes, we want extra fields to describe a many-to-many relationship (for example, when it started or ended); for that, we can provide a `ManyToManyField` with a `through` model (sometimes called an association class in UML/OO). This model will have a `ForeignKey` to each side of the relationship and any extra fields we want.
We'll create an example of each of these, as we go along adding directors, writers, and actors into our `Movie` model.
# Director – ForeignKey
In our model, we will say that each movie can have one director, but each director can have directed many movies. Let's use the `ForeignKey` field to add a director to our movie:
class Movie(models.Model):
# constants, methods, Meta class and other fields omitted for brevity.
director = models.ForeignKey(
to='Person',
on_delete=models.SET_NULL,
related_name='directed',
null=True,
blank=True)
Let's take a look at our new field line by line:
* `to='Person'`: All of Django's relationship fields can take a string reference as well as reference to the related model. This argument is required.
* `on_delete=models.SET_NULL`: Django needs instruction on what to do when the referenced model (instance/row) is deleted. `SET_NULL` will set the `director` field of all the `Movie` model instances directed by the deleted `Person` to `NULL`. If we wanted to cascade deletes we would use the `models.CASCADE` object.
* `related_name='directed'`: This is an optional argument that indicates the name of the `RelatedManager` instance on the other model (which lets us query all the `Movie` model instances a `Person` directed). If `related_name` were not provided, then `Person` would get an attribute called `movie_set` (following the `<model with FK>_set` pattern). In our case, we will have multiple different relationships between `Movie` and `Person` (writer, director, and actors), so `movie_set` would become ambiguous, and we must provide a `related_name`.
This is also the first time we're adding a field to an existing model. When doing so, we have to _either_ add `null=True` or offer a `default` value. If we do not, then the migration will force us to. This requirement exists because Django has to assume that there are existing rows in the table (even if there aren't) when the migration is run. When a database adds the new column, it needs to know what it should insert into existing rows. In the case of the `director` field, we can accept that it may sometimes be `NULL`.
We have now added a field to `Movie` and a new attribute to `Person` instances called `directed` (of the `RelatedManager` type). `RelatedManager` is a very useful class that is like a model's default Manager, but automatically manages the relationship across the two models.
Let's take a look at `person.directed.create()` and compare it to `Movie.objects.create()`. Both methods will create a new `Movie`, but `person.directed.create()` will make sure that the new `Movie` has `person` as its `director`. `RelatedManager` also offers `add()` and `remove()` methods, so we can add a `Movie` to a `Person`'s `directed` set by calling `person.directed.add(movie)`, or take it out of the relationship again with `person.directed.remove(movie)`.
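A quick sketch of these methods in action (assuming a `person` instance is already in hand, and eliding any other required `Movie` fields):

# create a Movie that already has person set as its director
movie = person.directed.create(
    title='Example', year=2005, runtime=100)
# remove() sets movie.director to NULL (allowed because we set null=True)
person.directed.remove(movie)
# add() sets movie.director back to person and saves the change
person.directed.add(movie)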
# Writers – ManyToManyField
Two models may also have a many-to-many relationship, for example, a person may write many movies and a movie may be written by many people. Next, we'll add a `writers` field to our `Movie` model:
class Movie(models.Model):
# constants, methods, Meta class and other fields omitted for brevity.
writers = models.ManyToManyField(
to='Person',
related_name='writing_credits',
blank=True)
A `ManyToManyField` establishes a many-to-many relationship and acts like a `RelatedManager`, permitting users to query and create models. We again use `related_name` to avoid giving `Person` a `movie_set` attribute, instead giving it a `writing_credits` attribute that will be a `RelatedManager`.
In the case of a `ManyToManyField`, both sides of the relationship have `RelatedManager`s, so `person.writing_credits.add(movie)` has the same effect as writing `movie.writers.add(person)`.
# Role – ManyToManyField with a through class
The last example of a relationship field we'll look at is used when we want to use an intermediary model to describe the relationship between two other models that have a many-to-many relationship. Django lets us do this by creating a model that describes the _join table_ between the two models in a many-to-many relationship.
In our case, we will create a many-to-many relationship between `Movie` and `Person` through `Role`, which will have a `name` attribute:
class Movie(models.Model):
# constants, methods, Meta class and other fields omitted for brevity.
actors = models.ManyToManyField(
to='Person',
through='Role',
related_name='acting_credits',
blank=True)
class Role(models.Model):
movie = models.ForeignKey(Movie, on_delete=models.DO_NOTHING)
person = models.ForeignKey(Person, on_delete=models.DO_NOTHING)
name = models.CharField(max_length=140)
def __str__(self):
return "{} {} {}".format(self.movie_id, self.person_id, self.name)
class Meta:
unique_together = ('movie',
'person',
'name')
This looks like the preceding `ManyToManyField`, except we have both a `to` (referencing `Person` as before) argument and a `through` (referencing `Role`) argument.
The `Role` model looks much like one would design a _join table_; it has a `ForeignKey` to each side of the many-to-many relationship. It also has an extra field, `name`, to describe the role.
`Role` also has a unique constraint. It requires `movie`, `person`, and `name` to be unique together; setting the `unique_together` attribute on the `Meta` class of `Role` will prevent duplicate data.
This use of `ManyToManyField` will create four new `RelatedManager` instances:
* `movie.actors` will be a related manager to `Person`
* `person.acting_credits` will be a related manager to `Movie`
* `movie.role_set` will be a related manager to `Role`
* `person.role_set` will be a related manager to `Role`
We can use any of these managers to query models, but only the `role_set` managers to create models or modify relationships, because of the intermediary class. Django will raise an error if you try to run `movie.actors.add(person)` because there's no way to fill in the value for `Role.name`. However, you can write `movie.role_set.create(person=person, name='Hamlet')`.
# Adding the migration
Now, we can generate a migration for our new models:
**$ python manage.py makemigrations core
Migrations for 'core':
core/migrations/0002_auto_20170926_1650.py
- Create model Person
- Create model Role
- Change Meta options on movie
- Add field movie to role
- Add field person to role
- Add field actors to movie
- Add field director to movie
- Add field writers to movie
- Alter unique_together for role (1 constraint(s))**
Then, we can run our migration so that the changes get applied:
**$ python manage.py migrate core
Operations to perform:
Apply all migrations: core
Running migrations:
Applying core.0002_auto_20170926_1651... OK**
Next, let's make our movie pages link to the people in the movies.
# Creating a PersonView and updating MovieList
Let's add a `PersonDetail` view that our `movie_detail.html` template can link to. To create our view, we'll go through a four-step process:
1. Create a manager to limit the number of database queries
2. Create our view
3. Create our template
4. Create a URL that references our view
# Creating a custom manager – PersonManager
Our `PersonDetail` view will list all the movies in which a `Person` has acting, writing, or directing credits. In our template, we will print out the name of each film for each credit (plus `Role.name` for the acting credits). To avoid sending a flood of queries to the database, we will create new managers for our models that return smarter `QuerySet`s.
In Django, any time we access an attribute across a relationship, Django will query the database to get the related item (looping over `person.role_set.all()` and touching each role's movie, for example, triggers one query per related `Role`). In the case of a `Person` who is in _N_ movies, this can result in _N_ queries to the database. We can avoid this situation with the `prefetch_related()` method (later, we will look at the `select_related()` method). Using the `prefetch_related()` method, Django will fetch all the related data across a relationship in a single additional query. However, if we don't end up using the prefetched data, querying for it will waste time and memory.
Let's create a `PersonManager` with a new method, `all_with_prefetch_movies()`, and make it the default manager for `Person`:
class PersonManager(models.Manager):
def all_with_prefetch_movies(self):
qs = self.get_queryset()
return qs.prefetch_related(
'directed',
'writing_credits',
'role_set__movie')
class Person(models.Model):
# fields omitted for brevity
objects = PersonManager()
class Meta:
ordering = (
'last_name', 'first_name')
def __str__(self):
# body omitted for brevity
Our `PersonManager` will still offer all the same methods as the default because `PersonManager` inherits from `models.Manager`. We also define a new method, which uses `get_queryset()` to get a `QuerySet` and tells it to prefetch the related models. `QuerySet`s are lazy, so no communication with the database happens until the query set is evaluated (for example, by iteration, casting to `bool`, slicing, or use in an `if` statement). `DetailView` won't evaluate the query until it uses `get()` to fetch the model by PK.
The `prefetch_related()` method takes one or more _lookups_, and after the initial query is done, it automatically queries those related models. When you access a model related to one from your `QuerySet`, Django won't have to query it, as it will already have been prefetched.
A _lookup_ is what a Django `QuerySet` takes to express a field or `RelatedManager` in a model. A lookup can even span across relationships by separating the name of the relationship field (or `RelatedManager`) and the related models field with two underscores:
Movie.objects.all().filter(actors__last_name='Freeman', actors__first_name='Morgan')
The preceding call will return a `QuerySet` for all the `Movie` model instances in which Morgan Freeman has been an actor.
In our `PersonManager`, we're telling Django to prefetch all the movies that a `Person` has directed, written, or had a role in, as well as the roles themselves. Using the `all_with_prefetch_movies()` method will result in a constant number of queries, no matter how prolific the `Person` has been.
# Creating a PersonDetail view and template
Now we can write a very thin view in `django/core/views.py`:
class PersonDetail(DetailView):
queryset = Person.objects.all_with_prefetch_movies()
This `DetailView` is different because we're not providing it with a `model` attribute. Instead, we're giving it a `QuerySet` object from our `PersonManager` class. When `DetailView` uses the `QuerySet`'s `filter()` and `get()` methods to retrieve the model instance, it will derive the name of the template from the model instance's class name, just as if we had provided the model class as an attribute on the view.
Now, let's create our template in `django/core/templates/core/person_detail.html`:
{% extends 'base.html' %}
{% block title %}
{{ object.first_name }}
{{ object.last_name }}
{% endblock %}
{% block main %}
<h1>{{ object }}</h1>
<h2>Actor</h2>
<ul >
{% for role in object.role_set.all %}
<li >
<a href="{% url 'core:MovieDetail' role.movie.id %}" >
{{ role.movie }}
</a >:
{{ role.name }}
</li >
{% endfor %}
</ul >
<h2>Writer</h2>
<ul >
{% for movie in object.writing_credits.all %}
<li >
<a href="{% url 'core:MovieDetail' movie.id %}" >
{{ movie }}
</a >
</li >
{% endfor %}
</ul >
<h2>Director</h2>
<ul >
{% for movie in object.directed.all %}
<li >
<a href="{% url 'core:MovieDetail' movie.id %}" >
{{ movie }}
</a >
</li >
{% endfor %}
</ul >
{% endblock %}
Our template doesn't have to do anything special to make use of our prefetching.
Next, we should give the `MovieDetail` view the same benefit that our `PersonDetail` view received.
# Creating MovieManager
Let's start with a `MovieManager` in `django/core/models.py`:
class MovieManager(models.Manager):
def all_with_related_persons(self):
qs = self.get_queryset()
qs = qs.select_related(
'director')
qs = qs.prefetch_related(
'writers', 'actors')
return qs
class Movie(models.Model):
# constants and fields omitted for brevity
objects = MovieManager()
class Meta:
ordering = ('-year', 'title')
def __str__(self):
# method body omitted for brevity
The `MovieManager` introduces another new `QuerySet` method, `select_related()`. The `select_related()` method is much like the `prefetch_related()` method, but it is used when the relation leads to only one related model (for example, with a `ForeignKey` field). The `select_related()` method works by using a SQL `JOIN` to retrieve the two models in one query. Use `prefetch_related()` when the relation _may_ lead to more than one model (for example, either side of a `ManyToManyField` or a `RelatedManager` attribute).
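A short sketch of the difference (assuming a `Movie` row with `pk=1` exists):

# select_related: a single JOIN query; the director comes back with the movie
movie = Movie.objects.select_related('director').get(pk=1)
movie.director  # no extra query

# prefetch_related: two queries total; writers come from the prefetch cache
for movie in Movie.objects.prefetch_related('writers'):
    movie.writers.all()  # no per-movie queries inside the loop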
Now, we can update our `MovieDetail` view to use the query set instead of the model directly:
class MovieDetail(DetailView):
queryset = (
Movie.objects
.all_with_related_persons())
The view renders exactly the same, but it won't have to query the database each time a related `Person` model instance is required, as they were all prefetched.
# A quick review of the section
In this section, we created the `Person` model and established a variety of relationships between the `Movie` and `Person` models. We created a one-to-many relationship with a `ForeignKey` field class, a many-to-many relationship using the `ManyToManyField` class, and used an intermediary (or association) class to add extra information for a many-to-many relationship by providing a `through` model to a `ManyToManyField`. We also created a `PersonDetail` view to show a `Person` model instance and used a custom model manager to control the number of queries Django sends to the database.
# Summary
In this chapter, we created our Django project and started our `core` Django app. We saw how to use Django's Model-View-Template approach to create easy-to-understand code. We created concentrated database logic near the model, pagination in views, and HTML in templates following the Django best practice of _fat models, thin views,_ and _dumb templates_.
Now we're ready to add users who can register and vote on their favorite movies.
# Adding Users to MyMDB
In our preceding chapter, we started our project and created our `core` app and our `core` models (`Movie` and `Person`). In this chapter, we will build on that foundation to do the following things:
* Let users register, log in, and log out
* Let logged in users vote movies up/down
* Score each movie based on the votes
* Use votes to recommend the top 10 movies.
Let's start this chapter with managing users.
# Creating the user app
In this section, you will create a new Django app, called `user`, register it with your project, and make it manage users.
At the beginning of Chapter 1, _Building MyMDB_ , you learned that a Django project is made up of many Django apps (such as our existing `core` app). A Django app should provide well-defined and tightly scoped behavior. Adding user management to our `core` app violates that principle. Making a Django app bear too many responsibilities makes it harder to test and harder to reuse. For example, we'll be reusing the code we write in this `user` Django app throughout this book.
# Creating a new Django app
As we did when we created the `core` app, we will use `manage.py` to generate our `user` app:
**$ cd django
$ python manage.py startapp user
$ cd user
$ ls
__init__.py admin.py apps.py migrations models.py tests.py views.py**
Next, we'll register it with our Django project by editing our `django/config/settings.py` file and updating the `INSTALLED_APPS` property:
INSTALLED_APPS = [
'user', # must come before admin
'core',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
We will need to put `user` before the `admin` app for reasons that we'll discuss in the _Logging in and out_ section. Generally, it's a good idea to put our apps above built-in apps.
Our `user` app is now a part of our project. Usually, we would now move on to creating and defining models for our app. However, thanks to Django's built-in `auth` app, we already have a user model that we can use.
If we want to use a custom user model, then we can register it by updating `settings.py` and setting `AUTH_USER_MODEL` to a string in the `'app_label.ModelName'` format (for example, `AUTH_USER_MODEL = 'myuserapp.MyUserModel'`).
Next, we'll create our user registration view.
# Creating a user registration view
Our `RegisterView` class will be responsible for letting users register for our site. If it receives a `GET` request, it will show them the `UserCreationForm`; if it gets a `POST` request, it will validate the data and create the user. `UserCreationForm` is provided by the `auth` app and offers a way to collect and validate the data required to register a user; it is also capable of saving a new user model if the data is valid.
Let's add our view to `django/user/views.py`:
from django.contrib.auth.forms import (
UserCreationForm,
)
from django.urls import (
reverse_lazy,
)
from django.views.generic import (
CreateView,
)
class RegisterView(CreateView):
template_name = 'user/register.html'
form_class = UserCreationForm
success_url = reverse_lazy(
'core:MovieList')
Let's take a look at our code line by line:
* `class RegisterView(CreateView):`: Our view extends `CreateView`, so it doesn't have to define how to handle `GET` and `POST` requests, as we will discuss in the following steps.
* `template_name = 'user/register.html'`: This is a template that we'll create. Its context will be a little different than what we've seen before; it won't have `object` or `object_list` variables, but will have a `form` variable, which is an instance of the class we set in the `form_class` attribute.
* `form_class = UserCreationForm`: This is the form class that this `CreateView` should use. Simpler models could just say `model = MyModel`, but a user is a little more complex because passwords need to be entered twice and then hashed. We'll talk about how Django stores passwords in Chapter 3, _Posters, Headshots, and Security_.
* `success_url = reverse_lazy('core:MovieList')`: When model creation succeeds, this is the URL to redirect to. It is actually optional; if the model has a `get_absolute_url()` method, then that will be used and we don't need to provide `success_url`.
The behavior of `CreateView` is spread across a number of base classes and mixins that interact through methods that act as hooks we can override to change behavior. Let's take a look at some of the most critical points.
If `CreateView` receives a `GET` request, it will render the template for the form. One of the ancestors of `CreateView` is `FormMixin`, which overrides `get_context_data()` to call `get_form()` and add the form instance to our template's context. The rendered template is returned as the body of the response by `render_to_response()`.
If `CreateView` receives a `POST` request, it will also use `get_form()` to get the form instance. This time, the form will be _bound_ to the `POST` data in the request; a bound form can validate the data it is bound to. `CreateView` will then call `form.is_valid()` and either `form_valid()` or `form_invalid()`, as appropriate. `form_valid()` will call `form.save()` (saving the data to the database) and then return a `302` response redirecting the browser to `success_url`. The `form_invalid()` method will re-render the template with the form (which will now contain error messages for the user to fix and resubmit).
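Roughly sketched (simplified pseudocode, not Django's exact source), the `POST` handling looks like this:

# simplified sketch of CreateView's POST handling, as a method on the view
def post(self, request, *args, **kwargs):
    form = self.get_form()  # a form bound to request.POST
    if form.is_valid():
        return self.form_valid(form)    # saves, then redirects to success_url
    return self.form_invalid(form)      # re-renders the template with errors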
We're also seeing `reverse_lazy()` for the first time. It's a lazy version of `reverse()`. Lazy functions return a value that is not resolved until it is used. We can't use `reverse()` here because view classes are evaluated while the full set of URLConfs is still being built, so if we need to use `reverse()` at the _class_ level of a view, we must use `reverse_lazy()`. The value will not be resolved until the view returns its first response.
Next, let's create the template for our view.
# Creating the RegisterView template
In writing a template with a Django form, we must remember that Django doesn't provide the `<form>` or `<button type='submit'>` tags, just the contents of the form body. This lets us potentially include multiple Django forms in the same `<form>`. With that in mind, let's add our template to `django/user/templates/user/register.html`:
{% extends "base.html" %}
{% block main %}
<h1>Register for MyMDB</h1>
<form method="post">
{{ form.as_p }}
{% csrf_token %}
<button
type="submit"
class="btn btn-primary">
Register
</button>
</form>
{% endblock %}
Like our previous templates, we extend `base.html` and put our code in one of the existing `block`s (in this case, `main`). Let's take a closer look at how forms render.
When a form is rendered, it renders in two parts: first, an optional `<ul class='errorlist'>` tag of general error messages (if any); then, each field is rendered in four basic parts:
* a `<label>` tag with the field name
* a `<ul class="errorlist">` tag with errors from the user's previous form submission; this will only render if there were errors for that field
* an `<input>` (or `<select>`) tag to accept input
* a `<span class="helptext">` tag for the field's help text
`Form` comes with the following three utility methods to render the form:
* `as_table()`: Each field is wrapped in a `<tr>` tag with the label in a `<th>` tag and the widget wrapped in a `<td>` tag. The containing `<table>` tag is not provided.
* `as_ul()`: The entire field (label, help text, and widget) is wrapped in a `<li>` tag. The containing `<ul>` tag is not provided.
* `as_p()`: The entire field (label, help text, and widget) is wrapped in a `<p>` tag.
Containing `<table>` and `<ul>` tags are not provided for the same reason that a `<form>` tag is not provided: to make it easier to output multiple forms together if necessary.
If you want fine-grained control over form rendering, `Form` instances are iterable, yielding each field in turn; a field can also be looked up by name, as in `form["fieldName"]`.
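For example, a hand-rolled rendering might look something like this (a sketch; we won't use it here):

{% for field in form %}
  <p>
    {{ field.label_tag }} {{ field }}
    {{ field.errors }}
    <span class="helptext">{{ field.help_text }}</span>
  </p>
{% endfor %}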
In our example, we use the `as_p()` method because we don't need fine-grained layout control.
This template is also the first time we see the `csrf_token` tag. CSRF is a common vulnerability in web apps that we'll discuss more in Chapter 3, _Posters, Headshots, and Security_. Django automatically checks all `POST` and `PUT` requests for a valid `csrfmiddlewaretoken` field or header. Requests missing it won't even reach the view; they will get a `403 Forbidden` response instead.
Now that we have our template, let's add a `path()` object to our view in our URLConf.
# Adding a path to RegisterView
Our `user` app doesn't have a `urls.py` file, so we'll have to create the `django/user/urls.py` file:
from django.urls import path
from user import views
app_name = 'user'
urlpatterns = [
path('register',
views.RegisterView.as_view(),
name='register'),
]
Next, we'll have to `include()` this URLConf in our root URLConf in `django/config/urls.py`:
from django.urls import path, include
from django.contrib import admin
import core.urls
import user.urls
urlpatterns = [
path('admin/', admin.site.urls),
path('user/', include(
user.urls, namespace='user')),
path('', include(
core.urls, namespace='core')),
]
Since the URLConf is only searched until the _first_ matching `path` is found, we always want to put `path`s with no prefix or with the broadest patterns last, so that they don't accidentally block other views.
# Logging in and out
Django's `auth` app provides views for logging in and out. Adding this to our project will be a two-step process:
1. Registering the views in the `user` URLConf
2. Adding templates for the views
# Updating user URLConf
Django's `auth` app provides a lot of views to help make user management and authentication easier, including logging in/out, changing passwords, and resetting forgotten passwords. A full-featured production app should offer all three features to users. In our case, we will restrict ourselves to just logging in and out.
Let's update `django/user/urls.py` to use the login and logout views of `auth`:
from django.urls import path
from django.contrib.auth import views as auth_views
from user import views
app_name = 'user'
urlpatterns = [
path('register',
views.RegisterView.as_view(),
name='register'),
path('login/',
auth_views.LoginView.as_view(),
name='login'),
path('logout/',
auth_views.LogoutView.as_view(),
name='logout'),
]
If you're providing login/logout, password change, and password reset, then you can include the URLConf of `auth`, as shown in the following code snippet:
from django.urls import include, path
from django.contrib.auth import urls
app_name = 'user'
urlpatterns = [
path('', include(urls)),
]
Now, let's add the template.
# Creating a LoginView template
First, let's add a template for the login page in `django/user/templates/registration/login.html`:
{% extends "base.html" %}
{% block title %}
Login - {{ block.super }}
{% endblock %}
{% block main %}
<form method="post">
{% csrf_token %}
{{ form.as_p }}
<button
class="btn btn-primary">
Log In
</button>
</form>
{% endblock %}
The preceding code looks very similar to `user/register.html`.
However, what should happen when the user logs in?
# A successful login redirect
In `RegisterView`, we were able to specify where to redirect the user after success because we created the view. The `LoginView` class will follow these steps to decide where to redirect the user:
1. Use the `POST` parameter `next` if it is a valid URL and points at a server hosting this application. `path()` names are not available.
2. Use the `GET` parameter `next` if it is a valid URL and points at a server hosting this application. `path()` names are not available.
3. Use the `LOGIN_REDIRECT_URL` setting, which has a default of `'/accounts/profile/'`. `path()` names _are_ available.
In our case, we want to redirect all users to the movie list, so let's update `django/config/settings.py` to have a `LOGIN_REDIRECT_URL` setting:
LOGIN_REDIRECT_URL = 'core:MovieList'
However, if there were cases where we wanted to redirect users to a specific page, we could use the `next` parameter. For example, if a user tries to perform an action before they're logged in, we can pass the page they were on to `LoginView` as the `next` parameter so that they are redirected back to that page after logging in.
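For instance, a template link could carry the current path along (a hypothetical snippet, not part of our templates):

<a href="{% url 'user:login' %}?next={{ request.path }}">Log in</a>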
Now, when a user logs in, they will be redirected to our movie list view. Next, let's create a template for the logout view.
# Creating a LogoutView template
The `LogoutView` class behaves strangely. If it receives a `GET` request, it will log the user out and then try to render `registration/logged_out.html`. It's unusual for `GET` requests to modify a user's state, so it's worth remembering that this view is a bit different.
There's another wrinkle with the `LogoutView` class. If you don't provide a `registration/logged_out.html` template and you have the `admin` app installed, then Django _may_ use `admin`'s template, because the `admin` app does have one (log out of the `admin` app, and you'll see it).
The way that Django resolves template names into files is a three-step process that stops as soon as a file is found, as follows:
1. Django iterates over the directories in the `DIRS` list in `settings.TEMPLATES`.
2. If `APP_DIRS` is `True`, then it will iterate over the apps listed in `INSTALLED_APPS` until a match is found. If `admin` comes before `user` in the `INSTALLED_APPS` list, then `admin`'s template will match first; if `user` comes first, ours will.
3. Raise a `TemplateDoesNotExist` exception.
This is why we put `user` first in our list of installed apps and added a comment warning future developers not to change the order.
We're now done with our `user` app. Let's review what we've accomplished.
# A quick review of the section
We've created a `user` app to encapsulate user management. In our `user` app, we leveraged a lot of functionalities that Django's `auth` app provides, including `UserCreationForm`, `LoginView`, and `LogoutView` classes. We've also learned about some new generic views that Django provides and used `CreateView` in combination with the `UserCreationForm` class to make the `RegisterView` class.
Now that we have users, let's allow them to vote on our movies.
# Letting users vote on movies
Part of the fun of community sites such as IMDB is being able to vote on the movies we love and hate. In MyMDB, users will be able to vote a movie either up (👍) or down (👎). A movie will have a score, which is the number of 👍 votes minus the number of 👎 votes.
Let's start with the most important part of voting: the `Vote` model.
# Creating the Vote model
In MyMDB, each user can have one vote per movie. The vote can be either positive (👍) or negative (👎).
Let's update our `django/core/models.py` file to have our `Vote` model:
from django.conf import settings

class Vote(models.Model):
UP = 1
DOWN = -1
VALUE_CHOICES = (
(UP, "",),
(DOWN, "",),
)
value = models.SmallIntegerField(
choices=VALUE_CHOICES,
)
user = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE
)
movie = models.ForeignKey(
Movie,
on_delete=models.CASCADE,
)
voted_on = models.DateTimeField(
auto_now=True
)
class Meta:
unique_together = ('user', 'movie')
This model has the following four fields:
* `value`, which must be `1` or `-1`.
* `user` is a `ForeignKey`, which references the `User` model through `settings.AUTH_USER_MODEL`. Django recommends that you never reference `django.contrib.auth.models.User` directly, but instead use either `settings.AUTH_USER_MODEL` (in models) or `django.contrib.auth.get_user_model()` (elsewhere).
* `movie` is a `ForeignKey` referencing a `Movie` model.
* `voted_on` is a `DateTimeField` with `auto_now` enabled. The `auto_now` argument makes the model update the field to the current date time every time the model is saved.
The `unique_together` attribute of `Meta` creates a unique constraint on the table. A unique constraint will prevent two rows from having the same values for both `user` and `movie`, enforcing our rule of one vote per user per movie.
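A quick sketch of the constraint in action (assuming hypothetical `user` and `movie` instances; the duplicate row is rejected by the database):

from django.db import IntegrityError

Vote.objects.create(user=user, movie=movie, value=Vote.UP)
try:
    # the same user voting on the same movie again violates the constraint
    Vote.objects.create(user=user, movie=movie, value=Vote.DOWN)
except IntegrityError:
    pass  # the database refused the duplicate vote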
Let's create a migration for our model with `manage.py`:
**$ python manage.py makemigrations core
Migrations for 'core':
core/migrations/0003_auto_20171003_1955.py
- Create model Vote
- Alter field rating on movie
- Add field movie to vote
- Add field user to vote
- Alter unique_together for vote (1 constraint(s))**
Then, let's run our migration:
**$ python manage.py migrate core
Operations to perform:
Apply all migrations: core
Running migrations:
Applying core.0003_auto_20171003_1955... OK**
Now that we have our model and table set up, let's create a form to validate votes.
# Creating VoteForm
Django's forms API is very robust and lets us create almost any kind of form we want. If we want to create an arbitrary form, we can create a class that extends `django.forms.Form` and add whatever fields we want to it. However, if we want to build a form that represents a model, Django offers us a shortcut with `django.forms.ModelForm`.
The type of form we want depends on where the form will be placed and how it will be used. In our case, we want a form we can place on the `MovieDetail` page that gives the user just two radio buttons: 👍 and 👎.
Let's take a look at the simplest `VoteForm` possible:
from django import forms
from core.models import Vote
class VoteForm(forms.ModelForm):
class Meta:
model = Vote
fields = (
'value', 'user', 'movie',)
Django will generate a form from the `Vote` model using the `value`, `user`, and `movie` fields. `user` and `movie` will be `ModelChoiceField`s that use a `<select>` dropdown to pick the correct value, and `value` will be a `ChoiceField`, which also uses a `<select>` dropdown widget by default—not quite what we want.
`VoteForm` will require `user` and `movie`. Since we'll use `VoteForm` to save new votes, we can't eliminate those fields. However, letting users vote on behalf of other users would create a vulnerability. Let's customize our form to prevent that:
from django import forms
from django.contrib.auth import get_user_model
from core.models import Vote, Movie
class VoteForm(forms.ModelForm):
user = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=get_user_model().
objects.all(),
disabled=True,
)
movie = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=Movie.objects.all(),
disabled=True
)
value = forms.ChoiceField(
label='Vote',
widget=forms.RadioSelect,
choices=Vote.VALUE_CHOICES,
)
class Meta:
model = Vote
fields = (
'value', 'user', 'movie',)
In the preceding form, we've customized the fields.
Let's take a closer look at the `user` field:
* `user = forms.ModelChoiceField(`: A `ModelChoiceField` accepts another model as the value for this field. The choice of model is validated by providing a `QuerySet` instance of valid options.
* `queryset=get_user_model().objects.all(),`: A `QuerySet` that defines the valid choices for this field. In our case, any user can vote.
* `widget=forms.HiddenInput,`: The `HiddenInput` widget renders as a `<input type='hidden'>` HTML element, meaning that the user won't be distracted by any UI.
* `disabled=True,`: The `disabled` parameter tells the form to ignore any provided data for this field and only use values initially provided in the code. This prevents users from voting on behalf of other users.
The `movie` field is much the same as `user`, but with the `queryset` attribute queries for `Movie` model instances.
The value field is customized in a different way:
* `value = forms.ChoiceField(`: A `ChoiceField` is used to represent a field that can have a single value from a limited set. By default, it's represented by a drop-down list widget.
* `label='Vote',`: The `label` attribute lets us customize the label used for this field. While `value` makes sense in our code, we want users to think of their 👍/👎 as their vote.
* `widget=forms.RadioSelect,`: A dropdown hides the options until a user clicks on it, but our values are calls to action that we want to be always visible. Using the `RadioSelect` widget, Django will render each choice as an `<input type='radio'>` tag with the appropriate `<label>` tag and `name` value, making voting easier.
* `choices=Vote.VALUE_CHOICES,`: A `ChoiceField` must be told the valid choices; conveniently, it uses the same format as a model field's `choices` parameter, so we can reuse the `Vote.VALUE_CHOICES` tuple we used in the model.
Our newly customized form will render with the label `Vote` and two radio buttons.
Now that we have our form, let's add voting to the `MovieDetail` view and create views that know how to process votes.
# Creating voting views
In this section, we will update the `MovieDetail` view to let users cast their votes, and create the views that record those votes in the database. To process the users casting votes, we will create the following two views:
* `CreateVote`, which will be a `CreateView` to be used if a user hasn't voted for a movie yet
* `UpdateVote`, which will be an `UpdateView` to be used if a user has already voted but is changing their vote
Let's start by updating `MovieDetail` to provide a UI for voting on a movie.
# Adding VoteForm to MovieDetail
Our `MovieDetail.get_context_data()` method will be a bit more complex now. It will have to get the user's vote for the movie, instantiate the form, and know which URL to submit the vote to (`core:CreateVote` or `core:UpdateVote`).
The first thing we will need is a way to check whether a user model has a related `Vote` model instance for a given `Movie` model instance. To do this, we will create a `VoteManager` class with a custom method. Our method will have a special behavior—if there is no matching `Vote` model instance, it will return an _unsaved_ blank `Vote` object. This will make it easier to instantiate our `VoteForm` with the proper `movie` and `user` values.
Here's our new `VoteManager`:
class VoteManager(models.Manager):
def get_vote_or_unsaved_blank_vote(self, movie, user):
try:
return Vote.objects.get(
movie=movie,
user=user)
except Vote.DoesNotExist:
return Vote(
movie=movie,
user=user)
class Vote(models.Model):
# constants and field omitted
objects = VoteManager()
class Meta:
unique_together = ('user', 'movie')
`VoteManager` is much like our previous `Manager`s.
One thing we haven't encountered before is instantiating a model using its constructor (for example, `Vote(movie=movie, user=user)`), as opposed to its manager's `create()` method. The constructor creates a new model instance in memory but _not_ in the database. An unsaved model is fully functional in itself (all the fields and methods are generally available), with the exception of anything that relies on relationships. An unsaved model has no `id`, and thus cannot be looked up through a `RelatedManager` or `QuerySet` until it is saved by calling its `save()` method.
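A brief sketch (again assuming hypothetical `movie` and `user` instances):

vote = Vote(movie=movie, user=user, value=Vote.UP)  # exists in memory only
print(vote.id)  # None; no INSERT has happened yet
vote.save()     # writes the row and assigns vote.id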
Now that we have everything that `MovieDetail` needs, let's update it:
class MovieDetail(DetailView):
queryset = (
Movie.objects
.all_with_related_persons())
def get_context_data(self, **kwargs):
ctx = super().get_context_data(**kwargs)
if self.request.user.is_authenticated:
vote = Vote.objects.get_vote_or_unsaved_blank_vote(
movie=self.object,
user=self.request.user
)
if vote.id:
vote_form_url = reverse(
'core:UpdateVote',
kwargs={
'movie_id': vote.movie.id,
'pk': vote.id})
else:
vote_form_url = (
reverse(
'core:CreateVote',
kwargs={
'movie_id': self.object.id}
)
)
vote_form = VoteForm(instance=vote)
ctx['vote_form'] = vote_form
ctx['vote_form_url'] = \
vote_form_url
return ctx
We've introduced two new elements in the preceding code, `self.request` and instantiating forms with instances.
Views have access to the request that they're processing through their `request` attribute. Also, `Request`s have a `user` property that gives us access to the user who made the request. We use this to check whether the user is authenticated or not, since only authenticated users can vote.
`ModelForm`s can be instantiated with an instance of the model they represent. When we instantiate a `ModelForm` with an instance and render it, the fields will have the values of that instance—a nice shortcut for the common task of displaying a model's values in a form.
We will also reference two `path`s that we haven't created yet; we'll do that in a moment. First, let's finish off our `MovieDetail` update by updating the `movie_detail.html` template sidebar block:
{% block sidebar %}
{# rating div omitted #}
<div>
{% if vote_form %}
<form
method="post"
action="{{ vote_form_url }}" >
{% csrf_token %}
{{ vote_form.as_p }}
<button
class="btn btn-primary" >
Vote
</button >
</form >
{% else %}
<p >Log in to vote for this
movie</p >
{% endif %}
</div >
{% endblock %}
In designing this, we again follow the principle that templates should have the least amount of logic possible.
Next, let's add our `CreateVote` view.
# Creating the CreateVote view
The `CreateVote` view will be responsible for validating vote data using `VoteForm` and then creating the correct `Vote` model instance. However, we will not create a template for voting. If there's a problem, we'll just redirect the user to the `MovieDetail` view.
Here's the `CreateVote` view we should have in our `django/core/views.py` file:
from django.contrib.auth.mixins import (
LoginRequiredMixin, )
from django.shortcuts import redirect
from django.urls import reverse
from django.views.generic import (
CreateView, )
from core.forms import VoteForm
class CreateVote(LoginRequiredMixin, CreateView):
form_class = VoteForm
def get_initial(self):
initial = super().get_initial()
initial['user'] = self.request.user.id
initial['movie'] = self.kwargs[
'movie_id']
return initial
def get_success_url(self):
movie_id = self.object.movie.id
return reverse(
'core:MovieDetail',
kwargs={
'pk': movie_id})
def render_to_response(self, context, **response_kwargs):
movie_id = context['object'].id
movie_detail_url = reverse(
'core:MovieDetail',
kwargs={'pk': movie_id})
return redirect(
to=movie_detail_url)
We've introduced four new concepts in the preceding code that are different from those in the `RegisterView` class—`get_initial()`, `render_to_response()`, `redirect()`, and `LoginRequiredMixin`. They are as follows:
* `get_initial()` is used to pre-populate a form with `initial` values before the form gets `data` values from the request. This is important for `VoteForm` because we've disabled `movie` and `user`. `Form` disregards `data` assigned to disabled fields. Even if a user sends in a different `movie` value or `user` value in the form, it will be disregarded by the disabled fields, and our `initial` values will be used instead.
* `render_to_response()` is called by `CreateView` to return a response with the rendered template to the client. In our case, we will not return a response with a template, but an HTTP redirect to `MovieDetail` instead. There is a serious downside to this approach—we lose any errors associated with the form. However, since our user has only two choices of input, there aren't many error messages we could provide anyway.
* `redirect()` is from Django's `django.shortcuts` package. It provides shortcuts for common operations, including creating an HTTP redirect response to a given URL.
* `LoginRequiredMixin` is a mixin that can be added to any `View` and will check whether the request is being made by an authenticated user. If the user is not logged in, they will be redirected to the login page.
Django's default setting for the login page URL is `/accounts/login/` (the `LOGIN_URL` setting), so let's change this by editing our `settings.py` file and adding a new setting:
LOGIN_URL = 'user:login'
We now have a view that will create a `Vote` model instance and redirect the user back to the related `MovieDetail` view on success or failure.
Next, let's add a view to let users update their `Vote` model instances.
# Creating the UpdateVote view
The `UpdateVote` view is much simpler because `UpdateView` (like `DetailView`) takes care of looking up the vote, though we still have to guard against `Vote` tampering.
Let's update our `django/core/views.py` file:
from django.contrib.auth.mixins import (
LoginRequiredMixin, )
from django.core.exceptions import (
PermissionDenied)
from django.shortcuts import redirect
from django.urls import reverse
from django.views.generic import (
UpdateView, )
from core.forms import VoteForm
class UpdateVote(LoginRequiredMixin, UpdateView):
form_class = VoteForm
queryset = Vote.objects.all()
def get_object(self, queryset=None):
vote = super().get_object(
queryset)
user = self.request.user
if vote.user != user:
raise PermissionDenied(
'cannot change another '
'users vote')
return vote
def get_success_url(self):
movie_id = self.object.movie.id
return reverse(
'core:MovieDetail',
kwargs={'pk': movie_id})
def render_to_response(self, context, **response_kwargs):
movie_id = context['object'].id
movie_detail_url = reverse(
'core:MovieDetail',
kwargs={'pk': movie_id})
return redirect(
to=movie_detail_url)
Our `UpdateVote` view checks, in its `get_object()` method, whether the retrieved `Vote` belongs to the logged-in user. We've added this check to prevent vote tampering (our user interface doesn't let users do this by mistake). If the `Vote` wasn't cast by the logged-in user, then `UpdateVote` raises a `PermissionDenied` exception, which Django will catch and turn into a `403 Forbidden` response.
The final step will be to register our new views with the `core` URLConf.
# Adding views to core/urls.py
We've now created two new views, but, as always, they're not accessible to users until they're listed in a URLConf. Let's edit `core/urls.py`:
urlpatterns = [
# previous paths omitted
path('movie/<int:movie_id>/vote',
views.CreateVote.as_view(),
name='CreateVote'),
path('movie/<int:movie_id>/vote/<int:pk>',
views.UpdateVote.as_view(),
name='UpdateVote'),
]
# A quick review of the section
In this section, we saw examples of how to build basic and highly customized forms for accepting and validating user input. We also discussed some of the built-in views that simplify the common tasks of processing forms.
Next, we'll show how to start using our users' votes to rank each movie and provide a top-10 list.
# Calculating Movie score
In this section, we'll use Django's aggregate query API to calculate the score for each movie. Django makes writing database-agnostic aggregate queries easy by building the functionality into its `QuerySet` objects.
Let's start by adding a method to calculate a score to `MovieManager`.
# Using MovieManager to calculate Movie score
Our `MovieManager` class is responsible for building `QuerySet` objects associated with `Movie`. We now need a new method that retrieves movies (ideally, still with their related persons) and marks each movie with a score based on the sum of the votes it received (we can just sum all the `1`s and `-1`s).
Let's take a look at how we can do this using Django's `QuerySet.annotate()` API:
from django.db.models.aggregates import (
Sum
)
class MovieManager(models.Manager):
def all_with_related_persons(self):
qs = self.get_queryset()
qs = qs.select_related(
'director')
qs = qs.prefetch_related(
'writers', 'actors')
return qs
def all_with_related_persons_and_score(self):
qs = self.all_with_related_persons()
qs = qs.annotate(score=Sum('vote__value'))
return qs
In `all_with_related_persons_and_score`, we call `all_with_related_persons` and get a `QuerySet` that we can modify further with our `annotate()` call.
`annotate()` turns our regular SQL query into an aggregate query, adding the supplied aggregate operation's result to a new attribute called `score`. Django abstracts the most common SQL aggregate functions into class representations, including `Sum`, `Count`, and `Avg` (and many more).
The new `score` attribute is available on any instance we `get()` out of the `QuerySet` as well as in any methods we want to call on our new `QuerySet` (for example, `qs.filter(score__gt=5)` would return a `QuerySet` that has movies with a `score` attribute greater than 5).
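For instance (a sketch of what a later top-10 query might look like, not code we add to the project yet), the annotation composes with further `QuerySet` methods:

qs = Movie.objects.all_with_related_persons_and_score()
top10 = qs.filter(score__isnull=False).order_by('-score')[:10]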
Our new method still returns a `QuerySet` that is lazy, which means that our next step is to update `MovieDetail` and its template.
# Updating MovieDetail and template
Now that we can query movies with their scores, let's change the `QuerySet` `MovieDetail` uses:
class MovieDetail(DetailView):
queryset = Movie.objects.all_with_related_persons_and_score()
def get_context_data(self, **kwargs):
# body omitted for brevity
Now, when `MovieDetail` uses `get()` on its query set, the `Movie` will have a score attribute. Let's use it in our `movie_detail.html` template:
{% block sidebar %}
{# movie rating div omitted #}
<div >
<h2 >
Score: {{ object.score|default_if_none:"TBD" }}
</h2 >
</div>
{# voting form div omitted #}
{% endblock %}
We can reference the `score` attribute safely because of the `QuerySet` that `MovieDetail` uses. However, we don't have a guarantee that the score will not be `None` (for example, if the `Movie` has no votes). To guard against a blank score, we use the `default_if_none` filter to provide a value to print out.
We now have a `MovieManager` method that can calculate the score for all movies, but when you use it in `MovieDetail`, it means that it will only do so for the `Movie` being displayed.
# Summary
In this chapter, we added users to our system, letting them register, log in (and out), and vote on our movies. We learned how to use aggregate queries to efficiently calculate the results of these votes in the database.
Next, we will let users upload pictures associated with our `Movie` and `People` models and discuss security considerations.
# Posters, Headshots, and Security
Movies are a visual medium, so a database of movies should, at the very least, have images. Letting users upload files can have big security implications; so, in this chapter, we'll discuss both topics together.
In this chapter, we will do the following things:
* Add a file upload functionality that lets users upload images for each movie
* Examine the **Open Web Application Security Project** ( **OWASP** ) top 10 list of risks
We'll examine the security implications of the file upload as we go. Also, we'll take a look at where Django can help us and where we have to make careful design decisions.
Let's start by adding file upload to MyMDB.
# Uploading files to our app
In this section, we will create a model that will represent and manage the files that our users upload to our site; then, we'll build a form and view to validate and process those uploads.
# Configuring file upload settings
Before we begin implementing file upload, we will need to understand that file upload depends on a number of settings that must be different in production and development. These settings affect how files are stored and served.
Django has two sets of settings for files: `STATIC_*` and `MEDIA_*`. **Static files** are files that are part of our project, developed by us (for example, CSS and JavaScript). **Media files** are files that users upload to our system. Media files should not be trusted and certainly _never_ executed.
We will need to set two new settings in our `django/conf/settings.py`:
MEDIA_URL = '/uploaded/'
MEDIA_ROOT = os.path.join(BASE_DIR, '../media_root')
`MEDIA_URL` is the URL that will serve the uploaded files. In development, the value doesn't matter very much, as long as it doesn't conflict with the URL of one of our views. In production, uploaded files should be served from a different domain (not a subdomain) than the one that serves our app. A user's browser that gets tricked into executing a file it requested from the same domain (or a subdomain) as our app will trust that file with the cookies (including the session ID) for our user. This default policy of all browsers is called the **Same Origin Policy**. We'll discuss this again in Chapter 5, _Deploying with Docker_.
`MEDIA_ROOT` is the path to the directory where Django should save uploaded files. We want to make sure that this directory is not under our code directory so that it won't be accidentally checked in to version control or accidentally granted any generous permissions (for example, execution permission) that we grant our code base.
There are other settings we will want to configure in production, such as limiting the request body, but those will be done as part of deployment in Chapter 5, _Deploying with Docker_.
Next, let's create that `media_root` directory:
**$ mkdir media_root
$ ls
django media_root requirements.dev.txt**
Great! Next, let's create our `MovieImage` model.
# Creating the MovieImage model
Our `MovieImage` model will use a new field called `ImageField` to save the file and to _attempt_ to validate that a file is an image. Although `ImageField` does try to validate the field, it is not enough to stop a malicious user who crafts an intentionally malicious file (but will help a user who accidentally clicked on a `.zip` instead of a `.png`). Django uses the `Pillow` library to do this validation; so, let's add `Pillow` to our requirements file `requirements.dev.txt`:
**Pillow<4.4.0**
Then, install our dependencies with `pip`:
**$ pip install -r requirements.dev.txt**
Now, we can create our model:
from uuid import uuid4
from django.conf import settings
from django.db import models
def movie_directory_path_with_uuid(
instance, filename):
return '{}/{}'.format(
instance.movie_id, uuid4())
class MovieImage(models.Model):
image = models.ImageField(
upload_to=movie_directory_path_with_uuid)
uploaded = models.DateTimeField(
auto_now_add=True)
movie = models.ForeignKey(
'Movie', on_delete=models.CASCADE)
user = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE)
`ImageField` is a specialized version of `FileField` that uses `Pillow` to confirm that a file is an image. `ImageField` and `FileField` work with Django's file storage API, which provides a way to store and retrieve files, as well as read and write them. By default, Django ships with `FileSystemStorage`, which implements the storage API to store data on the local filesystem. This is sufficient for development, but we'll look at alternatives in Chapter 5, _Deploying with Docker_.
We used the `upload_to` parameter of `ImageField` to specify a function to generate the uploaded file's name. We don't want users to be able to specify the name of files in our system, as they may choose names that abuse our users' trust and make us look bad. We use a function that will store all the images for a given movie in the same directory and use `uuid4` to generate a universally unique name for each file (this also avoids name collisions and dealing with files overwriting each other).
We also record who uploaded the file so that if we find a bad file, we have a clue for how to find other bad files.
Let's now make a migration and apply it:
**$ python manage.py makemigrations core
Migrations for 'core':
core/migrations/0004_movieimage.py
- Create model MovieImage
$ python manage.py migrate core
Operations to perform:
Apply all migrations: core
Running migrations:
Applying core.0004_movieimage... OK**
Next, let's build a form for our `MovieImage` model and use it in our `MovieDetail` view.
# Creating and using the MovieImageForm
Our form will be much like our `VoteForm` in that it will hide and disable the `movie` and `user` fields that are necessary for our model but dangerous to trust from the client. Let's add it to `django/core/forms.py`:
from django import forms
from django.contrib.auth import (
get_user_model)
from core.models import (Movie,
MovieImage)
class MovieImageForm(forms.ModelForm):
movie = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=Movie.objects.all(),
disabled=True
)
user = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=get_user_model().objects.all(),
disabled=True,
)
class Meta:
model = MovieImage
fields = ('image', 'user', 'movie')
We don't override the `image` field with a custom field or widget because the `ModelForm` class will automatically provide the correct `<input type="file">`.
Now, we can use it in the `MovieDetail` view:
from django.views.generic import DetailView
from core.forms import (VoteForm,
MovieImageForm,)
from core.models import Movie
class MovieDetail(DetailView):
queryset = Movie.objects.all_with_related_persons_and_score()
def get_context_data(self, **kwargs):
ctx = super().get_context_data(**kwargs)
ctx['image_form'] = self.movie_image_form()
if self.request.user.is_authenticated:
# omitting VoteForm code.
return ctx
def movie_image_form(self):
if self.request.user.is_authenticated:
return MovieImageForm()
return None
This time, our code is simpler because users can _only_ upload new images; no other operations are supported, so we can always provide an empty form. However, with this approach, we still don't show error messages. Losing error messages should not be viewed as a best practice.
Next, we'll update our template to use our new form and uploaded images.
# Updating movie_detail.html to show and upload images
We'll have to make two updates to our `movie_detail.html` template. First, we will need to update our `main` template `block` to have a list of images. Second, we'll have to update our `sidebar` template `block` to contain our upload form.
Let's update our `main` block first:
{% block main %}
<div class="col" >
<h1 >{{ object }}</h1 >
<p class="lead" >
{{ object.plot }}
</p >
</div >
<ul class="movie-image list-inline" >
{% for i in object.movieimage_set.all %}
<li class="list-inline-item" >
<img src="{{ i.image.url }}" >
</li >
{% endfor %}
</ul >
<p >Directed
by {{ object.director }}</p >
{# writers and actors html omitted #}
{% endblock %}
We used the `image` field's `url` property in the preceding code, which returns the `MEDIA_URL` setting joined with the calculated filename so that our `img` tag correctly displays the image.
In the `sidebar` `block`, we'll add our form to upload a new image:
{% block sidebar %}
{# rating div omitted #}
{% if image_form %}
<div >
<h2 >Upload New Image</h2 >
<form method="post"
enctype="multipart/form-data"
action="{% url 'core:MovieImageUpload' movie_id=object.id %}" >
{% csrf_token %}
{{ image_form.as_p }}
<p >
<button
class="btn btn-primary" >
Upload
</button >
</p >
</form >
</div >
{% endif %}
{# score and voting divs omitted #}
{% endblock %}
This is very similar to our preceding form. However, we _must_ remember to include the `enctype` property in our `form` tag for the uploaded file to be attached to the request properly.
Now that we're done with our template, we can create our `MovieImageUpload` view to save our uploaded files.
# Writing the MovieImageUpload view
Our penultimate step will be to add a view to process the uploaded file to `django/core/views.py`:
from django.contrib.auth.mixins import (
LoginRequiredMixin)
from django.shortcuts import redirect
from django.urls import reverse
from django.views.generic import CreateView
from core.forms import MovieImageForm
class MovieImageUpload(LoginRequiredMixin, CreateView):
form_class = MovieImageForm
def get_initial(self):
initial = super().get_initial()
initial['user'] = self.request.user.id
initial['movie'] = self.kwargs['movie_id']
return initial
def render_to_response(self, context, **response_kwargs):
movie_id = self.kwargs['movie_id']
movie_detail_url = reverse(
'core:MovieDetail',
kwargs={'pk': movie_id})
return redirect(
to=movie_detail_url)
def get_success_url(self):
movie_id = self.kwargs['movie_id']
movie_detail_url = reverse(
'core:MovieDetail',
kwargs={'pk': movie_id})
return movie_detail_url
Our view once again delegates all the work of validating and saving the model to `CreateView` and our form. We retrieve the `user.id` attribute from the request's `user` property (certain that the user is logged in because of the `LoginRequiredMixin` class) and the movie ID from the URL, then pass them to the form as initial arguments since the `user` and `movie` fields of `MovieImageForm` are disabled (so they ignore the values from the request body). The work of saving and renaming the file is all done by Django's `ImageField`.
Finally, we can update our project to route requests to our `MovieImageUpload` view and serve our uploaded files.
# Routing requests to views and files
In this section, we'll update the `core` URLConf to route requests to our new `MovieImageUpload` view and look at how we can serve our uploaded images in development. We'll take a look at how to serve uploaded images in production in Chapter 5, _Deploying with Docker_.
To route requests to our `MovieImageUpload` view, we'll update `django/core/urls.py`:
from django.urls import path
from . import views
app_name = 'core'
urlpatterns = [
# omitted existing paths
path('movie/<int:movie_id>/image/upload',
views.MovieImageUpload.as_view(),
name='MovieImageUpload'),
# omitted existing paths
]
We add our `path()` function as usual, remembering that the view expects a URL parameter called `movie_id`.
Now, Django will know how to route to our view, but it doesn't know how to serve the uploaded files.
To serve the uploaded files in development, we'll update `django/config/urls.py`:
from django.conf import settings
from django.conf.urls.static import (
static, )
from django.contrib import admin
from django.urls import path, include
import core.urls
import user.urls
MEDIA_FILE_PATHS = static(
settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)
urlpatterns = [
path('admin/', admin.site.urls),
path('user/', include(
user.urls, namespace='user')),
path('', include(
core.urls, namespace='core')),
] + MEDIA_FILE_PATHS
Django offers the `static()` function, which will return a list with a single `path` object that will route any request beginning with the string `MEDIA_URL` to a file inside `document_root`. It will give us a way of serving our uploaded image files in development. This feature is not appropriate for production, and `static()` will return an empty list if `settings.DEBUG` is `False`.
Now that we've seen much of Django core functionality, let's discuss how it relates to the **Open Web Application Security Project** ( **OWASP** ) list of the top 10 most critical security risks (OWASP Top 10).
# OWASP Top 10
The OWASP is a not-for-profit charitable organization focused on making _security visible_ by providing impartial, practical security advice for web applications. All of OWASP's materials are free and open source. Since 2010, OWASP has solicited data from information security professionals and used it to develop a list of the top 10 most critical security risks in web application security (the OWASP Top 10). Although this list does not claim to enumerate all problems (it's just the top 10), it is based on what security professionals are seeing out in the wild while doing penetration tests and code audits on real code, either in production or development, at companies around the world.
Django is developed to minimize and avoid these risks as much as possible and, where possible, to give developers the tools to minimize the risks themselves.
Let's enumerate the OWASP Top 10 from 2013 (the latest version at the time of writing, the 2017 RC1 having been rejected) and take a look at how Django helps us mitigate each risk.
# A1 injection
This has been the number one issue since the creation of the OWASP Top 10. **Injection** means users being able to inject code that is executed by our system or a system we use. For example, SQL Injection vulnerabilities let an exploiter execute arbitrary SQL code in our database, which can lead to them circumventing almost all the controls and security measures we have (for example, letting them authenticate as an administrative user; SQL Injection exploits may lead to shell access). The best solution for this, particularly for SQL Injection, is to use parametrized queries.
Django protects us from SQL Injection by providing us with the `QuerySet` class. `QuerySet` ensures that all queries it sends are parameterized so that the database is able to distinguish between our SQL code and the values in the queries. Using parametrized queries will prevent SQL Injection.
However, Django does permit raw SQL queries using `QuerySet.raw()` and `QuerySet.extra()`. Both these methods support parameterized queries, but it is up to the developer to ensure that they **never** put values from a user into a SQL query using string formatting (for example, `str.format`) but **always** use parameters.
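As a sketch of the difference (`core_movie` is the default table name Django generates for a `Movie` model in the `core` app, and `user_supplied_title` stands in for untrusted input):
from core.models import Movie
user_supplied_title = 'untrusted input'
# Safe: the value travels as a query parameter
safe_qs = Movie.objects.raw(
    'SELECT * FROM core_movie WHERE title = %s',
    [user_supplied_title])
# NEVER do this: string formatting splices the value into the SQL itself
# unsafe_qs = Movie.objects.raw(
#     "SELECT * FROM core_movie WHERE title = '%s'" % user_supplied_title)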
# A2 Broken Authentication and Session Management
**Broken Authentication** and **Session Management** refer to the risk of an attacker being able to either authenticate as another user or take over another user's session.
Django protects us here in a few ways, as follows:
* Django's `auth` app always hashes and salts passwords so that even if the database is compromised, user passwords cannot be reasonably cracked.
* Django supports multiple _slow_ hashing algorithms (for example, Argon2 and Bcrypt) that make brute-force attacks impractical. These algorithms are not provided out of the box (Django uses `PBKDF2` by default) because they rely on third-party libraries, but they can be configured using the `PASSWORD_HASHERS` setting (see the sketch after this list).
* The Django session ID is never made available in the URL by default, and the session ID changes after login.
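For example, a settings sketch that prefers Argon2 (this assumes the third-party `argon2-cffi` package is installed; the fallback hashers keep existing passwords working):
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.Argon2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
]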
However, Django's cryptographic functionality is always seeded with the `settings.SECRET_KEY` string. Checking the production value of `SECRET_KEY` into version control should be considered a security problem. The value should never be shared in plain text, as we'll discuss in Chapter 5, _Deploying with Docker_.
# A3 Cross Site Scripting
**Cross Site Scripting** ( **XSS** ) is when an attacker is able to get a web app to display HTML or JavaScript created by the attacker rather than the one created by the developer(s). This attack is very powerful because if the attacker can execute arbitrary JavaScript, then they can send requests, which look indistinguishable from genuine requests from the user.
Django protects all variables in templates with HTML encoding by default.
However, Django does provide utilities to mark text as safe, which will result in values not being encoded. These should be used sparingly and with a full appreciation for the dire security consequences if they are abused.
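For instance, a minimal sketch of that escape hatch (only ever do this with HTML we fully control, never with user input):
from django.utils.safestring import mark_safe
# mark_safe tells the template engine not to escape this string
trusted_snippet = mark_safe('<strong>Staff pick</strong>')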
# A4 insecure direct object references
**Insecure direct object references** are when we insecurely expose implementation details in our resource references without protecting the resources from illicit access/exploitation. For example, the paths in the `src` attribute of our movie detail page's `<img>` tags map directly to files in the filesystem. If a user manipulates a URL, they could access images to which they should not have access. Or, using auto-incrementing primary keys that are exposed to the user in a URL can let malicious users iterate through all the items in the database. The impact of this risk is highly dependent on the resources exposed.
Django helps us by not coupling routing paths to views. We can do model lookups based on primary keys, but we are not required to do so and may add extra fields to our models (for example, `UUIDField`) to decouple table primary keys from IDs used in URLs. In our Mail Ape project in Part 3, we'll see how we can use the `UUIDField` class as the primary key of a model.
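As a sketch of the idea (a simplified, hypothetical model; our real `Movie` model doesn't have this field):
import uuid
from django.db import models
class Movie(models.Model):
    # expose this value in URLs instead of the auto-incrementing pk
    public_id = models.UUIDField(
        default=uuid.uuid4, unique=True, editable=False)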
# A5 Security misconfiguration
**Security misconfiguration** refers to the risk incurred when the proper security mechanisms are deployed inappropriately. This risk is at the border of development and operations and requires the two teams to cooperate. For example, if we run our Django app in production with the `DEBUG` setting set to `True`, we would risk exposing far too much information to the public without having any errors in our code base.
Django helps us with sane defaults and technical and topic guides on the Django project website. The Django community is also helpful—they post on mailing lists and online blogs, though online blog posts should be treated skeptically until you validate their claims.
# A6 Sensitive data exposure
**Sensitive data exposure** is the risk that sensitive data may be accessed without the proper authorization. This risk is broader than just an attacker hijacking a user's session, as it includes questions of how backups are stored, how encryption keys are rotated, and, most importantly, which data is actually considered _sensitive_. The answers to these questions are project/business specific.
Django can help reduce risks of inadvertent exposure from attackers using network sniffing by being configured to serve pages only over HTTPS.
However, Django doesn't provide encryption directly nor does it manage key rotation, logs, backups, and the database itself. There are many factors that affect this risk, which are outside of Django's scope.
# A7 Missing function level access control
While A6 referred to data being exposed, missing function level access control refers to functionality being inadequately protected. Consider our `UpdateVote` view—if we had forgotten the `LoginRequiredMixin` class, then anyone could send an HTTP request and change our users' votes.
Django's `auth` app provides a lot of useful features to mitigate these issues, including a permission system (outside the scope of this project) and mixins and utilities that make using these permissions simple (for example, `LoginRequiredMixin` and `PermissionRequiredMixin`).
However, it is up to us to use Django's tools appropriately to the job at hand.
# A8 Cross Site Request Forgery (CSRF)
**CSRF** (pronounced _see surf_ ) is the most technically complex risk in the OWASP Top 10. CSRF relies on the fact that a browser automatically sends all the cookies associated with a domain whenever it requests any resource from that domain's server. A malicious attacker may trick one of our logged in users into viewing a page on a third-party site (for example, `malicious.example.org`) with, for example, an `img` tag whose `src` attribute points to a URL from our site (for example, `mymdb.example.com`). When the user's browser sees that `src`, it will make a `GET` request to that URL and send all the cookies (including the session ID) associated with our site.
The risk is that if our web app receives a `GET` request, it will make a modification that the user didn't intend. The mitigation for this risk is to make sure that any operation that makes a modification (for example, `UpdateVote`) has a unique and unpredictable value (a CSRF token) that only our system knows, which confirms that the user is intentionally using our app to perform this operation.
Django helps us a lot to mitigate this risk. Django provides the `csrf_token` tag to make it easy to add a CSRF token to a form. Django takes care of adding a matching cookie (to validate against the token) and of ensuring that any request with a verb other than `GET`, `HEAD`, `OPTIONS`, or `TRACE` has a valid CSRF token before it is processed. Django further helps us do the right thing by having all its generic editing views (`UpdateView`, `CreateView`, `DeleteView`, and `FormView`) perform modification operations only on `POST` and never on `GET`.
However, Django can't save us from ourselves. If we decide to disable this functionality or write views that have side effects on `GET`, Django can't help us.
# A9 Using components with known vulnerabilities
A chain is only as strong as its weakest link, and, sometimes, projects can have vulnerabilities in the frameworks and libraries they rely on.
The Django project has a security team that accepts confidential reports of security issues and has a security disclosure policy to keep the community aware of issues affecting their projects. Generally, a Django release receives support (including security updates) for 16 months from its first release, but **Long-Term Support** ( **LTS** ) releases receive support for 3 years (the next LTS release will be Django 2.2).
However, Django doesn't automatically update itself and doesn't force us to run the latest version. Each deployment must manage this for themselves.
# A10 Unvalidated redirects and forwards
If our site can be used to redirect/forward a user to a third-party site automatically, then our site is at risk of having its reputation used to trick users into being forwarded to malicious sites.
Django protects us by making sure that the `next` parameter of `LoginView` will only forward users to URLs that are part of our project.
However, Django can't protect us from ourselves. We have to make sure that we never use user-provided, unvalidated data as the basis of an HTTP redirect or forward.
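For example, a minimal sketch (the view is hypothetical) using Django 2.0's `is_safe_url()` helper to validate a user-provided `next` parameter before redirecting:
from django.shortcuts import redirect
from django.utils.http import is_safe_url
def redirect_next(request):
    next_url = request.GET.get('next', '/')
    if not is_safe_url(next_url,
                       allowed_hosts={request.get_host()}):
        next_url = '/'  # fall back to a known-safe URL
    return redirect(next_url)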
# Summary
In this chapter, we've updated our app to let users upload images related to movies and reviewed the OWASP Top 10. We covered how Django protects us and also where we need to protect ourselves.
Next, we'll build a list of the top 10 movies and take a look at how to use caching to avoid scanning our entire database each time.
# Caching in on the Top 10
In this chapter, we'll use the votes that our users have cast to build a list of the top 10 movies in MyMDB. In order to ensure that this popular page remains quick to load, we'll take a look at tools to help us optimize our site. Finally, we'll look at Django's caching API and how to use it to optimize our project.
In this chapter, we will do the following things:
* Create a top 10 movie list using an aggregate query
* Learn about Django instrumentation tools to measure optimization
* Use Django's cache API to cache results of expensive operations
Let's start by making our top 10 movies list page.
# Creating a top 10 movies list
For building our top 10 list, we'll start off by creating a new `MovieManager` method and then use it in a new view and template. We'll also update the top header in our base template to make the list easily accessible from every page.
# Creating MovieManager.top_movies()
Our `MovieManager` class needs to be able to return a `QuerySet` object of the most popular movies as voted by our users. We're using a naive formula for popularity: the sum of up votes minus the sum of down votes (conveniently, just the sum of the `1` and `-1` vote values). Just like in Chapter 2, _Adding Users to MyMDB_ , we will use the `QuerySet.annotate()` method to make an aggregate query to count the votes.
Let's add our new method to `django/core/models.py`:
from django.db.models.aggregates import (
Sum
)
class MovieManager(models.Manager):
# other methods omitted
def top_movies(self, limit=10):
qs = self.get_queryset()
qs = qs.annotate(
vote_sum=Sum('vote__value'))
qs = qs.exclude(
vote_sum=None)
qs = qs.order_by('-vote_sum')
qs = qs[:limit]
return qs
We order our results by the sum of their votes (descending) to get our top movies list. However, we face the problem that some movies won't have a vote and so will have `NULL` as their `vote_sum` value. Unfortunately, `NULL` will be ordered first by Postgres. We'll solve this by adding the constraint that a movie with no votes will, by definition, not be one of the top movies. We use `QuerySet.exclude` (which is the opposite of `QuerySet.filter`) to remove movies that don't have a vote.
This is the first time that we see a `QuerySet` object being sliced. Slicing does not evaluate a `QuerySet` object unless a step is provided (for example, `qs[10:20:2]` would make the `QuerySet` object be evaluated immediately and return rows 10, 12, 14, 16, and 18).
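A quick sketch of the distinction:
qs = Movie.objects.top_movies(limit=10)  # no query has run yet
top_three = qs[:3]         # still lazy; just adds a LIMIT clause
movies = list(top_three)   # the query executes here
stepped = qs[0:10:2]       # a step forces immediate evaluation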
Now that we have a `QuerySet` object with the proper `Movie` model instances, we can use the `QuerySet` object in our view.
# Creating the TopMovies view
Since our `TopMovies` view will need to show a list, we can use Django's `ListView` like we have before. Let's update `django/core/views.py`:
from django.views.generic import ListView
from core.models import Movie
class TopMovies(ListView):
template_name = 'core/top_movies_list.html'
queryset = Movie.objects.top_movies(
limit=10)
Unlike the previous `ListView` classes, we will need to specify a `template_name` attribute. Otherwise, `ListView` would try to use `core/movie_list.html`, which is used by the `MovieList` view.
Next, let's create our template.
# Creating the top_movies_list.html template
Our Top 10 Movies page will not need pagination, so the template is pretty simple. Let's create `django/core/templates/core/top_movies_list.html`:
{% extends "base.html" %}
{% block title %}
Top 10 Movies
{% endblock %}
{% block main %}
<h1 >Top 10 Movies</h1 >
<ol >
{% for movie in object_list %}
<li >
<a href="{% url "core:MovieDetail" pk=movie.id %}" >
{{ movie }}
</a >
</li >
{% endfor %}
</ol >
{% endblock %}
Extending `base.html`, we will redefine two template `block` tags. The new `title` template `block` has our new title. The `main` template `block` lists the movies in the `object_list`, including a link to each movie.
Finally, let's update `django/templates/base.html` to include a link to our Top 10 Movies page:
{# rest of template omitted #}
<div class="mymdb-masthead">
<div class="container">
<nav class="nav">
{# skipping other nav items #}
<a
class="nav-link"
href="{% url 'core:TopMovies' %}"
>
Top 10 Movies
</a>
{# skipping other nav items #}
</nav>
</div>
</div>
{# rest of template omitted #}
Now, let's add a `path()` object to our URLConf so that Django can route requests to our `TopMovies` view.
# Adding a path to TopMovies
As always, we will need to add a `path()` to help Django route requests to our view. Let's update `django/core/urls.py`:
from django.urls import path
from . import views
app_name = 'core'
urlpatterns = [
path('movies',
views.MovieList.as_view(),
name='MovieList'),
path('movies/top',
views.TopMovies.as_view(),
name="TopMovies"),
# other paths omitted
]
With that, we're done. We now have a Top 10 Movies page on MyMDB.
However, looking through all the votes means scanning the largest table in the project. Let's look at ways to optimize our project.
# Optimizing Django projects
There is no single correct answer for how to optimize a Django project because different projects have different constraints. To succeed, it's important to be clear about what you're optimizing for, and to base decisions on hard numbers, not intuition.
It's important to be clear about what we're optimizing because optimization usually involves trade-offs. Some of the constraints you may wish to optimize for are as follows:
* Response time
* Web server memory
* Web server CPU
* Database memory
Once you know what you're optimizing, you will need a way to measure current performance and the optimized code's performance. Optimized code is often more complex than unoptimized code. You should always confirm that the optimization is effective before taking on the burden of the complexity.
Django is just Python, so you can use a Python profiler to measure performance. This is a useful but complicated technique. Discussing the details of Python profiling goes beyond the scope of this book. However, it's important to remember that Python profiling is a useful tool at our disposal.
Let's take a look at some Django-specific ways that you can measure performance.
# Using the Django Debug Toolbar
The Django Debug Toolbar is a third-party package that provides a lot of useful debug information right in the browser. The toolbar is composed of a list of panels. Each panel provides a distinct set of information.
Some of the most useful panels (which are enabled by default) are as follows:
* **Request Panel:** It shows information related to the request, including the view that processed the request, arguments it received (parsed out of the path), cookies, session data, and `GET`/`POST` data in the request.
* **SQL Panel:** It shows how many queries are made, a timeline of their execution, and a button to run `EXPLAIN` on the query. Data-driven web applications are often slowed down by their database queries.
* **Templates Panel:** It shows the templates that were rendered and their context.
* **Logging Panel:** It shows any log messages produced by the view. We'll discuss logging more in the next section.
The profile panel is an advanced panel that is available but not enabled by default. This panel runs a profiler on your view and shows you the results. The panel comes with some caveats, which are explained in the Django Debug Toolbar documentation online (<https://django-debug-toolbar.readthedocs.io/en/stable/panels.html#profiling>).
Django Debug Toolbar is useful in development, but should not be run in production. By default, it will only work if `DEBUG = True` (a setting you must **never** use in production).
# Using Logging
Django uses Python's built-in logging system, which you can configure using `settings.LOGGING`. It's configured using a `DictConfig`, as documented in the Python documentation.
As a refresher, here's how Python's logging system works. The system is composed of _loggers_ , which receive a _message_ and a _log level_ (for example, `DEBUG` or `INFO`) from our code. If the logger is not configured to filter out messages at that log level, it creates a _log record_ that is passed to all of its _handlers_. Each handler checks the record against its own log level, formats the record (using a _formatter_ ), and emits the message. Different handlers emit messages differently: `StreamHandler` writes to a stream (`sys.stderr` by default), `SysLogHandler` writes to SysLog, and `SMTPHandler` sends an email.
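As a sketch, a minimal `settings.LOGGING` configuration along these lines (the `core` logger name matches our app, but the handler choices are illustrative):
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'core': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}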
By logging how long operations take, you can get a meaningful sense of what you need to optimize. Using the correct log levels and handlers, you can measure resource consumption in production.
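For instance, a rough sketch (not from our project's code) of timing an expensive query and logging the result:
import logging
import time
from core.models import Movie
logger = logging.getLogger(__name__)
def timed_top_movies(limit=10):
    start = time.time()
    movies = list(Movie.objects.top_movies(limit=limit))
    logger.info(
        'top_movies(limit=%s) took %.3f seconds',
        limit, time.time() - start)
    return movies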
# Application Performance Management
**Application Performance Management** ( **APM** ) is the name for services that (often) run as part of your application server and trace performed operations. The trace is sent to a reporting server, which combines all the traces, and can give you code line-level insight into the performance of your production servers. This can be helpful for large and complicated deployments, but may be overkill for smaller, simpler web applications.
# A quick review of the section
In this section, we reviewed the importance of knowing what to optimize before you actually start optimizing. We also looked at some tools to help us measure whether our optimization was successful.
Next, we'll take a look at how we can solve some common performance problems with Django's cache API.
# Using Django's cache API
Django provides a caching API out of the box. In `settings.py`, you can configure one or more caches. Caching can be used to store a whole site, a single page's response, a template fragment, or any pickleable object. Django provides a single API that can be configured with a variety of backends.
In this section, we will perform the following functions:
* Look at the different backends for Django's cache API
* Use Django to cache a page
* Use Django to cache a template fragment
* Use Django to cache a `QuerySet`
One thing we won't be looking at is _downstream_ caching, such as **Content Delivery Networks** ( **CDNs** ) or proxy caches. These are not Django specific, and there is a wide variety of options. Generally speaking, these kinds of caches will rely on the same `VARY` headers that Django has already sent.
Next, let's look at configuring the backends for the cache API.
# Examining the trade-offs between Django cache backends
Different backends can be appropriate for different situations. However, the golden rule of caches is that they must be _faster_ than the source they're caching or else you've made your application slower. Deciding which backend is appropriate for which task is best done by instrumenting your project, as discussed in the preceding section. Different backends have different trade-offs.
# Examining Memcached trade-offs
**Memcached** is the most popular cache backend, but it still comes with trade-offs that you need to evaluate. Memcached is an in-memory key-value store for small data that can be shared by several clients (for example, Django processes) using one or more Memcached hosts. However, Memcached is not appropriate for caching large blocks of data (larger than 1 MB, by default). Also, since Memcached is all in-memory, if the process is restarted, then the entire cache is cleared. On the other hand, Memcached has remained popular because it is fast and simple.
Django comes with two Memcached backends, depending on the `Memcached` library that you want to use:
* `django.core.cache.backends.memcached.MemcachedCache`
* `django.core.cache.backends.memcached.PyLibMCCache`
You must also install the appropriate library (`python-memcached` or `pylibmc`, respectively). To specify the address(es) of your Memcached servers, set `LOCATION` to a list of strings in the `address:PORT` format (for example, `['memcached.example.com:11211']`). An example configuration is listed at the end of this section.
Using Memcached in _development_ and _testing_ is unlikely to be very useful, unless you have evidence to the contrary (for example, you need to replicate a complex bug).
Memcached is popular in production environments because it is fast and easy to set up. It avoids duplication of data by letting all your Django processes connect to the same host(s). However, it uses a lot of memory (and degrades quickly and poorly when it runs out of available memory). It's also important to be mindful of the operational costs of running another service.
Here's an example config for using `memcached`:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
'LOCATION': [
'127.0.0.1:11211',
],
}
}
# Examining dummy cache trade-offs
The **dummy cache** (`django.core.cache.backends.dummy.DummyCache`) will check whether a key is valid, but otherwise will perform no operations.
This cache can be useful for _development_ and _testing_ when you want to make sure that you're definitely seeing the results of your code changes, not something cached.
Don't use this cache in _production_ , as it has no effect.
Here's an example config for the dummy cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
}
}
# Examining local memory cache trade-offs
The **local memory cache** (`django.core.cache.backends.locmem.LocMemCache`) uses a Python dictionary as a global in-memory cache. If you want to use multiple separate local memory caches, give each a unique string in `LOCATION`. It's called a local cache because it's local to each process. If you're spinning up multiple processes (as you would in production), then you might cache the same value multiple times as different processes handle requests. This inefficiency may be preferable for its simplicity, as it does not require another service.
This is a useful cache to use in _development_ and _testing_ to confirm that your code is caching correctly.
You may want to use this in _production,_ but keep in mind the potential inefficiency of different processes caching the same data.
The following is an example config for the local memory cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'defaultcache',
},
'otherCache': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'othercache',
}
}
# Examining file-based cache trade-offs
Django's **file-based cache** (`django.core.cache.backends.filebased.FileBasedCache`) uses compressed files in a specified `LOCATION` directory to cache data. Using files may seem strange; aren't caches supposed to be _fast_ and files _slow_? The answer, again, depends on what you're caching. As an example, network requests to an external API may be slower than your local disk. Remember that each server will have a separate disk, so there will be some duplication of data if you're running a cluster.
You probably don't want to use this in _development_ or _testing_ unless you are heavily memory constrained.
You may want to use this in production to cache resources that are particularly large or slow to request. Remember that you should give your server's process write permission to the `LOCATION` directory. Also, make sure that you give your server(s) enough disk space for your cache.
The following is an example config to use the file-based cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': os.path.join(BASE_DIR, '../file_cache'),
}
}
# Examining database cache trade-offs
The **database cache** backend (`django.core.cache.backends.db.DatabaseCache`) uses a database table (named in `LOCATION`) to store the cache. Obviously, this works best if your database is fast. Depending on the scenario, this may be helpful even when caching the results of database queries, if the queries are complex but single-row lookups are fast. There are upsides to this approach: the cache is not ephemeral like a memory cache, and it can easily be shared across processes and servers (as Memcached can).
The database cache table is not managed by a migration but by a `manage.py` command, as follows:
**$ cd django
$ python manage.py createcachetable**
You probably don't want to use this in _development_ or _testing_ unless you want to replicate your production environment locally.
You may want to use this in _production_ if your testing proves that it's appropriate. Remember to consider what the increased database load will do to its performance.
The following is an example config to use the database cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
'LOCATION': 'django_cache_table',
}
}
# Configuring a local memory cache
In our case, we will use a local memory cache with a very low timeout. This will mean that most requests we make while writing our code will skip the cache (old values, if any, will have expired), but if we quickly click on refresh, we'll be able to get confirmation that our cache is working.
Let's update `django/config/settings.py` to use a local memory cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'default-locmemcache',
'TIMEOUT': 5, # 5 seconds
}
}
Although we can have multiple differently configured caches, the default cache is expected to be named `'default'`.
`TIMEOUT` is how long (in seconds) a value should be kept in the cache before it's culled (removed/ignored). If `TIMEOUT` is `None`, then values are considered to never expire.
Now that we have a cache configured, let's cache the `MovieList` page.
# Caching the movie list page
We will proceed on the assumption that the `MovieList` page is very popular and expensive for us to generate. To reduce the cost of serving these requests, we will use Django to cache the entire page.
Django provides the decorator (function) `django.views.decorators.cache.cache_page`, which can be used to cache a single page. It may seem strange that this is a decorator instead of a mixin. When Django was initially launched, it didn't have **Class-Based Views** ( **CBVs** ), only **Function-Based Views** ( **FBVs** ). As Django matured, much of the code switched to using CBVs, but there are still some features implemented as FBV decorators.
There are a few different ways to use function decorators in CBVs. Our approach will be to build our own mixin. Much of the power of CBVs comes from being able to mix new behavior into existing classes. Knowing how to do that is a useful skill.
# Creating our first mixin – CachePageVaryOnCookieMixin
Let's create a new class in `django/core/mixins.py`:
from django.core.cache import caches
from django.views.decorators.cache import (
cache_page)
from django.views.decorators.vary import (
vary_on_cookie)
class CachePageVaryOnCookieMixin:
"""
Mixin caching a single page.
Subclasses can provide these attributes:
`cache_name` - name of cache to use.
`timeout` - cache timeout for this
page. When not provided, the default
cache timeout is used.
"""
cache_name = 'default'
@classmethod
def get_timeout(cls):
if hasattr(cls, 'timeout'):
return cls.timeout
cache = caches[cls.cache_name]
return cache.default_timeout
@classmethod
def as_view(cls, *args, **kwargs):
view = super().as_view(
*args, **kwargs)
view = vary_on_cookie(view)
view = cache_page(
timeout=cls.get_timeout(),
cache=cls.cache_name,
)(view)
return view
Our new mixin overrides the `as_view()` class method that we use in URLConfs and decorates the view with the `vary_on_cookie()` and `cache_page()` decorators. This effectively acts as if we were decorating the `as_view()` method with our function decorator.
Let's look at the `cache_page()` decorator first. `cache_page()` requires a `timeout` argument and optionally takes a `cache` argument. `timeout` is how long (in seconds) before the cached page should expire and must be recached. Our default timeout value is the default for the cache we're using. Classes that subclass `CachePageVaryOnCookieMixin` can provide a new `timeout` attribute just like our `MovieList` class provides a `model` attribute. The `cache` argument expects the string name of the desired cache. Our mixin is set up to use the `default` cache, but by referencing it via a class attribute, that too can be changed by a subclass.
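For example, a hypothetical subclass overriding both attributes:
from django.views.generic import ListView
from core.mixins import CachePageVaryOnCookieMixin
from core.models import Movie
class CachedMovieList(CachePageVaryOnCookieMixin, ListView):
    model = Movie
    timeout = 120           # seconds; overrides the cache's default
    cache_name = 'default'  # any cache configured in settings.CACHES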
When caching a page such as `MovieList`, we must remember that the resulting page is different for different users. In our case, the header of `MovieList` looks different for logged in users (it shows a _log out_ link) and for logged out users (it shows _log in_ and _register_ links). Django, again, does the heavy work for us by providing the `vary_on_cookie()` decorator.
The `vary_on_cookie()` decorator adds a `VARY cookie` header to the response. The `VARY` header is used by caches (both downstream and Django's) to let them know about variants of that resource. `VARY cookie` tells the cache that each different cookie/URL pair is a different resource and should be cached separately. This means that logged in users and logged out users will not see the same page because they will have different cookies.
This has an important impact on our hit ratio (the proportion of times a cache is _hit_ instead of the resource being regenerated). A cache with a low hit ratio will have minimal effect, as most requests will _miss_ the cache and result in a processed request.
In our case, we also use cookies for CSRF protection. While session cookies may lower a hit ratio a bit, depending on the circumstances (look at your users' activity to confirm), a CSRF cookie is practically fatal. The nature of a CSRF cookie is to change a lot so that attackers cannot predict it. If that constantly changing value is sent with many requests, then very few can be cached. Luckily, we can move our CSRF value out of cookies and into the server-side session with a `settings.py` change.
Deciding on the right CSRF strategy for your app can be complex. For example, AJAX applications will want to add CSRF tokens through headers. For most sites, the default Django configuration (using cookies) is fine. If you need to change it, it's worth reviewing Django's CSRF protection documentation (<https://docs.djangoproject.com/en/2.0/ref/csrf/>).
In `django/conf/settings.py`, add the following code:
CSRF_USE_SESSIONS = True
Now, Django won't send the CSRF token in a cookie, but will store it in the user's session (stored on the server).
If users already have CSRF cookies, they will be ignored; however, it will still have a dampening effect on the hit ratio. In production, you may wish to consider adding a bit of code to delete those CSRF cookies.
Now that we have a way of easily mixing in caching behavior, let's use it in our `MovieList` view.
# Using CachePageVaryOnCookieMixin with MovieList
Let's update our view in `django/core/views.py`:
from django.views.generic import ListView
from core.mixins import (
CachePageVaryOnCookieMixin)
class MovieList(CachePageVaryOnCookieMixin, ListView):
model = Movie
paginate_by = 10
def get_context_data(self, **kwargs):
# omitted due to no change
Now when `MovieList` gets a request routed to it, `cache_page` will check whether it has already been cached. If it has been cached, Django will return the cached response without doing any more work. If it hasn't been cached, our regular `MovieList` view will create a new response. The new response will have a `VARY cookie` header added and then get cached.
Next, let's try to cache a part of our Top 10 movie list inside a template.
# Caching a template fragment with {% cache %}
Sometimes, pages load slowly because a part of our template is slow. In this section, we'll take a look at how to solve this problem by caching a fragment of our template. For example, if you are using a tag that takes a long time to resolve (say, because it makes a network request), then it will slow down any page that uses that tag. If you can't optimize the tag itself, it may be sufficient to cache its result in the template.
Let's cache our rendered Top 10 Movies list by editing `django/core/templates/core/top_movies_list.html`:
{% extends "base.html" %}
{% load cache %}
{% block title %}
Top 10 Movies
{% endblock %}
{% block main %}
<h1 >Top 10 Movies</h1 >
{% cache 300 top10 %}
<ol >
{% for movie in object_list %}
<li >
<a href="{% url "core:MovieDetail" pk=movie.id %}" >
{{ movie }}
</a >
</li >
{% endfor %}
</ol >
{% endcache %}
{% endblock %}
This block introduces us to the `{% load %}` tag and the `{% cache %}` tag.
The `{% load %}` tag is used to load a library of tags and filters and make them available for use in a template. A library may provide one or more tags and/or filters. For example, `{% load humanize %}` loads tags and filters to make values look more human. In our case, `{% load cache %}` provides only the `{% cache %}` tag.
`{% cache 300 top10 %}` will cache the body of the tag for the provided number of seconds under the provided key. The second argument must be a hardcoded string (not a variable), but we can provide more arguments if the fragment needs to have variants (for example, `{% cache 300 mykey request.user.id %}` to cache a separate fragment for each user). The tag will use the `default` cache unless the last argument is `using='cachename'`, in which case the named cache will be used instead.
Caching with `{% cache %}` happens at a different level than when using `cache_page` and `vary_on_cookie`. All the code in the view will still be executed. Any slow code in the view will still slow us down. Caching a template fragment solves only one very particular case of a slow fragment in our template code.
Since `QuerySet` objects are lazy, putting our `for` loop inside `{% cache %}` means we've avoided evaluating the `QuerySet` when the fragment is served from the cache. If we want to cache a value to avoid querying it, our code would be much clearer if we did it in the view.
Next, let's look at how to cache an object using Django's cache API.
# Using the cache API with objects
The most granular use of Django's cache API is to store objects compatible with Python's `pickle` serialization module. The `cache.get()`/`cache.set()` methods we'll see here are used internally by the `cache_page()` decorator and the `{% cache %}` tag. In this section, we'll use these methods to cache the `QuerySet` returned by `Movie.objects.top_movies()`.
Conveniently, `QuerySet` objects are pickleable. When a `QuerySet` is pickled, it will immediately be evaluated, and the resulting models will be stored in the built-in cache of the `QuerySet`. When unpickling a `QuerySet`, we can iterate over it without causing new queries. If the `QuerySet` had `select_related` or `prefetch_related`, those queries would execute on pickling and _not_ rerun on unpickling.
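A quick sketch of this behavior:
import pickle
from core.models import Movie
qs = Movie.objects.top_movies()
data = pickle.dumps(qs)    # evaluates the QuerySet and pickles the rows
restored = pickle.loads(data)
for movie in restored:     # iterates the cached rows; no new query
    print(movie, movie.vote_sum)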
Let's remove our `{% cache %}` tag from `top_movies_list.html` and instead update `django/core/views.py`:
import django
from django.core.cache import cache
from django.views.generic import ListView
from core.models import Movie
class TopMovies(ListView):
template_name = 'core/top_movies_list.html'
def get_queryset(self):
limit = 10
key = 'top_movies_%s' % limit
cached_qs = cache.get(key)
if cached_qs:
same_django = cached_qs._django_version == django.get_version()
if same_django:
return cached_qs
qs = Movie.objects.top_movies(
limit=limit)
cache.set(key, qs)
return qs
Our new `TopMovies` view overrides the `get_queryset` method and checks the cache before using `MovieManger.top_movies()`. Pickling `QuerySet` objects does come with one caveat—they are not guaranteed to be compatible across Django versions, so we should check the version used before proceeding.
`TopMovies` also shows a different way of accessing the default cache than the one `CachePageVaryOnCookieMixin` used. Here, we import and use `django.core.cache.cache`, which is a proxy for `django.core.cache.caches['default']`.
It's important to use consistent keys when caching with the low-level API. In a large code base, it's easy to store the same data under different keys, leading to inefficiency. It can be convenient to put the caching code into your manager or into a utility module, as sketched below.
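A sketch of that idea (the method name is hypothetical):
from django.core.cache import cache
from django.db import models
class MovieManager(models.Manager):
    # other methods omitted
    def cached_top_movies(self, limit=10):
        key = 'top_movies_%s' % limit  # one place owns this cache key
        qs = cache.get(key)
        if qs is None:
            qs = self.top_movies(limit=limit)
            cache.set(key, qs)
        return qs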
# Summary
In this chapter, we made a Top 10 Movies view, reviewed tools for instrumenting your Django code, and covered how to use Django's cache API. Django and Django's community provide tools for helping you discover where to optimize your code using profilers, the Django Debug Toolbar, and logging. Django's caching API helps us with a rich API to cache whole pages with `cache_page`, the `{% cache %}` template tag for template fragments, and `cache.set`/`cache.get` for caching any picklable object.
Next, we'll deploy MyMDB with Docker.
# Deploying with Docker
In this chapter, we'll look at how to deploy MyMDB into a production environment using Docker containers hosted on a Linux server in Amazon's **Elastic Compute Cloud** ( **EC2** ). We will also use **Amazon Web Services'** ( **AWS** ) **Simple Storage Service** ( **S3** ) to store files that users upload.
We will do the following things:
* Split up our requirements and settings files to separate development and production settings
* Build a Docker container for MyMDB
* Build a database container
* Use Docker Compose to launch both containers
* Launch MyMDB into a production environment on a Linux server in the cloud
First, let's split up our requirements and settings so that our development and production values are kept separate.
# Organizing configuration for production and development
Until now, we've kept a single requirements file and a single `settings.py` file. This has made development convenient. However, we can't use our development settings in production.
The current best practice is to have a separate file for each environment. Each environment's file then imports a common file with shared values. We'll use this pattern for requirements and settings files.
Let's start by splitting up our requirements files.
# Splitting requirements files
Let's create `requirements.common.txt` at the root of our project:
django<2.1
psycopg2
Pillow<4.4.0
Regardless of the environment that we're in, we always need Django, Postgres drivers, and Pillow (for the `ImageField` class). However, this requirements file is never used directly.
Next, let's list our development requirements in `requirements.dev.txt`:
-r requirements.common.txt
django-debug-toolbar==1.8
The preceding file will install everything from `requirements.common.txt` (thanks to `-r`) and the Django Debug Toolbar.
For our production packages, we'll use `requirements.production.txt`:
-r requirements.common.txt
django-storages==1.6.5
boto3==1.4.7
uwsgi==2.0.15
This will also install the packages from `requirements.common.txt`, plus the `boto3` and `django-storages` packages to help us upload files to S3 easily. The `uwsgi` package will provide the server we'll use to serve Django.
To install packages for production, we can now execute the following command:
**$ pip install -r requirements.production.txt**
Next, let's split up the settings file along similar lines.
# Splitting settings file
Again, we will follow the current Django best practice of splitting our settings file into the following three files: `common_settings.py`, `production_settings.py`, and `dev_settings.py`.
# Creating common_settings.py
We'll create `common_settings.py` by renaming our current `settings.py` file and then making the changes mentioned in this section.
Let's change `DEBUG = False` so that no new settings file can _accidentally_ be in debug mode. Then, let's change the `SECRET_KEY` setting to get its value from an environment variable, by changing its line to be:
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY')
Let's also add a new setting, `STATIC_ROOT`. `STATIC_ROOT` is the directory where Django will collect all the static files from across our installed apps to make it easier to serve them:
STATIC_ROOT = os.path.join(BASE_DIR, 'gathered_static_files')
In the database config, we can remove all the credentials but keep the `ENGINE` value (to make it clear that we intend to use Postgres everywhere):
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
}
}
Finally, let's delete the `CACHES` setting. This will have to be configured differently in each environment.
Next, let's create a development settings file.
# Creating dev_settings.py
Our development settings will be in `django/config/dev_settings.py`. We'll build it incrementally.
First, we will import everything from `common_settings`:
from config.common_settings import *
Then, we'll override the `DEBUG` and `SECRET_KEY` settings:
DEBUG = True
SECRET_KEY = 'some secret'
In development, we want to run in debug mode. We will also feel safe hardcoding a secret key, as we know that it won't be used in production.
Next, let's update the `INSTALLED_APPS` list:
INSTALLED_APPS += [
'debug_toolbar',
]
In development, we can run extra apps (such as the Django Debug Toolbar) by appending a list of development-only apps to the `INSTALLED_APPS` list.
Then, let's update the database configuration:
DATABASES['default'].update({
'NAME': 'mymdb',
'USER': 'mymdb',
'PASSWORD': 'development',
'HOST': 'localhost',
'PORT': '5432',
})
Since our development database is local, we can hardcode the values in our settings to make the file simpler. If your database is not local, avoid checking passwords into version control and use `os.getenv()`, as in production.
Next, let's update the cache configuration:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'default-locmemcache',
'TIMEOUT': 5,
}
}
We'll use a very short timeout in our development cache.
Finally, we need to set file upload directory:
# file uploads
MEDIA_ROOT = os.path.join(BASE_DIR, '../media_root')
In development, we'll store uploaded files on our local filesystem. We specify the directory to upload to using `MEDIA_ROOT`.
The Django Debug Toolbar needs a bit of configuration as well:
# Django Debug Toolbar
INTERNAL_IPS = [
'127.0.0.1',
]
The Django Debug Toolbar will only render at predefined IPs, so we will give it our localhost IP so that we can use it locally.
We can also add more settings that our development-only apps may require.
Next, let's add production settings.
# Creating production_settings.py
Let's create our production settings in `django/config/production_settings.py`.
`production_settings.py` is similar to `dev_settings.py` but often uses `os.getenv()` to get values from environment variables. This helps us keep secrets (for example, Passwords, API tokens, and so on) out of version control and decouples settings from particular servers:
from config.common_settings import *
DEBUG = False
assert SECRET_KEY is not None, (
'Please provide DJANGO_SECRET_KEY '
'environment variable with a value')
ALLOWED_HOSTS += [
os.getenv('DJANGO_ALLOWED_HOSTS'),
]
First, we import the common settings. Out of an abundance of caution, we ensure that the debug mode is off.
Having a `SECRET_KEY` set is vital to our system staying secure. We `assert` to prevent Django from starting up without `SECRET_KEY`. The `common_settings` module should have already set it from an environment variable.
A production website will be accessed from a domain other than `localhost`. We then tell Django what other domains we're serving by appending the `DJANGO_ALLOWED_HOSTS` environment variable to the `ALLOWED_HOSTS` list.
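Note that if `DJANGO_ALLOWED_HOSTS` is unset, this appends `None` to the list. A slightly more defensive variant (a sketch, not the book's code) ignores an unset variable and supports several comma-separated domains:

```python
# Sketch: split DJANGO_ALLOWED_HOSTS on commas and skip empty entries,
# so an unset variable doesn't put None into ALLOWED_HOSTS.
ALLOWED_HOSTS += [
    host.strip()
    for host in os.getenv('DJANGO_ALLOWED_HOSTS', '').split(',')
    if host.strip()
]
```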
Next, we'll update the database configuration:
DATABASES['default'].update({
'NAME': os.getenv('DJANGO_DB_NAME'),
'USER': os.getenv('DJANGO_DB_USER'),
'PASSWORD': os.getenv('DJANGO_DB_PASSWORD'),
'HOST': os.getenv('DJANGO_DB_HOST'),
'PORT': os.getenv('DJANGO_DB_PORT'),
})
We update the database configuration using values from environment variables.
Then, the cache configuration needs to be set.
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'default-locmemcache',
'TIMEOUT': int(os.getenv('DJANGO_CACHE_TIMEOUT')),
}
}
In production, we will accept the trade-offs of a local memory cache. We configure the timeout at runtime using another environment variable.
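Note that `int()` raises a `TypeError` if `DJANGO_CACHE_TIMEOUT` is missing, which at least fails fast. If you'd rather fall back to a default, a minimal variant (a sketch, with an assumed default of 300 seconds) looks like this:

```python
# Sketch: os.getenv's second argument supplies a default when the
# environment variable is unset.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'default-locmemcache',
        'TIMEOUT': int(os.getenv('DJANGO_CACHE_TIMEOUT', '300')),
    }
}
```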
Next, the file upload configuration settings need to be added:
# file uploads
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY_ID')
AWS_STORAGE_BUCKET_NAME = os.getenv('DJANGO_UPLOAD_S3_BUCKET')
In production, we won't store uploaded images on our container's local filesystem. One core concept of Docker is that containers are ephemeral. It should be acceptable to stop and delete a container and replace it with another. If we stored uploaded images locally, we'd go against that philosophy.
Another reason for not storing uploaded files locally is that they should also be served from a different domain (we discussed this in Chapter 3, _Posters, Headshots, and Security_ ). We will use S3 storage since it's cheap and easy.
The `django-storages` app provides file storage backends for many CDNs, including S3. We tell Django to use the S3 backend by changing the `DEFAULT_FILE_STORAGE` setting. The `S3Boto3Storage` backend requires a few more settings to work with AWS, including an AWS Access Key, an AWS Secret Access Key, and the name of the destination bucket. We'll discuss the two Access Keys later, in the AWS section.
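Once these settings are loaded, any `FileField` or `ImageField` save goes through the S3 backend transparently. As a quick sanity check, you can exercise the default storage from a Django shell; this is a sketch assuming the production settings (and valid AWS credentials) are active:

```python
# Run inside `python manage.py shell` with production settings loaded.
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

# save() writes to the configured S3 bucket, not the local filesystem.
path = default_storage.save('test/hello.txt', ContentFile(b'hello'))
print(default_storage.url(path))  # a URL pointing at the S3 object
default_storage.delete(path)      # clean up the test object
```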
Now that our settings are organized, we can create our MyMDB `Dockerfile`.
# Creating the MyMDB Dockerfile
In this section, we will create a Dockerfile for MyMDB. Docker runs containers based on an image. An image is defined by a Dockerfile. Every Dockerfile extends another image as its base (with the reserved `scratch` image terminating this chain).
Docker's philosophy is that each container should have a single concern (purpose). This may mean that it runs a single process, or it may run multiple processes working together. In our case, it will run both uWSGI and Nginx processes to provide MyMDB.
Confusingly, Dockerfile refers to both the expected _filename_ and the _file type_. So `Dockerfile` is a Dockerfile.
Let's create a Dockerfile at the root of our project in a file called `Dockerfile`. Dockerfile uses its own language to define the files/directories in the image, as well as any commands required to run while making the image. A complete guide on writing a Dockerfile is out of the scope of this chapter. Instead, we'll build our `Dockerfile` incrementally, discussing only the most relevant elements.
We'll build our `Dockerfile` by following six steps:
1. Initializing the base image and adding the source code to the image
2. Installing packages
3. Collecting static files
4. Configuring Nginx
5. Configuring uWSGI
6. Cleaning up unnecessary resources
# Starting our Dockerfile
The first part of our `Dockerfile` tells Docker which image to use as the base, adds our code, and creates some common directories:
FROM phusion/baseimage
# add code and directories
RUN mkdir /mymdb
WORKDIR /mymdb
COPY requirements* /mymdb/
COPY django/ /mymdb/django
COPY scripts/ /mymdb/scripts
RUN mkdir /var/log/mymdb/
RUN touch /var/log/mymdb/mymdb.log
Let's look at these instructions in more detail:
* `FROM`: This is required in a Dockerfile. `FROM` tells Docker what image to use as the base image for our image. We will use `phusion/baseimage` because it provides a lot of convenient facilities and uses very little memory. It's a tailored-for-Docker Ubuntu image with a small, easy-to-use init service manager called runit (instead of Ubuntu's upstart).
* `RUN`: This executes a command as part of building the image. `RUN mkdir /mymdb` creates the directory in which we'll store our files.
* `WORKDIR`: This sets the working directory for all our future `RUN` commands.
* `COPY`: This adds a file (or directory) from our filesystem to the image. Source paths are relative to the directory containing our `Dockerfile`. It's best to make the destination path an absolute path.
We will also reference a new directory called `scripts`. Let's create it at the root of our project directory:
**$ mkdir scripts**
As part of configuring and building the new image, we'll create a few small bash scripts that we'll keep in the `scripts` directory.
# Installing packages in Dockerfile
Next, we'll tell our `Dockerfile` to install all the packages we will need:
RUN apt-get -y update
RUN apt-get install -y \
nginx \
postgresql-client \
python3 \
python3-pip
RUN pip3 install virtualenv
RUN virtualenv /mymdb/venv
RUN bash /mymdb/scripts/pip_install.sh /mymdb
We used `RUN` statements to install the Ubuntu packages and create a virtual environment. To install our Python packages into our virtual environment, we'll create a small script in `scripts/pip_install.sh`:
#!/usr/bin/env bash
root=$1
source $root/venv/bin/activate
pip3 install -r $root/requirements.production.txt
The preceding script simply activates the virtual environment and runs `pip3 install` on our production requirements file.
It's often hard to debug long commands in the middle of a Dockerfile. Wrapping commands in scripts can make them easier to debug. If something isn't working, you can connect to a container using the `docker exec -it <container name> bash -l` command and debug the script as normal.
# Collecting static files in Dockerfile
Static files are the CSS, JavaScript, and images that support our website. Static files may not always be created by us. Some static files come from installed Django apps (for example, Django admin). Let's update our `Dockerfile` to collect the static files:
# collect the static files
RUN bash /mymdb/scripts/collect_static.sh /mymdb
Again, we've wrapped the command in a script. Let's add the following script to `scripts/collect_static.sh`:
#!/usr/bin/env bash
root=$1
source $root/venv/bin/activate
export DJANGO_CACHE_TIMEOUT=100
export DJANGO_SECRET_KEY=FAKE_KEY
export DJANGO_SETTINGS_MODULE=config.production_settings
cd $root/django/
python manage.py collectstatic
The preceding script activates the virtual environment we created in the preceding code and sets the required environment variables. Most of these values don't matter in this context as long as the variables are present. However, the `DJANGO_SETTINGS_MODULE` environment variable is very important. The `DJANGO_SETTINGS_MODULE` environment variable is used by Django to find the settings module. If we don't set it and don't have `config/settings.py`, then Django won't start (even `manage.py` commands will fail).
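For reference, the reason exporting the variable works is that the `manage.py` Django generates only falls back to a default settings module when the variable is absent. The generated file (give or take version differences) contains a line like this:

```python
# From Django's generated manage.py: setdefault() leaves
# DJANGO_SETTINGS_MODULE alone if the environment already sets it,
# so the value exported in collect_static.sh wins.
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
```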
# Adding Nginx to Dockerfile
To configure Nginx, we will add a config file and a runit service script:
COPY nginx/mymdb.conf /etc/nginx/sites-available/mymdb.conf
RUN rm /etc/nginx/sites-enabled/*
RUN ln -s /etc/nginx/sites-available/mymdb.conf /etc/nginx/sites-enabled/mymdb.conf
COPY runit/nginx /etc/service/nginx
RUN chmod +x /etc/service/nginx/run
# Configuring Nginx
Let's add an Nginx configuration file to `nginx/mymdb.conf`:
# the upstream component nginx needs
# to connect to
upstream django {
server 127.0.0.1:3031;
}
# configuration of the server
server {
# listen on all IPs on port 80
server_name 0.0.0.0;
listen 80;
charset utf-8;
# max upload size
client_max_body_size 2M;
location /static {
alias /mymdb/django/gathered_static_files;
}
location / {
uwsgi_pass django;
include /etc/nginx/uwsgi_params;
}
}
Nginx will be responsible for the following two things:
* Serving static files (URLs starting with `/static`)
* Passing all other requests to uWSGI
The `upstream` block describes the location of our Django (uWSGI) server. In the `location /` block, nginx is instructed to pass requests on to the upstream server using the uWSGI protocol. The `include /etc/nginx/uwsgi_params` file describes how to map headers so that uWSGI understands them.
`client_max_body_size` is an important setting. It describes the maximum size for file uploads. Leaving this value too big can expose a vulnerability, as attackers can overwhelm the server with huge requests.
# Creating Nginx runit service
In order for `runit` to know how to start Nginx, we will need to provide a `run` script. Our `Dockerfile` expects it to be in `runit/nginx/run`:
#!/usr/bin/env bash
exec /usr/sbin/nginx \
-c /etc/nginx/nginx.conf \
-g "daemon off;"
`runit` doesn't want its services to fork off a separate process, so we run Nginx with `daemon off`. Further, `runit` wants us to use `exec` so that the new Nginx process replaces our script's process.
# Adding uWSGI to the Dockerfile
We're using uWSGI because it often ranks as the fastest WSGI app server. Let's set it up in our `Dockerfile` by adding the following code:
# configure uwsgi
COPY uwsgi/mymdb.ini /etc/uwsgi/apps-enabled/mymdb.ini
RUN mkdir -p /var/log/uwsgi/
RUN touch /var/log/uwsgi/mymdb.log
RUN chown www-data /var/log/uwsgi/mymdb.log
RUN chown www-data /var/log/mymdb/mymdb.log
COPY runit/uwsgi /etc/service/uwsgi
RUN chmod +x /etc/service/uwsgi/run
This instructs Docker to use a `mymdb.ini` file to configure uWSGI, creates log directories, and adds a uWSGI runit service. In order for runit to start the uWSGI service, we give the runit script permission to execute using the `chmod` command.
# Configuring uWSGI to run MyMDB
Let's create the uWSGI configuration in `uwsgi/mymdb.ini`:
[uwsgi]
socket = 127.0.0.1:3031
chdir = /mymdb/django/
virtualenv = /mymdb/venv
wsgi-file = config/wsgi.py
env = DJANGO_SECRET_KEY=$(DJANGO_SECRET_KEY)
env = DJANGO_LOG_LEVEL=$(DJANGO_LOG_LEVEL)
env = DJANGO_ALLOWED_HOSTS=$(DJANGO_ALLOWED_HOSTS)
env = DJANGO_DB_NAME=$(DJANGO_DB_NAME)
env = DJANGO_DB_USER=$(DJANGO_DB_USER)
env = DJANGO_DB_PASSWORD=$(DJANGO_DB_PASSWORD)
env = DJANGO_DB_HOST=$(DJANGO_DB_HOST)
env = DJANGO_DB_PORT=$(DJANGO_DB_PORT)
env = DJANGO_CACHE_TIMEOUT=$(DJANGO_CACHE_TIMEOUT)
env = AWS_ACCESS_KEY_ID=$(AWS_ACCESS_KEY_ID)
env = AWS_SECRET_ACCESS_KEY_ID=$(AWS_SECRET_ACCESS_KEY_ID)
env = DJANGO_UPLOAD_S3_BUCKET=$(DJANGO_UPLOAD_S3_BUCKET)
env = DJANGO_LOG_FILE=$(DJANGO_LOG_FILE)
processes = 4
threads = 4
Let's take a closer look at some of these settings:
* `socket` tells uWSGI to open a socket on `127.0.0.1:3031` using its custom `uwsgi` protocol (confusingly, the protocol and the server have the same name).
* `chdir` changes the process's working directory. All paths need to be relative to this location.
* `virtualenv` tells uWSGI the path to the project's virtual environment.
* Each `env` instruction sets an environment variable for our process. We can access these with `os.getenv()` in our code (for example, `production_settings.py`).
* `$(...)` references environment variables from the uWSGI process's own environment (for example, `$(DJANGO_SECRET_KEY)`).
* `processes` sets how many worker processes we should run.
* `threads` sets how many threads each process should have.
The `processes` and `threads` settings will need to be fine-tuned based on production performance.
# Creating the uWSGI runit service
In order for runit to know how to start uWSGI, we will need to provide a `run` script. Our `Dockerfile` expects it to be in `runit/uwsgi/run`. This script is more complex than what we used for Nginx:
#!/usr/bin/env bash
source /mymdb/venv/bin/activate
export PGPASSWORD="$DJANGO_DB_PASSWORD"
# explicit connection check: succeeds only if the DB is reachable
psql \
-h "$DJANGO_DB_HOST" \
-p "$DJANGO_DB_PORT" \
-U "$DJANGO_DB_USER" \
-d "$DJANGO_DB_NAME" \
-c 'SELECT 1;'
if [[ $? != 0 ]]; then
echo "no db server"
exit 1
fi
pushd /mymdb/django
python manage.py migrate
if [[ $? != 0 ]]; then
echo "can't migrate"
exit 2
fi
popd
exec /sbin/setuser www-data \
uwsgi \
--ini /etc/uwsgi/apps-enabled/mymdb.ini \
>> /var/log/uwsgi/mymdb.log \
2>&1
This script does the following three things:
* Checks whether it can connect to the DB, exiting otherwise
* Runs all the migrations or exits on failure
* Starts uWSGI
runit requires that we use `exec` to start our process so that uWSGI will replace the `run` script's process.
# Finishing our Dockerfile
As the final step, we will clean up and document the port we're using:
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
The `EXPOSE` statement documents which port we're using. Importantly, it does not actually open any ports. We'll have to do that when we run the container.
Next, let's create a container for our database.
# Creating a database container
We will need a database to run Django in production. The PostgreSQL Docker community provides us with a very robust Postgres image that we can extend.
Let's create another container for our database in `docker/psql/Dockerfile`:
FROM postgres:10.1
ADD make_database.sh /docker-entrypoint-initdb.d/make_database.sh
The base image for this `Dockerfile` is Postgres 10.1. The image also provides a convenient facility: it will execute any shell or SQL scripts placed in `/docker-entrypoint-initdb.d` as part of the DB initialization. We'll take advantage of this to create our MyMDB database and user.
Let's create our database initialization script in `docker/psql/make_database.sh`:
#!/usr/bin/env bash
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE DATABASE $DJANGO_DB_NAME;
CREATE USER $DJANGO_DB_USER;
GRANT ALL ON DATABASE $DJANGO_DB_NAME TO "$DJANGO_DB_USER";
ALTER USER $DJANGO_DB_USER PASSWORD '$DJANGO_DB_PASSWORD';
ALTER USER $DJANGO_DB_USER CREATEDB;
EOSQL
We used a shell script in the preceding code so that we can use environment variables to populate our SQL.
Now that we have both our containers ready, let's make sure that we can actually launch them by signing up for and configuring AWS.
# Storing uploaded files on AWS S3
We expect our MyMDB to save files to S3. To accomplish that, we will need to sign up for AWS and then configure our shell to be able to use AWS.
# Signing up for AWS
To sign up, navigate to <https://aws.amazon.com> and follow their instructions. Note that signing up is free.
The resources we will use are all in the AWS free tier at the time of writing this book. Some elements of the free tier are only available to new accounts for the first year. Review your account's eligibility before executing any AWS command.
# Setting up the AWS environment
To interact with the AWS API, we will need the following two tokens—an Access Key and a Secret Access Key. This key pair defines access to an account.
To generate a pair of tokens, go to https://console.aws.amazon.com/iam/home?region=us-west-2#/security_credentials, click on Access Keys, and then click on the create new access keys button. There is no way to retrieve a Secret Access Key if you lose it, so ensure that you save it in a safe place.
The preceding AWS Console link will generate tokens for your root account. This is fine while we're testing things out. In the future, you should create users with limited permissions using the AWS IAM permissions system.
Next, let's install the AWS **command-line interface** ( **CLI** ):
**$ pip install awscli**
Then, we need to configure the AWS command-line tool with our key and region. The `aws` command offers an interactive `configure` subcommand to do this. Let's run it on the command line:
**$ aws configure**
**AWS Access Key ID [None]: <Your ACCESS key>**
**AWS Secret Access Key [None]: <Your secret key>**
**Default region name [None]: us-west-2**
**Default output format [None]: json**
The `aws configure` command stores the values you entered in a `.aws` directory in your home directory.
To confirm that your new account is set up correctly, request a list of EC2 instances (there should be none):
**$ aws ec2 describe-instances
{
"Reservations": []
}**
# Creating the file upload bucket
S3 is organized into buckets. Each bucket must have a unique name (unique across all of AWS). Each bucket will also have a policy, which controls access.
Let's create a bucket for our file uploads by executing the following commands (change `BUCKET_NAME` to your own unique name):
**$ export AWS_ACCESS_KEY=#your value
$ export AWS_SECRET_ACCESS_KEY=#yourvalue
$ aws s3 mb s3://BUCKET_NAME**
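If you prefer to script this step in Python, the same bucket can be created with `boto3` (which is already in our production requirements). A minimal sketch, assuming the same credentials are exported and the `us-west-2` region:

```python
import boto3

# Buckets outside us-east-1 need an explicit LocationConstraint.
s3 = boto3.client('s3', region_name='us-west-2')
s3.create_bucket(
    Bucket='BUCKET_NAME',  # replace with your unique bucket name
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'},
)
```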
To let unauthenticated users access the files in our bucket, we must set a policy. Let's create the policy in `AWS/mymdb-bucket-policy.json`:
{
"Version": "2012-10-17",
"Id": "mymdb-bucket-policy",
"Statement": [
{
"Sid": "allow-file-download-stmt",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
}
]
}
Ensure that you update `BUCKET_NAME` to the name of your bucket.
Now, we can apply the policy on your bucket using the AWS CLI:
**$ aws s3api put-bucket-policy --bucket BUCKET_NAME --policy "$(cat AWS/mymdb-bucket-policy.json)"**
Ensure that you remember your bucket name, AWS access key, and AWS secret access key as we'll use them in the next section.
# Using Docker Compose
We now have all the pieces of our production deployment ready. Docker Compose is how Docker lets multiple containers work together. Docker Compose consists of a command-line tool, `docker-compose`; a configuration file, `docker-compose.yml`; and an environment variable file, `.env`. We will create both of these files at the root of our project directory.
Never check your `.env` file into version control. That's where your secrets live. Don't let them leak.
First, let's list our environment variables in `.env`:
# Django settings
DJANGO_SETTINGS_MODULE=config.production_settings
DJANGO_SECRET_KEY=#put your secret key here
DJANGO_LOG_LEVEL=DEBUG
DJANGO_LOG_FILE=/var/log/mymdb/mymdb.log
DJANGO_ALLOWED_HOSTS=# put your domain here
DJANGO_DB_NAME=mymdb
DJANGO_DB_USER=mymdb
DJANGO_DB_PASSWORD=#put your password here
DJANGO_DB_HOST=db
DJANGO_DB_PORT=5432
DJANGO_CACHE_TIMEOUT=200
AWS_ACCESS_KEY_ID=# put aws key here
AWS_SECRET_ACCESS_KEY_ID=# put your secret key here
DJANGO_UPLOAD_S3_BUCKET=# put BUCKET_NAME here
# Postgres settings
POSTGRES_PASSWORD=# put your postgress admin password here
Many of these values are okay to hardcode, but there are a few values that you need to set for your project:
* `DJANGO_SECRET_KEY`: The Django secret key is used as part of the seed for Django's cryptography
* `DJANGO_DB_PASSWORD`: This is the password for the Django's MyMDB database user
* `AWS_ACCESS_KEY_ID`: Your AWS access key
* `AWS_SECRET_ACCESS_KEY_ID`: Your AWS secret access key
* `DJANGO_UPLOAD_S3_BUCKET`: Your bucket name
* `POSTGRES_PASSWORD`: The password for the Postgres database super user (different from the MyMDB database user)
* `DJANGO_ALLOWED_HOSTS`: The domain we'll be serving from (we'll fill this in once we start an EC2 instance)
Next, we define how our containers work together in `docker-compose.yml`:
version: '3'
services:
db:
build: docker/psql
restart: always
ports:
- "5432:5432"
environment:
- DJANGO_DB_USER
- DJANGO_DB_NAME
- DJANGO_DB_PASSWORD
web:
build: .
restart: always
ports:
- "80:80"
depends_on:
- db
environment:
- DJANGO_SETTINGS_MODULE
- DJANGO_SECRET_KEY
- DJANGO_LOG_LEVEL
- DJANGO_LOG_FILE
- DJANGO_ALLOWED_HOSTS
- DJANGO_DB_NAME
- DJANGO_DB_USER
- DJANGO_DB_PASSWORD
- DJANGO_DB_HOST
- DJANGO_DB_PORT
- DJANGO_CACHE_TIMEOUT
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY_ID
- DJANGO_UPLOAD_S3_BUCKET
This Compose file describes the two services that make up MyMDB (`db` and `web`). Let's review the configuration options we used:
* `build`: Path to a build context. A build context is, generally speaking, a directory with a `Dockerfile`. So, `db` uses the `docker/psql` directory and `web` uses the `.` directory (the project root directory, which has a `Dockerfile`).
* `ports`: A list of port mappings, describing how to route connections from ports on the host to ports on the container. In our case, we're not changing any ports.
* `environment`: Environment variables for each service. The format we're using implies we're getting the values from our `.env` file. However, you could hardcode values using the `MYVAR=123` syntax.
* `restart`: This is the restart policy for the container. `always` indicates that Docker should always try to restart the container if it stops for any reason.
* `depends_on`: This tells Docker to start the `db` container before the `web` container. However, we still can't be sure that Postgres will manage to start before uWSGI, so we need to check the database is up in our runit script.
# Tracing environment variables
Our production configuration relies heavily on environment variables. Let's review the steps each variable must go through before it can be accessed in Django via `os.getenv()`:
1. List the variable in `.env`
2. Include the variable under the `environment` option in `docker-compose.yml`
3. Pass the variable through to Django in the uWSGI ini file with an `env` entry
4. Access the variable with `os.getenv()`
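A variable that is skipped at any one of these steps silently arrives in Django as `None`, so it can be worth failing fast at startup. A hypothetical helper (not part of MyMDB's code) for `production_settings.py` might look like this:

```python
import os

def require_env(name):
    """Return the named environment variable or fail loudly."""
    value = os.getenv(name)
    assert value is not None, f'Missing environment variable: {name}'
    return value

# Hypothetical usage:
# DATABASES['default']['PASSWORD'] = require_env('DJANGO_DB_PASSWORD')
```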
# Running Docker Compose locally
Now that we have configured our Docker containers and Docker Compose, we can run the containers. One of the advantages of Docker Compose is that it can provide the same environment everywhere. This means that we can run Docker Compose locally and get the exact same environment that we'll get in production. There's no need to worry that there's an extra process or a different distribution across environments. Let's run Docker Compose locally.
# Installing Docker
To follow along with the rest of this chapter, you must install Docker on your machine. Docker, Inc. provides Docker Community Edition for free from its website: <https://docker.com>. The Docker Community Edition installer is an easy-to-use wizard on Windows and Mac. Docker, Inc. also offers official packages for most major Linux distributions.
Once you have it installed, you'll be able to follow all of the next steps.
# Using Docker Compose
To start our containers locally, run the following command:
**$ docker-compose** **up -d**
`docker-compose up` builds and then starts our containers. The `-d` option detaches Compose from our shell.
To check whether our containers are running, we can use `docker ps`:
**$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0bd7f7203ea0 mymdb_web "/sbin/my_init" 52 seconds ago Up 51 seconds 0.0.0.0:80->80/tcp, 3031/tcp mymdb_web_1
3b9ecdcf1031 mymdb_db "docker-entrypoint..." 46 hours ago Up 52 seconds 0.0.0.0:5432->5432/tcp mymdb_db_1**
To check the Docker logs, you can use the `docker logs` command to note the output of startup scripts:
**$ docker logs mymdb_web_1**
To access a shell inside the container (so that you can examine files or view application logs), use this `docker exec` command to start bash:
**$ docker exec -it mymdb_web_1 bash -l**
To stop the containers, use the following command:
**$ docker-compose stop**
To stop the containers and _delete_ them, use the following command:
**$ docker-compose down**
When you delete a container, you delete all the data in it. That's not a problem for the Django container as it holds no data. However, if you delete the db container, you _lose the database's data_. Be careful in production.
# Sharing your container via a container registry
Now that we have a working container, we may want to make it more widely accessible. Docker has the concept of a container registry. You can push your container to a container registry to make it available either publicly or to just your team.
The most popular Docker container registry is Docker Hub (<https://hub.docker.com>). You can create an account for free and, at the time of writing this book, each account comes with one free private repository and unlimited public repositories. Most cloud providers also offer Docker repository hosting (though prices vary).
The rest of this section assumes that you have a host configured. We'll use Docker Hub as our example, but all the steps are the same regardless of who hosts your container repository.
To share your container, you'll need to do the following things:
1. Log in to a Docker registry
2. Tag our container
3. Push to a Docker registry
Let's start by logging in to a Docker registry:
**$ docker login -u USERNAME -p PASSWORD docker.io**
The `USERNAME` and `PASSWORD` values need to be the same as you used for your account on Docker Hub. `docker.io` is the domain of Docker Hub's container registry. If you're using a different container registry host, then you need to change the domain.
Now that we're logged in, let's rebuild and tag our container:
**$ docker build . -t USERNAME/REPOSITORY:latest**
Replace `USERNAME` and `REPOSITORY` with your own values. The `:latest` suffix is the tag for the build. We could have many different tags in the same repository (for example, `development`, `stable`, and `1.x`). Tags in Docker are much like tags in version control; they help us find a particular item quickly and easily. `:latest` is the common tag given to the latest build (though it may not be stable).
Finally, let's push our tagged build to our repository:
**$ docker push USERNAME/REPOSITORY:latest**
Docker will show us its progress uploading and then show a SHA256 digest upon success.
When we push a Docker image to a remote repository, we need to be mindful of any private data stored in the image. All the files we created or added in the `Dockerfile` are contained in the pushed image. Just as we don't want to hardcode passwords in code that is stored in a remote repository, we also don't want to store sensitive data (such as passwords) in Docker images that might be stored on remote servers. This is another reason we emphasize storing passwords in environment variables rather than hardcoding them.
Great! Now you can share the repo with other team members to run your Docker container.
Next, let's launch our container.
# Launching containers on a Linux server in the cloud
Now that we have everything working, we can deploy it to the internet. We can use Docker to deploy our containers to any Linux server. Most people who use Docker are using a cloud provider to provide a Linux server host. In our case, we will use AWS.
In the preceding section, when we used `docker-compose`, we were actually using it to send commands to a Docker service running on our machine. Docker Machine provides a way to manage remote servers running Docker. We will use `docker-machine` to start an EC2 instance, which will host our Docker containers.
Starting an EC2 instance can cost money. We'll use an instance that is eligible for the AWS free tier `t2.micro` at the time of writing this book. However, you are responsible for checking the terms of the AWS free tier.
# Starting the Docker EC2 VM
We will launch our EC2 VM (called an EC2 instance) into our account's **Virtual Private Cloud** ( **VPC** ). However, each account has a unique VPC ID. To get your VPC ID, run the following command:
**$ export AWS_ACCESS_KEY=#your value
$ export AWS_SECRET_ACCESS_KEY=#yourvalue
$ export AWS_DEFAULT_REGION=us-west-2
$ aws ec2 describe-vpcs | grep VpcId
"VpcId": "vpc-a1b2c3d4",**
The value used in the preceding code is not a real value.
Now that we know our VPC ID, we can use `docker-machine` to launch an EC2 instance:
**$ docker-machine create \
--driver amazonec2 \
--amazonec2-instance-type t2.micro \
--amazonec2-vpc-id vpc-a1b2c3d4 \
--amazonec2-region us-west-2 \
mymdb-host**
This tells Docker Machine to launch an EC2 `t2.micro` instance in the `us-west-2` region and the provided VPC. Docker Machine takes care of ensuring that a Docker daemon is installed and started on the server. When referencing this EC2 instance in Docker Machine, we refer to it by the name `mymdb-host`.
When the instance is started, we can ask AWS for the public DNS name for our instance:
**$ aws ec2 describe-instances | grep -i publicDnsName**
The preceding command may return multiple copies of the same value even if only one instance is up. Put the result in the `.env` file as `DJANGO_ALLOWED_HOSTS`.
All EC2 instances are protected by a firewall determined by their security group. Docker Machine automatically created a security group for our server when it started our instance. In order for our HTTP requests to make it to our machine, we will need to open port `80` in the `docker-machine` security group, as follows:
**$ aws ec2 authorize-security-group-ingress \
--group-name docker-machine \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0**
Now that everything is set up, we can configure `docker-compose` to talk to our remote server and bring up our containers:
**$ eval $(docker-machine env mymdb-host)
$ docker-compose up -d**
Congratulations! MyMDB is up in a production environment. Check it out by navigating to the address used in `DJANGO_ALLOWED_HOSTS`.
The instructions here are focused on starting an AWS Linux server. However, all the Docker commands have equivalent options for Google Cloud, Azure, and other major cloud providers. There's even a _generic_ option that is made to work with any Linux server, though your mileage may vary depending on the Linux distribution and Docker version.
# Shutting down the Docker EC2 VM
Docker Machine can also be used to stop the VM running Docker, as shown in the following snippet:
**$ export AWS_ACCESS_KEY=#your value
$ export AWS_SECRET_ACCESS_KEY=#yourvalue
$ export AWS_DEFAULT_REGION=us-west-2
$ eval $(docker-machine env mymdb-host)
$ docker-machine stop mymdb-host**
This will stop the EC2 instance and destroy all the containers in it. If you wish to preserve your DB, ensure that you back it up first by running the preceding `eval` command and then opening a shell using `docker exec -it mymdb_db_1 bash -l`.
# Summary
In this chapter, we've launched MyMDB into a production Docker environment on the internet. We've created a Docker container for MyMDB using a Dockerfile. We used Docker Compose to make MyMDB work with a PostgreSQL database (also in a Docker container). Finally, we launched the containers on the AWS cloud using Docker Machine.
Congratulations! You now have MyMDB running.
In the next chapter, we'll start building our own implementation of Stack Overflow.
# Starting Answerly
The second project that we will build is a Stack Overflow clone called Answerly. Users who register for Answerly will be able to ask and answer questions. A question's asker will also be able to accept answers to mark them as useful.
In this chapter, we'll do the following things:
* Create our new Django project—Answerly, a Stack Overflow clone
* Create the models for Answerly (`Question` and `Answer`)
* Let users register
* Create forms, views, and templates to let users interact with our models
* Run our code
The code for this project is available online at <https://github.com/tomaratyn/Answerly>.
This chapter won't go deeply into topics already covered in Chapter 1, _Building MyMDB_ , although it will touch upon many of the same points. Instead, this chapter will focus on going a bit further and introducing new views and third-party libraries.
Let's start our project!
# Creating the Answerly Django project
First, let's make a directory for our project:
**$ mkdir answerly
$ cd answerly**
All our future commands and paths will be relative to this project directory. A Django project is composed of multiple Django apps.
We'll install Django using `pip`, Python's preferred package manager. We will also track the packages that we install in a `requirements.txt` file:
django<2.1
psycopg2<2.8
Now, let's install the packages:
**$ pip install -r requirements.txt**
Next, let's generate the actual Django project using `django-admin`:
**$ django-admin startproject config
$ mv config django**
By default, Django creates a project that will use SQLite, but that's not suitable for production; so, we'll follow the best practice of using the same database in development as in production.
Let's open up `django/config/settings.py` and update it to use our Postgres server. Find the line in `settings.py` that starts with `DATABASES`; to use Postgres, change the `DATABASES` value to the following code:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'answerly',
'USER': 'answerly',
'PASSWORD': 'development',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
Now that we have our project started and configured, we can create and install the two Django apps we'll make as part of this project:
**$ cd django
$ python manage.py startapp user
$ python manage.py startapp qanda**
A Django project is composed of apps. Django apps are where all the functionalities and code live. Models, forms, and templates all belong to Django apps. An app, like every other Python module, should have a clearly defined scope. In our case, we have two apps each with different roles. The `qanda` app will be responsible for the question and answer functionality of our app. The `user` app will be responsible for user management of our app. Each of them will also rely on other apps and Django's core functionality to work effectively.
Now, let's install our apps in our project by updating `django/config/settings.py`:
INSTALLED_APPS = [
'user',
'qanda',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
Now that Django knows about our apps, let's start with the models for `qanda`.
# Creating the Answerly models
Django is particularly helpful for creating data-driven apps. Models, representing the data in the apps, are often the core of these apps. Django encourages this with the best practice of _fat models, thin views, dumb templates_. The advice encourages us to place business logic in our models rather than our views.
Let's start building our `qanda` models with the `Question` model.
# Creating the Question model
We'll create our `Question` model in `django/qanda/models.py`:
from django.conf import settings
from django.db import models
from django.urls.base import reverse
class Question(models.Model):
title = models.CharField(max_length=140)
question = models.TextField()
user = models.ForeignKey(to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE)
created = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.title
def get_absolute_url(self):
return reverse('qanda:question_detail', kwargs={'pk': self.id})
def can_accept_answers(self, user):
return user == self.user
A `Question` model, like all Django models, is derived from `django.db.models.Model`. It has the following four fields that will become columns in a `questions_question` table:
* `title`: A character field that will become a `VARCHAR` column of maximum 140 characters.
* `question`: This is the body of the question. Since we can't predict how long this will be, we use a `TextField`, which will become a `TEXT` column. `TEXT` columns don't have a size limit.
* `user`: This will create a foreign key to the project's configured user model. In our case, we will go with the default `django.contrib.auth.models.User` that comes with Django. However, it's still recommended to not hardcode this when we can avoid it.
* `created`: This will be automatically set to the date and time that the `Question` model was created.
`Question` also implements the following two methods commonly seen on Django models (`__str__` and `get_absolute_url`):
* `__str__()`: This tells Python how to convert our model to a string. This is useful in the admin backend, our own templates, and in debugging.
* `get_absolute_url()`: This is a commonly implemented method that lets the model return the path of a URL to view this model. Not all models need this method. Django's built-in views, such as `CreateView`, will use this method to redirect the user to the view after the model is created.
Finally, in the spirit of _fat models_ , we also have `can_accept_answers()`. The decision of who can accept an `Answer` to a `Question` lies with the `Question`. Currently, only the user who asked the question can accept an answer.
Now that we have `Question`s, we naturally need `Answer`s.
# Creating the Answer model
We'll create the `Answer` model in the `django/qanda/models.py` file as shown in the following code:
from django.conf import settings
from django.db import models
class Question(models.Model):
# skipped
class Answer(models.Model):
answer = models.TextField()
user = models.ForeignKey(to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE)
created = models.DateTimeField(auto_now_add=True)
question = models.ForeignKey(to=Question,
on_delete=models.CASCADE)
accepted = models.BooleanField(default=False)
class Meta:
ordering = ('-created', )
The `Answer` model has five fields and a `Meta` class. Let's take a look at the fields first:
* `answer`: This is an unlimited text field for the user's answer. `answer` will become a `TEXT` column.
* `user`: This will create a foreign key to the user model that our project has been configured to use. The user model will gain a new `RelatedManager` under the name `answer_set`, which will be able to query all the `Answer`s for a user.
* `question`: This will create a foreign key to our `Question` model. `Question` will also gain a new `RelatedManager` under the name `answer_set`, which will be able to query all the `Answer`s to a `Question`.
* `created`: This will be set to the date and time when the `Answer` was created.
* `accepted`: This is a Boolean that will be set to `False` by default. We'll use it to mark accepted answers.
A model's `Meta` class lets us set metadata for our model and table. For `Answer`, we're using the `ordering` option to ensure that all queries will be ordered by `created`, in descending order. In this way, we ensure that the newest answers will be listed first, by default.
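In practice, this means any queryset of answers comes back newest first without an explicit `order_by()` call. For example (assuming `question` is a saved `Question` instance):

```python
# Both querysets return the newest Answer first, thanks to Meta.ordering.
question.answer_set.all()
Answer.objects.filter(question=question)

# An explicit order_by() still overrides the default when needed:
Answer.objects.filter(question=question).order_by('created')
```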
Now that we have `Question` and `Answer` models, we will need to create migrations to create their tables in the database.
# Creating migrations
Django comes with a built-in migration library. This is part of Django's _batteries included_ philosophy. Migrations provide a way to manage the changes we will need to make to our schema. Whenever we make a change to a model, we can use Django to generate a migration, which will contain the instructions on how to create or change the schema to fit the new model's definition. To make the change to our database, we then apply the migration.
Like many operations we perform on our project, we'll use the `manage.py` script Django provides for our project:
**$ python manage.py makemigrations
Migrations for 'qanda':
qanda/migrations/0001_initial.py
- Create model Answer
- Create model Question
- Add field question to answer
- Add field user to answer
$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, qanda, sessions
Running migrations:
Applying qanda.0001_initial... OK**
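The generated `qanda/migrations/0001_initial.py` is ordinary Python. Abridged to a sketch (the real file declares every field of both models), it looks roughly like this:

```python
# Abridged sketch of a generated initial migration; the real file
# lists every field of both Question and Answer.
from django.conf import settings
from django.db import migrations, models

class Migration(migrations.Migration):
    initial = True
    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]
    operations = [
        migrations.CreateModel(
            name='Question',
            fields=[
                ('id', models.AutoField(primary_key=True)),
                ('title', models.CharField(max_length=140)),
            ],
        ),
    ]
```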
Now that we've created the migrations and applied them, let's set up a base template for our project so that our code works well.
# Adding a base template
Before we create our views, let's create a base template. Django's template language allows templates to inherit from each other. A base template is a template that all our other project's templates will extend. This will give our entire project a common look and feel.
Since a project is composed of multiple apps and they will all use the same base template, a base template belongs to the project, not to any particular app. This is a rare exception to the rule that everything lives in an app.
To add a project-wide templates directory, update `django/config/settings.py`. Check the `TEMPLATES` setting and update it to this:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates')
],
'APP_DIRS': True,
'OPTIONS': {
# skipping rest of options.
},
},
]
In particular, the `DIRS` option for the `django.template.backends.django.DjangoTemplates` setting sets a project-wide template directory that will be searched. `'APP_DIRS': True` means that each installed app's `templates` directory will also be searched. In order for Django to search `django/templates`, we must add `os.path.join(BASE_DIR, 'templates')` to the `DIRS` list.
# Creating base.html
Django comes with its own template language, eponymously called the Django Template Language. Django templates are text files, which are rendered using a dictionary (called a context) to look up values. A template can also include tags (which use the `{% tag argument %}` syntax). A template can print values from its context using the `{{ variableName }}` syntax. Values can be sent to filters to tweak them before being displayed (for example, `{{ user.username | upper }}` will print the user's username in all uppercase characters). Finally, the `{# ignored #}` syntax comments out text (the `{% comment %}` tag does the same for multiple lines).
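These pieces are easy to try in isolation. A brief illustration (run it inside `python manage.py shell`, since templates need Django's settings configured):

```python
from django.template import Context, Template

# Render a template string against a context dictionary.
template = Template('Hello, {{ user.username | upper }}!')
context = Context({'user': {'username': 'alice'}})
print(template.render(context))  # Hello, ALICE!
```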
We'll create our base template in `django/templates/base.html`:
{% load static %}
<!DOCTYPE html>
<html lang="en" >
<head >
<meta charset="UTF-8" >
<title >{% block title %}Answerly{% endblock %}</title >
<link
href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css"
rel="stylesheet">
<link
href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css"
rel="stylesheet">
<link rel="stylesheet" href="{% static "base.css" %}" >
</head >
<body >
<nav class="navbar navbar-expand-lg bg-light" >
<div class="container" >
<a class="navbar-brand" href="/" >Answerly</a >
<ul class="navbar-nav" >
</ul >
</div >
</nav >
<div class="container" >
{% block body %}{% endblock %}
</div >
</body >
</html >
We won't go over this HTML, but it's worth reviewing the Django template tags involved:
* `{% load static %}`: `load` lets us load template tag libraries that aren't available by default. In this case, we're loading the static library, which provides the `static` tag. The library and tag don't always share their name. This is provided with Django by the `django.contrib.static` app.
* `{% block title %}Answerly{% endblock %}`: Blocks let us define areas that templates can override when extending this template.
* `{% static 'base.css' %}`: The `static` tag (loaded in from the preceding `static` library) uses the `STATIC_URL` setting to create a reference to a static file. In this case, it will return `/static/base.css`. As long as the file is in a directory listed in `settings.STATICFILES_DIRS` and Django is in debug mode, Django will serve that file for us. For production, refer to Chapter 9, _Deploying Answerly_.
That's enough for our `base.html` file to start. We'll update the navigation in `base.html` later, in the _Updating base.html navigation_ section.
Next, let's configure Django to know how to find our `base.css` file by configuring static files.
# Configuring static files
Next, let's configure a directory for project-wide static files in `django/config/settings.py`:
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
This will tell Django that any file in `django/static/` should be served while Django is in debug mode. For production, refer to Chapter 9, _Deploying Answerly_.
Let's put some basic CSS in `django/static/base.css`:
nav.navbar {
margin-bottom: 1em;
}
Now that we have created the foundation, let's create `AskQuestionView`.
# Letting users post questions
We will now create a view for letting users post questions that they need answered.
Django follows the **Model-View-Template** ( **MVT** ) pattern to separate model, control, and presentation logic and to encourage reusability. Models represent the data we'll store in the database. Views are responsible for handling a request and returning a response. Views should not contain HTML. Templates are responsible for the body of a response and define the HTML. This separation of responsibilities has proven to make it easy to write code.
To let users post questions, we'll perform the following steps:
1. Make a form to process the questions
2. Make a view that uses Django forms to create questions
3. Make a template that renders the form in HTML
4. Add a `path` to the view
First, let's make the `QuestionForm` class.
# Ask question form
Django forms serve two purposes. They make it easy to render the body of a form to receive user input. They also validate the user input. When a form is instantiated, it can be given initial values (via the `initial` parameter) and data to validate (via the `data` parameter). A form that has been provided data is said to be bound.
Much of the power of Django comes from how easy it is to join models, forms, and views together to build features.
We'll make our form in `django/qanda/forms.py`:
from django import forms
from django.contrib.auth import get_user_model
from qanda.models import Question
class QuestionForm(forms.ModelForm):
user = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=get_user_model().objects.all(),
disabled=True,
)
class Meta:
model = Question
fields = ['title', 'question', 'user', ]
`ModelForm` makes creating forms from Django models easier. We use the inner `Meta` class of `QuestionForm` to specify the model and fields that are part of the form.
By adding a `user` field, we're able to override how Django renders the `user` field. We tell Django to use the `HiddenInput` widget, which will render the field as `<input type='hidden'>`. The `queryset` argument lets us restrict the users that are valid values (in our case, all users are valid). Finally, the `disabled` argument says that we will ignore any values provided by `data` (that is, from a request) and rely on the `initial` values we provide to the form.
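To see the validation side in action, a form can be exercised directly in a shell session. A brief sketch, assuming at least one user exists in the database:

```python
# Run inside `python manage.py shell`.
from django.contrib.auth import get_user_model
from qanda.forms import QuestionForm

user = get_user_model().objects.first()
form = QuestionForm(
    initial={'user': user.id},
    data={'title': 'Why is the sky blue?',
          'question': 'Seriously, *why*?'},
)
# Because the user field is disabled, its value is taken from initial,
# not from data, so tampering with the POSTed user is ignored.
print(form.is_valid())  # True
question = form.save()  # creates and returns a new Question
```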
Now that we know how to render and validate a question form, let's create our view.
# Creating AskQuestionView
We will create our `AskQuestionView` class in `django/qanda/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponseBadRequest
from django.views.generic import CreateView
from qanda.forms import QuestionForm
from qanda.models import Question
class AskQuestionView(LoginRequiredMixin, CreateView):
form_class = QuestionForm
template_name = 'qanda/ask.html'
def get_initial(self):
return {
'user': self.request.user.id
}
def form_valid(self, form):
action = self.request.POST.get('action')
if action == 'SAVE':
# save and redirect as usual.
return super().form_valid(form)
elif action == 'PREVIEW':
preview = Question(
question=form.cleaned_data['question'],
title=form.cleaned_data['title'])
ctx = self.get_context_data(preview=preview)
return self.render_to_response(context=ctx)
return HttpResponseBadRequest()
`AskQuestionView` is derived from `CreateView` and uses the `LoginRequiredMixin`. The `LoginRequiredMixin` ensures that any request made by a user who is not logged in will be redirected to the login page. The `CreateView` knows to render the template for `GET` requests and to validate the form on `POST` requests. If a form is valid, `CreateView` will call `form_valid`. If the form is not valid, `CreateView` will re-render the template.
Our `form_valid` method overrides the original `CreateView` method to support a save and preview mode. When we want to save, we will call the original `form_valid` method. The original method saves the new question and returns an HTTP response that redirects the user to the new question (using `Question.get_absolute_url()`). When we want to preview the question, we will re-render our template with the new `preview` variable in our template's context.
When our view is instantiating the form, it will pass the result of `get_initial()` as the `initial` argument and the `POST` data as the `data` argument.
Now that we have our view, let's create `ask.html`.
# Creating ask.html
Let's create our template in `django/qanda/templates/qanda/ask.html`:
{% extends "base.html" %}
{% load markdownify %}
{% load crispy_forms_tags %}
{% block title %} Ask a question {% endblock %}
{% block body %}
<div class="col-md-12" >
<h1 >Ask a question</h1 >
{% if preview %}
<div class="card question-preview" >
<div class="card-header" >
Question Preview
</div >
<div class="card-body" >
<h1 class="card-title" >{{ preview.title }}</h1>
{{ preview.question | markdownify }}
</div >
</div >
{% endif %}
<form method="post" >
{{ form | crispy }}
{% csrf_token %}
<button class="btn btn-primary" type="submit" name="action"
value="PREVIEW" >
Preview
</button >
<button class="btn btn-primary" type="submit" name="action"
value="SAVE" >
Ask!
</button >
</form >
</div >
{% endblock %}
This template uses our `base.html` template and places all of its HTML in the blocks defined there. When we render the template, Django renders `base.html` and then fills in the values of the blocks with the contents defined in `ask.html`.
`ask.html` also loads two third-party tag libraries, `markdownify` and `crispy_forms_tags`. `markdownify` provides the `markdownify` filter used in the preview card's body (`{{preview.question | markdownify}}`). The `crispy_forms_tags` library provides the `crispy` filter, which applies Bootstrap 4 CSS classes to help the Django form render nicely.
Each of these libraries needs to be installed and configured, which we do in the following sections ( _Installing and configuring Markdownify_ and _Installing and configuring Django Crispy Forms_ , respectively).
The following are a few more new tags that `ask.html` shows us:
* `{% if preview %}`: This demonstrates how to use an `if` statement in the Django template language. We only want to render a preview of the `Question` if we have a `preview` variable in our context.
* `{% csrf_token %}`: This tag adds the expected CSRF token to our form. CSRF tokens help protect us against malicious scripts trying to submit data on behalf of an innocent but logged-in user; refer to Chapter 3, _Posters, Headshots, and Security_ , for more information. In Django, CSRF tokens are not optional, and `POST` requests missing a CSRF token will not be processed.
Let's take a closer look at those third-party libraries, starting with Markdownify.
# Installing and configuring Markdownify
Markdownify is a Django app available on the **Python Package Index** ( **PyPI** ) created by R Moelker and Erwin Matijsen and licensed under the MIT license (a popular open source license). Markdownify provides the Django template filter `markdownify`, which will convert Markdown to HTML.
Markdownify works by using the **python-markdown** package to convert Markdown into HTML. Markdownify then uses Mozilla's `bleach` library to sanitize the resulting HTML against **Cross-Site Scripting** ( **XSS** ) attacks. The result is then returned to the template for output.
To install Markdownify, let's add it to our `requirements.txt` file:
django-markdownify==0.2.2
Then, run `pip` to install it:
**$ pip install -r requirements.txt**
Now, we will need to add `markdownify` to our list of `INSTALLED_APPS` in `django/config/settings.py`.
The last step is to configure Markdownify to let it know which HTML tags to whitelist. Add the following settings to `settings.py`:
MARKDOWNIFY_STRIP = False
MARKDOWNIFY_WHITELIST_TAGS = [
'a', 'blockquote', 'code', 'em', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'h7', 'li', 'ol', 'p', 'strong', 'ul',
]
This will whitelist all the text, list, and heading tags we want available to our users. Setting `MARKDOWNIFY_STRIP` to `False` tells Markdownify to HTML-encode (rather than strip) any other HTML tags.
Now that we've configured Markdownify, let's install and configure Django Crispy Forms.
# Installing and configuring Django Crispy Forms
Django Crispy Forms is a third-party Django app available on PyPI. Miguel Araujo is the development lead. It is licensed under the MIT license. Django Crispy Forms is one of the most popular Django libraries because it makes it so easy to render pretty (crisp) forms.
One of the problems we encounter in Django is that when Django renders a field, it produces something like this:
<label for="id_title">Title:</label>
<input
type="text" name="title" maxlength="140" required id="id_title" />
However, in order to style that form nicely, for example, using Bootstrap 4, we would like to render something more like this:
<div class="form-group">
<label for="id_title" class="form-control-label requiredField">
Title
</label>
<input type="text" name="title" maxlength="140"
class="textinput textInput form-control" required="" id="id_title">
</div>
Sadly, Django doesn't provide hooks that would let us easily wrap the field in a `div` with class `form-group`, or add CSS classes such as `form-control` or `form-control-label`.
Django Crispy Forms solves this with its `crispy` filter. If we send a form through it with `{{ form | crispy }}`, Django Crispy Forms will correctly transform the form's HTML and CSS to work with a variety of CSS frameworks (including Zurb Foundation, Bootstrap 3, and Bootstrap 4). You can further customize the form's rendering through more advanced usage of Django Crispy Forms, but we won't be doing that in this chapter.
To install Django Crispy Forms, let's add it to our `requirements.txt` and install it using `pip`:
**$ echo "django-crispy-forms==1.7.0" >> requirements.txt
$ pip install -r requirements.txt**
Now, we will need to install it as a Django app in our project by editing `django/config/settings.py` and adding `'crispy_forms'` to our list of `INSTALLED_APPS`.
Next, we will need to configure our project so that Django Crispy Forms knows to use the Bootstrap 4 template pack. Update `django/config/settings.py` with a new config:
CRISPY_TEMPLATE_PACK = 'bootstrap4'
Now that we've installed all the libraries our template relies on, we can configure Django to route requests to our `AskQuestionView`.
# Routing requests to AskQuestionView
Django routes requests using a URLConf. It's a list of `path()` objects that a request's path is matched against. The view of the first matching `path()` gets to process the request. A URLConf can include another URLConf. A project's settings defines its root URLConf (in our case, `django/config/urls.py`).
Defining all the `path()` objects for all the views in a project in the root URLConf can get messy and makes the apps less reusable. It's often convenient to put a URLConf (usually in a `urls.py` file) in each app. Then, the root URLConf can use the `include()` function to include other apps' URLConfs to route requests.
Let's create a URLConf for our `qanda` app in `django/qanda/urls.py`:
from django.urls.conf import path
from qanda import views
app_name = 'qanda'
urlpatterns = [
path('ask', views.AskQuestionView.as_view(), name='ask'),
]
A `path()` has two required components and an optional third:
* First, a string defining the matching path. This may have named parameters that will be passed to the view. We'll see an example of this later, in the _Routing requests to the QuestionDetail view_ section.
* Second, a callable that takes a request and returns a response. If your view is a function (also known as a **Function-Based View** ( **FBV** )), then you can just pass a reference to your function. If you're using a **Class-Based View** ( **CBV** ), then you can use its `as_view()` class method to return the required callable.
* Optionally, a `name` parameter which we can use to reference this `path()` object in our view or template (for example, like the `Question` model does in its `get_absolute_url()` method).
It is very strongly recommended that you name all your `path()` objects.
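As a quick sketch of why names matter, a named `path()` can be reversed from Python code (or with the `{% url %}` template tag) instead of hardcoding URLs:
from django.urls import reverse

url = reverse('qanda:ask')  # -> '/ask', via the app_name and the path's name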
Now, let's update our root URLConf to include the `qanda` URLConf:
from django.contrib import admin
from django.urls import path, include
import qanda.urls
urlpatterns = [
path('admin/', admin.site.urls),
path('', include(qanda.urls, namespace='qanda')),
]
This means that requests to `answerly.example.com/ask` will route to our `AskQuestionView`.
# A quick review of the section
In this section, we have performed the following actions:
* Created our first form, `QuestionForm`
* Created `AskQuestionView` that uses the `QuestionForm` to create `Question`s
* Created a template to render `AskQuestionView` and `QuestionForm`
* Installed and configured third-party libraries that provide filters for our template
Now, let's allow our users to view questions with a `QuestionDetailView` class.
# Creating QuestionDetailView
The `QuestionDetailView` has to offer quite a bit of functionality. It must be able to do the following things:
* Show the question
* Show all the answers
* Let users post additional answers
* Let the asker accept answer(s)
* Let the asker reject previously-accepted answers
Although `QuestionDetailView` won't process any forms, it will have to display many forms, leading to a complicated template. This complexity will give us a chance to note how to split a template up into separate subtemplates to make our code more readable.
# Creating Answer forms
We'll need to make two forms to make `QuestionDetailView` work as described in the preceding section:
* `AnswerForm`: For users to post their answers
* `AnswerAcceptanceForm`: For the question's asker to accept or reject answers
# Creating AnswerForm
The `AnswerForm` will have to reference a `Question` model instance and a user because both are required to create an `Answer` model instance.
Let's add our `AnswerForm` to `django/qanda/forms.py`:
from django import forms
from django.contrib.auth import get_user_model
from qanda.models import Answer, Question
class AnswerForm(forms.ModelForm):
user = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=get_user_model().objects.all(),
disabled=True,
)
question = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=Question.objects.all(),
disabled=True,
)
class Meta:
model = Answer
fields = ['answer', 'user', 'question', ]
The `AnswerForm` class looks a lot like the `QuestionForm` class, though with slightly different field names. It uses the same technique to prevent a user from tampering with the `Question` associated with an `Answer` that `QuestionForm` used to prevent tampering with the user of a `Question`.
Next, we'll create a form to accept an `Answer`.
# Creating AnswerAcceptanceForm
An `Answer` is accepted if its `accepted` field is `True`. We'll use a simple form to edit this field:
class AnswerAcceptanceForm(forms.ModelForm):
accepted = forms.BooleanField(
widget=forms.HiddenInput,
required=False,
)
class Meta:
model = Answer
fields = ['accepted', ]
Using `BooleanField` comes with a small wrinkle. If we want `BooleanField` to accept `False` values as well as `True` values, we must set `required=False`. Otherwise, `BooleanField` treats a `False` value as no value at all and fails validation.
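To illustrate the wrinkle, here's a minimal sketch of how `BooleanField` treats a `False` value:
from django import forms

forms.BooleanField().clean(False)
# raises ValidationError: with required=True (the default), False
# is treated as a missing value

forms.BooleanField(required=False).clean(False)
# returns False: the value is accepted as a legitimate answer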
We use a hidden input because we don't want users checking a checkbox and then having to click on submit. Instead, for each answer, we'll generate an accept form and a reject form, which the user can just submit with one click.
Next, let's write the `QuestionDetailView` class.
# Creating QuestionDetailView
Now that we have the forms we'll use, we can create `QuestionDetailView` in `django/qanda/views.py`:
from django.views.generic import DetailView
from qanda.forms import AnswerForm, AnswerAcceptanceForm
from qanda.models import Question
class QuestionDetailView(DetailView):
model = Question
ACCEPT_FORM = AnswerAcceptanceForm(initial={'accepted': True})
REJECT_FORM = AnswerAcceptanceForm(initial={'accepted': False})
def get_context_data(self, **kwargs):
ctx = super().get_context_data(**kwargs)
ctx.update({
'answer_form': AnswerForm(initial={
'user': self.request.user.id,
'question': self.object.id,
})
})
if self.object.can_accept_answers(self.request.user):
ctx.update({
'accept_form': self.ACCEPT_FORM,
'reject_form': self.REJECT_FORM,
})
return ctx
`QuestionDetailView` lets Django's `DetailView` do most of the work. `DetailView` gets a `Question` `QuerySet` out of the default manager of `Question` (`Question.objects`). `DetailView` then uses the `QuerySet` to get a `Question` based on the `pk` it received in the path of the URL. `DetailView` also knows which template to render based on our app and model name (`appname/modelname_detail.html`).
The only area where we've had to customize behavior of `DetailView` is `get_context_data()`. `get_context_data()` provides the context used to render the template. In our case, we use the method to add the forms we want rendered to context.
Next, let's make the template for `QuestionDetailView`.
# Creating question_detail.html
Our template for the `QuestionDetailView` will work slightly differently to our previous templates.
Here's what we'll put in `django/qanda/templates/qanda/question_detail.html`:
{% extends "base.html" %}
{% block title %}{{ question.title }} - {{ block.super }}{% endblock %}
{% block body %}
{% include "qanda/common/display_question.html" %}
{% include "qanda/common/list_answers.html" %}
{% if user.is_authenticated %}
{% include "qanda/common/question_post_answer.html" %}
{% else %}
<div >Login to post answers.</div >
{% endif %}
{% endblock %}
The preceding template seemingly doesn't do anything itself. Instead, we use the `{% include %}` tag to include other templates inside this template, to make organizing our code simpler. `{% include %}` passes the current context to the new template, renders it, and inserts it in place.
Let's take a look at each of these subtemplates in turn, starting with `display_question.html`.
# Creating the display_question.html common template
We've put the HTML to display a question into its own sub template. This template can then be included by other templates to render a `question`.
Let's create it in `django/qanda/templates/qanda/common/display_question.html`:
{% load markdownify %}
<div class="question" >
<div class="meta col-sm-12" >
<h1 >{{ question.title }}</h1 >
Asked by {{ question.user }} on {{ question.created }}
</div >
<div class="body col-sm-12" >
{{ question.question|markdownify }}
</div >
</div >
The HTML itself is pretty simple, and there are no new tags here. We reuse the `markdownify` tag and library that we have previously configured.
Next, let's look at the answer list template.
# Creating list_answers.html
The answer list template has to list all the answers for the question and also render whether the answer is accepted. If the user can accept (or reject) answers, then those forms are rendered too.
Let's create the template in `django/qanda/templates/qanda/common/list_answers.html`:
{% load markdownify %}
<h3 >Answers</h3 >
<ul class="list-unstyled answers" >
{% for answer in question.answer_set.all %}
<li class="answer row" >
<div class="col-sm-3 col-md-2 text-center" >
{% if answer.accepted %}
<span class="badge badge-pill badge-success" >Accepted</span >
{% endif %}
{% if answer.accepted and reject_form %}
<form method="post"
action="{% url "qanda:update_answer_acceptance" pk=answer.id %}" >
{% csrf_token %}
{{ reject_form }}
<button type="submit" class="btn btn-link" >
<i class="fa fa-times" aria-hidden="true" ></i>
Reject
</button >
</form >
{% elif accept_form %}
<form method="post"
action="{% url "qanda:update_answer_acceptance" pk=answer.id %}" >
{% csrf_token %}
{{ accept_form }}
<button type="submit" class="btn btn-link" title="Accept answer" >
<i class="fa fa-check-circle" aria-hidden="true"></i >
Accept
</button >
</form >
{% endif %}
</div >
<div class="col-sm-9 col-md-10" >
<div class="body" >{{ answer.answer|markdownify }}</div >
<div class="meta font-weight-light" >
Answered by {{ answer.user }} on {{ answer.created }}
</div >
</div >
</li >
{% empty %}
<li class="answer" >No answers yet!</li >
{% endfor %}
</ul >
Two things to observe about this template are as follows:
* There's a rare bit of logic in the template, `{% if answer.accepted and reject_form %}`. Generally, templates should be dumb and avoid knowing about business logic. However, avoiding this would have created a more complex view. This is a trade-off that we must always evaluate on a case-by-case basis.
* The `{% empty %}` tag is related to our `{% for answer in question.answer_set.all %}` loop. `{% empty %}` is rendered when the list is empty, much like Python's `for ... else` syntax; see the sketch just below this list.
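As a rough Python analogy (a sketch, not Django code; note that Python's own `for ... else` runs its `else` whenever the loop finishes without a `break`, not only for empty lists):
answers = []
if answers:
    for answer in answers:
        print(answer)
else:
    # this branch plays the role of {% empty %}
    print('No answers yet!')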
Next, let's take a look at the post answer template.
# Creating the post_answer.html template
In the next template we're going to create, the user can post and preview their answer.
Let's create our next template in `django/qanda/templates/qanda/common/post_answer.html`:
{% load crispy_forms_tags %}
<div class="col-sm-12" >
<h3 >Post your answer</h3 >
<form method="post"
action="{% url "qanda:answer_question" pk=question.id %}" >
{{ answer_form | crispy }}
{% csrf_token %}
<button class="btn btn-primary" type="submit" name="action"
value="PREVIEW" >Preview
</button >
<button class="btn btn-primary" type="submit" name="action"
value="SAVE" >Answer
</button >
</form >
</div >
This template is short, simply rendering the `answer_form` using the `crispy` filter.
Now that we have all our subtemplates done, let's create a `path` to route requests to `QuestionDetailView`.
# Routing requests to the QuestionDetail view
To be able to route requests to our `QuestionDetailView`, we need to add it to the URLConf in `django/qanda/urls.py`:
path('q/<int:pk>', views.QuestionDetailView.as_view(),
name='question_detail'),
In the preceding code, we see `path` taking a named parameter `pk`, which must be an integer. This will be passed to the `QuestionDetailView` and available in the `kwargs` dictionary. `DetailView` will rely on the presence of this argument to know which `Question` to retrieve.
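For a quick sketch of the reverse direction, the same named path can be built back into a URL by supplying `pk` as a keyword argument:
from django.urls import reverse

reverse('qanda:question_detail', kwargs={'pk': 42})  # -> '/q/42'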
Next, we'll create some of the form-related views we referenced in our templates. Let's start with the `CreateAnswerView` class.
# Creating the CreateAnswerView
The `CreateAnswerView` class will be used to create and preview `Answer` model instances for a `Question` model instance.
Let's create it in `django/qanda/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponseBadRequest
from django.views.generic import CreateView
from qanda.forms import AnswerForm
from qanda.models import Question
class CreateAnswerView(LoginRequiredMixin, CreateView):
form_class = AnswerForm
template_name = 'qanda/create_answer.html'
def get_initial(self):
return {
'question': self.get_question().id,
'user': self.request.user.id,
}
def get_context_data(self, **kwargs):
return super().get_context_data(question=self.get_question(),
**kwargs)
def get_success_url(self):
return self.object.question.get_absolute_url()
def form_valid(self, form):
action = self.request.POST.get('action')
if action == 'SAVE':
# save and redirect as usual.
return super().form_valid(form)
elif action == 'PREVIEW':
ctx = self.get_context_data(preview=form.cleaned_data['answer'])
return self.render_to_response(context=ctx)
return HttpResponseBadRequest()
def get_question(self):
return Question.objects.get(pk=self.kwargs['pk'])
The `CreateAnswerView` class follows a similar pattern to the `AskQuestionView` class:
* It's a `CreateView`
* It's protected by `LoginRequiredMixin`
* It uses `get_initial()` to provide initial arguments to its form so malicious users can't tamper with the question or user associated with the answer
* It uses `form_valid()` to perform a preview or save operation
The main difference is that we will need to add a `get_question()` method in `CreateAnswerView` to retrieve the question we're answering. `kwargs['pk']` will be populated by the `path` we'll create (just like we did for `QuestionDetailView`).
Next, let's create the template.
# Creating create_answer.html
This template will be able to leverage the common template elements we've already created to make rendering the question and answer forms easier.
Let's create it in `django/qanda/templates/qanda/create_answer.html`:
{% extends "base.html" %}
{% load markdownify %}
{% block body %}
{% include 'qanda/common/display_question.html' %}
{% if preview %}
<div class="card question-preview" >
<div class="card-header" >
Answer Preview
</div >
<div class="card-body" >
{{ preview|markdownify }}
</div >
</div >
{% endif %}
{% include 'qanda/common/post_answer.html' with answer_form=form %}
{% endblock %}
The preceding template introduces a new use of `{% include %}`. When we use the `with` argument, we can pass a series of new names that values should have in the subtemplate's context. In our case, we only add `answer_form` to the context of `post_answer.html`. The rest of the context is still passed to `{% include %}`. We can prevent the rest of the context from being passed by adding `only` as the last argument to `{% include %}`.
# Routing requests to CreateAnswerView
The final step is to connect the `CreateAnswerView` to the `qanda` URLConf by adding a new `path` to the `urlpatterns` list in `qanda/urls.py`:
path('q/<int:pk>/answer', views.CreateAnswerView.as_view(),
name='answer_question'),
Next, we'll make a view to process the `AnswerAcceptanceForm`.
# Creating UpdateAnswerAcceptanceView
The `accept_form` and `reject_form` variables we use in the `list_answers.html` template need a view to process their form submissions. Let's add it to `django/qanda/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponseRedirect
from django.views.generic import UpdateView
from qanda.forms import AnswerAcceptanceForm
from qanda.models import Answer
class UpdateAnswerAcceptance(LoginRequiredMixin, UpdateView):
form_class = AnswerAcceptanceForm
queryset = Answer.objects.all()
def get_success_url(self):
return self.object.question.get_absolute_url()
def form_invalid(self, form):
return HttpResponseRedirect(
redirect_to=self.object.question.get_absolute_url())
`UpdateView` works like a mix of `DetailView` (since it works on a single model) and `CreateView` (since it processes a form). Both `CreateView` and `UpdateView` share a common ancestor: `ModelFormMixin`. `ModelFormMixin` provides us with the hooks we've used so often in the past: `form_valid()`, `get_success_url()`, and `form_invalid()`.
Thanks to the simplicity of this form, we will just respond to an invalid form by redirecting the user to the question.
Next, let's add it to our URLConf in `django/qanda/urls.py` file:
path('a/<int:pk>/accept', views.UpdateAnswerAcceptance.as_view(),
name='update_answer_acceptance'),
Remember to have a parameter named `pk` in your `path()` object's first argument so that `UpdateView` can retrieve the correct `Answer`.
Next, let's create a daily list of questions.
# Creating the daily questions page
To help people find questions, we'll create a list of each day's questions.
Django offers yearly, monthly, weekly, and daily archive views. In our case, we'll use `DayArchiveView`, but they all work basically the same way. They take a date from the URL's path and list everything from that period.
Let's build a daily question list using Django's `DailyArchiveView`.
# Creating DailyQuestionList view
Let's add our `DailyQuestionList` view to `django/qanda/views.py`:
from django.views.generic import DayArchiveView
from qanda.models import Question
class DailyQuestionList(DayArchiveView):
queryset = Question.objects.all()
date_field = 'created'
month_format = '%m'
allow_empty = True
`DailyQuestionList` doesn't need to override any methods of `DayArchiveView`; the class attributes are enough to let Django do the work. Let's look at how it works.
`DayArchiveView` expects to get a day, month, and year in the URL's path. We can specify the format of these using `day_format`, `month_format`, and `year_format`. In our case, we change the expected month format to `'%m'` so that the month is parsed as a number instead of the default `'%b'`, which is the abbreviated name of the month. These format strings are the same as those used by Python's standard `datetime.datetime.strftime()`. Once `DayArchiveView` has the date, it uses it to filter the provided `queryset` on the field named in the `date_field` attribute. The `queryset` is ordered by date. If `allow_empty` is `True`, days with no items still render; otherwise, they raise a 404 exception. To render the template, the object list is passed to the template much like in a `ListView`. The default template name follows the `appname/modelname_archive_day.html` format.
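A quick demonstration of the difference between the two month formats, using Python's standard library:
import datetime

d = datetime.date(2018, 1, 5)
print(d.strftime('%m'))  # '01' -- month as a zero-padded number
print(d.strftime('%b'))  # 'Jan' -- abbreviated month name (the default)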
Next, let's create the template for this view.
# Creating the daily question list template
Let's add our template to `django/qanda/templates/qanda/question_archive_day.html`:
{% extends "base.html" %}
{% block title %} Questions on {{ day }} {% endblock %}
{% block body %}
<div class="col-sm-12" >
<h1 >Highest Voted Questions of {{ day }}</h1 >
<ul >
{% for question in object_list %}
<li >
{{ question.votes }}
<a href="{{ question.get_absolute_url }}" >
{{ question }}
</a >
by
{{ question.user }}
on {{ question.created }}
</li >
{% empty %}
<li>Hmm... Everyone thinks they know everything today.</li>
{% endfor %}
</ul >
<div>
{% if previous_day %}
<a href="{% url "qanda:daily_questions" year=previous_day.year month=previous_day.month day=previous_day.day %}" >
<< Previous Day
</a >
{% endif %}
{% if next_day %}
<a href="{% url "qanda:daily_questions" year=next_day.year month=next_day.month day=next_day.day %}" >
Next Day >>
</a >
{% endif %}
</div >
</div >
{% endblock %}
The list of questions is much like one would expect, that is, a `<ul>` tag with a `{% for %}` loop creating `<li>` tags with links.
One of the conveniences of `DayArchiveView` (and all the date archive views) is that it provides its template's context with the next and previous dates. These dates let us create a kind of pagination across dates.
# Routing requests to DailyQuestionLists
Finally, we'll create a `path` to our `DailyQuestionList` view so that we can route requests to it:
path('daily/<int:year>/<int:month>/<int:day>/',
views.DailyQuestionList.as_view(),
name='daily_questions'),
Next, let's create a view to represent _today's_ questions.
# Getting today's question list
Having a daily archive is good, but we want to provide a convenient way to access today's archive. We'll use a `RedirectView` to always redirect the user to the `DailyQuestionList` of today's date.
Let's add it to `django/qanda/views.py`:
from django.urls import reverse
from django.utils import timezone
from django.views.generic import RedirectView
class TodaysQuestionList(RedirectView):
def get_redirect_url(self, *args, **kwargs):
today = timezone.now()
return reverse(
'questions:daily_questions',
kwargs={
'day': today.day,
'month': today.month,
'year': today.year,
}
)
`RedirectView` is a simple view that returns a 301 or 302 redirect response. We use Django's `django.utils.timezone` to get today's date according to how Django has been configured. By default, Django is configured to use **Coordinated Universal Time** ( **UTC** ). Due to the complexity of time zones, it's often simplest to track everything in UTC and then adjust the display on the client side.
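A minimal sketch of the difference between Django's timezone-aware `now()` and the naive standard-library version:
import datetime

from django.utils import timezone

aware = timezone.now()           # timezone-aware datetime (UTC by default)
naive = datetime.datetime.now()  # naive datetime; avoid in Django code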
We've now created all the views for our initial `qanda` app, letting users ask and answer questions. The asker can also accept answer(s) to their question.
Next, let's actually let the users log in, log out, and register with a `user` app.
# Creating the user app
As we mentioned before, a Django app should have a clear scope. To that end, we'll create a separate Django app to manage users, which we will call `user`. We shouldn't place our user management code in `qanda` or the `Question` model in the `user` app.
Let's create the app using `manage.py`:
**$ python manage.py startapp user**
Then, add it to our list of `INSTALLED_APPS` in `django/config/settings.py`:
INSTALLED_APPS = [
'user',
'qanda',
'markdownify',
'crispy_forms',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
It's particularly important to keep the `user` app _before_ the `admin` app, as both define login templates. The app that comes first has its login template resolved first. We don't want our users redirected to the admin app's login page.
Next, let's create a URLConf for our `user` app in `django/user/urls.py`:
from django.urls import path
import user.views
app_name = 'user'
urlpatterns = [
]
Now, we'll have the main URLConf in `django/config/urls.py` include the `user` app's URLConf:
from django.contrib import admin
from django.urls import path, include
import qanda.urls
import user.urls
urlpatterns = [
path('admin/', admin.site.urls),
path('user/', include(user.urls, namespace='user')),
path('', include(qanda.urls, namespace='questions')),
]
Now that we have our app configured, we can add our login and logout views.
# Using Django's LoginView and LogoutView
To provide the login and logout functionalities, we'll use views provided by the `django.contrib.auth` app. Let's update `django/user/urls.py` to reference them:
from django.contrib.auth.views import LoginView, LogoutView
from django.urls import path
import user.views
app_name = 'user'
urlpatterns = [
path('login', LoginView.as_view(), name='login'),
path('logout', LogoutView.as_view(), name='logout'),
]
These views take care of logging a user in and out. However, the login view requires a template to render nicely. The `LoginView` expects it under the `registration/login.html` name.
We'll put our template in `django/user/templates/registration/login.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block title %} Login - {{ block.super }} {% endblock %}
{% block body %}
<h1>Login</h1>
<form method="post" class="col-sm-6">
{% csrf_token %}
{{ form|crispy }}
<button type="submit" class="btn btn-primary">Login</button>
</form>
{% endblock %}
The `LogoutView` doesn't require a template.
Now, we need to tell our Django project's `settings.py` where to find the login view and where to redirect users when they log in and out. Let's add some settings to `django/config/settings.py`:
LOGIN_URL = 'user:login'
LOGIN_REDIRECT_URL = 'questions:index'
LOGOUT_REDIRECT_URL = 'questions:index'
This way, the `LoginRequiredMixin` knows the view to which we need to redirect unauthenticated users. We are also informing `django.contrib.auth`'s `LoginView` and `LogoutView` where to redirect the user when they log in and log out, respectively.
Next, let's give users a way to register for our site.
# Creating RegisterView
Django doesn't provide a user registration view, but it does offer a `UserCreationForm` if we're using `django.contrib.auth.models.User` as our user model. Since we are using `django.contrib.auth.models.User`, we can use a simple `CreateView` for our registration view:
from django.contrib.auth.forms import UserCreationForm
from django.urls import reverse_lazy
from django.views.generic.edit import CreateView
class RegisterView(CreateView):
template_name = 'user/register.html'
form_class = UserCreationForm
# CreateView needs a redirect target after a successful save; since the
# User model has no get_absolute_url(), we send new users to the login page.
success_url = reverse_lazy('user:login')
Now, we just need to create a template at `django/user/templates/user/register.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block body %}
<div class="col-sm-12">
<h1 >Register for MyQA</h1 >
<form method="post" >
{% csrf_token %}
{{ form | crispy }}
<button type="submit" class="btn btn-primary" >
Register
</button >
</form >
</div >
{% endblock %}
Again, our template follows a familiar pattern, similar to what we've seen in past views. We use our base template, blocks, and Django Crispy Forms to create our page quickly and simply.
Finally, we can add a `path` to the view in the `user` URLConf's `urlpatterns` list:
path('register', user.views.RegisterView.as_view(), name='register'),
# Updating base.html navigation
Now that we have created all our views, we can update our base template's `<nav>` to list all our URLs:
{% load static %}
<!DOCTYPE html>
<html lang="en" >
<head >
{# skipping unchanged head contents #}
</head >
<body >
<nav class="navbar navbar-expand-lg bg-light" >
<div class="container" >
<a class="navbar-brand" href="/" >Answerly</a >
<ul class="navbar-nav" >
<li class="nav-item" >
<a class="nav-link" href="{% url "qanda:ask" %}" >Ask</a >
</li >
<li class="nav-item" >
<a
class="nav-link"
href="{% url "qanda:index" %}" >
Today's Questions
</a >
</li >
{% if user.is_authenticated %}
<li class="nav-item" >
<a class="nav-link" href="{% url "user:logout" %}" >Logout</a >
</li >
{% else %}
<li class="nav-item" >
<a class="nav-link" href="{% url "user:login" %}" >Login</a >
</li >
<li class="nav-item" >
<a class="nav-link" href="{% url "user:register" %}" >Register</a >
</li >
{% endif %}
</ul >
</div >
</nav >
<div class="container" >
{% block body %}{% endblock %}
</div >
</body >
</html >
Great! Now our user can always reach the most important pages on our site.
# Running the development server
Finally, we can access our development server using the following command:
**$ cd django
$ python manage.py runserver**
Now we can open the site in a browser at http://localhost:8000/.
# Summary
In this chapter, we started our Answerly project. Answerly is composed of two apps (`user` and `qanda`), two third-party apps installed via PyPI (Markdownify and Django Crispy Forms), and a number of Django's built-in apps (`django.contrib.auth` being used most directly).
A logged-in user can now ask a question, answer questions, and accept answers. We can also see each day's highest-voted questions.
Next, we'll help users discover questions more easily by adding search functionality using ElasticSearch.
# Searching for Questions with Elasticsearch
Now that users can ask and answer questions, we'll add a search functionality to Answerly to help users find questions. Our search will be powered by Elasticsearch. Elasticsearch is a popular open source search engine powered by Apache Lucene.
In this chapter, we will do the following things:
* Create an Elasticsearch service to abstract our code
* Bulk load existing `Question` model instances into Elasticsearch
* Build a search view powered by Elasticsearch
* Save new models into Elasticsearch automatically
Let's start by setting up our project to use Elasticsearch.
# Starting with Elasticsearch
Elasticsearch is maintained by Elastic, though the server is open source. Elastic offers proprietary plugins to make running it in production easier. You can run Elasticsearch yourself or use a SaaS provider, such as Amazon, Google, or Elastic. In development, we'll run Elasticsearch using a Docker image provided by Elastic.
Elasticsearch is made up of zero or more indexes. Each index contains documents. Documents are the objects that one searches for. A document is made up of fields. Fields are indexed by Apache Lucene. Each index is also split up into one or more shards to make indexing and searching faster by distributing work across nodes in a cluster.
We can interact with Elasticsearch using its RESTful API. Most requests and responses are in JSON by default.
First, let's start by getting an Elasticsearch server running in Docker.
# Starting an Elasticsearch server with docker
The simplest way to get an Elasticsearch server running is using the Docker image that Elastic provides.
To obtain and start the Elasticsearch docker image, run the following command:
**$ docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.0.0**
The preceding command does four things, as follows:
* It downloads the Elasticsearch 6.0 docker image from Elastic's servers
* It runs a container using the Elasticsearch 6.0 docker image as a single node cluster
* It detaches (`-d`) the docker command from the running container (so that we can run more commands in our shell)
* It opens ports (`-p`) `9200` and `9300` on the host computer and redirects them to the container
To confirm that our server is running, we can make the following request to the Elasticsearch server:
$ curl http://localhost:9200/?pretty
{
"name" : "xgf60cc",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "HZAnjZefSjqDOxbMU99KOw",
"version" : {
"number" : "6.0.0",
"build_hash" : "8f0685b",
"build_date" : "2017-11-10T18:41:22.859Z",
"build_snapshot" : false,
"lucene_version" : "7.0.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
When interacting with Elasticsearch yourself, always add the `pretty` `GET` parameter to have Elasticsearch print the JSON. However, don't use this parameter in your code.
Now that we have our Elasticsearch server, let's configure Django to know about our server.
# Configuring Answerly to use Elasticsearch
Next, we'll update our `settings.py` and `requirements.txt` files to work with Elasticsearch.
Let's update `django/config/settings.py`:
ES_INDEX = 'answerly'
ES_HOST = 'localhost'
ES_PORT = '9200'
These are custom settings that our app will use. Django has no built-in support for Elasticsearch. Instead, we'll reference these settings in our own code.
Let's add the Elasticsearch library to our `requirements.txt` file:
elasticsearch==6.0.0
This is the official Elasticsearch Python library published by Elastic. This library offers a low-level interface that looks much like the RESTful API we can use with cURL. This means that we can easily build a query on the command line with cURL and then convert the JSON to a Python `dict`.
Elastic also offers a higher-level, more Pythonic API called `elasticsearch-dsl`. It includes a pseudo-ORM to write a more Pythonic persistence layer. This may be a good option if your project includes a lot of Elasticsearch code. However, the low-level API closely mirrors the RESTful API, making it easier to reuse code and get assistance from the Elasticsearch community.
Next, let's create the Answerly index in our Elasticsearch server.
# Creating the Answerly index
Let's create an index in Elasticsearch by sending a `PUT` request to our server:
**$ curl -XPUT "localhost:9200/answerly?pretty"**
Great! Now, we can load our existing `Question` model instances into our Elasticsearch index.
# Loading existing Questions into Elasticsearch
Adding a search feature means that we will need to load our existing `Question` model instances into Elasticsearch. The simplest way to solve a problem like this is by adding a `manage.py` command. Custom `manage.py` commands combine the simplicity of writing a regular Python script with the power of the Django API.
Before we add our `manage.py` command, we will need to write our Elasticsearch-specific code. To separate the Elasticsearch code from our Django code, we'll add an `elasticsearch` service to the `qanda` app.
# Creating the Elasticsearch service
Much of the code that we'll be writing in this chapter will be Elasticsearch specific. We don't want to put that code in our views (or `manage.py` commands) because that would introduce coupling between two unrelated components. Instead, we'll isolate the Elasticsearch code into its own module inside `qanda`, then have our views and `manage.py` command call our service module.
The first function we'll create will bulk load `Question` model instances into Elasticsearch.
Let's create a separate file for our Elasticsearch service code. We'll put our bulk insert code into `django/qanda/service/elasticsearch.py`:
import logging
from django.conf import settings
from elasticsearch import Elasticsearch, TransportError
from elasticsearch.helpers import streaming_bulk
FAILED_TO_LOAD_ERROR = 'Failed to load {}: {!r}'
logger = logging.getLogger(__name__)
def get_client():
return Elasticsearch(hosts=[
{'host': settings.ES_HOST, 'port': settings.ES_PORT,}
])
def bulk_load(questions):
all_ok = True
es_questions = (q.as_elasticsearch_dict() for q in questions)
for ok, result in streaming_bulk(
get_client(),
es_questions,
index=settings.ES_INDEX,
raise_on_error=False,
):
if not ok:
all_ok = False
action, result = result.popitem()
logger.error(FAILED_TO_LOAD_ERROR.format(result['_id'], result))
return all_ok
We've created two functions in our new service, `get_client()` and `bulk_load()`.
The `get_client()` function will return an `Elasticsearch` client configured with values from `settings.py`.
The `bulk_load()` function takes an iterable collection of `Question` model instances and loads them into Elasticsearch using the `streaming_bulk()` helper. Since `bulk_load()` expects an iterable collection, this means that our `manage.py` command will be able to send a `QuerySet` object. Remember that even though we're using a generator expression (which is lazy), our `questions` parameter will execute the full query as soon as we try to iterate over it. It's only the execution of the `as_elasticsearch_dict()` method that will be lazy. We'll write and discuss the new `as_elasticsearch_dict()` method after we're finished looking at the `bulk_load()` function.
Next, the `bulk_load()` function uses the `streaming_bulk()` function. The `streaming_bulk()` function takes four arguments and returns an iterator for reporting the progress of the load. The four arguments are as follows:
* An `Elasticsearch` client
* Our `Question` generator (an iterator)
* The index name
* A flag telling the function not to raise an exception in case of an error (this will cause the `ok` variable to be `False` in case of errors)
The body of our `for` loop will log if there's an error when loading a question.
Next, let's give `Question` a method that can convert it into a `dict` that Elasticsearch can correctly process.
Let's update the `Question` model:
from django.db import models
class Question(models.Model):
# fields and methods unchanged
def as_elasticsearch_dict(self):
return {
'_id': self.id,
'_type': 'doc',
'text': '{}\n{}'.format(self.title, self.question),
'question_body': self.question,
'title': self.title,
'id': self.id,
'created': self.created,
}
The `as_elasticsearch_dict()` method turns a `Question` model instance into a dict suitable for loading into Elasticsearch. The following are the three fields that we specially add to our Elasticsearch dict that aren't in our model:
* `_id`: This is the ID of the Elasticsearch document. This doesn't have to be the same as the model ID. However, if we want to be able to update the Elasticsearch document representing a `Question`, then we need to either store the document's `_id` or be able to calculate it. For simplicity's sake, we just use the same ID.
* `_type`: This is the document's mapping type. As of Elasticsearch 6, an Elasticsearch index can only store one mapping type. So, all documents in the index should have the same `_type` value. Mapping types are similar to database schemas, telling Elasticsearch how to index and track a document and its fields. One of the convenient features of Elasticsearch is that it doesn't require us to define our type ahead of time. Elasticsearch dynamically builds the document's type based on the data we load.
* `text`: This is a field we will create in the document. For search, it's convenient to have the title and body of the document together in an indexable field.
The rest of the fields in the dictionary are the same as the model's fields.
The presence of `as_elasticsearch_dict()` as a model method may seem problematic. Shouldn't the `elasticsearch` service know how to convert a `Question` into an Elasticsearch dict? Like many design questions, the answer depends on a variety of factors. One factor that influenced adding this method to the model is Django's _fat models_ philosophy. Generally, Django encourages writing operations on the model as model methods. Also, the properties of this dict are coupled to the model's fields. Keeping the two lists of fields close together makes it easier for future developers to keep them in sync. However, there may be projects and contexts in which the right thing is to put this kind of function in the service module. As Django developers, it's our job to evaluate the trade-offs and make the best decision for a particular project.
Now that our `elasticsearch` service knows how to bulk add `Questions`, let's expose that functionality with a `manage.py` command.
# Creating a manage.py command
We've used `manage.py` commands to start projects and apps as well as create and run migrations. Now, we'll create a custom command to load all the questions in our project into an Elasticsearch server. This will be a simple introduction to Django management commands. We'll discuss the topic more in Chapter 12, _Building an API_.
A Django management command must be in an app's `management/commands` subdirectory. An app may have multiple commands. Each command is named after its filename. Inside the file should be a `Command` class that subclasses `django.core.management.BaseCommand`. The code that it should execute should be in the `handle()` method.
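For reference, the command we're about to create sits in a layout like the following sketch; each directory needs an `__init__.py` file so that Python treats it as a package:
qanda/
    management/
        __init__.py
        commands/
            __init__.py
            load_questions_into_elastic_search.py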
Let's create our command in `django/qanda/management/commands/load_questions_into_elastic_search.py`:
from django.core.management import BaseCommand
from qanda.service import elasticsearch
from qanda.models import Question
class Command(BaseCommand):
help = 'Load all questions into Elasticsearch'
def handle(self, *args, **options):
queryset = Question.objects.all()
all_loaded = elasticsearch.bulk_load(queryset)
if all_loaded:
self.stdout.write(self.style.SUCCESS(
'Successfully loaded all questions into Elasticsearch.'))
else:
self.stdout.write(
self.style.WARNING('Some questions not loaded '
'successfully. See logged errors'))
When designing commands, we should think of them as views, that is, _Fat models, thin commands_. This may be a bit more complicated, as there isn't a separate template layer for command-line output, but our output shouldn't be very complex anyway.
In our case, the `handle()` method gets a `QuerySet` of all `Question`s, then passes it to `elasticsearch.bulk_load()`. We then print out whether it was successful or not using the helper methods of `Command`. These helper methods are preferred over using `print()` directly because they make writing tests easier. We'll cover this topic in greater detail in our next chapter, Chapter 8, _Testing Answerly_.
Let's run the following command:
**$ cd django
$ python manage.py load_questions_into_elastic_search
Successfully loaded all questions into Elasticsearch.**
With all the questions loaded, let's confirm that they're in our Elasticsearch server. We can access the Elasticsearch server using `curl` to confirm that our questions have been loaded:
$ curl http://localhost:9200/answerly/_search?pretty
Assuming your Elasticsearch server is running on localhost on port 9200, the preceding command will return all the data in the `answerly` index. We can review the results to confirm that our data has been successfully loaded.
Now that we have some questions in Elasticsearch, let's add a search view.
# Creating a search view
In this section, we'll create a view that will let users search our `Question`s and will display the matching results. To achieve this result, we will do the following things:
* Add a `search_for_question()` function to our `elasticsearch` service
* Make a search view
* Make a template to display search results
* Update the base template to have search available everywhere
Let's start by adding search to our `elasticsearch` service.
# Creating a search function
The responsibility for querying our Elasticsearch server for a list of questions matching the user's query lies with our `elasticsearch` service.
Let's add a function that will send a search query and parse the results to `django/qanda/service/elasticsearch.py`:
def search_for_questions(query):
client = get_client()
result = client.search(index=settings.ES_INDEX, body={
'query': {
'match': {
'text': query,
},
},
})
return (h['_source'] for h in result['hits']['hits'])
After we connect with the client, we will send our query and parse the results.
Using the client's `search()` method, we send the query as a Python `dict` in the Elasticsearch Query DSL (domain-specific language). The Elasticsearch Query DSL provides a language for querying Elasticsearch using a series of nested objects. When sent over HTTP, the query becomes a series of nested JSON objects. In Python, we use `dict`s.
In our case, we're using a `match` query on the `text` field of the documents in the Answerly index. A `match` query is a fuzzy query that checks each document's `text` field to check whether it matches. The Query DSL also supports a number of configuration options to let you build more complex queries. In our case, we will accept the default fuzzy configuration.
Next, `search_for_questions()` iterates over the results. Elasticsearch returns a lot of metadata describing the number of results, the quality of each match, and the resulting documents. In our case, we return an iterator of the matching documents (stored in `_source`).
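As a sketch (abridged; real responses carry more metadata), the response `dict` has roughly this shape, which is why we pull documents out of `result['hits']['hits']`:
result = {
    'hits': {
        'total': 1,
        'hits': [
            {
                '_id': '1',
                '_score': 0.8,
                '_source': {'id': 1, 'title': 'Unit test'},  # our document
            },
        ],
    },
}
questions = (h['_source'] for h in result['hits']['hits'])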
Now that we can get our results from Elasticsearch, we can write our `SearchView`.
# Creating the SearchView
Our `SearchView` will take a `GET` parameter `q` and perform a search using our service module's `search_for_questions()` function.
We'll build our `SearchView` using a `TemplateView`. `TemplateView` renders a template in response to `GET` requests. Let's add `SearchView` to `django/qanda/views.py`:
from django.views.generic import TemplateView
from qanda.service.elasticsearch import search_for_questions
class SearchView(TemplateView):
template_name = 'qanda/search.html'
def get_context_data(self, **kwargs):
query = self.request.GET.get('q', None)
ctx = super().get_context_data(query=query, **kwargs)
if query:
results = search_for_questions(query)
ctx['hits'] = results
return ctx
Next, we'll add a `path()` object routing to our `SearchView` to our URLConf in `django/qanda/urls.py`:
from django.urls.conf import path, include
from qanda import views
app_name = 'qanda'
urlpatterns = [
# skipping previous code
path('q/search', views.SearchView.as_view(),
name='question_search'),
]
Now that we have our view, let's build our `search.html` template.
# Creating the search template
We'll put our search template in `django/qanda/templates/qanda/search.html`, as follows:
{% extends "base.html" %}
{% load markdownify %}
{% block body %}
<h2 >Search</h2 >
<form method="get" class="form-inline" >
<input class="form-control mr-2"
placeholder="Search"
type="search"
name="q" value="{{ query }}" >
<button type="submit" class="btn btn-primary" >Search</button >
</form >
{% if query %}
<h3>Results from search query '{{ query }}'</h3 >
<ul class="list-unstyled search-results" >
{% for hit in hits %}
<li >
<a href="{% url "qanda:question_detail" pk=hit.id %}" >
{{ hit.title }}
</a >
<div >
{{ hit.question_body|markdownify|truncatewords_html:20 }}
</div >
</li >
{% empty %}
<li >No results.</li >
{% endfor %}
</ul >
{% endif %}
{% endblock %}
In the body of the template, we have a search form that displays the query. If there was a `query`, then we will also show its results (if any).
We have seen many of the tags we use here before (for example, `for`, `if`, `url`, and `markdownify`). A new filter is `truncatewords_html`, which receives text via the pipe and a number as an argument. It truncates the text to the provided number of words (not counting HTML tags) and closes any open HTML tags in the resulting fragment.
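The filter is also available as a plain Python function, which makes its behavior easy to sketch (output shown approximately):
from django.template.defaultfilters import truncatewords_html

print(truncatewords_html('<p>one two three four</p>', 2))
# roughly '<p>one two…</p>' -- truncated to two words,
# with the still-open <p> tag closed for us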
The result of this template is a list of hits that match our query with a preview of the text of each question. Since we stored the body, title, and ID of the question in Elasticsearch, we are able to show the results without querying our normal database.
Next, let's update our base template to let users search from every page.
# Updating the base template
Let's update the base template to let users search from anywhere. To do that, we'll need to edit `django/templates/base.html`:
{% load static %}
<!DOCTYPE html>
<html lang="en" >
<head >{# head unchanged #}</head >
<body >
<nav class="navbar navbar-expand-lg bg-light" >
<div class="container" >
<a class="navbar-brand" href="/" >Answerly</a >
<ul class="navbar-nav" >
{# previous nav unchanged #}
<li class="nav-item" >
<form class="form-inline"
action="{% url "qanda:question_search" %}"
method="get">
<input class="form-control mr-sm-2" type="search"
name="q"
placeholder="Search">
<button class="btn btn-outline-primary my-2 my-sm-0"
type="submit" >
Search
</button >
</form >
</li >
</ul >
</div >
</nav >
{# rest of body unchanged #}
</body >
</html >
Now, we've got the search form in our header on every page.
With our search complete, let's make sure that every new question is automatically added to Elasticsearch.
# Adding Questions into Elasticsearch on save()
The best way to perform an operation each time a model is saved is to override the `save()` method that the model inherits from `Model`. We will provide a custom `Question.save()` method to make sure that `Question`s are added to and updated in Elasticsearch as soon as they're saved by the Django ORM.
You can still perform an operation when a Django model is saved even if you don't control the source code of that model. Django offers a signals dispatcher (<https://docs.djangoproject.com/en/2.0/topics/signals/>) that lets you listen for events on models you don't own. However, signals introduce a lot of complexity into your code. Using signals is _discouraged_ unless there is no other option.
Let's update our `Question` model in `django/qanda/models.py`:
from django.db import models
from qanda.service import elasticsearch
class Question(models.Model):
# other fields and methods unchanged.
def save(self, force_insert=False, force_update=False, using=None,
update_fields=None):
super().save(force_insert=force_insert,
force_update=force_update,
using=using,
update_fields=update_fields)
elasticsearch.upsert(self)
The `save()` method is called by `CreateView`, `UpdateView`, `QuerySet.create()`, `Manager.create()`, and most third-party code to persist a model. We make sure to call our `upsert()` method after the original `save()` method has returned because we want our model to have an `id` attribute.
Now, let's create our Elasticsearch service's `upsert` method.
# Upserting into Elasticsearch
An upsert operation will update an object if it exists and insert it if it doesn't. Upsert is a portmanteau of _update_ and _insert_. Elasticsearch supports upsert operations out of the box, which makes our code much simpler.
Let's add our `upsert()` method to `django/qanda/service/elasticsearch.py`:
def upsert(question_model):
client = get_client()
question_dict = question_model.as_elasticsearch_dict()
doc_type = question_dict['_type']
del question_dict['_id']
del question_dict['_type']
response = client.update(
settings.ES_INDEX,
doc_type,
id=question_model.id,
body={
'doc': question_dict,
'doc_as_upsert': True,
}
)
return response
We defined the `get_client()` function earlier in this chapter.
To perform an upsert, we use the `update()` method of the Elasticsearch `client`. We provide the model as a document `dict` under the `doc` key. To force Elasticsearch to perform an upsert, we include the `doc_as_upsert` key with a `True` value. One difference between the `update()` method and the bulk insert function we used earlier is that `update()` will not accept an implicit ID (`_id`) in the document. Instead, we provide the ID of the document to upsert as the `id` argument in our `update()` call. We also remove the `_type` key and value from the `dict` returned by the `question_model.as_elasticsearch_dict()` method and pass its value (stored in the `doc_type` variable) as an argument to the `client.update()` method.
We return the response, though our view won't use it.
Finally, we can test our view by running our development server:
**$ cd django**
**$ python manage.py runserver**
Once our development server has started, we can ask a new question at <http://localhost:8000/ask> and then search for it at <http://localhost:8000/q/search>.
Now, we're done adding search functionalities to Answerly!
# Summary
In this chapter, we've added search so that users can find questions. We set up an Elasticsearch server for development using Docker. We created a `manage.py` command to load all our `Question`s into Elasticsearch. We added a search view where users can see the results of their search query. Finally, we updated `Question.save()` to keep Elasticsearch and the Django DB in sync.
Next, we'll take an in-depth look at testing a Django app so that we can have confidence as we make future changes.
# Testing Answerly
In the preceding chapter, we added search to Answerly, our question and answer site. However, as our site's functionality grows, we need to avoid breaking the existing functionality. To make sure that our code keeps working, we will take a closer look at testing our Django project.
In this chapter, we will do the following things:
* Install Coverage.py to measure code coverage
* Measure code coverage for our Django project
* Write a unit test for our model
* Write a unit test for a view
* Write a Django integration tests for a view
* Write a Selenium integration test for a view
Let's start by installing Coverage.py.
# Installing Coverage.py
**Coverage.py** is the most popular Python code coverage tool at the time of writing. It's very easy to install as it's available from PyPI. Let's add it to our `requirements.txt` file:
**$ echo "coverage==4.4.2" >> requirements.txt**
Then we can install Coverage.py using pip:
**$ pip install -r requirements.txt**
Now that we have Coverage.py installed, we can start measuring our code coverage.
# Measuring code coverage
**Code coverage** measures which lines of code have been executed during a test. By tracking code coverage, we can see which code is tested and which code is not. Since Django projects are mainly Python, we can use Coverage.py to measure our code coverage. The following are the two caveats for Django projects:
* Coverage.py won't be able to measure the coverage of our templates (they're not Python)
* Untested class-based views seem more covered than they are
Finding the coverage of a Django app is a two-step process:
1. Running our tests with the `coverage` command
2. Generating a coverage report using `coverage report` or `coverage html`
Let's run Django's unit `test` command with `coverage` to take a look at the baseline for an untested project:
**$ coverage run --branch --source=qanda,user manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Destroying test database for alias 'default'...**
The preceding command tells `coverage` to run a command (in our case, `manage.py test`) to record test coverage. We will use this command with the following two options:
* `--branch`: To track whether both parts of branching statements were covered (for example, when an `if` statement evaluated to `True` and `False`)
* `--source=qanda,user`: To record coverage only for the `qanda` and `user` modules (the code we wrote)
Now that we've recorded the coverage, let's take a look at the coverage of an app without any tests:
**$ coverage report
Name Stmts Miss Branch BrPart Cover
----------------------------------------------------------------------
qanda/__init__.py 0 0 0 0 100%
qanda/admin.py 1 0 0 0 100%
qanda/apps.py 3 3 0 0 0%
qanda/forms.py 19 0 0 0 100%
qanda/management/__init__.py 0 0 0 0 100%
qanda/migrations/0001_initial.py 7 0 0 0 100%
qanda/migrations/__init__.py 0 0 0 0 100%
qanda/models.py 28 6 0 0 79%
qanda/search_indexes.py 0 0 0 0 100%
qanda/service/__init__.py 0 0 0 0 100%
qanda/service/elasticsearch.py 47 32 14 0 25%
qanda/tests.py 1 0 0 0 100%
qanda/urls.py 4 0 0 0 100%
qanda/views.py 76 35 12 0 47%
user/__init__.py 0 0 0 0 100%
user/admin.py 4 0 0 0 100%
user/apps.py 3 3 0 0 0%
user/migrations/__init__.py 0 0 0 0 100%
user/models.py 1 0 0 0 100%
user/tests.py 1 0 0 0 100%
user/urls.py 5 0 0 0 100%
user/views.py 5 0 0 0 100%
----------------------------------------------------------------------
TOTAL 205 79 26 0 55%**
To understand how an untested project is 55% covered, let's look at the coverage of `django/qanda/views.py`. Let's generate an HTML report of the coverage using the following command:
**$ cd django
$ coverage html**
The preceding command will create a `django/htmlcov` directory containing HTML files that show the coverage report and a visual display of the code coverage. Let's open `django/htmlcov/qanda_views_py.html` and scroll down to around line 72. There, we can see that `DailyQuestionList` is completely covered but `QuestionDetailView.get_context_data()` is not. In the absence of any tests, this difference seems counterintuitive.
Let's remind ourselves how code coverage works. Code coverage tools check whether a particular line of code was _executed_ during a test. In the preceding screenshot, the `DailyQuestionList` class and its members _were_ executed. When the test runner starts, Django will build up the root URLConf much like when it starts for development or production. When the root URLConf is created, it imports the other referenced URLConfs (for example, `qanda.urls`). Those URLConfs, in turn, import their views. Views import forms, models, and other modules.
This import chain means that anything at the top level of a module will appear covered, regardless of whether it is tested. The class definition of `DailyQuestionList` was executed. However, the class itself was never instantiated, nor were any of its methods executed. This also explains why the body of `QuestionDetailView.get_context_data()` is not covered: it was never executed. This is a limitation of code coverage tools when working with declarative code such as `DailyQuestionList`.
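To make the distinction concrete, here's a sketch of which lines execute at import time for a class-based view:
from django.views.generic import DayArchiveView

class DailyQuestionList(DayArchiveView):  # class body runs at import time
    allow_empty = True                    # so this line shows as "covered"

    def get_context_data(self, **kwargs):
        # the `def` line runs on import, but this body only executes
        # when a test (or a request) actually calls the method
        return super().get_context_data(**kwargs)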
Now that we understand some of the limitations of code coverage, let's write a unit test for `qanda.models.Question.save()`.
# Creating a unit test for Question.save()
Django helps you write unit tests to test individual units of code. If our code relies on an external service, then we can use the standard `unittest.mock` library to mock that API, preventing requests to outside systems.
Let's write a test for the `Question.save()` method to verify that when we save a `Question` it will be upserted into Elasticsearch. We'll write the test in `django/qanda/tests.py`:
from unittest.mock import patch

from django.conf import settings
from django.contrib.auth import get_user_model
from django.test import TestCase
from elasticsearch import Elasticsearch

from qanda.models import Question


class QuestionSaveTestCase(TestCase):
    """
    Tests Question.save()
    """

    @patch('qanda.service.elasticsearch.Elasticsearch')
    def test_elasticsearch_upsert_on_save(self, ElasticsearchMock):
        user = get_user_model().objects.create_user(
            username='unittest',
            password='unittest',
        )
        question_title = 'Unit test'
        question_body = 'some long text'
        q = Question(
            title=question_title,
            question=question_body,
            user=user,
        )
        q.save()

        self.assertIsNotNone(q.id)
        self.assertTrue(ElasticsearchMock.called)
        mock_client = ElasticsearchMock.return_value
        mock_client.update.assert_called_once_with(
            settings.ES_INDEX,
            id=q.id,
            body={
                'doc': {
                    '_type': 'doc',
                    'text': '{}\n{}'.format(question_title, question_body),
                    'question_body': question_body,
                    'title': question_title,
                    'id': q.id,
                    'created': q.created,
                },
                'doc_as_upsert': True,
            }
        )
In the preceding code sample, we created a `TestCase` with a single test method. The method creates a user, saves a new `Question`, and then asserts that the mock has behaved correctly.
Like most `TestCase` classes, `QuestionSaveTestCase` uses both Django's testing API and code from Python's `unittest` library (for example, `unittest.mock.patch()`). Let's look more closely at how Django's testing API makes testing easier.
`QuestionSaveTestCase` extends `django.test.TestCase` instead of `unittest.TestCase` because Django's `TestCase` offers lots of useful features, as follows:
* The entire test case and each test are atomic database operations
* Django takes care of clearing the database before and after each test
* `TestCase` offers convenient `assert*()` methods such as `self.assertInHTML()` (discussed more in the _Creating a unit test for a view_ section)
* A fake HTTP client to create integration tests (discussed more in the _Creating an integration test for a view_ section)
Since Django's `TestCase` extends `unittest.TestCase`, it handles a regular `AssertionError` correctly. So, if `mock_client.update.assert_called_once_with()` raises an `AssertionError` exception, Django's test runner knows how to handle it.
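For example, here is a minimal sketch (a hypothetical `TestCase`) showing the per-test database cleanup in action:
from django.contrib.auth import get_user_model
from django.test import TestCase


class IsolationSketchTestCase(TestCase):
    def test_create_first_user(self):
        get_user_model().objects.create_user(username='a', password='a')
        self.assertEqual(1, get_user_model().objects.count())

    def test_create_second_user(self):
        # the user from the other test has been rolled back; we start clean
        get_user_model().objects.create_user(username='b', password='b')
        self.assertEqual(1, get_user_model().objects.count())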
Let's run our tests with `manage.py`:
**$ cd django
$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.094s
OK
Destroying test database for alias 'default'...**
Now that we know how to test a model, we can move on to testing views. As we test our views, though, we will need to create model instances, and using the models' default managers to create them will become too verbose. Next, let's make it easier to create the models necessary for testing with Factory Boy.
# Creating models for tests with Factory Boy
In our preceding test, we created a `User` using `User.objects.create_user()`. However, that required us to provide a username and password, neither of which we really cared about. We just need a user, not a particular user. For many of our tests, the same principle will hold true for `Question` and `Answer` instances. The Factory Boy library will help us concisely create models in tests.
Factory Boy is particularly useful for Django developers because it knows how to create model instances from Django `Model` classes.
Let's install Factory Boy:
**$ pip install factory-boy==2.9.2**
In this section, we'll use Factory Boy to create a `UserFactory` class and a `QuestionFactory` class. Since a `Question` model must have a user in its `user` field, the `QuestionFactory` will show us how `Factory` classes can reference each other.
Let's start with the `UserFactory`.
# Creating a UserFactory
Both `Question` and `Answer` models are related to users. This means that we'll need to create users in almost all our tests. Generating all the related models for each test using model managers is verbose and distracts from the point of our tests. Django offers out-of-the-box support for test fixtures, but fixtures are separate JSON/YAML files that need to be manually maintained, or they will grow out of sync and cause problems. Factory Boy will help us by letting us code a `UserFactory` class that can concisely create user model instances at runtime, based on the current state of the user model.
Our `UserFactory` will be derived from Factory Boy's `DjangoModelFactory` class, which knows how to deal with Django models. We'll use an inner `Meta` class to tell `UserFactory` which model it's creating (note how this is similar to the `Form` API). We'll also add class attributes to tell Factory Boy how to set values of the model's fields. Finally, we'll override the `_create` method to make `UserFactory` use the manager's `create_user()` method instead of the default `create()` method.
Let's create our `UserFactory` in `django/user/factories.py`:
from django.conf import settings

import factory


class UserFactory(factory.DjangoModelFactory):
    username = factory.Sequence(lambda n: 'user %d' % n)
    password = 'unittest'

    class Meta:
        model = settings.AUTH_USER_MODEL

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        manager = cls._get_manager(model_class)
        return manager.create_user(*args, **kwargs)
The `UserFactory` subclasses the `DjangoModelFactory`. The `DjangoModelFactory` will look at our class's `Meta` inner class (which follows the same pattern as `Form` classes).
Let's take a closer look at attributes of `UserFactory`:
* `password = 'unittest'`: This sets the password for each user to be of the same value.
* `username = factory.Sequence(lambda n: 'user %d' % n)`: `Sequence` sets a different value for the field each time the factory creates a model. `Sequence()` takes a callable, passes it the number of times the factory has been used, and uses the callable's return value as the new instance's field value. In our case, our users will have usernames such as `user 0` and `user 1`.
Finally, we overrode the `_create()` method because the `django.contrib.auth.models.User` model has an unusual manager. The default `_create()` method of `DjangoModelFactory` uses the model manager's `create()` method. This is fine for most models, but won't work well for the `User` model. To create a user, we should really use the `create_user()` method so that we can pass the password in plain text and have it hashed for storage. This will let us authenticate as that `User` later.
Let's try out our factory using the Django shell:
**$ cd django
$ python manage.py shell
Python 3.6.3 (default, Oct 31 2017, 11:15:24)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from user.factories import UserFactory
In [2]: user = UserFactory()
In [3]: user.username
Out[3]: 'user 0'
In [4]: user2 = UserFactory()
In [5]: assert user.username != user2.username
In [6]: user3 = UserFactory(username='custom')
In [7]: user3.username
Out[7]: 'custom'**
This Django shell session shows three ways we can use `UserFactory`:
* We can create new models using a single no-argument call, `UserFactory()`
* Each call leads to a unique username, `assert user.username != user2.username`
* We can override the values the factory would generate by providing them as arguments, `UserFactory(username='custom')`
Next, let's create a `QuestionFactory`.
# Creating the QuestionFactory
Lots of our tests will require multiple `Question` instances. However, each `Question` must have a user. This can lead to lots of brittle and verbose code. Creating a `QuestionFactory` will solve this problem.
In the preceding example, we saw how we can use `factory.Sequence` to give each new model's attribute a distinct value. Factory Boy also offers `factory.SubFactory`, in which we can indicate that a field's value is the result of another factory.
Let's add `QuestionFactory` to `django/qanda/factories.py`:
from unittest.mock import patch

import factory

from qanda.models import Question
from user.factories import UserFactory


class QuestionFactory(factory.DjangoModelFactory):
    title = factory.Sequence(lambda n: 'Question #%d' % n)
    question = 'what is a question?'
    user = factory.SubFactory(UserFactory)

    class Meta:
        model = Question

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        with patch('qanda.service.elasticsearch.Elasticsearch'):
            return super()._create(model_class, *args, **kwargs)
Our `QuestionFactory` is very similar to our `UserFactory`. They have the following things in common:
* Derived from the `factory.DjangoModelFactory`
* Have a `Meta` class
* Use `factory.Sequence` to give a field a custom value
* Have a hardcoded value
There are two important differences:
* The `user` field of `QuestionFactory` uses `SubFactory` to give each `Question` a new user created with the `UserFactory`.
* The `_create` method of `QuestionFactory` mocks the Elasticsearch service so that when the model is created, it doesn't try to connect to that service. Otherwise, it calls the default `_create()` method.
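Here is a quick sketch of using `QuestionFactory` in a Django shell (the exact numbers in the output depend on how many instances the factories have already created):
**$ python manage.py shell
In [1]: from qanda.factories import QuestionFactory
In [2]: question = QuestionFactory()
In [3]: question.title
Out[3]: 'Question #0'
In [4]: question.user.username
Out[4]: 'user 0'**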
To see our `QuestionFactory` in practice, let's write a unit test for our `DailyQuestionList` view.
# Creating a unit test for a view
In this section, we'll write a view unit test for our `DailyQuestionList` view.
Unit testing a view means directly passing the view a request and asserting that the response matches our expectations. Since we're passing the request directly to the view, we also need to directly pass any arguments the view would ordinarily receive parsed out of the request's URL. Parsing values out of URL paths is the responsibility of the request routing, which we don't use in a view unit test.
Let's take a look at our `DailyQuestionListTestCase` class in `django/qanda/tests.py`:
from datetime import date

from django.test import TestCase, RequestFactory

from qanda.factories import QuestionFactory
from qanda.views import DailyQuestionList

QUESTION_CREATED_STRFTIME = '%Y-%m-%d %H:%M'


class DailyQuestionListTestCase(TestCase):
    """
    Tests the DailyQuestionList view
    """
    QUESTION_LIST_NEEDLE_TEMPLATE = '''
    <li >
        <a href="/q/{id}" >{title}</a >
        by {username} on {date}
    </li >
    '''

    REQUEST = RequestFactory().get(path='/q/2030-12-31')
    TODAY = date.today()

    def test_GET_on_day_with_many_questions(self):
        todays_questions = [QuestionFactory() for _ in range(10)]

        response = DailyQuestionList.as_view()(
            self.REQUEST,
            year=self.TODAY.year,
            month=self.TODAY.month,
            day=self.TODAY.day
        )

        self.assertEqual(200, response.status_code)
        self.assertEqual(10, response.context_data['object_list'].count())
        rendered_content = response.rendered_content
        for question in todays_questions:
            needle = self.QUESTION_LIST_NEEDLE_TEMPLATE.format(
                id=question.id,
                title=question.title,
                username=question.user.username,
                date=question.created.strftime(QUESTION_CREATED_STRFTIME)
            )
            self.assertInHTML(needle, rendered_content)
Let's take a closer look at the new APIs we've seen:
* `RequestFactory().get(path=...)`: `RequestFactory` is a utility for creating HTTP requests for testing views. Note that our request's `path` is arbitrary here, as it won't be used for routing.
* `DailyQuestionList.as_view()(...)`: We've discussed that each class-based view has an `as_view()` method that returns a callable, but we haven't used it before. Here, we pass in the request, year, month, and day to execute the view.
* `response.context_data['object_list'].count()`: The response returned by our view still has its context. We can use this context to assert whether the view worked correctly more easily than if we had to evaluate the HTML.
* `response.rendered_content`: The `rendered_content` property lets us access the rendered template of the response.
* `self.assertInHTML(needle, rendered_content)`: `TestCase.assertInHTML()` lets us assert whether one HTML fragment is inside another. `assertInHTML()` knows how to parse HTML and doesn't care about attribute order or whitespace. In testing views, we frequently have to check whether a particular bit of HTML is present in a response.
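To illustrate that last point, the following sketch passes even though the needle and the haystack differ in whitespace and attribute order:
from django.test import TestCase


class AssertInHTMLSketchTestCase(TestCase):
    def test_matches_despite_formatting_differences(self):
        haystack = '<ul><li ><a class="q" href="/q/1" >My Title</a ></li ></ul>'
        # attribute order and whitespace differ, yet the assertion passes
        self.assertInHTML('<a href="/q/1" class="q">My Title</a>', haystack)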
Now that we've created a unit test for a view, let's look at creating an integration test for a view by creating an integration test for `QuestionDetailView`.
# Creating a view integration test
View integration tests use the same `django.test.TestCase` class that a unit test does. An integration test will tell us if our project can route the request to the view and return the correct response. An integration test request will have to go through all the middleware and URL routing that a project is configured with. To help us write integration tests, Django provides `TestCase.client`.
`TestCase.client` is a utility offered by `TestCase` to let us send HTTP requests to our project (it can't send external HTTP requests). Django processes these requests normally. `client` also offers us convenience methods such as `client.login()`, a way of starting an authenticated session. A `TestCase` class also resets its `client` between each test.
Let's write an integration test for `QuestionDetailView` in `django/qanda/tests.py`:
from django.test import TestCase

from qanda.factories import QuestionFactory
from user.factories import UserFactory

QUESTION_CREATED_STRFTIME = '%Y-%m-%d %H:%M'


class QuestionDetailViewTestCase(TestCase):
    QUESTION_DISPLAY_SNIPPET = '''
    <div class="question" >
        <div class="meta col-sm-12" >
            <h1 >{title}</h1 >
            Asked by {user} on {date}
        </div >
        <div class="body col-sm-12" >
            {body}
        </div >
    </div >'''
    LOGIN_TO_POST_ANSWERS = 'Login to post answers.'
    # The exact markup depends on your question detail template; this
    # snippet (assumed here) is what the template renders when a question
    # has no answers yet.
    NO_ANSWERS_SNIPPET = '<li class="answer" >No answers yet!</li >'

    def test_logged_in_user_can_post_answers(self):
        question = QuestionFactory()

        self.assertTrue(self.client.login(
            username=question.user.username,
            password=UserFactory.password)
        )

        response = self.client.get('/q/{}'.format(question.id))
        rendered_content = response.rendered_content

        self.assertEqual(200, response.status_code)
        self.assertInHTML(self.NO_ANSWERS_SNIPPET, rendered_content)

        template_names = [t.name for t in response.templates]
        self.assertIn('qanda/common/post_answer.html', template_names)

        question_needle = self.QUESTION_DISPLAY_SNIPPET.format(
            title=question.title,
            user=question.user.username,
            date=question.created.strftime(QUESTION_CREATED_STRFTIME),
            body=QuestionFactory.question,
        )
        self.assertInHTML(question_needle, rendered_content)
In this sample, we log in and then request a detail view of `Question`. We make multiple assertions about the result to confirm that it is correct (including checking the name of the templates used).
Let's examine some of this code in greater detail:
* `self.client.login(...)`: This begins an authenticated session. All future requests will be authenticated as that user until we call `client.logout()`.
* `self.client.get('/q/{}'.format(question.id))`: This makes an HTTP `GET` request using our client. Unlike when we used `RequestFactory`, the path we provide is to route our request to a view (note that we never reference the view directly in the test). This returns the response created by our view.
* `[t.name for t in response.templates]`: When one of the client's responses renders, the client updates the response with a list of templates used. In the case of the detail view, we used multiple templates. In order to check whether we're showing the UI for posting an answer, we will check whether the `qanda/common/post_answer.html` file is one of the templates used.
With this kind of test, we can gain a lot of confidence that our view works when a user makes a request. However, it does couple the test to the project's configuration. Integration tests make sense even for views coming from third-party apps, to confirm that they're being used correctly. If you're building an app to be distributed as a library, you may find unit tests a better fit.
Next, let's check that our Django and frontend code work together correctly by creating a live server test case that uses Selenium.
# Creating a live server integration test
The final type of test we'll write is a live server integration test. In this test, we'll start up a test Django server and make requests to it using Google Chrome controlled by Selenium.
Selenium is a tool with bindings for many languages (including Python) that lets you control a web browser. Because you are driving a real browser, it lets you test exactly how a real browser behaves when using your project.
There are some limitations imposed by this kind of test:
* Live tests often have to run in sequence
* It's easy to leak state across tests
* Using a browser is much slower than `TestCase.client` (the browser makes real HTTP requests)
Despite all these downsides, live server tests can be an invaluable tool at a time when the client side of a web app is so powerful.
Let's start by setting up Selenium.
# Setting up Selenium
Let's add Selenium to our project by installing with `pip`:
**$ pip install selenium==3.8.0**
Next, we will need the particular webdriver that tells Selenium how to talk to Chrome. Google provides a **chromedriver** at <https://sites.google.com/a/chromium.org/chromedriver/>. In our case, let's save it at the root of our project directory. Then, let's add the path to that driver in `django/conf/settings.py`:
CHROMEDRIVER = os.path.join(BASE_DIR, '../chromedriver')
Finally, make sure that you have Google Chrome installed on your computer. If not, you can download it at <https://www.google.com/chrome/index.html>.
All major browsers claim to have some level of support for Selenium. If you don't like Google Chrome, you can try one of the others. Refer to Selenium's docs (<http://www.seleniumhq.org/about/platforms.jsp>) for details.
# Testing with a live Django server and Selenium
Now that we have Selenium set up, we can create our live server test. A live server test is particularly useful when our project has a lot of JavaScript. Answerly, though, doesn't have any JavaScript. However, Django's forms do take advantage of HTML5 form attributes that most browsers (including Google Chrome) support. We can still test whether that functionality is being correctly used by our code.
In this test, we will check whether a user can submit an empty question. The `title` and `question` fields should each be marked `required` so that a browser won't submit the form if those fields are empty.
Let's add a new test to `django/qanda/tests.py`:
from django.conf import settings
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium.webdriver.chrome.webdriver import WebDriver

from qanda.models import Question
from user.factories import UserFactory


class AskQuestionTestCase(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = WebDriver(executable_path=settings.CHROMEDRIVER)
        cls.selenium.implicitly_wait(10)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()

    def setUp(self):
        self.user = UserFactory()

    def test_cant_ask_blank_question(self):
        initial_question_count = Question.objects.count()

        self.selenium.get('%s%s' % (self.live_server_url, '/user/login'))

        username_input = self.selenium.find_element_by_name("username")
        username_input.send_keys(self.user.username)
        password_input = self.selenium.find_element_by_name("password")
        password_input.send_keys(UserFactory.password)
        self.selenium.find_element_by_id('log_in').click()

        self.selenium.find_element_by_link_text("Ask").click()
        ask_question_url = self.selenium.current_url

        submit_btn = self.selenium.find_element_by_id('ask')
        submit_btn.click()
        after_empty_submit_click = self.selenium.current_url

        self.assertEqual(ask_question_url, after_empty_submit_click)
        self.assertEqual(initial_question_count, Question.objects.count())
Let's take a look at some of the new Django features introduced in this test. Then, we'll review our Selenium code:
* `class AskQuestionTestCase(StaticLiveServerTestCase)`: `StaticLiveServerTestCase` starts a Django server and also ensures that static files are served correctly. You don't have to run `python manage.py collectstatic`. The files will be routed correctly just like if you're running `python manage.py runserver`.
* `def setUpClass(cls)`: All the Django test cases support the usual `setUpClass()`, `setUp()`, `tearDown()`, and `tearDownClass()` methods. `setUpClass()` and `tearDownClass()` are run only once per `TestCase` (before the first test and after the last one, respectively). This makes them ideal for expensive operations, such as connecting to Google Chrome with Selenium.
* `self.live_server_url`: This is the URL to the live server.
Selenium lets us interact with a browser using an API. This book is not focused on Selenium, but let's cover some key methods of the `WebDriver` class:
* `cls.selenium = WebDriver(executable_path=settings.CHROMEDRIVER)`: This instantiates a WebDriver instance with the path to the `ChromeDriver` executable (that we downloaded in the preceding _Setting Up Selenium_ section). We stored the path to the `ChromeDriver` executable in our settings to let us easily reference it here.
* `selenium.find_element_by_name(...)`: This returns an HTML element whose `name` attribute matches the provided argument. The `name` attribute is set on all `<input>` elements whose values are processed by a form, so this is particularly useful for data entry.
* `self.selenium.find_element_by_id(...)`: This is like the preceding method, except it finds the matching element by its `id` attribute.
* `self.selenium.current_url`: This is the browser's current URL. This is useful for confirming that we're on the page we expect.
* `username_input.send_keys(...)`: The `send_keys()` method lets us type the passed string into an HTML element. This is particularly useful for `<input type='text'>` and `<input type='password'>` elements.
* `submit_btn.click()`: This triggers a click on the element.
This test logs in as a user, tries to submit an empty form, and asserts that the browser is still on the same page. Unfortunately, while a browser won't submit a form with empty required `input` elements, there is no API to confirm that directly. Instead, we confirm that the form wasn't submitted by checking that the browser is still at the same URL (according to `self.selenium.current_url`) as before we clicked submit.
# Summary
In this chapter, we learned how to measure code coverage in Django projects and how to write four different types of tests: unit tests for testing any function or class (including models and forms); view unit tests for testing views using `RequestFactory`; view integration tests for checking that requests are routed to a view and return a correct response; and live server integration tests for checking that your client-side and server-side code work together correctly.
Now that we have some tests, let's deploy Answerly into a production environment.
# Deploying Answerly
In the preceding chapter, we learned about Django's testing API and wrote some tests for Answerly. As the final step, let's deploy Answerly on an Ubuntu 18.04 (Bionic Beaver) server using the Apache web server and mod_wsgi.
This chapter assumes that you have the code on your server under `/answerly` and are able to push updates to that code. You will make some changes to your code in this chapter; despite this, avoid developing the habit of making changes directly in production. Instead, use a version control system (such as Git) to track changes: make changes on your local workstation, push them to a remote repository (for example, hosted on GitHub or GitLab), and pull them onto your server. This code is available in version control on GitHub (<https://github.com/tomarayn/Answerly>).
In this chapter, we will do the following things:
* Organize our configuration code to separate production and development settings
* Prepare our Ubuntu Linux server
* Deploy our project using Apache and mod_wsgi
* Take a look at how Django lets us deploy our projects as twelve-factor apps
Let's start by organizing our configuration to separate development and production settings.
# Organizing configuration for production and development
Up until now, we've kept a single `requirements` file and a single `settings.py`. This has made development convenient. However, we can't use our development settings in production.
The current best practice is to have a separate file for each environment. Each environment's file then imports a common file with shared values. We'll use this pattern for our requirements and settings files.
Let's start by splitting up our requirements file.
# Splitting our requirements file
First, let's create `requirements.common.txt` at the root of our project:
django<2.1
psycopg2==2.7.3.2
django-markdownify==0.2.2
django-crispy-forms==1.7.0
elasticsearch==6.0.0
Regardless of our environment, these are the common requirements we'll need to run Answerly. However, this `requirements` file is never used directly; our development and production requirements files will reference it.
Next, let's list our development requirements in `requirements.development.txt`:
-r requirements.common.txt
ipython==6.2.1
coverage==4.4.2
factory-boy==2.9.2
selenium==3.8.0
The preceding file will install everything from `requirements.common.txt` (thanks to the `-r` directive) as well as our testing packages (`coverage`, `factory-boy`, and `selenium`). We put these packages in the development file because we don't expect to run tests in our production environment. If we were running tests in production, then we'd move them to `requirements.common.txt`.
For production, our `requirements.production.txt` file is very simple:
-r requirements.common.txt
Answerly doesn't need any production-only packages. However, we will still create the file for clarity.
To install packages in production, we now execute the following command:
**$ pip install -r requirements.production.txt**
Next, let's split up the settings file along similar lines.
# Splitting our settings file
Again, we will follow the current Django best practice of splitting our settings file into three files: `common_settings.py`, `production_settings.py`, and `dev_settings.py`.
# Creating common_settings.py
We'll create `common_settings.py` by renaming our current `settings.py` file and then making some changes.
Let's change `DEBUG = False` so that no new settings file can _accidentally_ be in debug mode. Then, let's change the secret key to be obtained from an environment variable by updating `SECRET_KEY = os.getenv('DJANGO_SECRET_KEY')`.
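After these edits, the relevant lines of `common_settings.py` might look like the following (a sketch of just the changed lines):
import os

DEBUG = False

# production_settings.py will assert that this is not None
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY')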
Let's also add a new setting, `STATIC_ROOT`. `STATIC_ROOT` is the directory where Django will collect all the static files from across our installed apps to make it easier to serve them:
STATIC_ROOT = os.path.join(BASE_DIR, 'static_root')
In the database config, we can remove all the credentials and keep the `ENGINE` value (to make it clear that we intend to use Postgres everywhere):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
    }
}
Next, let's create a development settings file.
# Creating dev_settings.py
Our development settings will be in `django/config/dev_settings.py`. Let's build it incrementally.
First, we will import everything from `common_settings`:
from config.common_settings import *
Then, we'll override some settings:
DEBUG = True
SECRET_KEY = 'some secret'
In development, we always want to run in debug mode. Also, we can feel safe hardcoding a secret key, as we know it won't be used in production:
DATABASES['default'].update({
    'NAME': 'answerly',
    'USER': 'answerly',
    'PASSWORD': 'development',
    'HOST': 'localhost',
    'PORT': '5432',
})
Since our development database is local, we can hardcode the values in our settings to make the settings simpler. If your database is not local, avoid checking passwords into version control and use `os.getenv()` like in production.
We can also add more settings that our development-only apps may require. For example, in Chapter 5, _Deploying with Docker_ , we had settings for caches and the Django Debug Toolbar app. Answerly doesn't use those right now, so we won't include those settings.
Next, let's add production settings.
# Creating production_settings.py
Let's create our production settings in `django/config/production_settings.py`.
`production_settings.py` is similar to `dev_settings.py` but often uses `os.getenv()` to get values from environment variables. This helps us to keep secrets (for example, passwords, API tokens, and so on) out of version control and decouples settings from particular servers. We'll touch on this again in the _Factor 3 – config_ section:
from config.common_settings import *

DEBUG = False

assert SECRET_KEY is not None, (
    'Please provide DJANGO_SECRET_KEY '
    'environment variable with a value')

ALLOWED_HOSTS += [
    os.getenv('DJANGO_ALLOWED_HOSTS'),
]
First, we import the common settings. Out of an abundance of caution, we ensure that the debug mode is off.
Having a `SECRET_KEY` set is vital to our system staying secure. We `assert` to prevent Django from starting up without `SECRET_KEY`. The `common_settings.py` file should have already set it from an environment variable.
A production website will be accessed on a domain other than `localhost`. We will tell Django what other domains we're serving by appending the `DJANGO_ALLOWED_HOSTS` environment variable to the `ALLOWED_HOSTS` list.
Next, let's update the database configuration:
DATABASES['default'].update({
    'NAME': os.getenv('DJANGO_DB_NAME'),
    'USER': os.getenv('DJANGO_DB_USER'),
    'PASSWORD': os.getenv('DJANGO_DB_PASSWORD'),
    'HOST': os.getenv('DJANGO_DB_HOST'),
    'PORT': os.getenv('DJANGO_DB_PORT'),
})
We updated the database configuration using values from environment variables.
Now that we have our settings sorted, let's prepare our server.
# Preparing our server
Now that our code is ready to go into production, let's prepare our server. In this chapter, we will use Ubuntu 18.04 (Bionic Beaver). If you're running another distribution, then some package names may be different, but the steps we'll take will be the same.
To prepare our server, we will perform the following steps:
1. Installing the required operating system packages
2. Setting up Elasticsearch
3. Creating the database
Let's start by installing the packages we need.
# Installing required packages
To run Answerly on our server, we will need to ensure that the correct software is running.
Let's create a list of packages we will need in `ubuntu/packages.txt`:
python3
python3-pip
virtualenv
apache2
libapache2-mod-wsgi-py3
postgresql
postgresql-client
openjdk-8-jre-headless
The preceding code will install packages for the following:
* Full Python 3 support
* The Apache HTTP Server
* mod_wsgi, the Apache HTTP module for running Python web apps
* The PostgreSQL database server and client
* Java 8, required for Elasticsearch
To install the packages, run the following command:
**$ sudo apt install -y $(cat /answerly/ubuntu/packages.txt)**
Next, we'll install our Python packages to a virtual environment:
**$ virtualenv -p python3 /opt/answerly.venv
$ source /opt/answerly.venv/bin/activate
$ pip install -r /answerly/requirements.production.txt**
Great! Now that we have all the packages, we will need to set up Elasticsearch. Unfortunately, Ubuntu doesn't ship with a recent version of Elasticsearch, so we'll install it directly from Elastic instead.
# Configuring Elasticsearch
We will get Elasticsearch directly from Elastic. Elastic makes this simple by running a package server with Ubuntu-compatible `.deb` packages that we can add to our sources (Elastic also ships and supports RPMs, if that's more convenient for you). Finally, we have to remember to rebind Elasticsearch to localhost, or we will be running an unsecured server on an open public port.
# Installing Elasticsearch
Let's add Elasticsearch to our list of trusted repositories by running the following four commands:
**$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt install apt-transport-https
$ echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
$ sudo apt update**
The preceding commands perform the following four steps:
1. Add the Elastic GPG key to the list of trusted GPG keys
2. Ensure that `apt` gets packages over `HTTPS` by installing the `apt-transport-https` package
3. Add a new sources file that lists the Elastic package server so that `apt` knows how to get the Elasticsearch package from Elastic
4. Update the list of available packages (which will now include Elasticsearch)
Now that the Elasticsearch package is available to `apt`, let's install it:
**$ sudo apt install elasticsearch**
Next, let's configure Elasticsearch.
# Running Elasticsearch
By default, Elasticsearch is configured to bind to a public IP address and includes no authentication.
To change the address Elasticsearch is running on, let's edit `/etc/elasticsearch/elasticsearch.yml`. Find the line with `network.host` and update it, as follows:
network.host: 127.0.0.1
If you don't change the `network.host` setting, you'll be running Elasticsearch with no authentication on a public IP. Your server will inevitably be compromised.
Finally, we want to make sure that Ubuntu starts Elasticsearch and keeps it running. To accomplish that, we need to tell systemd to start Elasticsearch:
**$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl start elasticsearch.service**
The preceding commands perform the following three steps:
1. Fully reload systemd, which will then become aware of the newly installed Elasticsearch service
2. Enable the Elasticsearch service so that it starts when the server boots (in case of reboots or shutdown)
3. Start Elasticsearch
If you need to stop the Elasticsearch service, you can use `systemctl`: `sudo systemctl stop elasticsearch.service`.
Now that we have Elasticsearch running, let's configure the database.
# Creating the database
Django has support for migrations but cannot create the database or database user by itself. We'll write a script to do this for us now.
Let's add the database creation script to our project in `postgres/make_database.sh`:
#!/usr/bin/env bash
psql -v ON_ERROR_STOP=1 <<-EOSQL
CREATE DATABASE $DJANGO_DB_NAME;
CREATE USER $DJANGO_DB_USER;
GRANT ALL ON DATABASE $DJANGO_DB_NAME to "$DJANGO_DB_USER";
ALTER USER $DJANGO_DB_USER PASSWORD '$DJANGO_DB_PASSWORD';
ALTER USER $DJANGO_DB_USER CREATEDB;
EOSQL
To create the database, let's run the following commands:
**$ sudo su postgres
$ export DJANGO_DB_NAME=answerly
$ export DJANGO_DB_USER=answerly
$ export DJANGO_DB_PASSWORD=password
$ bash /answerly/postgres/make_database.sh**
The preceding commands do the following three things:
1. Switch us to be the `postgres` user, who is trusted to connect to the Postgres database without any additional credentials.
2. Set environment variables describing our new database user and schema. **Remember to change the `password` value to a strong password.**
3. Execute the `make_database.sh` script.
Now that we have our server configured, let's deploy Answerly using Apache and mod_wsgi.
# Deploying Answerly with Apache
We will deploy Answerly using Apache and mod_wsgi. mod_wsgi is an open source Apache module that lets Apache host Python programs that implement the **Web Server Gateway Interface** ( **WSGI** ) specification.
The Apache web server is one of many great options for deploying Django projects. Many organizations have an operations team that deploys Apache servers, so using Apache can remove some organizational hurdles to using Django for a project. Apache (with mod_wsgi) also knows how to run multiple web apps and route requests between them, unlike our previous configuration in Chapter 5, _Deploying with Docker_ , where we needed a reverse proxy (NGINX) and a web server (uWSGI). The downsides of using Apache are that it uses more memory than uWSGI and that it doesn't have a way of passing environment variables to our WSGI process. On the whole, deploying with Apache is a useful and important tool in a Django developer's toolbelt.
To deploy, we will do the following things:
1. Create a virtual host config
2. Update `wsgi.py`
3. Create an environment config file
4. Collect the static files
5. Migrate the database
6. Enable the virtual host
Let's start creating a virtual host config for our Apache web server.
# Creating the virtual host config
A single Apache web server can host many websites using different technologies from different locations. To keep each website separate, Apache provides the capacity to define a virtual host. Each virtual host is a logically separate site that serves one or more domains and ports.
Since Apache is already a capable static file server, we will use it to serve our static files. The web server serving the static files and our mod_wsgi process won't compete, because they run as separate processes, thanks to mod_wsgi's daemon mode. Daemon mode means that Answerly runs in processes separate from the rest of Apache, with Apache still responsible for starting and stopping them.
Let's add the Apache virtual host config to our project under `apache/answerly.apache.conf`:
<VirtualHost *:80>
    WSGIDaemonProcess answerly \
        python-home=/opt/answerly.venv \
        python-path=/answerly/django \
        processes=2 \
        threads=15
    WSGIProcessGroup answerly
    WSGIScriptAlias / /answerly/django/config/wsgi.py

    <Directory /answerly/django/config>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    Alias /static/ /answerly/django/static_root/
    <Directory /answerly/django/static_root>
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Let's look at some of these directives more closely:
* `<VirtualHost *:80>`: This instructs Apache that everything until the closing `</VirtualHost>` tag is part of the virtual host definition.
* `WSGIDaemonProcess`: This configures mod_wsgi to run in daemon mode. The daemon process will be named `answerly`. The `python-home` option defines the virtual environment for the Python process that the daemon will use. The `python-path` option lets us add our modules to the daemon's python so that they can be imported. The `processes` and `threads` options tell Apache how many of each to maintain.
* `WSGIProcessGroup`: This associates this virtual host with the Answerly mod_wsgi daemon. Remember to keep the `WSGIDaemonProcess` name and the `WSGIProcessGroup` name the same.
* `WSGIScriptAlias`: This describes which requests should be routed to which WSGI script. In our case, all requests should go to Answerly's WSGI script.
* `<Directory /answerly/django/config>`: This block gives all users permission to access our WSGI script.
* `Alias /static/ /answerly/django/static_root/`: This routes any request that begins with `/static/` not to mod_wsgi but to our static file root (note the trailing slash on both paths).
* `<Directory /answerly/django/static_root>`: This block gives users permission to access files in `static_root`.
* `ErrorLog` and `CustomLog`: These describe where Apache should send its logs for this virtual host. In our case, we want to log to the Apache `log` directory (`/var/log/apache2` on Ubuntu).
We have now configured Apache to run Answerly. However, if you compare your Apache configuration and your uWSGI configuration from Chapter 5, _Deploying with Docker_ , you'll notice a difference. In the uWSGI configuration, we provided the environment variables that our `production_settings.py` relies on. However, mod_wsgi doesn't offer us such a facility. Instead, we will update `django/config/wsgi.py` to provide the environment variables that `production_settings.py` needs.
# Updating wsgi.py to set environment variables
Now, we will update `django/config/wsgi.py` to provide the environment variables that `production_settings.py` wants but mod_wsgi can't provide. We will also update `wsgi.py` to read a configuration file on startup and then set the environment variables itself. This way, our production settings aren't coupled to mod_wsgi or a config file.
Let's update `django/config/wsgi.py`:
import configparser
import os

from django.core.wsgi import get_wsgi_application

if not os.environ.get('DJANGO_SETTINGS_MODULE'):
    parser = configparser.ConfigParser()
    parser.read('/etc/answerly/answerly.ini')
    for name, val in parser['mod_wsgi'].items():
        os.environ[name.upper()] = val

application = get_wsgi_application()
In the updated `wsgi.py`, we check whether a `DJANGO_SETTINGS_MODULE` environment variable is set. If it is absent, we parse our config file and set the environment variables ourselves. Our `for` loop uppercases the variable names because `ConfigParser` lowercases keys by default.
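We can confirm `ConfigParser`'s lowercasing behavior in a Python interpreter:
import configparser

parser = configparser.ConfigParser()
parser.read_string('[mod_wsgi]\nDJANGO_DB_NAME=answerly\n')
print(list(parser['mod_wsgi'].items()))
# prints [('django_db_name', 'answerly')] -- note the lowercased key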
Next, let's create our environment config file.
# Creating the environment config file
We'll store our environment config under `/etc/answerly/answerly.ini`. We don't want it stored under `/answerly` because it's not part of our code. This file describes the settings for _just_ this server. We should never check this file into version control.
Let's create `/etc/answerly/answerly.ini` on our server:
[mod_wsgi]
DJANGO_ALLOWED_HOSTS=localhost
DJANGO_DB_NAME=answerly
DJANGO_DB_USER=answerly
DJANGO_DB_PASSWORD=password
DJANGO_DB_HOST=localhost
DJANGO_DB_PORT=5432
DJANGO_ES_INDEX=answerly
DJANGO_ES_HOST=localhost
DJANGO_ES_PORT=9200
DJANGO_LOG_FILE=/var/log/answerly/answerly.log
DJANGO_SECRET_KEY=a large random value
DJANGO_SETTINGS_MODULE=config.production_settings
The following are the two things to remember about this file:
* Remember to set `DJANGO_DB_PASSWORD` to the same value you set when you ran the `make_database.sh` script. _Remember to make sure that this password is strong and secret_.
* Remember to set a strong `DJANGO_SECRET_KEY` value.
We should now have our environment set up for Apache. Next, let's migrate the database.
# Migrating the database
We created the database for Answerly in a previous step, but we didn't create the tables. Let's now migrate the database using Django's built-in migration tools.
On the server, we want to execute the following commands:
**$ cd /answerly/django
$ source /opt/answerly.venv/bin/activate
$ export DJANGO_SECRET_KEY=anything
$ export DJANGO_DB_HOST=127.0.0.1
$ export DJANGO_DB_PORT=5432
$ export DJANGO_LOG_FILE=/var/log/answerly/answerly.log
$ export DJANGO_DB_USER=answerly
$ export DJANGO_DB_NAME=answerly
$ export DJANGO_DB_PASSWORD=password
$ python manage.py migrate --settings=config.production_settings**
Our `django/config/production_settings.py` will require us to provide `DJANGO_SECRET_KEY` with a value, but it won't be used in this case. However, providing the correct value for `DJANGO_DB_PASSWORD` and the other `DJANGO_DB` variables is critical.
Once our `migrate` command returns successful, then our database will have all the tables we need.
Next, let's make our static (JavaScript/CSS/image) files available to our users.
# Collecting static files
In our virtual host config, we configured Apache to serve our static (JS, CSS, image, and so on) files. For Apache to serve these files, we need to collect them all under one parent directory. Let's use Django's built-in `manage.py collectstatic` command to do just that.
On the server, let's run the following commands:
**$ cd /answerly/django
$ source /opt/answerly.venv/bin/activate
$ export DJANGO_SECRET_KEY=anything
$ export DJANGO_LOG_FILE=/var/log/answerly/answerly.log
$ sudo python3 manage.py collectstatic --settings=config.production_settings --no-input**
The preceding commands will copy static files from all our installed apps into `/answerly/django/static_root` (per our `STATIC_ROOT` definition in `production_settings.py`). Our virtual host config tells Apache to serve these files directly.
Now, let's tell Apache to start serving Answerly.
# Enabling the Answerly virtual host
To have Apache serve Answerly to users, we need to enable the virtual host config we created in the preceding section. To enable a virtual host in Apache, we add a soft link pointing at the virtual host config in Apache's `sites-enabled` directory and tell Apache to reload its configuration.
First, let's add our soft link to Apache's `sites-enabled` directory:
**$ sudo ln -s /answerly/apache/answerly.apache.conf /etc/apache2/sites-enabled/000-answerly.conf**
We prefix our soft link with `000` to control when our config gets loaded. Apache loads site configs in filename character order (for example, `B` comes before `a` in Unicode/ASCII encoding), so a numeric prefix makes the load order obvious.
Apache is frequently packaged with a default site. Check `/etc/apache2/sites-enabled/` for sites you don't want to run. Since everything in there should be a soft link, they should be safe to delete.
To activate the virtual host, we will need to reload Apache's configuration:
**$ sudo systemctl reload apache2.service**
Congratulations! You've deployed Answerly on your server.
# A quick review of the section
In this chapter so far, we've looked at how to deploy Django with Apache and mod_wsgi. First, we configured our server by installing packages from Ubuntu and Elastic (for Elasticsearch). Then, we configured Apache to run Answerly as a virtual host. Our Django code will be executed by mod_wsgi.
At this point, we've seen two very different deployments, one using Docker and one using Apache and mod_wsgi. Despite the very different environments, we've followed many similar practices. Let's look at how Django best practices line up with the popular twelve-factor app methodology.
# Deploying Django projects as twelve-factor apps
The _twelve-factor app_ document describes a methodology for developing web apps and services. The principles were documented in 2011 by Adam Wiggins and others, based primarily on their experience at Heroku (a popular Platform as a Service, or PaaS, provider). Heroku was one of the first PaaS providers that helped developers build easy-to-scale web applications and services. Since being published, the twelve-factor principles have shaped much of the thinking about how to build and deploy SaaS apps, such as web apps.
The twelve factors provide many benefits, as follows:
* Easing automation and onboarding using declarative formats
* Emphasizing portability across deployed environments
* Encouraging production/development environment parity and continuous deployment and integration
* Simplifying scaling without requiring re-architecting
However, when evaluating the twelve factors, it's important to remember that they are strongly coupled to Heroku's approach to deployment. Not all platforms (or PaaS providers) take exactly the same approach. This doesn't make the twelve factors right and other approaches wrong, nor vice versa. Rather, the twelve factors are useful principles to keep in mind. You should adapt them to help your projects, just as you would with any methodology.
The twelve-factor use of the word _app_ differs from Django's usage:
* A Django project is the equivalent of a twelve-factor app
* A Django app is the equivalent of a twelve-factor library
In this section, we will examine what each of the twelve-factors means and how they can be applied to your Django projects.
# Factor 1 – Code base
"One codebase tracked in revision control, many deploys" – 12factor.net
This factor emphasizes the following two things:
* All code should be tracked in a version-controlled code repository (repo)
* Each deployment should be able to reference a single version/commit in that repo
This means that when we experience a bug, we know exactly which version of the code is responsible for it. If our project spans multiple repos, the twelve-factor approach requires that shared code be refactored into libraries and tracked as dependencies (refer to the _Factor 2 – Dependencies_ section). If multiple projects use the same repository, then they should be refactored into separate repositories (sometimes called a _multirepo_). In the years since twelve-factor was first published, multirepo versus monorepo (where a single repo is used for multiple projects) has become increasingly debated. Some large projects have found benefits in using a monorepo; other projects have found success with multiple repos.
Fundamentally, this factor strives to ensure that we know what is running in which environment.
We can write our Django apps in a reusable way so that they can be hosted as libraries that are installed with `pip` (multirepo style). Alternatively, you can host all your Django projects and apps in the same repo (monorepo) by modifying the Python path of your Django project.
# Factor 2 – Dependencies
"Explicitly declare and isolate dependencies" – 12 factor.net
A twelve-factor app shouldn't assume anything about its environment. The libraries and tools a project uses must be declared by the project and installed as part of the deployment (refer to _Factor 5 – Build, release, and run_ section). All running twelve-factor apps should be isolated from each other.
Django projects benefit from Python's rich toolset. "In Python there are two separate tools for these steps – Pip is used for declaration and Virtualenv for isolation" (<https://12factor.net/dependencies>). In Answerly, we also used a list of Ubuntu packages that we installed with `apt`.
# Factor 3 – Config
"Store config in the environment" – 12factor.net
The twelve-factor app methodology provides a useful definition of a config:
"An app's config is everything that is likely to vary between deploys (staging, production, developer environments, etc)" – <https://12factor.net/config>
The twelve-factor app methodology also encourages the use of environment variables for communicating config values to our code. This means that if there's a problem, we can test exactly the code that was deployed (provided by Factor 1) with the exact config used. We can also check whether an error is a config issue or a code issue by deploying the same code with a different config.
In Django, our config is referenced by our `settings.py` files. In both MyMDB and Answerly, we've seen common config values such as `SECRET_KEY`, database credentials, and API keys (for example, AWS keys) passed by environment variables.
However, this is an area where Django best practices differ from the strictest reading of a twelve-factor app. Django projects generally create a separate settings file for staging, production, and local development with most settings hardcoded. It's primarily credentials and secrets which are passed as environment variables.
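In settings code, the pattern looks like the following sketch (the variable names are taken from our environment config file; the defaults are illustrative):
import os

# values that vary between deploys come from the environment
ES_INDEX = os.getenv('DJANGO_ES_INDEX', 'answerly')
ES_HOST = os.getenv('DJANGO_ES_HOST', 'localhost')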
# Factor 4 – Backing services
"Treat backing services as attached resources" – 12factor.net
A twelve-factor app should not care where a backing service (for example, database) is located and should always access it via a URL. The benefit of this is that our code is not coupled to a particular environment. This approach also permits each piece of our architecture to scale independently.
Answerly, as deployed in this chapter, is located on the same server as its database. However, we don't use a local authentication mechanism but instead provide Django with a host, port, and credentials. This way, we could move our database to another server and no code would have to be changed. We would simply update our config.
Django is written with the assumption that we will treat most services as attached resources (for example, most database documentation assumes this). We still need to practice this principle when working with third-party libraries.
# Factor 5 – Build, release, and run
"Strictly separate build and run stages" – 12factor.net
The twelve-factor approach encourages a deployment to be divided into three distinct steps:
1. **Build**: Where the code and dependencies are gathered into a single bundle (a _build_)
2. **Release**: Where the build is combined with a config and made ready for execution
3. **Run**: Where the combined build and config are executed
A twelve-factor app further requires each release to have a unique ID so that it can be identified.
This level of deployment detail is beyond Django's scope, and there's a variety of levels of adherence to this strict three-step model. A project that uses Django and Docker, as seen in Chapter 5, _Deploying with Docker_ , may adhere to it very closely. MyMDB had a clear build with all the dependencies bundled in the Docker image. However, in this chapter, we never made a bundled build. Instead, we installed dependencies (running `pip install`) after our code was already on our server. Many projects succeed with this simple model. However, as the project scales, this may cause complications. Answerly's deployment shows how twelve-factor principles may be bent and still work for some projects.
# Factor 6 – Processes
"Execute the app as one or more stateless processes" – 12factor.net
The focus of this factor is that app processes should be _stateless_. Each task is executed without relying on a previous task having left data behind. Instead, state should be stored in backing services (refer to _Factor 4 – Backing services_ section), such as a database or external cache. This enables an app to scale easily, because all processes are equally eligible to process a request.
Django is built around this assumption. Even sessions, which store a user's login state, aren't saved in the process but in the database by default. Instances of view classes are never reused. The only place where Django comes close to violating this principle is one of the cache backends (the local-memory cache). However, as we discussed, that's an inefficient backend; Django projects generally use a backing service (for example, Memcached) for their caches.
# Factor 7 – Port binding
"Export services via port binding" – 12factor.net
The focus of this factor is that our process should be accessed directly through its port. Accessing a project should be a matter of sending a properly formed request to `app.example.com:1234`. Further, a twelve-factor app should not run as an Apache module or inside a web server container. If our project needs to parse HTTP requests, it should use a library (refer to the _Factor 2 – Dependencies_ section) to parse them.
Django follows parts of this principle. Users access a Django project over an HTTP port using HTTP. One aspect of Django that diverges from the twelve factors is that it's almost always run as a child process of a web server (whether Apache, uWSGI, or something else). It's the web server, not Django, that performs the port binding. However, this minor difference has not kept Django projects from scaling effectively.
# Factor 8 – Concurrency
"Scale out via the process model" – 12factor.net
The twelve-factor app principles are focused on scaling (a vital concern for a PaaS provider like Heroku). In Factor 8, we see how the trade-offs and decisions made in the earlier factors come together to help a project scale.
Since a project runs as a stateless process (refer to _Factor 6 – Processes_ section) available as a port (refer to _Factor 7 – Port binding_ section), concurrency is just a matter of having more processes (across one or more machines). The processes don't need to care whether they're on the same machine or not since any state (like a question's answer) is stored in a backing service (refer to _Factor 4 – Backing services_ section) such as a database. Factor 8 tells us to trust the Unix process model for running services instead of daemonizing or creating PID files.
Since Django projects run as child processes of a web server, they often adapt this principle. Django projects that need to scale often use a combination of reverse proxy (for example, Nginx) and lightweight web server (for example, uWSGI or Gunicorn). Django projects don't directly concern themselves with how processes are managed, but follow the best practice for the web server they're using.
# Factor 9 – Disposability
"Maximize robustness with fast startup and graceful shutdown" – 12factor.net
The disposability factor has two parts. Firstly, a twelve-factor app should be able to start processing requests on its port soon after its process starts. Remember that all its dependencies (refer to the _Factor 2 – Dependencies_ section) have already been installed (refer to the _Factor 5 – Build, release, and run_ section). Secondly, a twelve-factor app should handle its process being stopped gracefully; stopping the process shouldn't leave the app in an invalid state.
Django projects can shut down gracefully when Django is configured to wrap each request in an atomic transaction (note that this is opt-in via the `ATOMIC_REQUESTS` database option, not the default). If a Django process (whether managed by uWSGI, Apache, or anything else) is stopped while a request is only partially processed, the transaction will never be committed, and the database will discard it. When we're dealing with backing services that don't support transactions (for example, S3 or Elasticsearch), we have to make sure that we consider this in our design.
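For example, per-request transactions are enabled with a single database option (shown here against our Postgres config):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # wrap each request in a transaction; this is off by default
        'ATOMIC_REQUESTS': True,
    }
}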
# Factor 10 – Dev/prod parity
"Keep development, staging, and production as similar as possible" – 12factor.net
All environments that a twelve-factor app run in should be as similar as possible. This is much easier when a twelve-factor app is a simple process (refer to _Factor 6 – Processes_ section). This also includes the backing services the twelve-factor app uses (refer to _Factor 4 – Backing services_ section). For example, a twelve-factor app's development environment should include the same database as the production environment. Tools such as Docker and Vagrant can make this much easier to accomplish today.
The general Django best practice is to use the same database (and other backing services) in development and production. In this book, we've striven to do so. However, the Django community often uses the `manage.py runserver` command in development, as opposed to running uWSGI or Apache.
# Factor 11 – Logs
"Treat logs as event streams" – 12factor.net
Logs should simply be output as an unbuffered `stdout` stream, and a _twelve-factor app never concerns itself with routing or storage of its output stream_ (<https://12factor.net/logs>). When the process runs, it should just output unbuffered content to `stdout`. Whoever starts the process (whether a developer or a production server's init process) can then redirect that stream appropriately.
A Django project generally uses Python's logging module. This can support writing to a log file or outputting an unbuffered stream. Generally, Django projects append to a file. That file may be processed or rotated separately (for example, using the `logrotate` utility).
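As a minimal, hedged sketch (the handler name and log level here are our own choices, not anything Django requires), a `LOGGING` setting in `settings.py` that streams every record to `stdout` could look like this:

import sys

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            # StreamHandler defaults to stderr; point it at stdout instead
            'stream': sys.stdout,
        },
    },
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
}

Whoever starts the process can then redirect `stdout` to a file, to `logrotate`, or to a log aggregation service.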
# Factor 12 – Admin processes
"Run admin/management tasks as one-off processes" – 12factor.net
All projects require one-off tasks to be run from time to time (for example, database migrations). When a twelve-factor app's one-off task is run, it should be run as a separate process from the processes that handle regular requests. However, the one-off process should run with the same environment as all other processes.
In Django, this means using the same virtual environment, settings file, and environment variables to run our `manage.py` tasks as we use for our normal processes. This is what we did earlier when we migrated the database.
# A quick review of the section
Having reviewed all the principles of a twelve-factor app, we have seen how Django projects are able to follow these principles to help make our project easy to deploy, scale, and automate.
The main difference between a Django project and a strict twelve-factor app is that Django apps are run by a web server rather than as separate processes (Factor 6). However, as long as we avoid complicated web server configurations (as we do in this book), we can continue to gain the benefits of being a twelve-factor app.
# Summary
In this chapter, we focused on deploying Django to a Linux server running Apache and mod_wsgi. We've also reviewed the principles of a twelve-factor app and how a Django app can use them to be easy to deploy, scale, and automate.
Congratulations! You've launched Answerly.
In the next chapter, we'll look at creating a mailing list management app called MailApe.
# Starting Mail Ape
In this chapter, we'll begin building Mail Ape, a mailing list manager that will let users start mailing lists, sign up for mailing lists, and email their subscribers. Subscribers will have to confirm their subscription to a mailing list and will be able to unsubscribe. This will help us ensure that Mail Ape isn't used to send spam.
In this chapter, we will build the core Django functionality of Mail Ape:
* We'll build models that describe Mail Ape, including `MailingList` and `Subscriber`
* We'll use Django's Class-Based Views to create web pages
* We'll use Django's built-in authentication functionality to let users log in
* We'll make sure that only the owner of a `MailingList` model instance can email its subscribers
* We'll create templates to generate the HTML to display the forms to subscribe and email to our users
* We'll run Mail Ape locally using Django's built-in development server
The code for this project is available online at <https://github.com/tomaratyn/MailApe>.
Django follows the **Model View Template** ( **MVT** ) pattern to separate data, control, and presentation logic and encourage reusability. Models represent the data we'll store in the database. Views are responsible for handling a request and returning a response. Views should not contain HTML. Templates are responsible for the body of a response and defining the HTML. This separation of responsibilities has proven effective at keeping code easy to write and maintain.
Let's start by creating the Mail Ape project.
# Creating the Mail Ape project
In this section, we will create the Mail Ape project:
**$ mkdir mailape
$ cd mailape**
All the paths in this part of the book will be relative to this directory.
# Listing our Python dependencies
Next, let's create a `requirements.txt` file to track our Python dependencies:
django<2.1
psycopg2<2.8
django-markdownify==0.3.0
django-crispy-forms==1.7.0
Now that we know our requirements, we can install them, as follows:
**$ pip install -r requirements.txt**
This will install the following four libraries:
* `Django`: Our favorite web app framework
* `psycopg2`: The Python PostgreSQL library; we'll use PostgreSQL in both production and development
* `django-markdownify`: A library that makes it easy to render markdown in a Django template
* `django-crispy-forms`: A library that makes it easy to create Django forms in templates
With Django installed, we can use the `django-admin` utility to create our project.
# Creating our Django project and apps
A Django project is composed of a configuration directory and one or more Django apps. The actual functionality of a project is encapsulated by the installed apps. By default, the configuration directory is named after the project.
A web app is often composed of much more than just the Django code that is executed. We need configuration files, system dependencies, and documentation. To help future developers (including our future selves), we will strive to label each directory clearly:
**$ django-admin startproject config
$ mv config django
$ tree django
django
├── config
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── manage.py**
With this approach, our directory structure is clear about the location of our Django code and configuration.
Next, let's create the apps that will encapsulate our functionality:
**$ python manage.py startapp mailinglist
$ python manage.py startapp user**
For each app, we should create a URLConf. A URLConf ensures that requests get routed to the right view. A URLConf is a list of paths, each mapping a path to the view that serves it and giving that path a name. One great thing about URLConfs is that they can include each other. When a Django project is created, it gets a root URLConf (ours is at `django/config/urls.py`). Since a URLConf may include other URLConfs, the name provides a vital way to reference a URL path to a view without knowing the full URL path to the view.
# Creating our app's URLConfs
Let's create a URLConf for the `mailinglist` app in `django/mailinglist/urls.py`:
from django.urls import path
from mailinglist import views
app_name = 'mailinglist'
urlpatterns = [
]
The `app_name` variable is used to scope the paths in case of name collisions. When resolving a path name, we can prefix it with `mailinglist:` to ensure that it's from this app. As we build our views, we'll add `path`s to the `urlpatterns` list.
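To make the value of named paths concrete, here is a hedged sketch of resolving a namespaced name in Python (the `mailinglist_list` name is one we'll register later in this chapter):

from django.urls import reverse

# Resolves the namespaced name to a URL path (for example, '/mailinglist/')
# without us having to know or hardcode the full path.
path_to_view = reverse('mailinglist:mailinglist_list')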
Next, let's create another URLConf in the `user` app by creating `django/user/urls.py`:
from django.contrib.auth.views import LoginView, LogoutView
from django.urls import path
import user.views
app_name = 'user'
urlpatterns = [
]
Great! Now, let's include them in the root URLConf that's located in `django/config/urls.py`:
from django.contrib import admin
from django.urls import path, include
import mailinglist.urls
import user.urls
urlpatterns = [
path('admin/', admin.site.urls),
path('user/', include(user.urls, namespace='user')),
path('mailinglist/', include(mailinglist.urls, namespace='mailinglist')),
]
The root URLConf is just like our app's URLConfs. It has a list of `path()` objects. The `path()` objects in the root URLConfs usually don't have views but `include()` other URLConfs. Let's take a look at the two new functions here:
* `path()`: This takes a string and either a view or the result of `include()`. Django will iterate over the `path()`s in a URLConf until it finds one that matches the path of a request. Django will then pass the request to that view or URLConf. If it's a URLConf, then that list of `path()`s is checked.
* `include()`: This takes a URLConf and a namespace name. A namespace isolates URLConfs from each other so that we can prevent name collisions, ensuring that we can differentiate `appA:index` from `appB:index`. `include()` returns a tuple; the object at `admin.site.urls` is already a correctly formatted tuple, so we don't have to wrap it in `include()`. Generally, though, we always use `include()`.
If Django can't find a `path()` object that matches a request's path, then it will return a 404 response.
The result of this URLConf is as follows:
* Any request starting with `admin/` will be routed to the admin app's URLConf
* Any request starting with `mailinglist/` will be routed to the `mailinglist` app's URLConf
* Any request starting with `user/` will be routed to the `user` app's URLConf
# Installing our project's apps
Let's update `django/config/settings.py` to install our apps. We'll change the `INSTALLED_APPS` setting as shown in the following code snippet:
INSTALLED_APPS = [
'user',
'mailinglist',
'crispy_forms',
'markdownify',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
Now that we have our project and apps configured, let's create models for our `mailinglist` app.
# Creating the mailinglist models
In this section, we'll create the models for our `mailinglist` app. Django provides a rich and powerful ORM that will let us define our models in Python without having to deal with the database directly. The ORM converts our Django classes, fields, and objects into relational database concepts:
* A model class maps to a relational database table
* A field maps to a relational database column
* A model instance maps to a relational database row
Each model also comes with a default manager available in the `objects` attribute. A manager provides a starting point for running queries on a model. One of the most important methods a manager has is `create()`. We can use `create()` to create an instance of the model in our database. A manager is also the starting point to get a `QuerySet` for our model.
A `QuerySet` represents a database query for models. `QuerySet`s are lazy and only execute when they're iterated or converted to a `bool`. The `QuerySet` API offers most of the functionality of SQL without being tied to a particular database. Two particularly useful methods are `QuerySet.filter()` and `QuerySet.exclude()`. `QuerySet.filter()` lets us filter the results of the `QuerySet` to only those matching the provided criteria. `QuerySet.exclude()` lets us exclude results that match the criteria.
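As a quick, hedged sketch of managers and `QuerySet`s in action (using the `MailingList` model we're about to create; `some_user` is assumed to be an existing user instance):

from mailinglist.models import MailingList

# create() saves a new row and returns the model instance
mailing_list = MailingList.objects.create(name='My List', owner=some_user)

# QuerySets are lazy: no query runs until we iterate over them
owned = MailingList.objects.filter(owner=some_user)
others = MailingList.objects.exclude(owner=some_user)

for mailing_list in owned:  # the query executes here
    print(mailing_list.name)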
Let's start with the first model, `MailingList`.
# Creating the MailingList model
Our `MailingList` model will represent a mailing list that one of our users has created. This will be an important model for our system because many other models will be referring to it. We can also anticipate that the `id` of a `MailingList` will have to be publicly exposed in order to relate subscribers back to it. To avoid letting users enumerate all the mailing lists in Mail Ape, we want to make sure that our `MailingList` IDs are nonsequential.
Let's add our `MailingList` model to `django/mailinglist/models.py`:
import uuid
from django.conf import settings
from django.db import models
from django.urls import reverse
class MailingList(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
name = models.CharField(max_length=140)
owner = models.ForeignKey(to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE)
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse(
'mailinglist:manage_mailinglist',
kwargs={'pk': self.id}
)
def user_can_use_mailing_list(self, user):
return user == self.owner
Let's take a closer look at our `MailingList` model:
* `class MailingList(models.Model):`: All Django models must inherit from the `Model` class.
* `id = models.UUIDField`: This is the first time we've specified the `id` field for a model. Usually, we let Django provide one for us automatically. In this case, we wanted nonsequential IDs, so we used a field that provides **Universally Unique Identifiers** ( **UUIDs** ). Django will create the proper database field when we generate the migrations (refer to the _Creating database migrations_ section). However, we have to generate the UUID in Python. To generate new UUIDs for each new model, we used the `default` argument and Python's `uuid4` function. To tell Django that our `id` field is the primary key, we used the `primary_key` argument. We further passed `editable=False` to prevent changes to the `id` attribute.
* `name = models.CharField`: This will represent the mailing list's name. A `CharField` will get converted to a `VARCHAR` column, so we must provide it with a `max_length` argument.
* `owner = models.ForeignKey`: This is a foreign key to Django's user model. In our case, we will use the default `django.contrib.auth.models.User` class. We follow the Django best practice of avoiding hardcoding this model. By referencing `settings.AUTH_USER_MODEL`, we don't couple our app to the project too tightly. This encourages future reuse. The `on_delete=models.CASCADE` argument means that if a user is deleted, all their `MailingList` model instances will be deleted too.
* `def __str__(self)`: This defines how to convert a mailing list to a `str`. Both Django and Python will use this when a `MailingList` needs to be printed out or displayed.
* `def get_absolute_url(self)`: This is a common method on Django models. `get_absolute_url()` returns a URL path that represents the model. In our case, we return the management page for this mailing list. We don't hardcode the path. Instead, we use `reverse()` to resolve the path at runtime by providing the name of the URL. We'll look at named URLs in the _Creating the URLConf_ section.
* `def user_can_use_mailing_list(self, user)`: This is a method we've added for our own convenience. It checks whether a user can use this mailing list (meaning view its related items and/or send messages). Django's _Fat models_ philosophy encourages placing code for decisions like this in models rather than in views. This gives us a central place for decisions, ensuring that you **Don't Repeat Yourself** ( **DRY** ).
We now have our `MailingList` model. Next, let's create a model to capture the mailing list's subscribers.
# Creating the Subscriber model
In this section, we will create a `Subscriber` model. A `Subscriber` can only belong to one `MailingList` and must confirm their subscription. Since we'll need to reference subscribers on their confirmation and unsubscribe pages, we'll want their `id` field to be nonsequential as well.
Let's create the `Subscriber` model in `django/mailinglist/models.py`:
class Subscriber(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
email = models.EmailField()
confirmed = models.BooleanField(default=False)
mailing_list = models.ForeignKey(to=MailingList, on_delete=models.CASCADE)
class Meta:
unique_together = ['email', 'mailing_list', ]
The `Subscriber` model has some similarities to the `MailingList` model. The base class and `UUIDField` function the same. Let's take a look at some of the differences:
* `models.EmailField()`: This is a specialized `CharField` but does extra validation to ensure that the value is a valid email address.
* `models.BooleanField(default=False)`: This lets us store `True`/`False` values. We need this to track whether a user really intends to subscribe to a mailing list.
* `models.ForeignKey(to=MailingList...)`: This lets us create a foreign key between `Subscriber` and `MailingList` model instances.
* `unique_together`: This is an attribute of the `Meta` inner class of `Subscriber`. A `Meta` inner class lets us specify metadata about the model's table. For example, `unique_together` lets us add an additional unique constraint to the table. In this case, we prevent the same email from subscribing to the same mailing list twice.
Now that we can track `Subscriber` model instances, let's track the messages our users want to send to their `MailingList`.
# Creating the Message model
Our users will want to send messages to the `Subscriber`s of their `MailingList`s. In order to know what to send to these subscribers, we will need to store the messages as a Django model.
A `Message` should belong to a `MailingList` and have a nonsequential `id`. We need to save the subject and body of these messages. We will also want to track when the sending began and completed.
Let's add the `Message` model to `django/mailinglist/models.py`:
class Message(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
mailing_list = models.ForeignKey(to=MailingList, on_delete=models.CASCADE)
subject = models.CharField(max_length=140)
body = models.TextField()
started = models.DateTimeField(default=None, null=True)
finished = models.DateTimeField(default=None, null=True)
Again, the `Message` model is very similar to our preceding models in its base class and fields. We do see some new fields in this model. Let's take a closer look at these new fields:
* `models.TextField()`: This is used to store arbitrarily long character data. All major databases have a `TEXT` column type. This is useful to store the `body` attribute of our user's `Message`.
* `models.DateTimeField(default=None, null=True)`: This is used to store date and time values. In Postgres, this becomes a `TIMESTAMP` column. The `null` argument tells Django that this column should be able to accept a `NULL` value. By default, all fields have a `NOT NULL` constraint on them.
We now have our models. Let's create them in our database with database migrations.
# Using database migrations
Database migrations describe how to get a database to a particular state. In this section, we will do the following things:
* Create a database migration for our `mailinglist` app models
* Run the migration on a Postgres database
When we make a change to our models, we can have Django generate the code for creating those tables, fields, and constraints. The migrations that Django generates are created using an API that is also available to Django developers. If we need to do a complicated migration, we can write a migration ourselves. Remember that a proper migration includes code for both applying and reverting a migration. If there's a problem, we want to have a way to undo our migration. When Django generates a migration, it always generates both migrations for us.
Let's start by configuring Django to connect to our PostgreSQL database.
# Configuring the database
To configure Django to connect to our Postgres database, we will need to update the `DATABASES` setting in `django/config/settings.py`:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'mailape',
'USER': 'mailape',
'PASSWORD': 'development',
'HOST': 'localhost',
'PORT': '5432',
}
}
You should not hardcode the password to a production database in your `settings.py` file. If you're connecting to a shared or online instance, set the username, password, and host using environment variables and access them using `os.getenv()`, like we did in our previous production deployment chapters (Chapter 5, _Deploying with Docker_ , and Chapter 9, _Deploying Answerly_ ).
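As a minimal sketch (the environment variable names below are our own convention, not something Django defines), pulling those values from the environment could look like this:

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.getenv('DJANGO_DB_NAME', 'mailape'),
        'USER': os.getenv('DJANGO_DB_USER', 'mailape'),
        # no default: fail loudly if the password isn't configured
        'PASSWORD': os.getenv('DJANGO_DB_PASSWORD'),
        'HOST': os.getenv('DJANGO_DB_HOST', 'localhost'),
        'PORT': os.getenv('DJANGO_DB_PORT', '5432'),
    }
}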
Django cannot create a database and users by itself. We must do that ourselves. You can find a script for doing this in the code for this chapter.
Next, let's create the migrations for our models.
# Creating database migrations
To create our database migrations, we will use the `manage.py` script that Django put at the top of the Django project (`django/manage.py`):
**$ cd django
$ python manage.py makemigrations
Migrations for 'mailinglist':
mailinglist/migrations/0001_initial.py
- Create model MailingList
- Create model Message
- Create model Subscriber
- Alter unique_together for subscriber (1 constraint(s))**
Great! Now that we have the migrations, we can run them on our local development database.
# Running database migrations
We use `manage.py` to apply our database migrations to a running database. On the command line, execute the following:
**$ cd django
$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, mailinglist, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying mailinglist.0001_initial... OK
Applying sessions.0001_initial... OK**
When we run `manage.py migrate` without providing an app, it will run all migrations on all installed Django apps. Our database now has the tables for the `mailinglist` app models and the `auth` app's models (including the `User` model).
Now that we have our models and database set up, let's make sure that we can validate the user's input for these models using Django's forms API.
# MailingList forms
One of the common issues that developers have to solve is how to validate user input. Django provides input validation through its forms API. The forms API can be used to describe an HTML form using an API very similar to the models API. If we want to create a form that describes a Django model, then Django's `ModelForm` offers us a shortcut. We only have to describe what we're changing from the default form representation for the model.
When a Django form is instantiated, it can be provided with any of the three following arguments:
* `data`: The raw input that the end users request
* `initial`: The known safe initial values that we may set for a form
* `instance`: The instance the form is describing, only on `ModelForm`
If a form has been provided `data`, then it is called a bound form. Bound forms can validate their `data` by calling `is_valid()`. A validated form's safe-to-use data is available under the `cleaned_data` dictionary (keyed on the field's name). Errors are available via the `errors` property, which returns a dictionary. A bound `ModelForm` can also create or update its model instance with the `save()` method.
Even if none of the arguments are provided, a form is still able to print itself out as HTML, making our templates much simpler. This mechanism helps us achieve the goal of _dumb templates_.
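As a hedged sketch of this lifecycle (using the `SubscriberForm` we'll define in the next section; `some_list` is assumed to be an existing `MailingList` instance):

from mailinglist.forms import SubscriberForm

# A form instantiated with data is a bound form
form = SubscriberForm(
    data={'email': 'subscriber@example.com'},
    initial={'mailing_list': some_list.id},
)
if form.is_valid():                    # runs validation
    subscriber = form.save()           # ModelForm creates the Subscriber
    print(form.cleaned_data['email'])  # validated, safe-to-use data
else:
    print(form.errors)                 # a dictionary keyed on field names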
Let's start creating our forms by creating the `SubscriberForm` class.
# Creating the Subscriber form
An important task Mail Ape must perform is to accept the email address of a new `Subscriber` to a `MailingList`. Let's create a form to do that validation for us.
`SubscriberForm` must be able to validate input as a valid email. We also want it to save our new `Subscriber` model instance and associate it with the proper `MailingList` model instance.
Let's create that form in `django/mailinglist/forms.py`:
from django import forms
from mailinglist.models import MailingList, Subscriber
class SubscriberForm(forms.ModelForm):
mailing_list = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=MailingList.objects.all(),
disabled=True,
)
class Meta:
model = Subscriber
fields = ['mailing_list', 'email', ]
Let's take a closer look at our `SubscriberForm`:
* `class SubscriberForm(forms.ModelForm):`: This shows that our form is derived from `ModelForm`. `ModelForm` knows to check our inner `Meta` class for information on the model and fields that can be used as the basis of this form.
* `mailing_list = forms.ModelChoiceField`: This tells our form to use our custom-configured `ModelChoiceField` instead of the default that the forms API would use. By default, Django will show a `ModelChoiceField` that would render as a drop-down box. A user could use the drop-down to pick the associated model. In our case, we don't want the user to be able to make that choice. When we show a rendered `SubscriberForm`, we want it to be configured for a particular mailing list. To this end, we change the `widget` argument to be a `HiddenInput` class and mark the field as `disabled`. Our form needs to know the `MailingList` model instances that are valid for this form. We provide a `QuerySet` object that matches all `MailingList` model instances.
* `model = Subscriber`: This tells the form's `Meta` inner class that this form is based on the `Subscriber` model.
* `fields = ['mailing_list', 'email', ]`: This tells the form to only include the following fields from the model in the form.
Next, let's make a form for capturing the `Message`s that our users want to send to their `MailingList`.
# Creating the Message Form
Our users will want to send `Message`s to their `MailingList`s. We'll provide a web page with a form where users can create these messages. Before we can create the page, let's create the form.
Let's add our `MessageForm` class to `django/mailinglist/forms.py`:
from django import forms
from mailinglist.models import MailingList, Message
class MessageForm(forms.ModelForm):
mailing_list = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=MailingList.objects.all(),
disabled=True,
)
class Meta:
model = Message
fields = ['mailing_list', 'subject', 'body', ]
As you may have noticed in the preceding code, `MessageForm` works just like `SubscriberForm`. The only difference is that we've listed a different model and different fields in the `Meta` inner class.
Next, let's create the `MailingListForm` class, which we'll use to accept input for the name of the mailing list.
# Creating the MailingList form
Now, we'll create a `MailingListForm`, which will accept the name and owner of a mailing list. We will use the same `HiddenInput` and `disabled` field pattern as before but this time on the `owner` field. We want to make sure that a user can't change the owner of the mailing list.
Let's add our form to `django/mailinglist/forms.py`:
from django import forms
from django.contrib.auth import get_user_model
from mailinglist.models import MailingList
class MailingListForm(forms.ModelForm):
owner = forms.ModelChoiceField(
widget=forms.HiddenInput,
queryset=get_user_model().objects.all(),
disabled=True,
)
class Meta:
model = MailingList
fields = ['owner', 'name']
The `MailingListForm` is very similar to our previous forms, but introduces a new function, `get_user_model()`. We need to use `get_user_model()` because we don't want to couple ourselves to a particular user model, but we need access to that model's manager to get a `QuerySet`.
Now that we have our forms, we can create the views for our `mailinglist` Django app.
# Creating MailingList views and templates
In the preceding section, we created forms that we can use to collect and validate user input. In this section, we will create the views and templates that actually communicate with the user. A template defines the HTML of a document.
Fundamentally, a Django view is a function that accepts a request and returns a response. While we won't be using these **Function-Based Views** ( **FBVs** ) in this book, it's important to remember that all a view needs to do is meet those two responsibilities. If processing a view also causes another action to occur (for example, sending an email), then we should put that code in a service module rather than directly in the view.
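As a minimal sketch (not code we'll use in Mail Ape), a function-based view is nothing more than this:

from django.http import HttpResponse

def hello_view(request):
    # A view's whole contract: accept a request, return a response
    return HttpResponse('Hello, world!')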
A lot of the work that web developers face is repetitive (for example, processing a form, showing a particular model, listing all instances of that model, and so on). Django's batteries included philosophy means that it includes tools to make these kinds of repetitive tasks easier.
Django makes common web developer tasks easier by offering a rich suite of **class-based views** ( **CBVs** ). CBVs use the principles of **Object-Oriented Programming** ( **OOP** ) to increase code reuse. Django comes with a rich suite of CBVs that makes it easy to process a form or show an HTML page for a model instance.
The HTML a view returns comes from rendering a template. Templates in Django are generally written in Django's template language, though Django can also support other template languages (for example, Jinja). Generally, each view is associated with a template.
Let's start by creating some resources many of our views will need.
# Common resources
In this section, we will create some common resources that our views and templates will need:
* We'll create a base template, which all our other templates can extend. Using the same base template across all our pages will give Mail Ape a unifying look and feel.
* We'll create a `UserCanUseMailingList` mixin class, which will let us protect mailing lists and their messages from unauthorized access.
Let's start by creating a base template.
# Creating a base template
Let's create a base template for Mail Ape. This template will be used by all our pages to give our entire web app a consistent look.
The **Django template language** ( **DTL** ) lets us write our HTML (or other text-based format) and lets us use _tags_ , _variables_ , and _filters_ to execute code to customize the HTML. Let's take a closer look at those three concepts:
* _tags_ : They are surrounded by `{% %}` and may (`{% block body %}{% endblock %}`) or may not (`{% url "myurl" %}`) contain a body.
* _variables_ : They are surrounded by `{{ }}` and must be set in the template's context (for example, `{{ mailinglist }}`). Though DTL variables are like Python variables, there are differences. The two most critical ones are around executables and dictionaries. Firstly, DTL does not have a syntax to pass arguments to an executable (you can never write `{{ foo(1) }}`). If you reference a variable and it is callable (for example, a function), then the Django template language will call it and return the result (for example, `{{ mailinglist.get_absolute_url }}`). Secondly, DTL doesn't distinguish among object attributes, items in a list, and items in a dictionary. All three are accessed using a dot: `{{ mailinglist.name }}`, `{{ mylist.1 }}`, and `{{ mydict.mykey }}`.
* _filters_ : They follow a variable and modify its value (for example, `{{ mailinglist.name | upper}}` will return the mailing lists' name in uppercase).
We'll take a look at examples of all three as we continue creating Mail Ape.
Let's create a common templates directory—`django/templates`—and put our template in `django/templates/base.html`:
<!DOCTYPE html>
<html lang="en" >
<head >
<meta charset="UTF-8" >
<title >{% block title %}{% endblock %}</title >
<link rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.3/css/bootstrap.min.css"
/>
</head >
<body >
<div class="container" >
<nav class="navbar navbar-light bg-light" >
<a class="navbar-brand" href="#" >Mail Ape </a >
<ul class="navbar-nav" >
<li class="nav-item" >
<a class="nav-link"
href="{% url "mailinglist:mailinglist_list" %}" >
Your Mailing Lists
</a >
</li >
{% if request.user.is_authenticated %}
<li class="nav-item" >
<a class="nav-link"
href="{% url "user:logout" %}" >
Logout
</a >
</li >
{% else %}
<li class="nav-item" >
<a class="nav-link"
href="{% url "user:login" %}" >
Login
</a >
</li >
<li class="nav-item" >
<a class="nav-link"
href="{% url "user:register" %}" >
Register
</a >
</li >
{% endif %}
</ul >
</nav >
{% block body %}
{% endblock %}
</div >
</body >
</html >
In our base template, we will note examples of the following three tags:
* `{% url ... %}`: This returns the path to a view. This works just like the `reverse()` function we saw earlier but in a Django template.
* `{% if ... %} ... {% else %} ... {% endif %}`: This works just like a Python developer would expect. The `{% else %}` clause is optional. The Django template language also supports `{% elif ... %}` if we need to choose among multiple choices.
* `{% block ... %}`: This defines a block that a template, which extends `base.html`, can replace with its own content. We have two blocks, `body` and `title`.
We now have a base template that our other templates can use by just providing body and title blocks.
Now that we have our template, we have to tell Django where to find it. Let's update `django/config/settings.py` to let Django know about our new `django/templates` directory.
In `django/config/settings.py`, find the line that starts with `TEMPLATES`. We will need to add our `templates` directory to the list under the `DIRS` key:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
],
'APP_DIRS': True,
'OPTIONS': {
# do not change OPTIONS, omitted for brevity
},
},
]
Django lets us avoid hardcoding the path to `django/templates` by calculating the path to `django` at runtime as `BASE_DIR`. This way, we can use the same setting across environments.
Another important setting we just saw was `APP_DIRS`. This setting tells Django to check each installed app for a `templates` directory when Django is looking for a template. It means that we don't have to update the `DIRS` key for each installed app and lets us isolate our templates under our apps (increasing reusability). Finally, it's important to remember that apps are searched in the order they appear in `INSTALLED_APPS`. If there's a template name collision (for example, two apps provide a template called `registration/login.html`), then the one listed first in `INSTALLED_APPS` will be used.
Next, let's configure our project to use Bootstrap 4 when rendering forms in HTML.
# Configuring Django Crispy Forms to use Bootstrap 4
In our base template, we included the Bootstrap 4 css template. To make it easy to render a form and style it using Bootstrap 4, we will use a third-party Django app called Django Crispy Forms. However, we must configure Django Crispy Forms to tell it to use Bootstrap 4.
Let's add a new setting to the bottom of `django/config/settings.py`:
CRISPY_TEMPLATE_PACK = 'bootstrap4'
Now, Django Crispy Forms is configured to use Bootstrap 4 when rendering a form. We'll take a look at it later in this chapter, in sections covering rendering a form in a template.
Next, let's create a mixin that ensures that only the owners of a mailing list can affect them.
# Creating a mixin to check whether a user can use the mailing list
Django uses **class-based views** ( **CBVs** ) to make it easier to reuse code, simplifying repetitive tasks. One of the repetitive tasks we'll have to do in the `mailinglist` app is protect `MailingList`s and their related models from being tampered with by other users. We'll create a mixin that provides this protection.
A mixin is a class that provides limited functionality that is meant to be used in conjunction with other classes. We've previously seen the `LoginRequiredMixin` class, which can be used in conjunction with a view class to protect a view from unauthenticated access. In this section, we will create a new mixin.
Let's create our `UserCanUseMailingList` mixin in a new file at `django/mailinglist/mixins.py`:
from django.core.exceptions import PermissionDenied, FieldDoesNotExist
from mailinglist.models import MailingList
class UserCanUseMailingList:
def get_object(self, queryset=None):
obj = super().get_object(queryset)
user = self.request.user
if isinstance(obj, MailingList):
if obj.user_can_use_mailing_list(user):
return obj
else:
raise PermissionDenied()
        # default to None so that objects without a mailing_list
        # attribute fall through to FieldDoesNotExist below
        mailing_list_attr = getattr(obj, 'mailing_list', None)
if isinstance(mailing_list_attr, MailingList):
if mailing_list_attr.user_can_use_mailing_list(user):
return obj
else:
raise PermissionDenied()
raise FieldDoesNotExist('view does not know how to get mailing '
'list.')
Our class defines a single method, `get_object(self, queryset=None)`. This method has the same signature as `SingleObjectMixin.get_object()`, which is used by many of Django's built-in CBVs (for example, `DetailView`). Our `get_object()` implementation doesn't do any work to retrieve an object. Instead, it inspects the object that the parent class retrieved, checks whether that object is (or has) a `MailingList`, and confirms that the logged-in user can use that mailing list.
One surprising thing about a mixin is that it relies on a super class but doesn't inherit from one. In `get_object()`, we explicitly call `super()`, but `UserCanUseMailingList` doesn't have any base classes. Mixin classes aren't expected to be used by themselves. Instead, they will be used by classes, which subclass them _and_ one or more other classes.
We'll take a look at how this works in the next few sections.
# Creating the MailingList views
Now, we'll take a look at the views that will process the user's requests and return responses that show a UI created from our templates.
Let's start by creating a view to list all of our `MailingList`s.
# Creating the MailingListListView view
We will create a view that shows the mailing lists a user owns.
Let's create our `MailingListListView` in `django/mailinglist/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import ListView
from mailinglist.models import MailingList
class MailingListListView(LoginRequiredMixin, ListView):
def get_queryset(self):
return MailingList.objects.filter(owner=self.request.user)
Our view is derived from two classes, `LoginRequiredMixin` and `ListView`. `LoginRequiredMixin` is a mixin that ensures that a request made by an unauthenticated user is redirected to a login view instead of being processed. To help the `ListView` know _what_ to list, we will override the `get_queryset()` method and return a `QuerySet` that includes only the `MailingList`s owned by the currently logged-in user. To display the result, `ListView` will try to render a template at `appname/modelname_list.html`. In our case, `ListView` will try to render `mailinglist/mailinglist_list.html`.
Let's create that template in `django/mailinglist/templates/mailinglist/mailinglist_list.html`:
{% extends "base.html" %}
{% block title %}
Your Mailing Lists
{% endblock %}
{% block body %}
<div class="row user-mailing-lists" >
<div class="col-sm-12" >
<h1 >Your Mailing Lists</h1 >
<div >
<a class="btn btn-primary"
href="{% url "mailinglist:create_mailinglist" %}" >New List</a >
</div >
<p > Your mailing lists:</p >
<ul class="mailing-list-list">
{% for mailinglist in mailinglist_list %}
<li class="mailinglist-item">
<a href="{% url "mailinglist:manage_mailinglist" pk=mailinglist.id %}" >
{{ mailinglist.name }}
</a >
</li >
{% endfor %}
</ul >
</div >
</div >
{% endblock %}
Our template extends `base.html`. When a template extends another template, it can only put HTML into the `block`s that have been previously defined. We will also see a lot of new Django template tags. Let's take a closer look at them:
* `{% extends "base.html" %}`: This tells the Django template language which template that we're extending.
* `{% block title %}... {% endblock %}`: This tells Django that we're providing new code that it should place in the extended template's `title` block. The previous code in that block (if any) is replaced.
* `{% for mailinglist in mailinglist_list %} ... {% endfor %}`: This provides a for loop for each item in the list.
* `{% url... %}`: The `url` tag will produce a URL path for the named `path`.
* `{% url ... pk=...%}`: This works just like the preceding point, but, in some cases, a `path` may take arguments (for example, the primary key of the `MailingList` to display). We can specify these extra arguments in the `url` tag after the name of the `path`.
We now have a view and template that work together.
The final step with any view is adding the app's URLConf to it. Let's update `django/mailinglist/urls.py`:
from django.urls import path
from mailinglist import views
app_name = 'mailinglist'
urlpatterns = [
path('',
views.MailingListListView.as_view(),
name='mailinglist_list'),
]
Given how we configured our root URLConf earlier, any request sent to `/mailinglist/` will be routed to our `MailingListListView`.
Next, let's add a view to create new `MailingList`s.
# Creating the CreateMailingListView and template
We will create a view to create mailing lists. When our view receives a `GET` request, the view will show our users a form for entering the name of the mailing list. When our view receives a `POST` request, the view will validate the form and either redisplay the form with errors or create the mailing list and redirect the user to the list's management page.
Let's create the view now in `django/mailinglist/views.py`:
from django.views.generic.edit import CreateView

from mailinglist.forms import MailingListForm

class CreateMailingListView(LoginRequiredMixin, CreateView):
form_class = MailingListForm
template_name = 'mailinglist/mailinglist_form.html'
def get_initial(self):
return {
'owner': self.request.user.id,
}
`CreateMailingListView` is derived from two classes:
* `LoginRequiredMixin` prevents requests that are not associated with a logged-in user from being processed, redirecting them to the login page (we'll configure this later in this chapter, in the _Creating the user app_ section)
* `CreateView` knows how to work with the form indicated in `form_class` and render it using the template listed in `template_name`
`CreateView` is the class that does most of the work here, without us needing to provide almost any extra information. Processing a form, validating it, and saving it are always the same, and `CreateView` has the code to do it. If we need to change some of the behavior, we can override one of the hooks that `CreateView` provides, as we do with `get_initial()`.
When `CreateView` instantiates our `MailingListForm`, `CreateView` calls its `get_initial()` method to get the `initial` data (if any) for the form. We use this hook to make sure that the form's owner is set to the logged in user's `id`. Remember that `MailingListForm` has its `owner` field disabled, so the form will ignore any data provided by the user.
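A hedged sketch of why this is safe (`owner` and `attacker` are made-up stand-ins for existing user instances): because the `owner` field is `disabled`, the form discards posted data for it in favor of the `initial` value:

from mailinglist.forms import MailingListForm

form = MailingListForm(
    data={'name': 'My List', 'owner': attacker.id},  # tampered value
    initial={'owner': owner.id},
)
if form.is_valid():
    # the disabled field ignores posted data in favor of initial
    assert form.cleaned_data['owner'] == owner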
Next, let's create the template for our `CreateView` in `django/mailinglist/templates/mailinglist/mailinglist_form.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block title %}
Create Mailing List
{% endblock %}
{% block body %}
<h1 >Create Mailing List</h1 >
<form method="post" class="col-sm-4" >
{% csrf_token %}
{{ form | crispy }}
<button class="btn btn-primary" type="submit" >Submit</button >
</form >
{% endblock %}
Our template extends `base.html`. When a template extends another template, it can only put HTML into the blocks that have been previously defined by the extended template(s). We also see some new Django template tags. Let's take a closer look at them:
* `{% load crispy_forms_tags %}`: This tells Django to load a new template tag library. In this case, we will load `crispy_forms_tags` from the Django Crispy Forms app that we have installed. This provides us with the `crispy` filter we'll see later in this section.
* `{% csrf_token %}`: Any form that Django processes must have a valid CSRF token to prevent CSRF attacks (refer to Chapter 3, _Posters, Headshots, and Security_ ). The `csrf_token` tag returns a hidden input tag with the correct CSRF token. Remember that Django generally won't process a POST request without a CSRF Token.
* `{{ form | crispy }}`: The `form` variable is a reference to the form instance that our view is processing and is passed into this template's context by our `CreateView`. `crispy` is a filter provided by the `crispy_forms_tags` tag library and will output the form using HTML tags and CSS classes used in Bootstrap 4.
We now have a view and template that work together. The view is able to use the template to create a user interface to enter data into the form. The view is then able to process the form's data and create a `MailingList` model from valid form data or redisplay the form if the data has a problem. The Django Crispy Forms library renders the form using the HTML and CSS from the Bootstrap 4 CSS Framework.
Finally, let's add our view to the `mailinglist` app's URLConf. In `django/mailinglist/urls.py`, let's add a new `path()` object to the URLConf:
path('new',
views.CreateMailingListView.as_view(),
name='create_mailinglist')
Given how we configured our root URLConf earlier, any request sent to `/mailinglist/new` will be routed to our `CreateMailingListView`.
Next, let's make a view to delete a `MailingList`.
# Creating the DeleteMailingListView view
Users will want to delete `MailingList`s after they stop being useful. Let's create a view that will prompt the user for confirmation on a `GET` request and delete the `MailingList` on a `POST`.
We'll add our view to `django/mailinglist/views.py`:
from django.urls import reverse_lazy
from django.views.generic.edit import DeleteView

from mailinglist.mixins import UserCanUseMailingList

class DeleteMailingListView(LoginRequiredMixin, UserCanUseMailingList,
                            DeleteView):
model = MailingList
success_url = reverse_lazy('mailinglist:mailinglist_list')
Let's take a closer look at the classes that `DeleteMailingListView` is derived from:
* `LoginRequiredMixin`: This serves the same function as in the preceding code, ensuring that requests from an unauthenticated user aren't processed. The user is just redirected to the login page.
* `UserCanUseMailingList`: This is the mixin we created in the preceding code. `DeleteView` uses the `get_object()` method to retrieve the model instance to be deleted. By mixing `UserCanUseMailingList` into the `DeleteMailingListView` class, we protect each user's `MailingList`s from being deleted by unauthorized users.
* `DeleteView`: This is a Django view that knows how to render a confirmation template on a `GET` request and delete the related model on `POST`.
In order for Django's `DeleteView` to function properly, we will need to configure it properly. `DeleteView` knows which model to delete from its `model` attribute. `DeleteView` requires that we provide a `pk` argument when we route requests to it. To render the confirmation template, `DeleteView` will try to use `appname/modelname_confirm_delete.html`. In the case of `DeleteMailingListView`, the template will be `mailinglist/mailinglist_confirm_delete.html`. If the model is successfully deleted, then `DeleteView` will redirect to the `success_url` value. We've avoided hardcoding the `success_url` and instead used `reverse_lazy()` to refer to the URL by name. The `reverse_lazy()` function returns a value that won't resolve until it's used to create a `Response` object.
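As a brief, hedged illustration of the difference:

from django.urls import reverse, reverse_lazy

# reverse() resolves immediately; calling it in a class body runs at
# import time, possibly before the root URLConf has been loaded.
# reverse_lazy() returns a lazy object that only resolves the name
# to a path when the value is actually used:
success_url = reverse_lazy('mailinglist:mailinglist_list')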
Let's create the template that `DeleteMailingListView` requires in `django/mailinglist/templates/mailinglist/mailinglist_confirm_delete.html`:
{% extends "base.html" %}
{% block title %}
Confirm delete {{ mailinglist.name }}
{% endblock %}
{% block body %}
<h1 >Confirm Delete?</h1 >
<form action="" method="post" >
{% csrf_token %}
<p >Are you sure you want to delete {{ mailinglist.name }}?</p >
<input type="submit" value="Yes" class="btn btn-danger btn-sm ">
<a class="btn btn-primary btn-lg" href="{% url "mailinglist:manage_mailinglist" pk=mailinglist.id %}">No</a>
</form >
{% endblock %}
In this template, we don't use any forms because there isn't any input to validate. The form submission itself is the confirmation.
The last step will be adding our view to the `urlpatterns` list in `django/mailinglist/urls.py`:
path('<uuid:pk>/delete',
views.DeleteMailingListView.as_view(),
name='delete_mailinglist'),
This `path` looks different than the previous `path()` calls we've seen. In this `path`, we're including a named argument that will be parsed out of the path and passed to the view. We specify `path` named arguments using the `<converter:name>` format. A converter knows how to match a part of the path (for example, the `uuid` converter knows how to match a UUID; `int` knows how to match a number; `str` will match any non-empty string except `/`). The matched text is then passed to the view as a key word argument with the provided name. In our case, to route a request to `DeleteMailingListView`, it has to have a path like this: `/mailinglist/bce93fec-f9c6-4ea7-b1aa-348d3bed4257/delete`.
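As a hedged sketch of a few converters in action (the first two paths and their views are hypothetical; the last one is the `path()` we just wrote):

from django.urls import path

urlpatterns = [
    # matches /2018/archive and passes year=2018 as an int
    path('<int:year>/archive', views.archive_view, name='archive'),
    # matches any non-empty string without a '/'
    path('tag/<str:tag>', views.tag_view, name='tag'),
    # matches a UUID and passes it as a uuid.UUID keyword argument
    path('<uuid:pk>/delete',
         views.DeleteMailingListView.as_view(),
         name='delete_mailinglist'),
]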
Now that we can list, create, and delete `MailingList`s, let's create a view to manage its `Subscriber`s and `Message`s.
# Creating MailingListDetailView
Let's create a view that will list all the `Subscriber`s and `Message`s related to a `MailingList`. We also need a place to show our users the `MailingList`'s subscription page link. Django makes it easy to create a view that represents a model instance.
Let's create our `MailingListDetailView` in `django/mailinglist/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import DetailView
from mailinglist.mixins import UserCanUseMailingList
from mailinglist.models import MailingList
class MailingListDetailView(LoginRequiredMixin, UserCanUseMailingList,
DetailView):
model = MailingList
We're using the `LoginRequiredMixin` and `UserCanUseMailingList` the same way and for the same purpose as before. This time, we're using them with `DetailView`, which is one of the simplest views. It simply renders a template for an instance of the model it's been configured for. It retrieves the model instance by receiving a `pk` argument from `path` just like `DeleteView`. Also, we don't have to explicitly configure the template it will use because, by convention, it uses `appname/modelname_detail.html`. In our case, it will be `mailinglist/mailinglist_detail.html`.
Let's create our template in `django/mailinglist/templates/mailinglist/mailinglist_detail.html`:
{% extends "base.html" %}
{% block title %}
{{ mailinglist.name }} Management
{% endblock %}
{% block body %}
<h1 >{{ mailinglist.name }} Management
<a class="btn btn-danger"
href="{% url "mailinglist:delete_mailinglist" pk=mailinglist.id %}" >
Delete</a >
</h1 >
<div >
<a href="{% url "mailinglist:create_subscriber" mailinglist_pk=mailinglist.id %}" >Subscription
Link</a >
</div >
<h2 >Messages</h2 >
<div > Send new
<a class="btn btn-primary"
href="{% url "mailinglist:create_message" mailinglist_pk=mailinglist.id %}">
Send new Message</a >
</div >
<ul >
{% for message in mailinglist.message_set.all %}
<li >
<a href="{% url "mailinglist:view_message" pk=message.id %}" >{{ message.subject }}</a >
</li >
{% endfor %}
</ul >
<h2 >Subscribers</h2 >
<ul >
{% for subscriber in mailinglist.subscriber_set.all %}
<li >
{{ subscriber.email }}
{{ subscriber.confirmed|yesno:"confirmed,unconfirmed" }}
<a href="{% url "mailinglist:unsubscribe" pk=subscriber.id %}" >
Unsubscribe
</a >
</li >
{% endfor %}
</ul >
{% endblock %}
The preceding template introduces only one new item (the `yesno` filter), but it really shows how all the tools of Django's template language come together.
The `yesno` filter takes a value and returns `yes` if the value evaluates to `True`, `no` if it evaluates to `False`, and `maybe` if it is `None`. In our case, we've passed an argument that tells `yesno` to return `confirmed` if `True` and `unconfirmed` if `False`.
The `MailingListDetailView` class and template illustrate how Django lets us concisely complete a common web developer task: display a page for a row in a database.
Next, let's add a new `path()` object for our view to the `mailinglist` URLConf:
path('<uuid:pk>/manage',
views.MailingListDetailView.as_view(),
name='manage_mailinglist')
Next, let's create views for our `Subscriber` model instances.
# Creating Subscriber views and templates
In this section, we'll create views and templates to let users interact with our `Subscriber` model. One of the main differences between these views and the `MailingList` and `Message` views is that they will not need any mixins because they will be exposed publicly. Their main protection from tampering is that `Subscriber`s are identified by a UUID, which has a large key space, making tampering by enumeration unlikely.
Let's start with `SubscribeToMailingListView`.
# Creating SubscribeToMailingListView and template
We need a view to collect `Subscriber`s for `MailingList`s. Let's create a `SubscribeToMailingListView` class in `django/mailinglist/views.py`:
from django.shortcuts import get_object_or_404
from django.urls import reverse

from mailinglist.forms import SubscriberForm

class SubscribeToMailingListView(CreateView):
form_class = SubscriberForm
template_name = 'mailinglist/subscriber_form.html'
def get_initial(self):
return {
'mailing_list': self.kwargs['mailinglist_id']
}
def get_success_url(self):
return reverse('mailinglist:subscriber_thankyou', kwargs={
'pk': self.object.mailing_list.id,
})
def get_context_data(self, **kwargs):
ctx = super().get_context_data(**kwargs)
mailing_list_id = self.kwargs['mailinglist_id']
ctx['mailing_list'] = get_object_or_404(
MailingList,
id=mailing_list_id)
return ctx
Our `SubscribeToMailingListView` is similar to `CreateMailingListView` but overrides a couple of new methods:
* `get_success_url()`: This is called by `CreateView` to get a URL to redirect the user to after the model has been created. In `CreateMailingListView`, we didn't need to override it because the default behavior uses the model's `get_absolute_url()`. Here, we use the `reverse()` function to resolve the path to the thank you page.
* `get_context_data()`: This lets us add new variables to the template's context. In this case, we need access to the `MailingList` the user may subscribe to, so that we can show its name. We use Django's `get_object_or_404()` shortcut function to retrieve the `MailingList` by its ID or raise a 404 exception. We'll have this view's `path` parse the `mailinglist_id` out of our request's path (refer to the `path()` at the end of this section).
Next, let's create our template in `mailinglist/templates/mailinglist/subscriber_form.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block title %}
Subscribe to {{ mailing_list }}
{% endblock %}
{% block body %}
<h1>Subscribe to {{ mailing_list }}</h1>
<form method="post" class="col-sm-6 ">
{% csrf_token %}
{{ form | crispy }}
<button class="btn btn-primary" type="submit">Submit</button>
</form>
{% endblock %}
This template doesn't introduce any tags but shows another example of how we can use Django's template language and the Django Crispy Forms API to quickly build a pretty HTML form. We extend `base.html`, as before, to give our page a consistent look and feel. `base.html` also provides the blocks we're going to put our content into. Outside of any block, we `{% load %}` the Django Crispy Forms tag library so that we can use the `crispy` filter on our form to generate the Bootstrap 4 compatible HTML.
Next, let's make sure that Django knows how to route requests to our new view by adding a `path()` to `SubscribeToMailingListView` to the `mailinglist` app's URLConf's `urlpatterns` list:
path('<uuid:mailinglist_id>/subscribe',
views.SubscribeToMailingListView.as_view(),
name='subscribe'),
In this `path()`, we match a `uuid` parameter that is passed to our view as `mailinglist_id`. This is the keyword argument that our `get_context_data()` method referenced.
Next, let's create a thank you page to thank users for subscribing to a mailing list.
# Creating a thank you for subscribing view
After a user subscribes to a mailing list, we want to show them a _thank you_ page. This page can be the same for all users who subscribe to the same mailing list, since all it will show is the name of the mailing list (not the subscriber's email). To create this view, we're going to use the `DetailView` we've seen before, but this time without any additional mixins (there's no information to protect here).
Let's create our `ThankYouForSubscribingView` in `django/mailinglist/views.py`:
from django.views.generic import DetailView
from mailinglist.models import MailingList
class ThankYouForSubscribingView(DetailView):
model = MailingList
template_name = 'mailinglist/subscription_thankyou.html'
Django does all the work for us in the `DetailView` as long as we provide a `model` attribute. The `DetailView` knows how to look up a model and then render a template for that model. We also provide a `template_name` attribute because the `mailinglist/mailinglist_detail.html` template (which `DetailView` would use by default) is already being used by `MailingListDetailView`.
Let's create our template in `django/mailinglist/templates/mailinglist/subscription_thankyou.html`:
{% extends "base.html" %}
{% block title %}
Thank you for subscribing to {{ mailinglist }}
{% endblock %}
{% block body %}
<div class="col-sm-12" ><h1 >Thank you for subscribing
to {{ mailinglist }}</h1 >
<p >Check your email for a confirmation email.</p >
</div >
{% endblock %}
Our template just shows a thank you message and the mailing list's name.
Finally, let's add a `path()` to `ThankYouForSubscribingView` to the `mailinglist` app's URLConf's `urlpatterns` list:
path('<uuid:pk>/thankyou',
     views.ThankYouForSubscribingView.as_view(),
     name='subscriber_thankyou'),
Our `path` needs to match a UUID in order to route a request to `ThankYouForSubscribingView`. The UUID will be passed into the view as the keyword argument `pk`. This `pk` will be used by `DetailView` to find the correct `MailingList`.
Next, we will need to let a user confirm that they want to receive emails at this address.
# Creating a subscription confirmation view
To prevent spammers from abusing our service, we will need to send an email to our subscribers to confirm that they really want to subscribe to one of our users' mailing lists. We'll cover sending those emails in the next chapter; for now, let's create the confirmation page.
This confirmation page will behave a little strangely. Simply visiting the page will modify `Subscriber.confirmed` to `True`. This is standard for how mailing list confirmation pages work (we want to avoid creating extra work for our subscribers) but strange according to the HTTP spec, which says that `GET` requests should not modify a resource.
Let's create our `ConfirmSubscriptionView` in `django/mailinglist/views.py`:
from django.views.generic import DetailView

from mailinglist.models import Subscriber


class ConfirmSubscriptionView(DetailView):
    model = Subscriber
    template_name = 'mailinglist/confirm_subscription.html'

    def get_object(self, queryset=None):
        subscriber = super().get_object(queryset=queryset)
        subscriber.confirmed = True
        subscriber.save()
        return subscriber
`ConfirmSubscriptionView` is another `DetailView` since it shows a single model instance. In this case, we override the `get_object()` method in order to modify the object before returning it. Since `Subscriber`s are not required to be users of our system, we don't need to use `LoginRequiredMixin`. Our view is protected from brute force enumeration because the key space of `Subscriber.id` is large and assigned non-sequentially.
Next, let's create our template in `django/mailinglist/templates/mailinglist/confirm_subscription.html`:
{% extends "base.html" %}
{% block title %}
Subscription to {{ subscriber.mailing_list }} confirmed.
{% endblock %}
{% block body %}
<h1 >Subscription to {{ subscriber.mailing_list }} confirmed!</h1 >
{% endblock %}
Our template uses the blocks defined in `base.html` to simply notify the user of their confirmed subscription.
Finally, let's add a `path()` to `ConfirmSubscriptionView` to the `mailinglist` app's URLConf's `urlpatterns` list:
path('subscribe/confirmation/<uuid:pk>',
     views.ConfirmSubscriptionView.as_view(),
     name='confirm_subscription'),
Our `confirm_subscription` path defines the path to match in order to route a request to our view. Our matching expression includes the requirement of a UUID, which will be passed to our `ConfirmSubscriptionView` as the keyword argument `pk`. The parent (`DetailView`) of `ConfirmSubscriptionView` will then use that to retrieve the correct `Subscriber`.
Next, let's allow `Subscribers` to unsubscribe themselves.
# Creating UnsubscribeView
Part of being an ethical mailing provider is letting our `Subscriber`s unsubscribe. Next, we'll create an `UnsubscribeView`, which will delete a `Subscriber` model instance after they've confirmed they definitely want to unsubscribe.
Let's add our view to `django/mailinglist/views.py`:
from django.urls import reverse
from django.views.generic import DeleteView

from mailinglist.models import Subscriber


class UnsubscribeView(DeleteView):
    model = Subscriber
    template_name = 'mailinglist/unsubscribe.html'

    def get_success_url(self):
        mailing_list = self.object.mailing_list
        return reverse('mailinglist:subscribe', kwargs={
            'mailinglist_id': mailing_list.id
        })
Our `UnsubscribeView` lets Django's built-in `DeleteView` do the work of rendering the template and finding and deleting the correct `Subscriber`. `DeleteView` requires that it receives a `pk` for the `Subscriber` as a keyword argument parsed from the path (much like a `DetailView`). When the delete succeeds, we'll redirect the user to the subscription page using the `get_success_url()` method. When `get_success_url()` executes, our `Subscriber` instance will already be deleted from the database, but a copy of the object will still be available as `self.object`. We use that in-memory (but no longer in-database) instance to get the `id` attribute of the related mailing list.
To render the confirmation form, we will need to create a template in `django/mailinglist/templates/mailinglist/unsubscribe.html`:
{% extends "base.html" %}
{% block title %}
Unsubscribe?
{% endblock %}
{% block body %}
<div class="col">
<form action="" method="post" >
{% csrf_token %}
<p >Are you sure you want to unsubscribe
from {{ subscriber.mailing_list.name }}?</p >
<input class="btn btn-danger" type="submit"
value="Yes, I want to unsubscribe " >
</form >
</div >
{% endblock %}
This template renders a `POST` form, which will act as confirmation of the desire of the `subscriber` to be unsubscribed.
Next, let's add a `path()` to `UnsubscribeView` to the `mailinglist` app's URLConf's `urlpatterns` list:
path('unsubscribe/<uuid:pk>',
     views.UnsubscribeView.as_view(),
     name='unsubscribe'),
When dealing with views that derive from `DetailView` or `DeleteView`, it's vital to remember to name the path matcher `pk`.
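If you ever do need a different name, Django's `SingleObjectMixin` (which both `DetailView` and `DeleteView` build on) lets you point the view at another keyword argument via its `pk_url_kwarg` attribute. A quick sketch, where `subscriber_pk` is a made-up path component name:
class UnsubscribeView(DeleteView):
    model = Subscriber
    # Read the primary key from a path component named subscriber_pk
    # instead of the default pk (hypothetical example).
    pk_url_kwarg = 'subscriber_pk'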
Great, now, let's allow the user to start creating `Message`s that they will send to their `Subscriber`s.
# Creating Message Views
We track the emails that our users want to send to their `Subscriber`s in the `Message` model. To make sure that we have an accurate log of what users send to their `Subscriber`s, we will restrict the operations available on `Message`s. Our users will only be able to create and view `Message`s. It doesn't make sense to support editing, since an email that's been sent can't be modified. We also won't support deleting messages, so that both we and our users have an accurate log of what was requested to be sent, and when.
Let's start with making a `CreateMessageView`!
# Creating CreateMessageView
Our `CreateMessageView` is going to follow a pattern similar to the markdown forms that we created for Answerly. The user will get a form that they can submit to either save or preview. If the submission is a preview, then the form will render again along with a preview of the `Message`'s rendered markdown. If the user chooses to save, then their new message will be created.
Since we're creating a new model instance, we will use Django's `CreateView`.
Let's create our view in `django/mailinglist/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.core.exceptions import PermissionDenied
from django.shortcuts import get_object_or_404
from django.urls import reverse
from django.views.generic import CreateView

from mailinglist.forms import MessageForm
from mailinglist.models import MailingList, Message


class CreateMessageView(LoginRequiredMixin, CreateView):
    SAVE_ACTION = 'save'
    PREVIEW_ACTION = 'preview'

    form_class = MessageForm
    template_name = 'mailinglist/message_form.html'

    def get_success_url(self):
        return reverse('mailinglist:manage_mailinglist',
                       kwargs={'pk': self.object.mailing_list.id})

    def get_initial(self):
        mailing_list = self.get_mailing_list()
        return {
            'mailing_list': mailing_list.id,
        }

    def get_context_data(self, **kwargs):
        ctx = super().get_context_data(**kwargs)
        mailing_list = self.get_mailing_list()
        ctx.update({
            'mailing_list': mailing_list,
            'SAVE_ACTION': self.SAVE_ACTION,
            'PREVIEW_ACTION': self.PREVIEW_ACTION,
        })
        return ctx

    def form_valid(self, form):
        action = self.request.POST.get('action')
        if action == self.PREVIEW_ACTION:
            context = self.get_context_data(
                form=form, message=form.instance)
            return self.render_to_response(context=context)
        elif action == self.SAVE_ACTION:
            return super().form_valid(form)

    def get_mailing_list(self):
        mailing_list = get_object_or_404(
            MailingList, id=self.kwargs['mailinglist_pk'])
        if not mailing_list.user_can_use_mailing_list(self.request.user):
            raise PermissionDenied()
        return mailing_list
Our view inherits from `CreateView` and `LoginRequiredMixin`. We use the `LoginRequiredMixin` to prevent unauthenticated users from sending messages to mailing lists. To prevent logged in but unauthorized users from sending messages, we will create a central `get_mailing_list()` method, which checks that the logged in user can use this mailing list. `get_mailing_list()` expects that the `mailinglist_pk` will be provided as a keyword argument to the view.
Let's take a closer look at the `CreateMessageView` to see how this all works together:
* `form_class = MessageForm`: This is the form that we want `CreateView` to render, validate, and use to create our `Message` model.
* `template_name = 'mailinglist/message_form.html'`: This is the template that we'll create next.
* `def get_success_url()`: After a `Message` is successfully created, we'll redirect our users to the management page of the `MailingList`.
* `def get_initial()`: Our `MessageForm` has its `mailing_list` field disabled so that users can't surreptitiously create a `Message` for another user's `MailingList` (see the `MessageForm` sketch after this list). Instead, we use our `get_mailing_list()` method to look the mailing list up from the `mailinglist_pk` argument, checking along the way that the logged-in user can use that `MailingList`.
* `def get_context_data()`: This provides extra variables to the template's context. We provide the `MailingList` as well as the save and preview constants.
* `def form_valid()`: This defines the behavior if the form is valid. We override the default behavior of `CreateView` to check the `action` POST argument. `action` will tell us whether to render a preview of the `Message` or to let `CreateView` save a new `Message` model instance. If we're previewing the message, then we pass an unsaved `Message` instance built by our form to the template's context.
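For reference, a minimal `MessageForm` along the lines this view expects might look like the following sketch (the exact field list is an assumption based on how `Message` is used elsewhere in this chapter):
from django import forms

from mailinglist.models import Message


class MessageForm(forms.ModelForm):

    class Meta:
        model = Message
        fields = ['mailing_list', 'subject', 'body']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Disabled fields ignore POSTed data, so a user can't override
        # the initial mailing_list value supplied by the view.
        self.fields['mailing_list'].disabled = True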
Next, let's make our template in `django/mailinglist/templates/mailinglist/message_form.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% load markdownify %}
{% block title %}
Send a message to {{ mailing_list }}
{% endblock %}
{% block body %}
<h1 >Send a message to {{ mailing_list.name }}</h1 >
{% if message %}
<div class="card" >
<div class="card-header" >
Message Preview
</div >
<div class="card-body" >
<h5 class="card-title" >{{ message.subject }}</h5 >
<div>{{ message.body|markdownify }}</div>
</div >
</div >
{% endif %}
<form method="post" class="col-sm-12 col-md-9" >
{% csrf_token %}
{{ form | crispy }}
<button type="submit" name="action"
value="{{ SAVE_ACTION }}"
class="btn btn-primary" >Save
</button >
<button type="submit" name="action"
value="{{ PREVIEW_ACTION }}"
class="btn btn-primary" >Preview
</button >
</form >
{% endblock %}
This template loads the third party Django Markdownify tag library and the Django Crispy Forms tag library. The former gives us the `markdownify` filter and the latter gives us the `crispy` filter. The `markdownify` filter will convert the markdown text it receives into HTML. We previously used Django Markdownify in our Answerly project in part 2.
This template form has two submit buttons, one to save the form and one to preview the form. The preview block is only rendered if we pass in `message` to preview.
Now that we have our view and template, let's add a `path()` to `CreateMessageView` in the `mailinglist` app's URLConf:
path('<uuid:mailinglist_pk>/message/new',
     views.CreateMessageView.as_view(),
     name='create_message'),
Now that we can create messages, let's make a view to view messages we've already created.
# Creating the Message DetailView
To let users view the `Message`s they have sent to their `Subscriber`s, we need a `MessageDetailView`. This view will simply display a `Message`, but it should only be accessible to logged-in users who can use the `Message`'s `MailingList`.
Let's create our view in `django/mailinglist/views.py`:
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import DetailView

from mailinglist.mixins import UserCanUseMailingList
from mailinglist.models import Message


class MessageDetailView(LoginRequiredMixin, UserCanUseMailingList,
                        DetailView):
    model = Message
As the name implies, we're going to use Django's `DetailView`. To provide the protection we need, we'll add Django's `LoginRequiredMixin` and our `UserCanUseMailingList` mixin. As we've seen before, we don't need to specify the name of the template because `DetailView` will derive it from the names of the app and model. In our case, `DetailView` expects the template to be called `mailinglist/message_detail.html`.
Let's create our template in `django/mailinglist/templates/mailinglist/message_detail.html`:
{% extends "base.html" %}
{% load markdownify %}
{% block title %}
{{ message.subject }}
{% endblock %}
{% block body %}
<h1 >{{ message.subject }}</h1 >
<div>
{{ message.body|markdownify }}
</div>
{% endblock %}
Our template extends `base.html` and shows the message in the `body` block. When showing the `Message.body`, we use the third party Django Markdownify tag library's `markdownify` filter to render any markdown text as HTML.
Finally, we need to add a `path()` to `MessageDetailView` to the `mailinglist` app's URLConf's `urlpatterns` list:
path('message/<uuid:pk>',
     views.MessageDetailView.as_view(),
     name='view_message'),
We've now completed our `mailinglist` app's models, views, and templates. We've even created a `UserCanUseMailingList` mixin to let our views easily prevent unauthorized access to a `MailingList` or its related models.
Next, we'll create a `user` app to encapsulate user registration and authentication.
# Creating the user app
To create a `MailingList` in Mail Ape, the user needs to have an account and be logged in. In this section, we will write the code for our `user` Django app, which will encapsulate everything to do with users. Remember that a Django app should be tightly scoped. We don't want to put this behavior in our `mailinglist` app, as these are two discrete concerns.
Our `user` app is going to be very similar to the `user` app seen in MyMDB (Part 1) and Answerly (Part 2). Due to this similarity, we will gloss over some topics. For a deeper examination of the topic, refer to Chapter 2, _Adding Users to MyMDb_.
Django makes managing users and authentication easier with its built-in `auth` app (`django.contrib.auth`). The `auth` app offers a default user model, a `Form` for creating new users, as well as log in and log out views. This means that our `user` app only needs to fill in a few blanks before we have complete user management working locally.
Let's start by creating a URLConf for our `user` app in `django/user/urls.py`:
from django.contrib.auth.views import LoginView, LogoutView
from django.urls import path

import user.views

app_name = 'user'

urlpatterns = [
    path('login', LoginView.as_view(), name='login'),
    path('logout', LogoutView.as_view(), name='logout'),
    path('register', user.views.RegisterView.as_view(), name='register'),
]
Our URLConf is made up of three views:
* `LoginView.as_view()`: This is the `auth` app's login view. The `auth` app provides a view for accepting credentials but doesn't have a template. We'll need to create a template with the name `registration/login.html`. By default, it will redirect a user to `settings.LOGIN_REDIRECT_URL` on login. We can also pass a `next` `GET` parameter to supersede the setting.
* `LogoutView.as_view()`: This is the auth app's logout view. `LogoutView` is one of the few views that modifies state on a `GET` request, logging the user out. The view returns a redirect response. We can use `settings.LOGOUT_REDIRECT_URL` to configure where our user will be redirected to during log out. Again, we use the `GET` parameter `next` to customize this behavior.
* `user.views.RegisterView.as_view()`: This is the user registration view we will write. Django provides us with a `UserCreationForm` but not a view.
We also need to add a few settings to make Django use our `user` view properly. Let's update `django/config/settings.py` with some new settings:
LOGIN_URL = 'user:login'
LOGIN_REDIRECT_URL = 'mailinglist:mailinglist_list'
LOGOUT_REDIRECT_URL = 'user:login'
These three settings tell Django how to redirect the user in different authentication scenarios:
* `LOGIN_URL`: When an unauthenticated user tries to access a page that requires authentication, `LoginRequiredMixin` uses this setting.
* `LOGIN_REDIRECT_URL`: When a user logs in, where should we redirect them to? Often, we redirect them to a profile page; in our case, the page that shows a list of `MailingList`s.
* `LOGOUT_REDIRECT_URL`: When a user logs out, where should we redirect them to? In our case, the login page.
We now have two more tasks left:
* Creating the login template
* Creating the user registration view and template
Let's start by making the login template.
# Creating the login template
Let's make our login template in `django/user/templates/registration/login.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block title %} Login - {{ block.super }} {% endblock %}
{% block body %}
<h1>Login</h1>
<form method="post" class="col-sm-6">
{% csrf_token %}
{{ form|crispy }}
<button type="submit" id="log_in" class="btn btn-primary">Log in</button>
</form>
{% endblock %}
This form follows all the practices of our previous forms. We use `csrf_token` to protect against CSRF attacks, and we use the `crispy` filter to print the form using Bootstrap 4-style tags and classes.
Remember, we didn't need to make a view to process our login requests because we're using the one that comes with `django.contrib.auth`.
Next, let's create a view and template to register new users.
# Creating the user registration view
Django doesn't come with a view for creating new users, but it does offer a form for capturing a new user's registration. We can combine the `UserCreationForm` with a `CreateView` to quickly create a `RegisterView`.
Let's add our view to `django/user/views.py`:
from django.contrib.auth.forms import UserCreationForm
from django.views.generic.edit import CreateView


class RegisterView(CreateView):
    template_name = 'user/register.html'
    form_class = UserCreationForm
This is a very simple `CreateView`, like we've seen a few times in this chapter already.
Let's create our template in `django/user/templates/user/register.html`:
{% extends "base.html" %}
{% load crispy_forms_tags %}
{% block body %}
<div class="col-sm-12">
<h1 >Register for Mail Ape</h1 >
<form method="post" >
{% csrf_token %}
{{ form | crispy }}
<button type="submit" class="btn btn-primary" >
Register
</button >
</form >
</div >
{% endblock %}
Again, the template follows the same pattern as our previous `CreateView` templates.
Now, we're ready to run Mail Ape locally.
# Running Mail Ape locally
Django comes with a development server. This server is not suitable for production (or even staging) deployment, but is suitable for local development.
Let's start the server using our Django project's `manage.py` script:
**$ cd django
$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
January 29, 2018 - 23:35:15
Django version 2.0.1, using settings 'config.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.**
We can now access our server on `http://127.0.0.1:8000`.
# Summary
In this chapter, we started our Mail Ape project. We created the Django project and started two Django apps. The `mailinglist` app contains our models, views, and templates for the mailing list code. The `user` app holds views and templates related to users. The `user` app is much simpler because it leverages Django's `django.contrib.auth` app.
Next, we'll make Mail Ape send emails to our users' subscribers, using Celery to do the work outside the request/response cycle.
# The Task of Sending Emails
Now that we have our models and views, we will need to make Mail Ape send emails. Mail Ape sends two kinds of emails: subscriber confirmation emails and mailing list messages. We'll track the delivery of mailing list messages with a new model called `SubscriberMessage`, which records whether a message was successfully sent to the address stored in a `Subscriber` model instance. Since sending emails to a lot of `Subscriber` model instances can take a lot of time, we'll use Celery to send emails as tasks outside the regular Django request/response cycle.
In this chapter, we will do the following things:
* Use Django's template system to generate the HTML body of our emails
* Send emails that include both HTML and plain text using Django
* Use Celery to execute asynchronous tasks
* Prevent our code from sending actual emails during testing
Let's start by creating some common resources that we'll use to send dynamic emails.
# Creating common resources for emails
In this section, we will create a base HTML email template and a `Context` object for rendering email templates. We want to create a base HTML template for our emails so that we can avoid repeating boilerplate HTML. We also want to make sure that every email we send includes an unsubscribe link to be good email citizens. Our `EmailTemplateContext` class will consistently provide the common variables that our templates need.
Let's start by creating a base HTML email template.
# Creating the base HTML email template
We'll create our base email HTML template in `django/mailinglist/templates/mailinglist/email/base.html`:
<!DOCTYPE html>
<html lang="en" >
<head ></head >
<body >
{% block body %}
{% endblock %}
Click <a href="{{ unsubscription_link }}">here</a> to unsubscribe from this
mailing list.
Sent with Mail Ape.
</body >
</html >
The preceding template looks like a much simpler version of `base.html`, except it has only one block. Email templates can extend `email/base.html` and override the body block to avoid the boilerplate HTML. Despite the filenames being the same (`base.html`), Django won't confuse the two. Templates are identified by their template paths, not just filenames.
Our base template also expects the `unsubscription_link` variable to always exist. This will let users unsubscribe if they don't want to continue receiving emails.
To make sure that our templates always have the `unsubscription_link` variable, we'll create a `Context` that makes sure to always provide it.
# Creating EmailTemplateContext
As we've discussed before (refer to Chapter 1, _Building MyMDB_ ), to render a template, we will need to provide Django with a `Context` object that has the variables the template references. When writing class-based views, we only have to provide a dict in the `get_context_data()` method and Django takes care of everything for us. However, when we want to render a template ourselves, we'll have to instantiate the `Context` class ourselves. To ensure that all our email template-rendering code provides the same minimum information, we'll create a custom template `Context`.
Let's create our `EmailTemplateContext` class in `django/mailinglist/emails.py`:
from django.conf import settings
from django.template import Context
from django.urls import reverse


class EmailTemplateContext(Context):

    @staticmethod
    def make_link(path):
        return settings.MAILING_LIST_LINK_DOMAIN + path

    def __init__(self, subscriber, dict_=None, **kwargs):
        if dict_ is None:
            dict_ = {}
        email_ctx = self.common_context(subscriber)
        email_ctx.update(dict_)
        super().__init__(email_ctx, **kwargs)

    def common_context(self, subscriber):
        subscriber_pk_kwargs = {'pk': subscriber.id}
        unsubscribe_path = reverse('mailinglist:unsubscribe',
                                   kwargs=subscriber_pk_kwargs)
        return {
            'subscriber': subscriber,
            'mailing_list': subscriber.mailing_list,
            'unsubscription_link': self.make_link(unsubscribe_path),
        }
Our `EmailTemplateContext` is made up of the following three methods:
* `make_link()`: This joins a URL path with our project's `MAILING_LIST_LINK_DOMAIN` setting. `make_link()` is necessary because Django's `reverse()` function doesn't include a domain; a Django project can be hosted on multiple different domains. We'll discuss the `MAILING_LIST_LINK_DOMAIN` value more in the _Configuring email settings_ section.
* `__init__()`: This overrides the `Context.__init__(...)` method to give us a chance to add the results of the `common_context()` method to the values in the `dict_` parameter. We're careful to let the data passed in via `dict_` overwrite the data we generate in `common_context()`.
* `common_context()`: This returns a dictionary of the variables we want available to all `EmailTemplateContext` objects. We always want `subscriber`, `mailing_list`, and `unsubscription_link` available.
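To make this concrete, here's a hypothetical usage sketch (`subscriber` is assumed to be a saved `Subscriber` instance):
context = EmailTemplateContext(
    subscriber, {'confirmation_link': 'https://example.com/confirm'})
context['mailing_list']         # added by common_context()
context['unsubscription_link']  # added by common_context()
context['confirmation_link']    # added via our dict_ argument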
We'll use both of these resources in the next section, where we'll send confirmation emails to new `Subscriber` model instances.
# Sending confirmation emails
In this section, we'll send emails to new `Subscriber`s to let them confirm their subscription to a `MailingList`.
In this section, we will:
1. Add Django's email configuration settings to our `settings.py`
2. Write a function to send emails using Django's `send_mail()` function
3. Create and render HTML and text templates for the body of our emails
4. Update `Subscriber.save()` to send the emails when a new `Subscriber` is created
Let's start by updating configuration with our mail server's settings.
# Configuring email settings
In order to be able to send emails, we need to configure Django to talk to a **Simple Mail Transfer Protocol** ( **SMTP** ) server. In development and while learning, you can probably use the same SMTP server that your email client uses. Using such a server for sending large amounts of production email is likely a violation of your email provider's Terms of Service and can lead to account suspension. Be careful of which accounts you use.
Let's update our settings in `django/config/settings.py`:
EMAIL_HOST = 'smtp.example.com'
EMAIL_HOST_USER = 'username'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_PASSWORD')
MAILING_LIST_FROM_EMAIL = 'noreply@example.com'
MAILING_LIST_LINK_DOMAIN = 'http://localhost:8000'
In the preceding code sample, I've used a lot of instances of `example.com`, which you should replace with the correct domain for your SMTP host and your domain. Let's take a closer look at the settings:
* `EMAIL_HOST`: This is the address of the SMTP server we're using.
* `EMAIL_HOST_USER`: The username used to authenticate to the SMTP server.
* `EMAIL_PORT`: The port to connect to the SMTP server.
* `EMAIL_USE_TLS`: This is optional and defaults to `False`. Use it if you're connecting over TLS to the SMTP server. If you're using SSL, then use the `EMAIL_USE_SSL` setting. The SSL and TLS settings are mutually exclusive.
* `EMAIL_HOST_PASSWORD`: The password for the host. In our case, we will expect the password in an environment variable.
* `MAILING_LIST_FROM_EMAIL`: This is a custom setting we're using to set the `FROM` header on the emails we send.
* `MAILING_LIST_LINK_DOMAIN`: This is the domain to prefix all email template links with. We saw this setting used in our `EmailTemplateContext` class.
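If you'd rather not connect to a real SMTP server while developing, Django also ships with alternative email backends. One handy option is the console backend, which prints emails to standard output instead of sending them:
# Optional, for local development only: print emails to the console
# instead of connecting to EMAIL_HOST.
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'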
Next, let's write the function that creates and sends the confirmation emails.
# Creating the send email confirmation function
Now, we will create the function that actually creates and sends confirmation emails to our `Subscriber`s. The `emails` module will contain all our email-related code (we've already created the `EmailTemplateContext` class there).
Our `send_confirmation_email()` function will have to do the following:
1. Create a `Context` for rendering the email bodies
2. Create the subject for the email
3. Render the HTML and text email body
4. Send the email using the `send_mail()` function
Let's create that function in `django/mailinglist/emails.py`:
from django.conf import settings
from django.core.mail import send_mail
from django.template import engines, Context
from django.urls import reverse

CONFIRM_SUBSCRIPTION_HTML = 'mailinglist/email/confirmation.html'
CONFIRM_SUBSCRIPTION_TXT = 'mailinglist/email/confirmation.txt'


class EmailTemplateContext(Context):
    # skipped unchanged class


def send_confirmation_email(subscriber):
    mailing_list = subscriber.mailing_list
    confirmation_link = EmailTemplateContext.make_link(
        reverse('mailinglist:confirm_subscription',
                kwargs={'pk': subscriber.id}))
    context = EmailTemplateContext(
        subscriber,
        {'confirmation_link': confirmation_link})

    subject = 'Confirming subscription to {}'.format(mailing_list.name)

    dt_engine = engines['django'].engine
    text_body_template = dt_engine.get_template(CONFIRM_SUBSCRIPTION_TXT)
    text_body = text_body_template.render(context=context)
    html_body_template = dt_engine.get_template(CONFIRM_SUBSCRIPTION_HTML)
    html_body = html_body_template.render(context=context)

    send_mail(
        subject=subject,
        message=text_body,
        from_email=settings.MAILING_LIST_FROM_EMAIL,
        recipient_list=(subscriber.email,),
        html_message=html_body)
Let's take a closer look at our code:
* `EmailTemplateContext()`: This instantiates the `Context` class we created earlier. We provide it with a `Subscriber` instance and a `dict`, which contains the confirmation link. The `confirmation_link` variable will be used by our templates, which we'll create in the next two sections.
* `engines['django'].engine`: This references the Django Template engine. The engine knows how to find `Template`s using the configuration settings in the `TEMPLATES` setting of `settings.py`.
* `dt_engine.get_template()`: This returns a template object. We provide the name of the template as an argument to the `get_template()` method.
* `text_body_template.render()`: This renders the template (using the context we created previously) into a string.
Finally, we send the email using the `send_mail()` function. The `send_mail()` function takes the following arguments:
* `subject=subject`: The subject of the email message.
* `message=text_body`: The text version of the email.
* `from_email=settings.MAILING_LIST_FROM_EMAIL`: The sender's email address. If we don't provide a `from_email` argument, then Django will use the `DEFAULT_FROM_EMAIL` setting.
* `recipient_list=(subscriber.email,)`: A list (or tuple) of recipient email addresses. This must be a collection, even if you're only sending to one recipient. If you include multiple recipients, they will be able to see each other.
* `html_message=html_body`: The HTML version of the email. This argument is optional, as we don't have to provide an HTML body. If we provide an HTML body, then Django will send an email that includes both the HTML and text body. Email clients will choose to display the HTML or the plain text version of the email.
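Once the templates from the next two sections exist, we can sanity-check the function from Django's shell (`python manage.py shell`); this sketch assumes at least one `Subscriber` row is already in the database:
from mailinglist import emails
from mailinglist.models import Subscriber

subscriber = Subscriber.objects.first()  # any existing subscriber
emails.send_confirmation_email(subscriber)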
Now that we have our code for sending the emails, let's make our email body templates.
# Creating the HTML confirmation email template
Let's make the HTML subscription confirmation email template. We'll create the template in `django/mailinglist/templates/mailinglist/email/confirmation.html`:
{% extends "mailinglist/email_templates/email_base.html" %}
{% block body %}
<h1>Confirming subscription to {{ mailing_list }}</h1 >
<p>Someone (hopefully you) just subscribed to {{ mailinglist }}.</p >
<p>To confirm your subscription click <a href="{{ confirmation_link }}">here</a>.</p >
<p>If you don't confirm, you won't hear from {{ mailinglist }} ever again.</p >
<p>Thanks,</p >
<p>Your friendly internet Mail Ape !</p>
{% endblock %}
Our template looks just like an HTML web page template, but it will be used in an email. Just like a normal Django template, we're extending a base template and filling out a block. In our case, the template we're extending is the `email/base.html` template we created at the start of this chapter. Also, note how we're using variables that we provided in our `send_confirmation_email()` function (for example, `confirmation_link`) and our `EmailTemplateContext` (for example, `mailing_list`).
Emails can include HTML but are not always rendered by web browsers. Notably, some versions of Microsoft Outlook use the Microsoft Word HTML renderer to render emails. Even Gmail, which runs in a browser, manipulates the HTML it receives before rendering it. Be careful to test complicated layouts in real email clients.
Next, let's create the plain text version of this template.
# Creating the text confirmation email template
Now, we will create the plain text version of our confirmation email template; let's create it in `django/mailinglist/templates/mailinglist/email/confirmation.txt`:
Hello {{ subscriber.email }},

Someone (hopefully you) just subscribed to {{ mailing_list }}.

To confirm your subscription go to {{ confirmation_link }}.

If you don't confirm, you won't hear from {{ mailing_list }} ever again.

Thanks,

Your friendly internet Mail Ape!
In this template, we're not using any HTML, nor are we extending a base template. However, we're still referencing variables provided by our `send_confirmation_email()` function (for example, `confirmation_link`) and our `EmailTemplateContext` class (for example, `mailing_list`).
Now that we have all the code necessary for sending emails, let's send them out when we create a new `Subscriber` model instance.
# Sending on new Subscriber creation
To finish sending confirmation emails to users, we need to call our `send_confirmation_email()` function. Following the philosophy of fat models, we will call `send_confirmation_email()` from our `Subscriber` model rather than from a view. In our case, we will send the email when a new `Subscriber` model instance is saved.
Let's update our `Subscriber` model to send a confirmation email when a new `Subscriber` has been saved. To add this new behavior, we will need to edit `django/mailinglist/models.py`:
from django.db import models

from mailinglist import emails


class Subscriber(models.Model):
    # skipping unchanged model body

    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        is_new = self._state.adding or force_insert
        super().save(force_insert=force_insert, force_update=force_update,
                     using=using, update_fields=update_fields)
        if is_new:
            self.send_confirmation_email()

    def send_confirmation_email(self):
        emails.send_confirmation_email(self)
The best way to add a new behavior when a model is created is to override the model's `save()` method. When overriding `save()`, it is vital that we still call the super class's `save()` method to make sure that the model does save. Our new save method does three things:
* Checks whether the current model is a new model
* Calls the super class's `save()` method
* Sends the confirmation email if the model is new
To check if the current model instance is new, we check the `_state` attribute. The `_state` attribute is an instance of the `ModelState` class. Generally, attributes that begin with an underscore (`_`) are considered private and may change across Django releases. However, the `ModelState` class is described in Django's official documentation so we can feel more comfortable using it (though we should keep an eye on future release notes for changes). If `self._state.adding` is `True`, then the `save()` method is going to insert this model instance as a new row. If `self._state.adding` is `False`, then the `save()` method is going to update an existing row.
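A quick sketch of the flag's lifecycle (`some_list` is an assumed `MailingList` instance):
subscriber = Subscriber(email='a@example.com', mailing_list=some_list)
subscriber._state.adding  # True: not yet written to the database
subscriber.save()         # INSERTs the row (and sends the email)
subscriber._state.adding  # False: further save() calls will UPDATE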
We've also wrapped the call to `emails.send_confirmation_email()` in a `Subscriber` method. This will be useful if we ever want to resend a confirmation email. Any code that wants to resend a confirmation email will not have to know about the `emails` module. The model is the expert on all its operations. This is the heart of the fat model philosophy.
# A quick review of the section
In this section, we've learned more about Django's template system and how to send emails. We've learned how to render a template ourselves, using the Django template engine directly instead of one of Django's built-in views. We've used the Django best practice of creating a service module to isolate all our email code. Finally, we've also used `send_mail()` to send an email with both a text and an HTML body.
Next, let's use Celery to send these emails after we return a response to our users.
# Using Celery to send emails
As we build increasingly complicated applications, we often want to perform operations without forcing the user to wait on us to return them an HTTP response. Django works well with Celery, a popular Python distributed task queue, to accomplish this.
Celery is a library to _queue_ _tasks_ in _brokers_ to be processed by Celery _workers_. Let's take a closer look at some of these terms:
* A **Celery task** encapsulates a callable we want executed asynchronously.
* A **Celery queue** is a list of tasks, stored in a broker in first-in, first-out order.
* A **Celery broker** is a server that provides fast and efficient storage of queues. Popular brokers include RabbitMQ, Redis, and AWS SQS. Celery has different levels of support for different brokers. We will use Redis as our broker in development.
* **Celery workers** are separate processes that check queues for tasks to execute and execute them.
In this section, we will be doing the following things:
1. Installing Celery
2. Configuring Celery to work with Django
3. Using Celery to queue a task that sends the confirmation email
4. Using a Celery worker to send our emails
Let's start by installing Celery.
# Installing Celery
To install Celery, we'll update our `requirements.txt` file with these new changes:
celery<4.2
celery[redis]
django-celery-results<2.0
We will install three new packages and their dependencies:
* `celery`: Installs the main Celery package
* `celery[redis]`: Installs the dependencies we need to use Redis as our broker
* `django-celery-results`: Lets us store the results of executed tasks in our Django database; this is just one way of storing and logging Celery's results
Next, let's install our new packages using `pip`:
**$ pip install -r requirements.txt**
Now that we have Celery installed, let's configure Mail Ape to use Celery.
# Configuring Celery settings
To configure Celery, we will need to make two sets of changes. First, we'll update the Django config to use Celery. Second, we'll create a Celery configuration file that our worker will use.
Let's start by updating `django/config/settings.py`:
INSTALLED_APPS = [
    'user',
    'mailinglist',
    'crispy_forms',
    'markdownify',
    'django_celery_results',
    'django.contrib.admin',
    # other built-in Django apps unchanged.
]

CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'django-db'
Let's take a closer look at these new settings:
* `django_celery_results`: This is a Celery extension that we installed as a Django app to let us store the results of our Celery tasks in the Django DB.
* `CELERY_BROKER_URL`: This is the URL to our Celery broker. In our case, we will use a local Redis server in development.
* `CELERY_RESULT_BACKEND`: This indicates where to store the results. In our case, we will use the Django database.
Since the `django_celery_results` app lets us save results in the database, it includes new Django models. For those models to exist in the database, we will need to migrate our database:
**$ cd django
$ python manage.py migrate django_celery_results**
Next, let's create a configuration file for our Celery worker. The worker will need access to Django and our Celery broker.
Let's create the Celery worker configuration in `django/config/celery.py`:
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
app = Celery('mailape')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
Celery knows how to work with a Django project out of the box. Here, we configure an instance of the Celery library based on our Django configuration. Let's review these settings in detail:
* `setdefault('DJANGO_SETTINGS_MODULE', ...)`: This ensures that our Celery worker knows which Django settings module to use if the `DJANGO_SETTINGS_MODULE` environment variable is not set for it.
* `Celery('mailape')`: This instantiates the Celery library for Mail Ape. Most Django projects use only one Celery instance, so the `mailape` string is not significant.
* `app.config_from_object('django.conf:settings', namespace='CELERY')`: This tells our Celery library to configure itself from the object at `django.conf.settings`. The `namespace` argument tells Celery that its settings are prefixed with `CELERY`.
* `app.autodiscover_tasks()`: This lets us avoid registering tasks by hand. When Celery is working with Django, it will check each installed app for a `tasks` module. Any tasks in that module will be automatically discovered.
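One related step worth mentioning: the Celery project's Django guide also recommends importing the app in `django/config/__init__.py`, so that it loads whenever Django starts and `@shared_task` functions bind to it:
# django/config/__init__.py
from .celery import app as celery_app

__all__ = ('celery_app',)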
Let's learn more about tasks by creating a task to send confirmation emails.
# Creating a task to send confirmation emails
Now that Celery is configured, let's create a task to send a confirmation email to a subscriber.
A Celery task is a subclass of `celery.app.task.Task`. However, most of the time when we create Celery tasks, we use Celery's decorators to mark a function as a task. In a Django project, it's often simplest to use the `shared_task` decorator.
When creating a task, it's useful to think of it like a view. The Django community's best practices recommend _thin views_ , which means that views should be simple. They should not be responsible for complicated tasks, but should delegate that work to the model or a service module (for example, our `mailinglist.emails` module).
Keep task functions simple and put all the logic in models or service modules.
Let's create a task to send our confirmation emails in `django/mailinglist/tasks.py`:
from celery import shared_task

from mailinglist import emails


@shared_task
def send_confirmation_email_to_subscriber(subscriber_id):
    from mailinglist.models import Subscriber
    subscriber = Subscriber.objects.get(id=subscriber_id)
    emails.send_confirmation_email(subscriber)
There are a few unique things about our `send_confirmation_email_to_subscriber` function:
* `@shared_task`: This is a Celery decorator that turns a function into a `Task`. A `shared_task` is available to all Celery instances (in most Django cases, there's only one anyway).
* `def send_confirmation_email_to_subscriber(subscriber_id):`: This is a regular function that takes a subscriber ID as an argument. A Celery task can receive any pickle-able object (including a Django model). However, if you're passing around something that may be viewed as confidential (for example, an email address), you may wish to limit the number of systems that store the data (for example, not store it on the broker). In this case, we're passing our task function an ID of the `Subscriber` instead of the full `Subscriber`. The task function then queries the database for the related `Subscriber` instance.
A final item of note in this function is that we import the `Subscriber` model inside the function instead of at the top of the file. In our case, we will have our `Subscriber` model call this task. If we import the `models` module at the top of `tasks.py` and the `tasks` module at the top of `models.py`, then we'll have a cyclic import error. To prevent that, we import `Subscriber` inside the function.
Next, let's call our task from `Subscriber.send_confirmation_email()`.
# Sending emails to new subscribers
Now that we have our task, let's update our `Subscriber` to send confirmation emails using the task instead of using the `emails` module directly.
Let's update `django/mailinglist/models.py`:
from django.db import models

from mailinglist import tasks


class Subscriber(models.Model):
    # skipping unchanged model

    def send_confirmation_email(self):
        tasks.send_confirmation_email_to_subscriber.delay(self.id)
In our updated `send_confirmation_email()` method, we will take a look at how to call a task asynchronously.
A Celery task can be called either synchronously or asynchronously. Using the regular `()` operator, we'll call the task synchronously (for example, `tasks.send_confirmation_email_to_subscriber(self.id)`). A task that executes synchronously executes like a regular function call.
A Celery task also has the `delay()` method to execute a task asynchronously. When a task is told to execute asynchronously, it will queue a message in Celery's message broker. The Celery workers will then (eventually) pull the message from the broker's queue and execute the task. The result of the task is stored in the storage backend (in our case, the Django database).
Calling a task asynchronously returns a `result` object that offers a `get()` method. Calling `result.get()` blocks the current thread until the task has finished and then returns the result of the task. In our case, our tasks won't return anything, so we won't use the `result` object.
`task.delay(1, a='b')` is actually a shortcut for `task.apply_async((1,), kwargs={'a': 'b'})`. Most of the time, the shortcut method is what we want. If you need a greater degree of control over your task's execution, `apply_async()` is documented in the Celery documentation (<http://docs.celeryproject.org/en/latest/userguide/calling.html>).
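For example, `apply_async()` accepts a `countdown` option; a quick sketch, assuming `subscriber` is in scope:
from mailinglist import tasks

# Queue the task, but ask workers to wait at least 60 seconds
# before executing it.
tasks.send_confirmation_email_to_subscriber.apply_async(
    args=(subscriber.id,), countdown=60)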
Now that we can call tasks, let's start a worker to process our queued tasks.
# Starting a Celery worker
Starting a Celery worker does not require us to write any new code. We can start one from the command line:
**$ cd django
$ celery worker -A config.celery -l info**
Let's look at all the arguments we gave `celery`:
* `worker`: This indicates that we want to start a new worker.
* `-A config.celery`: This is the app, or configuration, we want to use. In our case, the app we want is configured in `config.celery`.
* `-l info`: This is the log level to output. In this case, we're using `info`. By default, the level is `WARNING`.
Our worker is now able to process tasks queued by our code in Django. If we find we're queueing a lot of tasks, we can just start more `celery worker` processes.
# A quick review of the section
In this section, you learned how to use Celery to process tasks asynchronously.
We learned how to set the broker and backend using the `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND` settings in our `settings.py`. We also created a `celery.py` file for our Celery worker. Then, we used the `@shared_task` decorator to make a function a Celery task. With the task available, we learned how to call a Celery task with the `delay()` shortcut method. Finally, we started a Celery worker to execute queued tasks.
Now that we know the basics, let's use this approach to send messages to our subscribers.
# Sending messages to subscribers
In this section, we're going to create the `Message` model instances that represent messages that our users want to send to their mailing lists.
To send these messages, we will need to do the following things:
* Create a `SubscriberMessage` model to track which messages got sent and when
* Create a `SubscriberMessage` model instance for each confirmed `Subscriber` model instance associated with the new `Message` model instance
* Have `SubscriberMessage` model instances send an email to their associated `Subscriber` model instance's email
To make sure that even a `MailingList` model instance with lots of related `Subscriber` model instances doesn't slow down our website, we will use Celery to build our list of `SubscriberMessage` model instances _and_ send the emails.
Let's start by creating a `SubscriberManager` to help us get a list of confirmed `Subscriber` model instances.
# Getting confirmed subscribers
Good Django projects use custom model managers to centralize and document `QuerySet` objects related to their models. We need a `QuerySet` object to retrieve all the confirmed `Subscriber` model instances that belong to a given `MailingList` model instance.
Let's update `django/mailinglist/models.py` to add a new `SubscriberManager` class that knows how to get confirmed `Subscriber` model instances for a `MailingList` model instance:
class SubscriberManager(models.Manager):

    def confirmed_subscribers_for_mailing_list(self, mailing_list):
        qs = self.get_queryset()
        qs = qs.filter(confirmed=True)
        qs = qs.filter(mailing_list=mailing_list)
        return qs


class Subscriber(models.Model):
    # skipped fields

    objects = SubscriberManager()

    class Meta:
        unique_together = ['email', 'mailing_list', ]

    # skipped methods
Our new `SubscriberManager` object replaces the default manager in `Subscriber.objects`. The `SubscriberManager` class offers the `confirmed_subscribers_for_mailing_list()` method as well as all the methods of the default manager.
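Calling it looks like any other manager method. A small sketch, assuming `mailing_list` is a `MailingList` instance:
confirmed = Subscriber.objects.confirmed_subscribers_for_mailing_list(
    mailing_list)
for subscriber in confirmed:
    print(subscriber.email)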
Next, let's create the `SubscriberMessage` model.
# Creating the SubscriberMessage model
Now, we will create a `SubscriberMessage` model and manager. The `SubscriberMessage` model will let us track whether we successfully sent an email to a `Subscriber` model instance. The custom manager will have a method of creating all the `SubscriberMessage` model instances that a `Message` model instance needs.
Let's start by creating our `SubscriberMessage` in `django/mailinglist/models.py`:
import uuid

from django.conf import settings
from django.db import models

from mailinglist import tasks


class SubscriberMessage(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4,
                          editable=False)
    message = models.ForeignKey(to=Message, on_delete=models.CASCADE)
    subscriber = models.ForeignKey(to=Subscriber,
                                   on_delete=models.CASCADE)
    created = models.DateTimeField(auto_now_add=True)
    sent = models.DateTimeField(default=None, null=True)
    last_attempt = models.DateTimeField(default=None, null=True)

    objects = SubscriberMessageManager()

    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        is_new = self._state.adding or force_insert
        super().save(force_insert=force_insert, force_update=force_update,
                     using=using, update_fields=update_fields)
        if is_new:
            self.send()

    def send(self):
        tasks.send_subscriber_message.delay(self.id)
Our `SubscriberMessage` model is pretty heavily customized compared to most of our other models:
* The `SubscriberMessage` fields connect it to a `Message` and a `Subscriber`, and let it track when the instance was created, when we last attempted to send the email, and when the email was successfully sent.
* `SubscriberMessage.objects` is a custom manager that we'll create in the following section.
* `SubscriberMessage.save()` works like `Subscriber.save()`: it checks whether the `SubscriberMessage` is new and, if so, calls the `send()` method.
* `SubscriberMessage.send()` queues a task to send the message. We'll create that task later in the _Sending emails to Subscribers_ section.
Now, let's create a `SubscriberMessageManager` in `django/mailinglist/models.py`:
from django.db import models


class SubscriberMessageManager(models.Manager):

    def create_from_message(self, message):
        confirmed_subs = Subscriber.objects.\
            confirmed_subscribers_for_mailing_list(message.mailing_list)
        return [
            self.create(message=message, subscriber=subscriber)
            for subscriber in confirmed_subs
        ]
Our new manager offers a method for creating `SubscriberMessage`s from a `Message`. The `create_from_message()` method returns a list of `SubscriberMessage`s, each created using the `Manager.create()` method.
Finally, in order to have a new model available, we will need to create a migration and apply it:
**$ cd django
$ python manage.py makemigrations mailinglist
$ python manage.py migrate mailinglist**
Now that we have our `SubscriberMessage` model and table, let's update our project to automatically create `SubscriberMessage` model instances when a new `Message` is created.
# Creating SubscriberMessages when a message is created
Mail Ape is meant to send a message as soon as it is created. For a `Message` model instance to become an email in a subscriber's inbox, we will need to build a set of `SubscriberMessage` model instances. The best time to build that set of `SubscriberMessage` model instances is just after a new `Message` model instance is created.
Let's override `Message.save()` in `django/mailinglist/models.py`:
class Message(models.Model):
    # skipped fields

    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        is_new = self._state.adding or force_insert
        super().save(force_insert=force_insert, force_update=force_update,
                     using=using, update_fields=update_fields)
        if is_new:
            tasks.build_subscriber_messages_for_message.delay(self.id)
Our new `Message.save()` method follows a familiar pattern: it checks whether the current `Message` is new and, if so, queues the `build_subscriber_messages_for_message` task for execution.
We'll use Celery to build the set of `SubscriberMessage` model instances asynchronously because we don't know how many `Subscriber` model instances are related to our `MailingList` model instance. If there are very many related `Subscriber` model instances, creating the `SubscriberMessage` instances synchronously might make our web server unresponsive. Using Celery, our web server returns a response as soon as the `Message` model instance is saved, and the `SubscriberMessage` model instances are created by an entirely separate process.
Let's create the `build_subscriber_messages_for_message` task in `django/mailinglist/tasks.py`:
from celery import shared_task


@shared_task
def build_subscriber_messages_for_message(message_id):
    from mailinglist.models import Message, SubscriberMessage
    message = Message.objects.get(id=message_id)
    SubscriberMessage.objects.create_from_message(message)
As we discussed previously, our task doesn't contain much logic in itself. `build_subscriber_messages_for_message` lets the `SubscriberMessage` manager encapsulate all the logic of creating the `SubscriberMessage` model instances.
Next, let's write the code for sending emails that contain the `Message` our users create.
# Sending emails to subscribers
Our final step in this section will be to send an email based on a `SubscriberMessage`. Earlier, we had our `SubscriberMessage.save()` method queue a task to send a `Subscriber` a `Message`. Now, we'll create that task and update the `emails.py` code to send the emails.
Let's start by updating `django/mailinglist/tasks.py` with a new task:
from celery import shared_task


@shared_task
def send_subscriber_message(subscriber_message_id):
    from mailinglist.models import SubscriberMessage
    subscriber_message = SubscriberMessage.objects.get(
        id=subscriber_message_id)
    emails.send_subscriber_message(subscriber_message)
This new task follows the same pattern as the previous tasks we've created:
* We use the `shared_task` decorator to turn a regular function into a Celery task
* We import our model inside our task function to prevent a cyclical import error
* We let the `emails` module do the actual work of sending the email
Next, let's update the `django/mailinglist/emails.py` file to send emails based on a `SubscriberMessage`:
from django.conf import settings
from django.core.mail import send_mail
from django.template import engines
from django.utils.datetime_safe import datetime

SUBSCRIBER_MESSAGE_TXT = 'mailinglist/email/subscriber_message.txt'
SUBSCRIBER_MESSAGE_HTML = 'mailinglist/email/subscriber_message.html'


def send_subscriber_message(subscriber_message):
    message = subscriber_message.message
    context = EmailTemplateContext(
        subscriber_message.subscriber,
        {'body': message.body})

    dt_engine = engines['django'].engine
    text_body_template = dt_engine.get_template(SUBSCRIBER_MESSAGE_TXT)
    text_body = text_body_template.render(context=context)
    html_body_template = dt_engine.get_template(SUBSCRIBER_MESSAGE_HTML)
    html_body = html_body_template.render(context=context)

    utcnow = datetime.utcnow()
    subscriber_message.last_attempt = utcnow
    subscriber_message.save()

    success = send_mail(
        subject=message.subject,
        message=text_body,
        from_email=settings.MAILING_LIST_FROM_EMAIL,
        recipient_list=(subscriber_message.subscriber.email,),
        html_message=html_body)
    if success == 1:
        subscriber_message.sent = utcnow
        subscriber_message.save()
Our new function takes the following steps:
1. Builds the context for the templates using the `EmailTemplateContext` class we created earlier
2. Renders the text and HTML versions of the email using the Django Template engine
3. Records the time of the current sending attempt
4. Sends the email using Django's `send_mail()` function
5. Records the time the message was sent, if `send_mail()` reports that it sent the email
Our `send_subscriber_message()` function requires us to create HTML and text templates to render.
Let's create our HTML email body template in `django/mailinglist/templates/mailinglist/email/subscriber_message.html`:
{% extends "mailinglist/email_templates/email_base.html" %}
{% load markdownify %}
{% block body %}
{{ body | markdownify }}
{% endblock %}
This template renders the markdown body of the `Message` into HTML. We've used the `markdownify` tag library to render markdown into HTML before. We don't need HTML boilerplate or an unsubscribe link footer because `email/base.html` already provides both.
Next, we must create the text version of the message template in `django/mailinglist/templates/mailinglist/email/subscriber_message.txt`:
{{ body }}
---
You're receiving this message because you previously subscribed to {{ mailinglist }}.
If you'd like to unsubscribe go to {{ unsubscription_link }} and click unsubscribe.
Sent with Mail Ape.
This template looks very similar. In this case, we simply output the body as un-rendered markdown. Also, we don't have a base template for our text emails, so we have to write out the footer with an unsubscribe link manually.
Congratulations! You've now updated Mail Ape to send emails to mailing list subscribers.
Make sure that you restart your `celery worker` process(es) any time you change your code. Unlike Django's `runserver`, `celery worker` does not automatically restart when code changes. If we don't restart the `worker`, it won't pick up our updated code.
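For example, after changing `tasks.py` or `emails.py`, stop the worker with _Ctrl_ + _C_ and start it again. Assuming the Celery app is defined in the `config` package, as in our earlier setup, the invocation might look like this:
**$ cd django
$ celery -A config worker -l info**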
Next, let's make sure that we can run our tests without triggering Celery or sending an actual email.
# Testing code that uses Celery tasks
At this point, two of our models will automatically queue Celery tasks when they are created. This can create a problem for us when testing our code since we may not want to have a Celery broker running when we run our tests. Instead, we should use Python's `mock` library to prevent the need for an outside system to be running when we run our tests.
One approach we could use is to decorate each test method that uses the `Subscriber` or `Message` models with Python's `@patch()` decorator. However, this manual process is likely to be error-prone. Let's look at some alternatives instead.
In this section, we will take a look at two approaches to make mocking out Celery tasks easier:
* Using a mixin to prevent the `send_confirmation_email_to_subscriber` task from being queued in any test
* Using a Factory to prevent the `send_confirmation_email_to_subscriber` task from being queued
By fixing the same problem in two different ways, you'll get insight into which solution works better in which situation. You may find that having both options available in a project is helpful.
We can use the exact same approaches for patching references to `send_mail` to prevent emails being sent out during testing.
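For example, a mixin that patches `send_mail` could follow exactly the same pattern (a sketch; the `MockSendMail` name is just illustrative):
from unittest.mock import patch


class MockSendMail:

    def setUp(self):
        # patch send_mail where emails.py looks it up, not where Django defines it
        self.send_mail_patch = patch('mailinglist.emails.send_mail')
        self.send_mail_mock = self.send_mail_patch.start()
        super().setUp()

    def tearDown(self):
        self.send_mail_patch.stop()
        self.send_mail_mock = None
        super().tearDown()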
Let's start by using a mixin to apply a patch.
# Using a TestCase mixin to patch tasks
In this approach, we will create a mixin that `TestCase` authors can optionally use when writing `TestCase`s. We've used mixins in a lot of our Django code to override the behavior of class-based views. Now, we'll create a mixin that will override the default behavior of `TestCase`s. We will take advantage of each test method being preceded by a call to `setUp()` and followed by `tearDown()` to set up our patch and mock.
Let's create our mixin in `django/mailinglist/tests.py`:
from unittest.mock import patch


class MockSendEmailToSubscriberTask:

    def setUp(self):
        self.send_confirmation_email_patch = patch(
            'mailinglist.tasks.send_confirmation_email_to_subscriber')
        self.send_confirmation_email_mock = self.send_confirmation_email_patch.start()
        super().setUp()

    def tearDown(self):
        self.send_confirmation_email_patch.stop()
        self.send_confirmation_email_mock = None
        super().tearDown()
Our mixin's `setUp()` method does three things:
* Creates a patch and saves it as an attribute of our object
* Starts the patch and saves the resulting mock object as an attribute of our object. Access to the mock is important so that we can later assert how it was called
* Calls the parent class's `setUp()` method so that the `TestCase` is properly set up
Our mixin's `tearDown` method also does the following three things:
* Stops the patch
* Removes a reference to the mock
* Calls the parent class's `tearDown` method to complete any other cleanup that needs to happen
Let's create a `TestCase` to test `Subscriber` creation and take a look at our new `MockSendEmailToSubscriberTask` in action. We'll create a test that creates a `Subscriber` model instance using its manager's `create()` method. The `create()` call will in turn call `save()` on the new `Subscriber` instance. The `Subscriber.save()` method should then queue a `send_confirmation_email` task.
Let's add our test to `django/mailinglist/tests.py`:
from mailinglist.models import Subscriber, MailingList

from django.contrib.auth import get_user_model
from django.test import TestCase


class SubscriberCreationTestCase(
        MockSendEmailToSubscriberTask,
        TestCase):

    def test_calling_create_queues_confirmation_email_task(self):
        user = get_user_model().objects.create_user(
            username='unit test runner'
        )
        mailing_list = MailingList.objects.create(
            name='unit test',
            owner=user,
        )
        Subscriber.objects.create(
            email='unittest@example.com',
            mailing_list=mailing_list)
        self.assertEqual(self.send_confirmation_email_mock.delay.call_count, 1)
Our test asserts that the mock we created in our mixin has been called once. This gives us confidence that when we create a new `Subscriber`, we will queue the correct task.
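If we also want to check _how_ the task was queued, we can assert on the mock's call arguments. For example, assuming `Subscriber.save()` queues the task with the new subscriber's `id`, we could add the following to the end of our test:
subscriber = Subscriber.objects.get(email='unittest@example.com')
self.send_confirmation_email_mock.delay.assert_called_once_with(subscriber.id)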
Next, let's look at how we can solve this problem using Factory Boy factories.
# Using patch with factories
We discussed using Factory Boy factories in Chapter 8, _Testing Answerly_. Factories make it easier to create complicated objects. We will now take a look at how to use Factories and Python's `patch()` together to prevent tasks from being queued.
Let's create a `SubscriberFactory` in `django/mailinglist/factories.py`:
from unittest.mock import patch

import factory

from mailinglist.models import Subscriber


class SubscriberFactory(factory.DjangoModelFactory):
    email = factory.Sequence(lambda n: 'foo.%d@example.com' % n)

    class Meta:
        model = Subscriber

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        with patch('mailinglist.models.tasks.send_confirmation_email_to_subscriber'):
            return super()._create(model_class=model_class, *args, **kwargs)
Our factory overrides the default `_create()` method to apply the task patch before the default `_create()` method is called. When the default `_create()` method executes, it will call `Subscriber.save()`, which will try to queue the `send_confirmation_email` task. However, the task will be replaced with a mock. Once the model is created and the `_create()` method returns, the patch will be removed.
We can now use our `SubscriberFactory` in a test. Let's write a test in `django/mailinglist/tests.py` to verify that `SubscriberManager.confirmed_subscribers_for_mailing_list()` works correctly:
from django.contrib.auth import get_user_model
from django.test import TestCase

from mailinglist.factories import SubscriberFactory
from mailinglist.models import Subscriber, MailingList


class SubscriberManagerTestCase(TestCase):

    def testConfirmedSubscribersForMailingList(self):
        mailing_list = MailingList.objects.create(
            name='unit test',
            owner=get_user_model().objects.create_user(
                username='unit test')
        )
        confirmed_users = [
            SubscriberFactory(confirmed=True, mailing_list=mailing_list)
            for n in range(3)]
        unconfirmed_users = [
            SubscriberFactory(mailing_list=mailing_list)
            for n in range(3)]

        confirmed_users_qs = Subscriber.objects.confirmed_subscribers_for_mailing_list(
            mailing_list=mailing_list)

        self.assertEqual(len(confirmed_users), confirmed_users_qs.count())
        for user in confirmed_users_qs:
            self.assertIn(user, confirmed_users)
Now that we've seen both approaches, let's look at some of the trade-offs between the two approaches.
# Choosing between patching strategies
Both Factory Boy factories and `TestCase` mixins help us solve the problem of how to test code that queues a Celery task without queuing a Celery task. Let's take a closer look at some of the trade-offs.
Some of the trade-offs when using a mixin are as follows:
* The patch stays in place during the entire test
* We have access to the resulting mock
* The patch will be applied even on tests that don't need it
* The mixins in our `TestCase` are dictated by the models we reference in our code, which can be a confusing level of indirection for test authors
Some of the trade-offs when using a Factory are as follows:
* We can still access the underlying function in a test if we need to.
* We don't have access to the resulting mock to assert (we often don't need it).
* We don't connect the parent class of `TestCase` to the models we're referring to in our test methods. It's simpler for test authors.
The ultimate decision for which approach to use is dictated by the test we're writing.
# Summary
In this chapter, we gave Mail Ape the ability to send emails to the confirmed `Subscriber`s of our users' `MailingList`s. We also learned how to use Celery to process tasks outside of Django's request/response cycle. This lets us process tasks that may take a long time or require other resources (for example, SMTP servers and more memory) without slowing down our Django web servers.
We covered a variety of email and Celery-related topics in this chapter. We saw how to configure Django to use an SMTP server. We used Django's `send_mail()` function to send emails. We created a Celery task with the `@shared_task` decorator. We queued a Celery task using its `delay()` method. Finally, we explored some useful approaches for testing code that relies on external resources.
Next, let's build an API for our Mail Ape so that our users can integrate into their own websites and apps.
# Building an API
Now that Mail Ape can send emails to our subscribers, let's make it easier for our users to integrate with Mail Ape using an API. In this chapter, we will build a RESTful JSON API that will let users create mailing lists and add subscribers to a mailing list. To simplify creating our API, we will use the **Django REST framework** ( **DRF** ). Finally, we'll access our API on the command line using curl.
In this chapter, we will do the following things:
* Summarize the core concepts of DRF
* Create `Serializer`s that define how to parse and serialize `MailingList` and `Subscriber` models
* Create a permission class to restrict API access to users who are `MailingList` owners
* Use the Django REST framework's class-based views to create the views for our API
* Access our API over HTTP using curl
* Test our API in a unit test
Let's start this chapter with DRF.
# Starting with the Django REST framework
We'll start by installing DRF and then reviewing its configuration. As we review the DRF configuration, we'll learn about the features and concepts that make it useful.
# Installing the Django REST framework
Let's start by adding DRF to our `requirements.txt` file:
djangorestframework<3.8
Next, we can install it using `pip`:
**$ pip install -r requirements.txt**
Now that we have the library installed, let's add DRF to our `INSTALLED_APPS` list in the `django/config/settings.py` file:
INSTALLED_APPS = [
    # previously unchanged list
    'rest_framework',
]
# Configuring the Django REST Framework
DRF is highly configurable through its view classes. However, we can avoid repeating the same common settings across all our DRF views using DRF's settings in our `settings.py` file.
All of DRF's features flow out of how DRF handles views. DRF provides a rich collection of views that extend `APIView` (which, in turn, extends Django's `View` class). Let's look at the `APIView` life cycle and the related settings.
A DRF view's life cycle performs the following actions:
1. **Wrap Django's request object in the DRF request object** : DRF has a specialized `Request` class that wraps Django's `Request` class, as will be discussed in the following sections.
2. **Perform content negotiation** : Find the request parser and response renderer.
3. **Perform authentication** : Check the credentials associated with the request.
4. **Check permissions** : Checks whether the user associated with the request can access this view.
5. **Check throttles** : Checks that this user hasn't made too many requests recently.
6. **Execute the view handler** : Performs the action associated with the view (for example, creating the resource, querying the database, and so on).
7. **Render the response** : Renders the response to the correct content type.
DRF's custom `Request` class is much like Django's `Request` class, except that it can be configured with a parser. A DRF view finds the correct parser for the request based on the view's settings and the content type of the request during content negotiation. The parsed contents are available as `request.data` just like a Django request with a `POST` form submission.
DRF views also use a specialized `Response` class that uses a renderer instead of a Django template. The renderer is selected during the content negotiation step.
Most of the preceding steps are performed using configurable classes. DRF is configurable by creating a dictionary in our project's `settings.py` under the name `REST_FRAMEWORK`. Let's review some of the most important settings:
* `DEFAULT_PARSER_CLASSES`: This supports JSON, forms, and multipart forms by default. Other parsers (for example, YAML and MessagePack) are available as third-party community packages.
* `DEFAULT_AUTHENTICATION_CLASSES`: This supports session-based authentication and HTTP basic authentication by default. Session authentication can make using your API in your app's frontend easier. DRF ships with a token authentication class. OAuth (1 and 2) support is available through third-party community packages.
* `DEFAULT_PERMISSION_CLASSES`: This defaults to allowing any user to any action (including update and delete operations). DRF ships with a collection of stricter permissions listed in the documentation (<https://www.django-rest-framework.org/api-guide/permissions/#api-reference>). We'll also take a look at how to create a custom permission class later in this chapter.
* `DEFAULT_THROTTLE_CLASSES`/`DEFAULT_THROTTLE_RATES`: This is empty (unthrottled) by default. DRF offers a simple throttling scheme, letting us set different rates for anonymous requests and user requests out of the box.
* `DEFAULT_RENDERER_CLASSES`: This defaults to JSON and a _browsable_ template renderer. The browsable template renderer provides a simple UI for viewing and testing your views, suitable for development.
We will configure our DRF to be a bit stricter, even in development. Let's update `django/config/settings.py` with the following new setting `dict`:
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
    'DEFAULT_THROTTLE_CLASSES': (
        'rest_framework.throttling.UserRateThrottle',
        'rest_framework.throttling.AnonRateThrottle',
    ),
    'DEFAULT_THROTTLE_RATES': {
        'user': '60/minute',
        'anon': '30/minute',
    },
}
This configuration restricts the API to authenticated users by default and sets a throttle on their requests. Authenticated users can make 60 requests per minute before being throttled. Unauthenticated users can make 30 requests per minute. DRF accepts throttle periods of `second`, `minute`, `hour`, or `day`.
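Any of these defaults can also be overridden on a single view. As a sketch, a hypothetical unauthenticated health check view (not part of Mail Ape) could opt out of the project-wide defaults like this:
from rest_framework.response import Response
from rest_framework.views import APIView


class HealthCheckView(APIView):
    permission_classes = ()  # skip the IsAuthenticated default
    throttle_classes = ()    # an empty tuple disables the default throttles

    def get(self, request):
        return Response({'status': 'ok'})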
Next, let's take a look at DRF `Serializer`s.
# Creating the Django REST Framework Serializers
When a DRF parser parses a request's body, the parser basically returns a Python dictionary. However, before we can perform any operation on that data, we will need to confirm that the data is valid. In our previous Django views, we'd use a Django form. In DRF, we use a `Serializer` class.
DRF `Serializer` classes are very similar to Django form classes. Both are involved in receiving validation data and preparing models for output. However, a `Serializer` class doesn't know how to render its data, unlike a Django form that does. Remember that in a DRF view, the renderers are responsible for rendering the result into JSON or whatever format was negotiated by the request.
Much like a Django form, a `Serializer` can be created to work on arbitrary data or be based on a Django model. Also, a `Serializer` is composed of a collection of fields that we can use to control serialization. When a `Serializer` is related to a model, the Django REST framework knows which serializer `Field` to use for which model `Field`, similar to how `ModelForm`s work.
Let's create a `Serializer` for our `MailingList` model in `django/mailinglist/serializers.py`:
from django.contrib.auth import get_user_model

from rest_framework import serializers

from mailinglist.models import MailingList


class MailingListSerializer(serializers.HyperlinkedModelSerializer):
    owner = serializers.PrimaryKeyRelatedField(
        queryset=get_user_model().objects.all())

    class Meta:
        model = MailingList
        fields = ('url', 'id', 'name', 'subscriber_set')
        read_only_fields = ('subscriber_set', )
        extra_kwargs = {
            'url': {'view_name': 'mailinglist:api-mailing-list-detail'},
            'subscriber_set': {'view_name': 'mailinglist:api-subscriber-detail'},
        }
This seems very similar to how we wrote `ModelForm`s; let's take a closer look:
* `HyperlinkedModelSerializer`: This is the `Serializer` class that shows a hyperlink to any related model, so when it shows the related `Subscriber` model instances of a `MailingList`, it will show a link (URL) to that instance's detail view.
* `owner = serializers.PrimaryKeyRelatedField(...)`: This changes the field for serializing the model's `owner` field. The `PrimaryKeyRelatedField` returns the related object's primary key. This is useful when the related model doesn't have a serializer or a related API view (like the user model in Mail Ape).
* `model = MailingList`: This tells our `Serializer` which model it's serializing
* `fields = ('url', 'id', ...)`: This lists the model's fields to serialize. The `HyperlinkedModelSerializer` includes an extra field `url`, which is the URL to the serialized model's detail view. Much like with a Django `ModelForm`, a `ModelSerializer` class (such as `HyperlinkedModelSerializer`) has a set of default serializer fields for each model field. In our case, we've decided to override how `owner` is represented (refer to the preceding point about the `owner` attribute).
* `read_only_fields = ('subscriber_set', )`: This concisely lists which fields may not be modified. In our case, this prevents users from tampering with the mailing list a `Subscriber` is in.
* `extra_kwargs`: This dictionary lets us provide extra arguments to each field's constructor without overriding the entire field. This is usually done to provide a `view_name` argument which is needed to look up the URL to a view.
* `'url': {'view_name': '...'},`: This provides the name of the `MailingList` API detail view.
* `'subscriber_set': {'view_name': '...'},`: This provides the name of the `Subscriber` API detail view.
There are actually two ways of marking a `Serializer`'s field read only. One way is using the `read_only_fields` attribute as in the preceding code sample. Another is to pass `read_only=True` as an argument to a `Field` class's constructor (for example, `email = serializers.EmailField(max_length=240, read_only=True)`).
Next, we'll create two serializers for our `Subscriber` model. The two serializers will have one difference: whether `Subscriber.email` is editable. We need to let users write to `Subscriber.email` when they're creating a `Subscriber`. However, we don't want them to be able to change the email after the `Subscriber` has been created.
First, let's create a `Serializer` for the `Subscriber` model in `django/mailinglist/serializers.py`:
from rest_framework import serializers

from mailinglist.models import Subscriber


class SubscriberSerializer(serializers.HyperlinkedModelSerializer):

    class Meta:
        model = Subscriber
        fields = ('url', 'id', 'email', 'confirmed', 'mailing_list')
        extra_kwargs = {
            'url': {'view_name': 'mailinglist:api-subscriber-detail'},
            'mailing_list': {'view_name': 'mailinglist:api-mailing-list-detail'},
        }
The `SubscriberSerializer` is just like our `MailingListSerializer`. We use many of the same elements:
* Subclassing `serializers.HyperlinkedModelSerializer`
* Declaring the related model using an inner `Meta` class's `model` attribute
* Declaring the related model's fields using an inner `Meta` class's `fields` attribute
* Giving the related model's detail view's name using the `extra_kwargs` dictionary and the `view_name` key.
For our next `Serializer` class, we'll create one just like `SubscriberSerializer` but make the `email` field read only; let's add it to `django/mailinglist/serializers.py`:
class ReadOnlyEmailSubscriberSerializer(serializers.HyperlinkedModelSerializer):

    class Meta:
        model = Subscriber
        fields = ('url', 'id', 'email', 'confirmed', 'mailing_list')
        read_only_fields = ('email', 'mailing_list',)
        extra_kwargs = {
            'url': {'view_name': 'mailinglist:api-subscriber-detail'},
            'mailing_list': {'view_name': 'mailinglist:api-mailing-list-detail'},
        }
This `Serializer` lets us update whether a `Subscriber` is `confirmed` or not, but it won't let the `Subscriber`'s `email` field change.
Now that we've created a few `Serializer`s, we can see how similar they are to Django's built-in `ModelForm`s. Next, let's create a permission class to prevent users from accessing each other's `MailingList` and `Subscriber` model instances.
# API permissions
In this section, we'll create a permission class that the Django REST framework will use to check whether a user may perform an operation on a `MailingList` or `Subscriber`. This will perform a very similar role to the `UserCanUseMailingList` mixin we created in Chapter 10, Starting Mail Ape.
Let's create our `CanUseMailingList` class in `django/mailinglist/permissions.py`:
from rest_framework.permissions import BasePermission

from mailinglist.models import Subscriber, MailingList


class CanUseMailingList(BasePermission):
    message = 'User does not have access to this resource.'

    def has_object_permission(self, request, view, obj):
        user = request.user
        if isinstance(obj, Subscriber):
            return obj.mailing_list.user_can_use_mailing_list(user)
        elif isinstance(obj, MailingList):
            return obj.user_can_use_mailing_list(user)
        return False
Let's take a closer look at some of the new elements introduced in our `CanUseMailingList` class:
* `BasePermission`: Provides the basic contract of a permission class, implementing the `has_permission()` and `has_object_permission()` methods to always return `True`
* `message`: This is the message that will be included in the `403` response body
* `def has_object_permission(...)`: Checks whether the request's user is the owner of the related `MailingList`
The `CanUseMailingList` class doesn't override `BasePermission.has_permission(self, request, view)` because the permissions in our system operate at the object level, rather than at the view or model level.
If you need a more dynamic permission system, you may want to use Django's built-in permission system (<https://docs.djangoproject.com/en/2.0/topics/auth/default/#permissions-and-authorization>) or Django Guardian (<https://github.com/django-guardian/django-guardian>).
Now that we have our `Serializer`s and permission class, we will write our API views.
# Creating our API views
In this section, we'll create the actual views that define Mail Ape's RESTful API. The Django REST framework offers a collection of class-based views that are similar to Django's suite of class-based views. One of the main differences between the DRF generic views and the Django generic views is how they combine multiple operations in a single view class. For example, DRF offers the `ListCreateAPIView` class but Django only offers a `ListView` class and a `CreateView` class. DRF offers a `ListCreateAPIView` class because a resource at `/api/v1/mailinglists` would be expected to provide both a list of `MailingList` model instances and a creation endpoint.
Django REST Framework also offers a suite of function decorators (<http://www.django-rest-framework.org/api-guide/views/#function-based-views>) so that you use function-based views too.
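For example, a minimal function-based DRF view (a hypothetical `ping` endpoint, not part of Mail Ape) would look like this:
from rest_framework.decorators import api_view
from rest_framework.response import Response


@api_view(['GET'])
def ping(request):
    # the decorator wraps the request and performs content negotiation for us
    return Response({'ping': 'pong'})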
Let's learn more about DRF views by creating our API, starting with the `MailingList` API views.
# Creating MailingList API views
Mail Ape will provide an API to create, read, update, and delete `MailingList`s. To support these operations, we will create the following two views:
* A `MailingListCreateListView` that extends `ListCreateAPIView`
* A `MailingListRetrieveUpdateDestroyView` that extends `RetrieveUpdateDestroyAPIView`
# Listing MailingLists by API
To support getting a list of a user's `MailingList` model instances and creating new `MailingList` model instances, we will create the `MailingListCreateListView` class in `django/mailinglist/views.py`:
from rest_framework import generics
from rest_framework.permissions import IsAuthenticated

from mailinglist.permissions import CanUseMailingList
from mailinglist.serializers import MailingListSerializer


class MailingListCreateListView(generics.ListCreateAPIView):
    permission_classes = (IsAuthenticated, CanUseMailingList)
    serializer_class = MailingListSerializer

    def get_queryset(self):
        return self.request.user.mailinglist_set.all()

    def get_serializer(self, *args, **kwargs):
        if kwargs.get('data', None):
            data = kwargs.get('data', None)
            owner = {
                'owner': self.request.user.id,
            }
            data.update(owner)
        return super().get_serializer(*args, **kwargs)
Let's review our `MailingListCreateListView` class in detail:
* `ListCreateAPIView`: This is the DRF generic view we extend. It responds to `GET` requests with the serialized contents returned by the `get_queryset()` method. When it receives a `POST` request, it will create and return a `MailingList` model instance.
* `permission_classes`: This is a collection of permission classes that will be called in order. If `IsAuthenticated` fails, then `CanUseMailingList` won't be called.
* `serializer_class = MailingListSerializer`: This is the serializer this view uses.
* `def get_queryset(self)`: This is used to get a `QuerySet` of models to serialize and return.
* `def get_serializer(...)`: This is used to get the serializer instance. In our case, we're overriding the owner (if any) that we received as input from the request with the currently logged in user. By doing so, we ensure that a user can't create a mailing list owned by another. This is very similar to how we might override `get_initial()` in a Django form view (for example, refer to the `CreateMessageView` class from Chapter 10, _Starting Mail Ape_ ).
Now that we have our view, let's add it to our URLConf in `django/mailinglist/urls.py` with the following code:
path('api/v1/mailing-list', views.MailingListCreateListView.as_view(),
     name='api-mailing-list-list'),
Now, we can create and list `MailingList` model instances by sending a request to `/mailinglist/api/v1/mailing-list`.
# Editing a mailing list via an API
Next, let's add a view to view, update, and delete a single `MailingList` model instance by adding a new view to `django/mailinglist/views.py`:
from rest_framework import generics
from rest_framework.permissions import IsAuthenticated

from mailinglist.permissions import CanUseMailingList
from mailinglist.serializers import MailingListSerializer
from mailinglist.models import MailingList


class MailingListRetrieveUpdateDestroyView(
        generics.RetrieveUpdateDestroyAPIView):
    permission_classes = (IsAuthenticated, CanUseMailingList)
    serializer_class = MailingListSerializer
    queryset = MailingList.objects.all()
`MailingListRetrieveUpdateDestroyView` looks very similar to our previous view but extends the `RetrieveUpdateDestroyAPIView` class. Like Django's built-in `DetailView`, `RetrieveUpdateDestroyAPIView` expects that it will receive the `pk` of the `MailingList` model instance it is to operate on in the request's path. `RetrieveUpdateDestroyAPIView` knows how to handle a variety of HTTP methods:
* On a `GET` request, it retrieves the model identified by the `pk` argument
* On a `PUT` request, it overwrites all the fields of the model identified by the `pk` with the fields received in the request
* On a `PATCH` request, it overwrites only the fields received in the request
* On a `DELETE` request, it deletes the model identified by the `pk`
Any updates (whether by `PUT` or by `PATCH`) are validated by the `MailingListSerializer`.
Another difference is that we define a `queryset` attribute for the view (`MailingList.objects.all()`) instead of a `get_queryset()` method. We don't need to restrict our `QuerySet` dynamically because the `CanUseMailingList` class will protect us from users editing/viewing `MailingLists` they don't have permission to access.
Just like before, we now need to connect our view to our app's URLConf in `django/mailinglist/urls.py` with the following code:
path('api/v1/mailinglist/<uuid:pk>',
     views.MailingListRetrieveUpdateDestroyView.as_view(),
     name='api-mailing-list-detail'),
Note that we parse the `<uuid:pk>` argument out of the request's path just like we do with some of Django's regular views that operate on a single model instance.
Now that we have our `MailingList` API, let's allow our users to manage `Subscriber`s by API as well.
# Creating a Subscriber API
In this section, we'll create an API to manage `Subscriber` model instances. This API will be powered by two views:
* `SubscriberListCreateView` to list and create `Subscriber` model instances
* `SubscriberRetrieveUpdateDestroyView` to retrieve, update, and delete a `Subscriber` model instance
# Listing and Creating Subscribers API
`Subscriber` model instances have an interesting difference from `MailingList` model instances in that `Subscriber` model instances are not directly related to a user. To get a list of `Subscriber` model instances, we need to know which `MailingList` model instance we should query. `Subscriber` model instance creation faces the same question, so both these operations will have to receive a related `MailingList`'s `pk` to execute.
Let's start by creating our `SubscriberListCreateView` in `django/mailinglist/views.py`:
from django.shortcuts import get_object_or_404
from django.urls import reverse

from rest_framework import generics
from rest_framework.permissions import IsAuthenticated

from mailinglist.permissions import CanUseMailingList
from mailinglist.serializers import SubscriberSerializer
from mailinglist.models import MailingList, Subscriber


class SubscriberListCreateView(generics.ListCreateAPIView):
    permission_classes = (IsAuthenticated, CanUseMailingList)
    serializer_class = SubscriberSerializer

    def get_queryset(self):
        mailing_list_pk = self.kwargs['mailing_list_pk']
        mailing_list = get_object_or_404(MailingList, id=mailing_list_pk)
        return mailing_list.subscriber_set.all()

    def get_serializer(self, *args, **kwargs):
        if kwargs.get('data'):
            data = kwargs.get('data')
            mailing_list = {
                'mailing_list': reverse(
                    'mailinglist:api-mailing-list-detail',
                    kwargs={'pk': self.kwargs['mailing_list_pk']})
            }
            data.update(mailing_list)
        return super().get_serializer(*args, **kwargs)
Our `SubscriberListCreateView` class has much in common with our `MailingListCreateListView` class, including the same base class and `permission_classes` attribute. Let's take a closer look at some of the differences:
* `serializer_class`: Uses a `SubscriberSerializer`.
* `get_queryset()`: Checks whether the related `MailingList` model instance identified in the URL exists before returning a `QuerySet` of all the related `Subscriber` model instances.
* `get_serializer()`: Ensures the new `Subscriber` is associated with the `MailingList` in the URL. We use the `reverse()` function to identify the associated `MailingList` model instance because the `SubscriberSerializer` class inherits from the `HyperlinkedModelSerializer` class. `HyperlinkedModelSerializer` wants related models to be identified by a hyperlink or path (not a `pk`).
Next, we will add a `path()` object for our `SubscriberListCreateView` class to the URLConf in `django/mailinglist/urls.py`:
path('api/v1/mailinglist/<uuid:mailing_list_pk>/subscribers',
     views.SubscriberListCreateView.as_view(),
     name='api-subscriber-list'),
When adding a `path()` object for our `SubscriberListCreateView` class, we will need to ensure that we have a `mailing_list_pk` parameter. This lets `SubscriberListCreateView` know which `Subscriber` model instances to operate on.
Our users are now able to add `Subscriber`s to their `MailingList`s via our RESTful API. Adding a `Subscriber` via our API will then trigger a confirmation email because `Subscriber.save()` will be called by our `SubscriberSerializer`. Our API doesn't need to know how to send emails because our _fat model_ is the expert on the behavior of `Subscriber`.
However, this API does present a potential bug in Mail Ape. Our current API lets us add a `Subscriber` who has already been confirmed. However, our `Subscriber.save()` method sends a confirmation email to the email address of every new `Subscriber` model instance. This can lead to us spamming already confirmed `Subscriber`s. To fix this bug, let's update `Subscriber.save()` in `django/mailinglist/models.py`:
class Subscriber(models.Model):
    # skipping unchanged attributes and methods

    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        is_new = self._state.adding or force_insert
        super().save(force_insert=force_insert, force_update=force_update,
                     using=using, update_fields=update_fields)
        if is_new and not self.confirmed:
            self.send_confirmation_email()
Now, we're only calling `self.send_confirmation_email()` if we're saving a new _and_ unconfirmed `Subscriber` model instance.
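We can sketch a quick regression test for this fix in `django/mailinglist/tests.py`, reusing the `MockSendEmailToSubscriberTask` mixin and the imports from our earlier tests (a sketch, assuming the task is queued via `delay()` as before):
class ConfirmedSubscriberCreationTestCase(
        MockSendEmailToSubscriberTask,
        TestCase):

    def test_no_email_for_already_confirmed_subscriber(self):
        user = get_user_model().objects.create_user(username='confirmed test')
        mailing_list = MailingList.objects.create(name='unit test', owner=user)
        Subscriber.objects.create(
            email='confirmed@example.com',
            confirmed=True,
            mailing_list=mailing_list)
        # the confirmation task should never be queued for a confirmed Subscriber
        self.assertEqual(self.send_confirmation_email_mock.delay.call_count, 0)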
Great! Now, let's create a view to retrieve, update, and delete a `Subscriber` model instance.
# Updating subscribers via an API
Now that we have create and list API operations for `Subscriber` model instances, we can create an API view for retrieving, updating, and deleting a single `Subscriber` model instance.
Let's add our view to `django/mailinglist/views.py`:
from rest_framework import generics
from rest_framework.permissions import IsAuthenticated

from mailinglist.permissions import CanUseMailingList
from mailinglist.serializers import ReadOnlyEmailSubscriberSerializer
from mailinglist.models import Subscriber


class SubscriberRetrieveUpdateDestroyView(
        generics.RetrieveUpdateDestroyAPIView):
    permission_classes = (IsAuthenticated, CanUseMailingList)
    serializer_class = ReadOnlyEmailSubscriberSerializer
    queryset = Subscriber.objects.all()
Our `SubscriberRetrieveUpdateDestroyView` is very similar to our `MailingListRetrieveUpdateDestroyView` view. Both inherit from the same `RetrieveUpdateDestroyAPIView` class to provide the core behavior in response to HTTP requests, and both use the same `permission_classes` list. However, `SubscriberRetrieveUpdateDestroyView` has two differences:
* `serializer_class = ReadOnlyEmailSubscriberSerializer`: This is a different `Serializer`. In the case of updates, we don't want the user to be able to change email addresses.
* `queryset = Subscriber.objects.all()`: This is a `QuerySet` of all `Subscribers`. We don't need to restrict the `QuerySet` because the `CanUseMailingList` will prevent unauthorized access.
Next, let's make sure that we can route to it by adding it to the `urlpatterns` list in `django/mailinglist/urls.py`:
path('api/v1/subscriber/<uuid:pk>',
     views.SubscriberRetrieveUpdateDestroyView.as_view(),
     name='api-subscriber-detail'),
Now that we have our views, let's try interacting with it on the command line.
# Running our API
In this section, we'll run Mail Ape on the command line and interact with our API on the command line using `curl`, a popular command-line tool used for interacting with servers. In this section, we will perform the following functions:
* Creating a user on the command line
* Creating a mailing list on the command line
* Getting a list of `MailingList`s on the command line
* Creating a `Subscriber` on the command line
* Getting a list of `Subscriber`s on the command line
Let's start by creating our user using the Django `manage.py shell` command:
**$ cd django
$ python manage.py shell
Python 3.6.3 (default)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from django.contrib.auth import get_user_model
In [2]: user = get_user_model().objects.create_user(username='user', password='secret')
In [3]: user.id
2**
If you've already registered a user using the web interface, you can use that user. Also, never use `secret` as your password in production.
Now that we have a user who we can use on the command line, let's start our local Django server:
**$ cd django
$ python manage.py runserver**
Now that our server is running, we can open a different shell and get a list of `MailingList`s for our user:
**$ curl "http://localhost:8000/mailinglist/api/v1/mailing-list" \
-u 'user:secret'
[]**
Let's take a closer look at our command:
* `curl`: This is the tool we're using.
* `"http://... api/v1/mailing-list"`: This is the URL we're sending our request to.
* `-u 'user:secret'`: This is the basic authentication credentials. `curl` takes care of encoding these correctly for us.
* `[]`: This is an empty JSON list returned by the server. In our case, `user` doesn't have any `MailingList`s yet.
We get a JSON response because the Django REST framework is configured to use JSON rendering by default.
To create a `MailingList` for our user, we will need to send a `POST` request like this:
**$ curl -X "POST" "http://localhost:8000/mailinglist/api/v1/mailing-list" \
-H 'Content-Type: application/json; charset=utf-8' \
-u 'user:secret' \
-d $'{
"name": "New List"
}'
{"url":"http://localhost:8000/mailinglist/api/v1/mailinglist/cd983e25-c6c8-48fa-9afa-1fd5627de9f1","id":"cd983e25-c6c8-48fa-9afa-1fd5627de9f1","name":"New List","owner":2,"subscriber_set":[]}**
This is a much longer command with a proportionately longer result. Let's take a look at each new argument:
* `-H 'Content-Type: application/json; charset=utf-8' \`: This adds a new HTTP `Content-Type` header to tell the server to parse the body as JSON.
* `-d $'{ ... }'`: This specifies the body of the request. In our case, we're sending a JSON object with the name of the new mailing list.
* `"url":"http://...cd983e25-c6c8-48fa-9afa-1fd5627de9f1"`: This is the URL for the full details of the new `MailingLIst`.
* `"name":"New List"`: This shows the name of the new list that we requested.
* `"owner":2`: This shows the ID of the owner of the list. This matches the ID of the user we created earlier and included in this request (using `-u`).
* `"subscriber_set":[]`: This shows that there are no subscribers in this mailing list.
We can now repeat our initial request to list `MailingList`s and check whether our new `MailingList` is included:
**$ curl "http://localhost:8000/mailinglist/api/v1/mailing-list" \
-u 'user:secret'
[{"url":"http://localhost:8000/mailinglist/api/v1/mailinglist/cd983e25-c6c8-48fa-9afa-1fd5627de9f1","id":"cd983e25-c6c8-48fa-9afa-1fd5627de9f1","name":"New List","owner":2,"subscriber_set":[]}]**
Seeing that we can run our server and API in development is great, but we don't want to always rely on manual testing. Let's take a look at how to automate testing our API next.
If you want to test creating subscribers, make sure that your Celery broker (for example, Redis) is running and that you've got a worker consuming tasks to get the full experience.
# Testing your API
APIs provide value to our users by letting them automate their interactions with our service. Naturally, DRF helps us automate testing our code as well.
DRF provides replacements for all the common Django tools we discussed in Chapter 8, _Testing Answerly_ :
* `APIRequestFactory` for Django's `RequestFactory` class
* `APIClient` for Django's `Client` class
* `APITestCase` for Django's `TestCase` class
`APIRequestFactory` and `APIClient` make it easier to send requests formatted for our API. For example, they provide an easy way to set credentials for a request that isn't relying on session-based authentication. Otherwise, the two classes serve the same purpose as their default Django equivalents.
The `APITestCase` class simply extends Django's `TestCase` class and replaces Django's `Client` with `APIClient`.
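`APIClient` also offers a `force_authenticate()` method that bypasses credentials entirely, which is handy when authentication itself isn't what's under test (here, `some_user` is a placeholder for any user instance):
from rest_framework.test import APIClient

client = APIClient()
client.force_authenticate(user=some_user)  # skip real credentials in tests
response = client.get('/mailinglist/api/v1/mailing-list')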
Let's take a look at an example that we can add to `django/mailinglist/tests.py`:
import base64
import json

from django.contrib.auth import get_user_model

from rest_framework.test import APITestCase

from mailinglist.models import MailingList


class ListMailingListsWithAPITestCase(APITestCase):

    def setUp(self):
        password = 'password'
        username = 'unit test'
        self.user = get_user_model().objects.create_user(
            username=username,
            password=password
        )
        cred_bytes = '{}:{}'.format(username, password).encode('utf-8')
        self.basic_auth = base64.b64encode(cred_bytes).decode('utf-8')

    def test_listing_all_my_mailing_lists(self):
        mailing_lists = [
            MailingList.objects.create(
                name='unit test {}'.format(i),
                owner=self.user)
            for i in range(3)
        ]
        self.client.credentials(
            HTTP_AUTHORIZATION='Basic {}'.format(self.basic_auth))

        response = self.client.get('/mailinglist/api/v1/mailing-list')

        self.assertEqual(200, response.status_code)
        parsed = json.loads(response.content)
        self.assertEqual(3, len(parsed))
        content = str(response.content)
        for ml in mailing_lists:
            self.assertIn(str(ml.id), content)
            self.assertIn(ml.name, content)
Let's take a closer look at the new code introduced in our `ListMailingListsWithAPITestCase` class:
* `class ListMailingListsWithAPITestCase(APITestCase)`: This makes `APITestCase` our parent class. The `APITestCase` class is basically a `TestCase` class with an `APIClient` object instead of the regular Django `Client` object assigned to the `client` attribute. We will use this class to test our view.
* `base64.b64encode(...)`: This does a base64 encoding of our username and password. We'll use this to provide an HTTP basic authentication header. We must use `base64.b64encode()` instead of `base64.encodebytes()` because the latter introduces whitespace to visually break up long strings. Also, we need to `encode`/`decode` our strings because `b64encode()` operates on `bytes` objects.
* `client.credentials()`: This lets us set an authentication header to be sent with all future requests made by this `client` object. In our case, we're sending an HTTP basic authentication header.
* `json.loads(response.content)`: This parses the content body of the response, returning a Python list.
* `self.assertEqual(3, len(parsed))`: This confirms that the number of items in the parsed list is correct.
If we were to send a second request using `self.client`, we would not need to re-authenticate because `client.credentials()` remembers what it received and continues passing it to all requests. We can clear the credentials by calling `client.credentials()`.
Now, we know how to test our API code!
# Summary
In this chapter, we covered how to use the Django REST framework to create a RESTful API for our Django project. We saw how the Django REST framework uses principles similar to those of Django forms and Django generic views. We also used some of the core classes in the Django REST framework: we used a `ModelSerializer` to build a `Serializer` based on a Django model, we used `ListCreateAPIView` to create a view that can list and create Django models, and we used `RetrieveUpdateDestroyAPIView` to manage a Django model instance based on its primary key.
Next, we'll deploy our code to the internet using Amazon Web Services.
# Deploying Mail Ape
In this chapter, we will deploy Mail Ape onto a virtual machine in the **Amazon Web Services** ( **AWS** ) cloud. AWS is composed of many different services. We've already discussed using S3 and launching a container into AWS. In this chapter, we will use a lot more AWS services. We will use the **Relational Database Service (RDS)** for a PostgreSQL database server. We will use the **Simple Queue Service (SQS)** for our Celery message queue. We will use **Elastic Computing Cloud (EC2)** to run virtual machines in the cloud. Finally, we will use CloudFormation to define our infrastructure as code.
In this chapter, we will do the following things:
* Separate production and development settings
* Use Packer to create an Amazon Machine Image of our release
* Use CloudFormation to define the infrastructure as code
* Launch Mail Ape into AWS using the command line
Let's start by separating our production and development settings.
# Separating development and production
So far, we've kept a single requirements file and a single `settings.py` file. This has made development convenient. However, we can't use our development settings in production.
The current best practice is to have a separate file per environment. Each environment's file then imports a common file with shared values. We'll use this pattern for our requirements and settings files.
Let's start by splitting up our requirements files.
# Separating our requirements files
To separate our requirements, we'll delete the existing `requirements.txt` file and replace it with common, development, and production requirements files. After we delete `requirements.txt`, let's create `requirements.common.txt` at the root of our project:
django<2.1
psycopg2<2.8
django-markdownify==0.3.0
django-crispy-forms==1.7.0
celery<4.2
django-celery-results<2.0
djangorestframework<3.8
factory_boy<3.0
Next, let's create our development requirements file, `requirements.development.txt`:
-r requirements.common.txt
celery[redis]
Since we only use Redis in our development setup, we'll keep the package in our development requirements file.
We'll put our production requirements in `requirements.production.txt` at the root of the project:
-r requirements.common.txt
celery[sqs]
boto3
pycurl
In order for Celery to work with SQS (the AWS message queue service), we will need to install the Celery SQS library (`celery[sqs]`). We will also install `boto3`, the Python AWS library, and `pycurl`, a Python `curl` implementation.
Next, let's separate our Django settings files.
# Creating common, development, and production settings
As in our previous chapters, before we sort our settings into three files, we'll create `common_settings.py` by renaming our current `settings.py` then making some changes.
Let's change `DEBUG = False` so that no new settings file can _accidentally_ be in debug mode. Then, let's change the secret key to be obtained from an environment variable by setting `SECRET_KEY = os.getenv('DJANGO_SECRET_KEY')`.
In the database config, we can remove all the credentials but keep the `ENGINE` (to make it clear that we intend to use Postgres everywhere):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
    }
}
Next, let's create a development settings file in `django/config/development_settings.py`:
from .common_settings import *
DEBUG = True
SECRET_KEY = 'secret key'
DATABASES['default']['NAME'] = 'mailape'
DATABASES['default']['USER'] = 'mailape'
DATABASES['default']['PASSWORD'] = 'development'
DATABASES['default']['HOST'] = 'localhost'
DATABASES['default']['PORT'] = '5432'
MAILING_LIST_FROM_EMAIL = 'mailape@example.com'
MAILING_LIST_LINK_DOMAIN = 'http://localhost'
EMAIL_HOST = 'smtp.example.com'
EMAIL_HOST_USER = 'username'
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_PASSWORD')
EMAIL_PORT = 587
EMAIL_USE_TLS = True
CELERY_BROKER_URL = 'redis://localhost:6379/0'
Remember that you need to change your `MAILING_LIST_FROM_EMAIL`, `EMAIL_HOST` and `EMAIL_HOST_USER` to your correct development values.
Next, let's put our production settings in `django/config/production_settings.py`:
from .common_settings import *

DEBUG = False

assert SECRET_KEY is not None, (
    'Please provide DJANGO_SECRET_KEY environment variable with a value')

ALLOWED_HOSTS += [
    os.getenv('DJANGO_ALLOWED_HOSTS'),
]

DATABASES['default'].update({
    'NAME': os.getenv('DJANGO_DB_NAME'),
    'USER': os.getenv('DJANGO_DB_USER'),
    'PASSWORD': os.getenv('DJANGO_DB_PASSWORD'),
    'HOST': os.getenv('DJANGO_DB_HOST'),
    'PORT': os.getenv('DJANGO_DB_PORT'),
})

LOGGING['handlers']['main'] = {
    'class': 'logging.handlers.WatchedFileHandler',
    'level': 'DEBUG',
    'filename': os.getenv('DJANGO_LOG_FILE')
}

MAILING_LIST_FROM_EMAIL = os.getenv('MAIL_APE_FROM_EMAIL')
MAILING_LIST_LINK_DOMAIN = os.getenv('DJANGO_ALLOWED_HOSTS')

EMAIL_HOST = os.getenv('EMAIL_HOST')
EMAIL_HOST_USER = os.getenv('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_HOST_PASSWORD')
EMAIL_PORT = os.getenv('EMAIL_HOST_PORT')
EMAIL_USE_TLS = os.getenv('EMAIL_HOST_TLS', 'false').lower() == 'true'

CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-west-2',
    'queue_name_prefix': 'mailape-',
}
Our production settings file gets most of its values from environment variables so that we don't check production values into source control. There are three settings we need to review, as follows:
* `MAILING_LIST_LINK_DOMAIN`: This is the domain that links in our emails will point to. In our case, in the preceding code snippet, we used the same domain that we added to our `ALLOWED_HOSTS` list, ensuring that we're serving the domain that the links point to.
* `CELERY_BROKER_TRANSPORT_OPTIONS`: This is a dictionary of options that configure Celery to use the correct SQS queue. We will need to set the region to `us-west-2` because our entire production deployment will be in that region. By default, Celery will want to use a queue called `celery`. However, we don't want that name to collide with other Celery projects we might deploy. To prevent name collisions, we will configure Celery to use the `mailape-` prefix.
* `CELERY_BROKER_URL`: This tells Celery which broker to use. In our case, we're using SQS. We will give our virtual machine access to SQS using AWS's role-based authorization so that we don't have to provide any credentials.
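To see how these settings fit together, here is a sketch of starting Django against the production settings module with a few of the environment variables it expects (all values are placeholders):
**$ export DJANGO_SETTINGS_MODULE=config.production_settings
$ export DJANGO_SECRET_KEY='a-long-random-string'
$ export DJANGO_ALLOWED_HOSTS=mailape.example.com
$ export DJANGO_DB_NAME=mailape DJANGO_DB_USER=mailape
$ export DJANGO_DB_PASSWORD='the-db-password'
$ export DJANGO_DB_HOST=db.example.com DJANGO_DB_PORT=5432
$ export DJANGO_LOG_FILE=/var/log/mailape/django.log
$ python manage.py check**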
Now that we have our production settings created, let's make our infrastructure in the AWS Cloud.
# Creating an infrastructure stack in AWS
In order to host an app on AWS, we will need to ensure that we have some infrastructure set up. We'll need the following things:
* A PostgreSQL server
* Security Groups to open network ports so that we can access our database and web server
* An InstanceProfile to give our deployed VM access to SQS
We could create all that using the AWS web console or using the command-line interface. However, over time, it can be hard to track how our infrastructure is configured if we rely on runtime tweaks. It would be much nicer if we could describe our required infrastructure in files that we could track in version control, much like we track our code.
AWS provides a service called CloudFormation, which lets us treat infrastructure as code. We will define our infrastructure in a CloudFormation template using YAML (JSON is also available, but we'll use YAML). We'll then execute our CloudFormation template to create a CloudFormation stack. The CloudFormation stack will be associated with actual resources in the AWS Cloud. If we delete the CloudFormation stack, the related resources will also be deleted. This gives us simple control over our use of AWS resources.
Let's create our CloudFormation template in `cloudformation/infrastructure.yaml`. Every CloudFormation template begins with a `Description` and the template format version information. Let's start our file with the following:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Our CloudFormation template will have the following three parts:
* `Parameters`: This is where we will describe values that we'll pass in at runtime. This block is optional but useful. In our case, we'll pass in the master database password rather than hardcoding it in our template.
* `Resources`: This is where we will describe the specific resources that our stack will contain. This will describe our database server, SQS queue, security groups, and InstanceProfile.
* `Outputs`: This is where we will describe the values to output to make referencing the resources we created easier. This block is optional but useful. We will provide the address of our database server and the ID of the InstanceProfile we created.
Let's start by creating the `Parameters` block of our CloudFormation template.
# Accepting parameters in a CloudFormation template
To avoid hardcoding values in a CloudFormation template, we can accept parameters. This helps us avoid hardcoding sensitive values (such as passwords) in a template.
Let's add a parameter to accept the password of our database server's master user:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
  MasterDBPassword:
    Description: Master Password for the RDS instance
    Type: String
This adds a `MasterDBPassword` parameter to our template. We will be able to reference this value later on. CloudFormation templates let us add two pieces of information for parameters:
* `Description`: This is not used by CloudFormation but is useful for the people who have to maintain our infrastructure.
* `Type`: CloudFormation uses this to check whether the value we provide is valid _before_ executing our template. In our case, the password is a `String`.
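When we eventually execute this template, we will supply the parameter on the command line. For example, with the AWS CLI it might look like this (the stack name and password are placeholders; `--capabilities CAPABILITY_IAM` is required because our template will create an IAM role):
**$ aws cloudformation create-stack \
 --stack-name mailape-infrastructure \
 --template-body file://cloudformation/infrastructure.yaml \
 --parameters ParameterKey=MasterDBPassword,ParameterValue='a-strong-password' \
 --capabilities CAPABILITY_IAM**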
Next, let's add a `Resources` block to define the AWS resources we'll need in our infrastructure.
# Listing resources in our infrastructure
Next, we will add a `Resources` block to our CloudFormation template in `cloudformation/infrastructure.yaml`. Our infrastructure template will define five resources:
* Security Groups, which will open network ports, permitting us to access our database and web servers
* Our database server
* Our SQS queue
* A Role that allows access to SQS
* An InstanceProfile, which lets our web servers assume the above Role
Let's start by creating the Security Groups, which will open the network ports by which we'll access our database and web servers.
# Adding Security Groups
In AWS, a SecurityGroup defines a set of network access rules, much like a network firewall. By default, virtual machines launched in AWS can _send_ data out on any network port but cannot _accept_ connections on any network port. That means that we can't connect using SSH or HTTP; let's fix that.
Let's update our CloudFormation template in `cloudformation/infrastructure.yaml` with three new Security Groups:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
  ...
Resources:
  SSHSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupName: ssh-access
      GroupDescription: permit ssh access
      SecurityGroupIngress:
        -
          IpProtocol: "tcp"
          FromPort: "22"
          ToPort: "22"
          CidrIp: "0.0.0.0/0"
  WebSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupName: web-access
      GroupDescription: permit http access
      SecurityGroupIngress:
        -
          IpProtocol: "tcp"
          FromPort: "80"
          ToPort: "80"
          CidrIp: "0.0.0.0/0"
  DatabaseSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupName: db-access
      GroupDescription: permit db access
      SecurityGroupIngress:
        -
          IpProtocol: "tcp"
          FromPort: "5432"
          ToPort: "5432"
          CidrIp: "0.0.0.0/0"
In the preceding code block, we defined three new SecurityGroups to open ports `22` (SSH), `80` (HTTP), and `5432` (default Postgres port).
Let's take a closer look at the syntax of a CloudFormation resource. Each resource block must have `Type` and `Properties` attributes. The `Type` attribute tells CloudFormation what this resource describes. The `Properties` attribute describes the settings for this particular resource.
Our SecurityGroups used the following properties:
* `GroupName`: This gives the group a human-friendly name. This is optional but recommended. CloudFormation can generate names for us instead. SecurityGroup names must be unique for a given account (for example, I can't have two `db-access` groups, but you and I can each have a `db-access` group).
* `GroupDescription`: This is a human-friendly description of the group's purpose. It is required.
* `SecurityGroupIngress`: This is a list of ports on which to accept incoming connections for VMs in this group.
* `FromPort`/`ToPort`: Often, these two settings will have the same value: the network port you want to be able to connect to. Together, they define a range of ports, with `FromPort` as the start of the range and `ToPort` as the end.
* `CidrIp`: This is an IPv4 range to accept connections from. `0.0.0.0/0` means accept all connections.
Next, let's add a database server to our list of resources.
# Adding a Database Server
AWS offers relational database servers as a service called **Relational Database Service** ( **RDS** ). To create a database server on AWS, we will create a new RDS VM (called an _instance_ ). One important thing to note is that when we launch an RDS instance, we can connect to the PostgreSQL database on the server, but we do not have shell access. We must run Django on a different VM.
Let's add an RDS instance to our CloudFormation template in `cloudformation/infrastructure.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
  ...
Resources:
  ...
  DatabaseServer:
    Type: AWS::RDS::DBInstance
    Properties:
      DBName: mailape
      DBInstanceClass: db.t2.micro
      MasterUsername: master
      MasterUserPassword: !Ref MasterDBPassword
      Engine: postgres
      AllocatedStorage: 20
      PubliclyAccessible: true
      VPCSecurityGroups:
        - !GetAtt DatabaseSecurityGroup.GroupId
Our new RDS instance entry is of the `AWS::RDS::DBInstance` type. Let's review the properties we set:
* `DBName`: Despite its name, for the `postgres` engine this is the name of an initial database created on the instance at launch, not a name for the server itself.
* `DBInstanceClass`: This defines the memory and processing power of the virtual machine of the server. At the time of writing this book, `db.t2.micro` is part of a free tier for accounts in their first year.
* `MasterUsername`: This is the username of the privileged administrator account on the server.
* `MasterUserPassword`: This is the password for the privileged administrator account.
* `!Ref MasterDBPassword`: This is the shortcut syntax to reference the `MasterDBPassword` parameter. This lets us avoid hardcoding the database server's administrator password.
* `Engine`: This is the type of Database server we want; in our case, `postgres` will give us a PostgreSQL server.
* `AllocatedStorage`: This indicates how much storage space the server should have, in gigabytes (GB).
* `PubliclyAccessible`: This indicates whether the server can be accessed from outside the AWS Cloud.
* `VPCSecurityGroups`: This is a list of SecurityGroups, indicating which ports are open and accessible.
* `!GetAtt DatabaseSecurityGroup.GroupId`: This returns the `GroupId` attribute of the `DatabaseSecurityGroup` security group.
This block also introduces us to CloudFormation's `Ref` and `GetAtt` functions. Both these functions let us reference other parts of our CloudFormation stack, which saves us from hardcoding values. `Ref` lets us use our `MasterDBPassword` parameter as the value of our database server's `MasterUserPassword`. `GetAtt` lets us reference the AWS-generated `GroupId` attribute of `DatabaseSecurityGroup` in our database server's list of `VPCSecurityGroups`.
AWS CloudFormation offers a variety of different functions to make building templates easier. They are documented in the AWS documentation online (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html>).
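Before creating a stack from a template, it can be worth running a syntax check. The CLI's `validate-template` subcommand catches malformed YAML and unknown intrinsic functions; the file path below is the same placeholder used later in this chapter:

**$ aws cloudformation validate-template \
    --template-body "file:///path/to/mailape/cloudformation/infrastructure.yaml" \
    --region us-west-2**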
Next, let's create the SQS Queue that Celery will use.
# Adding a Queue for Celery
SQS is the AWS message queue service. Using SQS, we can create a Celery-compatible message queue that we don't have to maintain. SQS can quickly scale to handle any number of requests we send it.
To define our queue, add it to our `Resources` block in `cloudformation/infrastructure.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
...
Resources:
...
MailApeQueue:
Type: "AWS::SQS::Queue"
Properties:
QueueName: mailape-celery
Our new resource is of the `AWS::SQS::Queue` type and has a single property, `QueueName`.
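Once the stack has been created (we'll do that at the end of this section), you can confirm the queue exists and fetch its URL, which is what Celery's SQS transport ultimately talks to:

**$ aws sqs get-queue-url \
    --queue-name mailape-celery \
    --region us-west-2**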
Next, let's create a role and InstanceProfile to let our production servers access our SQS queue.
# Creating a Role for Queue access
Earlier, in the _Adding Security Groups_ section, we discussed creating SecurityGroups to open network ports so that we could make a network connection. To manage access among AWS resources, we will need to use role-based authorization. In role-based authorization, we define a role, who can be assigned that role (who can _assume_ the role), and what actions that role can perform. In order for our web servers to use that role, we will need to create an EC2 instance profile that is associated with that role.
Let's start by adding a role to `cloudformation/infrastructure.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
...
Resources:
...
SQSAccessRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
-
Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
-
PolicyName: "root"
PolicyDocument:
Version: "2012-10-17"
Statement:
-
Effect: Allow
Action: "sqs:*"
Resource: !GetAtt MailApeQueue.Arn
-
Effect: Allow
Action: sqs:ListQueues
Resource: "*"
Our new block is of the `AWS::IAM::Role` type. IAM is short for AWS Identity and Access Management. Our role is composed of the following two properties:
* `AssumeRolePolicyDocument`: This defines who may be assigned this role. In our case, we're saying that this role may be assumed by any object in Amazon's EC2 service. Later, we'll use it in our EC2 instances.
* `Policies`: This is a list of allowed (or denied) actions for this role. In our case, we're permitting all SQS operations (`sqs:*`) on our previously defined SQS Queue. We reference our queue by getting its `Arn`, **Amazon Resource Name** ( **ARN** ), with the `GetAtt` function. ARNs are Amazon's way of providing each resource on the Amazon cloud with a globally unique ID.
Now that we have our role, we can associate it with an `InstanceProfile` resource, which can be associated with our web servers:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
...
Resources:
...
SQSClientInstance:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- !Ref SQSAccessRole
Our new InstanceProfile is of the `AWS::IAM::InstanceProfile` type and needs a list of associated roles. In our case, we simply reference our previously created `SQSAccessRole` using the `Ref` function.
Now that we've created our infrastructure resources, let's output the address of our database and the ARN of our `InstanceProfile` resource.
# Outputting our resource information
CloudFormation templates can have an `Outputs` block to make it easier to reference the created resources. In our case, we will output the address of our database server and the ARN of our `InstanceProfile`.
Let's update our CloudFormation template in `cloudformation/infrastructure.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape Infrastructure
Parameters:
...
Resources:
...
Outputs:
DatabaseDNS:
Description: Public DNS of RDS database
Value: !GetAtt DatabaseServer.Endpoint.Address
SQSClientProfile:
Description: Instance Profile for EC2 instances that need SQS Access
Value: !GetAtt SQSClientInstance.Arn
In the preceding code, we're using the `GetAtt` function to return the address of our `DatabaseServer` resource and the ARN of our `SQSClientInstance` `InstanceProfile` resource.
# Executing our template to create our resources
Now that we've created our CloudFormation template, we can create a CloudFormation stack. When we tell AWS to create our stack, it will create all the resources described in our template.
To create our stack, we will need the following two things:
* The AWS **command-line interface** ( **CLI** )
* An AWS access key/secret key pair
We can install the AWS CLI using `pip`:
**$ pip install awscli**
To get (or create) your access key/secret key pair, you will need access to the Security Credential (<https://console.aws.amazon.com/iam/home?region=us-west-2#/security_credential>) section of your AWS Console.
Then we need to configure the AWS command line tool with our key and region. The `aws` command offers an interactive `configure` subcommand to do this. Let's run it on the command line:
**$ aws configure**
**AWS Access Key ID [None]: <Your ACCESS key>**
**AWS Secret Access Key [None]: <Your secret key>**
**Default region name [None]: us-west-2**
**Default output format [None]: json**
The `aws configure` command stores the values you entered in a `.aws` directory in your home directory.
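Concretely, `aws configure` writes two small INI-style files under that directory. Assuming the answers above, they should look roughly like this (a sketch; key values elided):
`~/.aws/credentials`:
[default]
aws_access_key_id = <Your access key>
aws_secret_access_key = <Your secret key>
`~/.aws/config`:
[default]
region = us-west-2
output = json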
With those setups, we can now create our stack:
**$ aws cloudformation create-stack \
--stack-name "infrastructure" \
--template-body "file:///path/to/mailape/cloudformation/infrastrucutre.yaml" \
--capabilities CAPABILITY_NAMED_IAM \
--parameters \
"ParameterKey=MasterDBPassword,ParameterValue=password" \
--region us-west-2**
Creating a stack can take some time, so the command returns without waiting for success. Let's take a closer look at our `create-stack` command:
* `--stack-name`: This is the name of stack we're creating. Stack names must be unique per account.
* `--template-body`: This is either the template itself, or, in our case, a `file://` URL to the template file. Remember that `file://` URLs require the absolute path to the file.
* `--capabilities CAPABILITY_NAMED_IAM`: This is required for templates that create or affect **Identity and Access Management** ( **IAM** ) services. This prevents accidentally affecting access management services.
* `--parameters`: This lets us pass in values for a template's parameters. In our case, we set the master password for our database as `password`, which is not a safe value.
* `--region`: The AWS Cloud is organized as a collection of regions across the world. In our case, we're using `us-west-2`, which is located in data centers in the US state of Oregon.
Remember that you need to set a secure master password for your database.
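A quick way to generate one is with any random-string tool, for example (note that RDS master passwords may not contain the `/`, `@`, or `"` characters):

**$ openssl rand -base64 32**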
To take a look at how stack creation is doing, we can check it using AWS Web Console (<https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2>) or using the command line:
**$ aws cloudformation describe-stacks \
--stack-name "infrastructure" \
--region us-west-2**
When the stack is done creating the related resources, it will return a result similar to this:
{
"Stacks": [
{
"StackId": "arn:aws:cloudformation:us-west-2:XXX:stack/infrastructure/NNN",
"StackName": "infrastructure",
"Description": "Mail Ape Infrastructure",
"Parameters": [
{
"ParameterKey": "MasterDBPassword",
"ParameterValue": "password"
}
],
"StackStatus": "CREATE_COMPLETE",
"Outputs": [
{
"OutputKey": "SQSClientProfile",
"OutputValue": "arn:aws:iam::XXX:instance-profile/infrastructure-SQSClientInstance-XXX",
"Description": "Instance Profile for EC2 instances that need SQS Access"
},
{
"OutputKey": "DatabaseDNS",
"OutputValue": "XXX.XXX.us-west-2.rds.amazonaws.com",
"Description": "Public DNS of RDS database"
}
],
}
]
}
Two things to pay particular attention to in the `describe-stacks` result are as follows:
* The object under the `Parameters` key, which will show our master database password in plain text
* The objects under the `Outputs` key, which show the ARN of our `InstanceProfile` resource and the address of our database server
In all the previous code, I've replaced values specific to my account with XXX. Your output will differ.
If you want to remove the resources associated with your stack, you can just delete the stack:
**$ aws cloudformation delete-stack --stack-name "infrastructure"**
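Like `create-stack`, `delete-stack` returns immediately. The CLI also ships waiters that block until an operation finishes, which is handy in scripts (there is a matching `stack-create-complete` waiter for creation):

**$ aws cloudformation wait stack-delete-complete \
    --stack-name "infrastructure" \
    --region us-west-2**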
Next, we'll build an Amazon Machine Image, which we'll use to run Mail Ape in AWS.
# Building an Amazon Machine Image with Packer
Now that we have our infrastructure running in AWS, let's build our Mail Ape server. In AWS, we could launch an official Ubuntu VM, follow the steps in Chapter 9, _Deploying Answerly_ , and have Mail Ape running. However, AWS treats EC2 instances as _ephemeral_. If an EC2 instance gets terminated, then we have to launch a new instance and configure it all over again. There are a few ways to mitigate this problem. We'll solve the problem of ephemeral EC2 instances by building a new **Amazon Machine Image** ( **AMI** ) for our release. Then, any time we launch an EC2 instance using that AMI, it will already be perfectly configured.
We will automate building our AMIs using HashiCorp's Packer tool. Packer gives us a way of creating an AMI from a Packer template. A Packer template is a JSON file that defines the steps needed to configure an EC2 instance into our desired state and save the result as an AMI. For our Packer template to run, we'll also write a collection of shell scripts to configure our AMI. Using a tool like Packer, we can automate building a new release AMI.
Let's start by installing Packer on our machines.
# Installing Packer
Get Packer from the <https://www.packer.io> download page. Packer is available for all major platforms.
Next, we'll create a script to make the directories we'll rely on in production.
# Creating a script to create our directory structure
The first script we will write will create directories for all our code. Let's add the following script to our project in `scripts/make_aws_directories.sh`:
#!/usr/bin/env bash
set -e
sudo mkdir -p \
/mailape/ubuntu \
/mailape/apache \
/mailape/django \
/var/log/celery \
/etc/mailape \
/var/log/mailape
sudo chown -R ubuntu /mailape
In the preceding code, we're using `mkdir -p` to make the directories. Next, we want to make sure that the `ubuntu` user can write to the `/mailape` directories, so we recursively `chown` the `/mailape` directory.
Next, let's create a script to install the Ubuntu packages we require.
# Creating a script to install all our packages
In our production environment, we will have to install Ubuntu packages as well as the Python packages we've already listed. First, let's list all our Ubuntu packages in `ubuntu/packages.txt`:
python3
python3-pip
python3-dev
virtualenv
apache2
libapache2-mod-wsgi-py3
postgresql-client
libcurl4-openssl-dev
libssl-dev
Next, let's create a script to install all the packages in `scripts/install_all_packages.sh`:
#!/usr/bin/env bash
set -e
sudo apt-get update
sudo apt install -y $(cat /mailape/ubuntu/packages.txt | grep -i '^[a-z]')
virtualenv -p $(which python3) /mailape/virtualenv
source /mailape/virtualenv/bin/activate
pip install -r /mailape/requirements.production.txt
sudo chown -R www-data /var/log/mailape \
/etc/mailape \
/var/run/celery \
/var/log/celery
In the preceding script, we install the Ubuntu packages we listed earlier and then create a `virtualenv` to isolate Mail Ape's Python environment and packages. Finally, we give Apache (the `www-data` user) ownership of some directories so that it can write to them. We couldn't give `www-data` ownership of these directories in our earlier script because the user doesn't exist until the `apache2` package is installed.
Next, let's configure Apache to run Mail Ape using mod_wsgi.
# Configuring Apache
Now, we'll add Apache mod_wsgi configuration just like we did in Chapter 9, _Deploying Answerly_. The mod_wsgi configuration isn't the focus of this chapter, so refer to Chapter 9, _Deploying Answerly_ , for details of how this configuration works.
Let's create a virtual host configuration file for Mail Ape in `apache/mailape.apache.conf`:
LogLevel info
WSGIRestrictEmbedded On
<VirtualHost *:80>
WSGIDaemonProcess mailape \
python-home=/mailape/virtualenv \
python-path=/mailape/django \
processes=2 \
threads=2
WSGIProcessGroup mailape
WSGIScriptAlias / /mailape/django/config/wsgi.py
<Directory /mailape/django/config>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
Alias /static/ /mailape/django/static_root
<Directory /mailape/django/static_root>
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
As we discussed in Chapter 9, _Deploying Answerly_ , there isn't a way to pass environment variables to our mod_wsgi Python processes, so we will need to update our project's `wsgi.py` just like we did in Chapter 9, _Deploying Answerly_.
Here's our new `django/config/wsgi.py`:
import os
import configparser
from django.core.wsgi import get_wsgi_application
if not os.environ.get('DJANGO_SETTINGS_MODULE'):
parser = configparser.ConfigParser()
parser.read('/etc/mailape/mailape.ini')
for name, val in parser['mod_wsgi'].items():
os.environ[name.upper()] = val
application = get_wsgi_application()
We discussed the preceding script in Chapter 9, _Deploying Answerly_. The only difference here is the file we parse, that is, `/etc/mailape/mailape.ini`.
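For reference, once the `make_mailape_environment_ini.sh` script (created later in this chapter) has run, `/etc/mailape/mailape.ini` should look roughly like this sketch (values abbreviated):
[mod_wsgi]
DJANGO_ALLOWED_HOSTS=mailape.example.com
DJANGO_DB_NAME=mailape
DJANGO_DB_USER=mailape
DJANGO_DB_PASSWORD=...
DJANGO_DB_HOST=...
DJANGO_DB_PORT=5432
DJANGO_LOG_FILE=/var/log/mailape/mailape.log
DJANGO_SECRET_KEY=...
DJANGO_SETTINGS_MODULE=config.production_settings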
Next, we will need to add our virtual host configuration to the Apache `sites-enabled` directory. Let's create a script to do that in `scripts/configure_apache.sh`:
#!/usr/bin/env bash
sudo rm /etc/apache2/sites-enabled/*
sudo ln -s /mailape/apache/mailape.apache.conf /etc/apache2/sites-enabled/000-mailape.conf
Now that we have a script to configure Apache in a production environment, let's configure our Celery workers to start.
# Configuring Celery
Now that we have Apache running Mail Ape, we will need to configure Celery to start and process our SQS queue. To start our Celery workers, we will use Ubuntu's systemd process management facility.
First, let's create a Celery service file to tell SystemD how to start Celery. We'll create the service file in `ubuntu/celery.service`:
[Unit]
Description=Mail Ape Celery Service
After=network.target
[Service]
Type=forking
User=www-data
Group=www-data
EnvironmentFile=/etc/mailape/celery.env
WorkingDirectory=/mailape/django
ExecStart=/bin/sh -c '/mailape/virtualenv/bin/celery multi start worker \
-A "config.celery:app" \
--logfile=/var/log/celery/%n%I.log --loglevel="INFO" \
--pidfile=/run/celery/%n.pid'
ExecStop=/bin/sh -c '/mailape/virtualenv/bin/celery multi stopwait worker \
--pidfile=/run/celery/%n.pid'
ExecReload=/bin/sh -c '/mailape/virtualenv/bin/celery multi restart worker \
-A "config.celery:app" \
--logfile=/var/log/celery/%n%I.log --loglevel="INFO" \
--pidfile=/run/celery/%n.pid'
[Install]
WantedBy=multi-user.target
Let's take a closer look at some of the options in this file:
* `After=network.target`: This means that SystemD should not start this until our server has connected to a network.
* `Type=forking`: This means that the `ExecStart` command will eventually start a new process that continues to run under its own process ID (PID).
* `User`: This indicates the user that will own the Celery processes. In our case, we're just going to reuse Apache's `www-data` user.
* `EnvironmentFile`: This names a file that will be read for environment variables; those values are set for all the `Exec` commands. We point it at the Mail Ape environment file we'll generate later in this chapter (`/etc/mailape/celery.env`).
* `ExecStart`: This is the command that will be executed to start Celery. In our case, we start multiple Celery workers. All our Celery commands will operate on our workers based on the process ID files they create. Celery will replace `%n` with the worker's ID.
* `ExecStop`: This is the command that will be executed to stop our Celery workers, based on their PID files.
* `ExecReload`: This is the command that will be executed to restart our Celery workers. Celery supports a `restart` command, so we will use that to perform the restart. However, this command must receive the same options as our `ExecStart` command.
We'll be placing our PID files in `/var/run/celery`, but we will need to make sure that the directory is created. `/var/run` is a special directory, which doesn't use a regular filesystem. We'll need to create a configuration file to tell Ubuntu to create `/var/run/celery`. Let's create this file in `ubuntu/tmpfiles-celery.conf`:
d /run/celery 0755 www-data www-data - -
This tells Ubuntu to create a directory, `/run/celery`, owned by the Apache user (`www-data`).
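The directory is created automatically at boot, but on a live instance you can also apply a tmpfiles entry immediately:

**$ sudo systemd-tmpfiles --create /etc/tmpfiles.d/celery.conf**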
Finally, let's create a script to put all these files in the right places on our server. We'll name this script `scripts/configure_celery.sh`:
#!/usr/bin/env bash
sudo ln -s /mailape/ubuntu/celery.service /etc/systemd/system/celery.service
sudo ln -s /mailape/ubuntu/celery.service /etc/systemd/system/multi-user.target.wants/celery.service
sudo ln -s /mailape/ubuntu/tmpfiles-celery.conf /etc/tmpfiles.d/celery.conf
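The script only links the files into place. In our AMI, the symlink into `multi-user.target.wants` means Celery starts automatically at boot; if you're testing by hand on a running instance, you would also tell systemd to reload its configuration and start the service, roughly like this:

**$ sudo systemctl daemon-reload
$ sudo systemctl start celery
$ sudo systemctl status celery**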
Now that Celery and Apache are configured, let's make sure that they have the correct environment configuration to run Mail Ape.
# Creating the environment configuration files
Both our Celery and mod_wsgi Python processes will need to pull configuration information out of the environment to connect to the right database, SQS Queue, and many other services. These are settings and values we don't want to check into our version control system (for example, passwords). However, we still need them to be set in a production environment. To create the files that define the environment that our processes will run in, we'll make the script in `scripts/make_mailape_environment_ini.sh`:
#!/usr/bin/env bash
ENVIRONMENT="
DJANGO_ALLOWED_HOSTS=${WEB_DOMAIN}
DJANGO_DB_NAME=mailape
DJANGO_DB_USER=mailape
DJANGO_DB_PASSWORD=${DJANGO_DB_PASSWORD}
DJANGO_DB_HOST=${DJANGO_DB_HOST}
DJANGO_DB_PORT=5432
DJANGO_LOG_FILE=/var/log/mailape/mailape.log
DJANGO_SECRET_KEY=${DJANGO_SECRET}
DJANGO_SETTINGS_MODULE=config.production_settings
MAIL_APE_FROM_EMAIL=admin@blvdplatform.com
EMAIL_HOST=${EMAIL_HOST}
EMAIL_HOST_USER=mailape
EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
EMAIL_HOST_PORT=587
EMAIL_HOST_TLS=true
"
INI_FILE="[mod_wsgi]
${ENVIRONMENT}
"
echo "${INI_FILE}" | sudo tee "/etc/mailape/mailape.ini"
echo "${ENVIRONMENT}" | sudo tee "/etc/mailape/celery.env"
Our `make_mailape_environment_ini.sh` script has some values hardcoded but references others (for example, passwords) as environment variables. We'll pass the values for these variables into Packer at runtime. Packer will then pass these values to our script.
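If you want to test the script locally before wiring it into Packer, you can supply the variables inline (values elided here; the script writes to `/etc/mailape`, so it needs `sudo` rights):

**$ WEB_DOMAIN=mailape.example.com \
    DJANGO_DB_HOST=... \
    DJANGO_DB_PASSWORD=... \
    DJANGO_SECRET=... \
    EMAIL_HOST=... \
    EMAIL_HOST_PASSWORD=... \
    bash scripts/make_mailape_environment_ini.sh**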
Next, let's make our Packer template to build our AMI.
# Making a Packer template
Packer creates an AMI based on the instructions listed in a Packer template file. A Packer template is a JSON file made up of three top-level keys:
* `variables`: This will let us set values (such as passwords) at runtime
* `builders`: This specifies the cloud platform-specific details, such as AWS credentials
* `provisioners`: These are the instructions Packer will execute to make our image
Let's create our Packer template in `packer/web_worker.json`, starting with the `variables` section:
{
"variables": {
"aws_access_key": "",
"aws_secret_key": "",
"django_db_password":"",
"django_db_host":"",
"django_secret":"",
"email_host":"",
"email_host_password":"",
"mail_ape_aws_key":"",
"mail_ape_secret_key":"",
"sqs_celery_queue":"",
"web_domain":""
}
}
Under the `variables` key, we list all the variables we want our template to accept, as the keys of a JSON object. If a variable has a default value, then we can provide it as the value for that variable's key.
Next, let's add a `builders` section to configure Packer to use AWS:
{
"variables": {...},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-west-2",
"source_ami": "ami-78b82400",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "mailape-{{timestamp}}",
"tags": {
"project": "mailape"
}
}
]
}
`builders` is an array because we could use the same template to build a machine image on multiple platforms (for example, AWS and Google Cloud). Let's take a look at each option in detail:
* `"type": "amazon-ebs"`: Tells Packer we're creating an Amazon Machine Image with Elastic Block Storage. This is the preferred configuration due to the flexibility it offers.
* `"access_key": "{{user aws_access_key }}"`: This is the access key Packer should use to authenticate itself with AWS. Packer includes its own template language so that values can be generated at runtime. Any value between `{{ }}` is generated by the Packer template engine. The template engine offers a `user` function, which takes the name of the user-provided variable and returns its value. For example, `{{user aws_access_key }}` will be replaced by the value the user provided to `aws_access_key` when running Packer.
* `"secret_key": "{{user aws_secret_key }}"`: This is the same for the AWS Secret Key.
* `"region": "us-west-2"`: This specifies the AWS region. All our work will be done in `us-west-2`.
* `"source_ami": "ami-78b82400"`: This is the image that we're going to customize to make our image. In our case, we're using an official Ubuntu AMI. Ubuntu offers an EC2 AMI locator (<http://cloud-images.ubuntu.com/locator/ec2/>) to help find their office AMIs.
* `"instance_type": "t2.micro"`: This is a small inexpensive instance that, at the time of writing this book, falls under the AWS free tier.
* `"ssh_username": "ubuntu"`: Packer performs all its operations on the virtual machine over SSH. This is the username it should use for authentication. Packer will generate its own key pair for authentication, so we don't have to worry about specifying a password or key.
* `"ami_name": "mailape-{{timestamp}}"`: The name of the resulting AMI. `{{timestamp}}` is a function that returns the UTC time in seconds since the Unix epoch.
* `"tags": {...}`: Tagging resources makes it easier to identify resources in AWS. This is optional but recommended.
Now that we've specified our AWS builder, we will need to specify our provisioners.
Packer provisioners are the instructions that customize the server. In our case, we will use the following two types of provisioners:
* `file` provisioners to upload our code to the server
* `shell` provisioners to execute our scripts and commands
First, let's add our `make_aws_directories.sh` script, as we'll need it to run first:
{
"variables": {...},
"builders": [...],
"provisioners": [
{
"type": "shell",
"script": "{{template_dir}}/../scripts/make_aws_directories.sh"
}
]
}
A `shell` provisioner with a `script` property will upload, execute, and remove the script. Packer provides the `{{template_dir}}` function, which returns the directory containing the template. This lets us avoid hardcoding absolute paths. The first provisioner we execute runs the `make_aws_directories.sh` script we created earlier in this section.
Now that our directories exist, let's copy our code and files over using `file` provisioners:
{
"variables": {...},
"builders": [...],
"provisioners": [
...,
{
"type": "file",
"source": "{{template_dir}}/../requirements.common.txt",
"destination": "/mailape/requirements.common.txt"
},
{
"type": "file",
"source": "{{template_dir}}/../requirements.production.txt",
"destination": "/mailape/requirements.production.txt"
},
{
"type": "file",
"source": "{{template_dir}}/../ubuntu",
"destination": "/mailape/ubuntu"
},
{
"type": "file",
"source": "{{template_dir}}/../apache",
"destination": "/mailape/apache"
},
{
"type": "file",
"source": "{{template_dir}}/../django",
"destination": "/mailape/django"
}
]
}
`file` provisioners upload local files or directories defined by `source` to the server at `destination`.
Since we uploaded our Python code from our working directory, we need to be careful of old `.pyc` files hanging around. Let's make sure that we delete those on our production server:
{
"variables": {...},
"builders": [...],
"provisioners": [
...,
{
"type": "shell",
"inline": "find /mailape/django -name '*.pyc' -delete"
}
]
}
A `shell` provisioner can receive an `inline` attribute. The provisioner will then execute the `inline` command on the server.
Finally, let's execute the rest of the scripts we created:
{
"variables": {...},
"builders": [...],
"provisioners": [
...,
{
"type": "shell",
"scripts": [
"{{template_dir}}/../scripts/install_all_packages.sh",
"{{template_dir}}/../scripts/configure_apache.sh",
"{{template_dir}}/../scripts/make_mailape_environment_ini.sh",
"{{template_dir}}/../scripts/configure_celery.sh"
],
"environment_vars": [
"DJANGO_DB_HOST={{user `django_db_host`}}",
"DJANGO_DB_PASSWORD={{user `django_db_password`}}",
"DJANGO_SECRET={{user `django_secret`}}",
"EMAIL_HOST={{user `email_host`}}",
"EMAIL_HOST_PASSWORD={{user `email_host_password`}}",
"WEB_DOMAIN={{user `web_domain`}}"
]
}
]
}
In this case, the `shell` provisioner has received `scripts` and `environment_vars`. `scripts` is an array of paths to shell scripts. Each item in the array will be uploaded and executed. When executing each script, this `shell` provisioner will add the environment variables listed in `environment_vars`. The `environment_vars` parameter is optionally available to all `shell` provisioners to provide extra environment variables.
With our final provisioner added to our file, we've now finished our Packer template. Let's use Packer to execute the template and build our Mail Ape production server.
# Running Packer to build an Amazon Machine Image
With Packer installed and a Mail Ape production server Packer template created, we're ready to build our **Amazon Machine Image** ( **AMI** ).
Let's run Packer to build our AMI:
**$ packer build \
-var "aws_access_key=..." \
-var "aws_secret_key=..." \
-var "django_db_password=..." \
-var "django_db_host=A.B.us-west-2.rds.amazonaws.com" \
-var "django_secret=..." \
-var "email_host=smtp.example.com" \
-var "email_host_password=..." \
-var "web_domain=mailape.example.com" \
packer/web_worker.json
Build 'amazon-ebs' finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-west-2: ami-XXXXXXXX**
Packer will output the AMI ID of our new AMI image. We'll be able to use this AMI to launch an EC2 instance in the AWS Cloud.
If your template fails due to a missing Ubuntu package, retry the build. At the time of writing this book, the Ubuntu package repositories do not always update successfully.
Now that we have our AMI, we can deploy it.
# Deploying a scalable self-healing web app on AWS
Now that we have our infrastructure and a deployable AMI, we can deploy Mail Ape on AWS. Rather than launching a single EC2 instance from our AMI, we will deploy our app using CloudFormation. We'll define a set of resources that will let us scale our app up and down as needed. We'll define the following three resources:
* An Elastic Load Balancer to distribute requests among our EC2 instances
* An AutoScaling Group to launch and terminate EC2 instances
* A LaunchConfig to describe what kind of EC2 instances to launch
First, let's make sure that we have an SSH key in case we need to access any of our EC2 instances to troubleshoot problems after we deploy.
# Creating an SSH key pair
To create an SSH key pair in AWS, we can use the following AWS command line:
**$ aws ec2 create-key-pair --key-name mail_ape_production --region us-west-2
{
"KeyFingerprint": "XXX",
"KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----\nXXX\n-----END RSA PRIVATE KEY-----",
"KeyName": "tom-cli-test"
}**
Ensure that you copy the `KeyMaterial` value to your SSH client's configuration directory (typically, `~/.ssh`)—remember to replace `\n` with actual new lines.
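You can skip the hand-editing entirely by having the CLI extract the key for you; `--query` takes a JMESPath expression and is available on every AWS CLI command. (If you already created the key above, delete it first or pick another name; AWS won't return the private key a second time.)

**$ aws ec2 create-key-pair \
    --key-name mail_ape_production \
    --region us-west-2 \
    --query 'KeyMaterial' \
    --output text > ~/.ssh/mail_ape_production.pem
$ chmod 600 ~/.ssh/mail_ape_production.pem**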
Next, let's start our Mail Ape deployment CloudFormation template.
# Creating the web servers CloudFormation template
Next, let's create a CloudFormation template to deploy Mail Ape servers to the cloud. We'll use CloudFormation to tell AWS how to scale our servers and relaunch them, should a disaster strike. We'll tell CloudFormation to create the following three resources:
* An **Elastic Load Balancer** ( **ELB** ), which will be able to distribute requests among our servers
* A LaunchConfig, which will describe the AMI, instance type, and other details of the EC2 instances we want to use.
* An AutoScaling Group, which will monitor to ensure that we have the right number of healthy EC2 instances.
These three resources are the core of building any kind of scaling self-healing AWS application.
Let's start building our CloudFormation template in `cloudformation/web_worker.yaml`. Our new template will have the same three sections as `cloudformation/infrastructure.yaml`: `Parameters`, `Resources`, and `Outputs`.
Let's start by adding the `Parameters` section.
# Accepting parameters in the web worker CloudFormation template
Our web worker CloudFormation template will accept the AMI to launch and the InstanceProfile to be used as parameters. This means that we won't have to hardcode the names of the resources we created with Packer and our infrastructure stack, respectively.
Let's create our template in `cloudformation/web_worker.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
WorkerAMI:
Description: Worker AMI
Type: String
InstanceProfile:
Description: the instance profile
Type: String
Now that we have the AMI and the InstanceProfile for our EC2 instances, let's create our CloudFormation stack's resources.
# Creating Resources in our web worker CloudFormation template
Next, we'll define the **Elastic Load Balancer** ( **ELB** ), Launch Config, and AutoScaling Group. These three resources are the core of most scalable AWS web applications. We'll take a look at how they interact as we build up our template.
First, let's add our Load Balancer:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
...
Resources:
LoadBalancer:
Type: "AWS::ElasticLoadBalancing::LoadBalancer"
Properties:
LoadBalancerName: MailApeLB
Listeners:
-
InstancePort: 80
LoadBalancerPort: 80
Protocol: HTTP
In the preceding code, we're adding a new resource called `LoadBalancer` of the `AWS::ElasticLoadBalancing::LoadBalancer` type. An ELB needs a name (`MailApeLB`) and a list of `Listeners`. Each `Listeners` entry should define the port our ELB is listening on (`LoadBalancerPort`), the instance port the request will be forwarded to (`InstancePort`), and the protocol to use (in our case, `HTTP`).
An ELB will be responsible for distributing HTTP requests across however many EC2 instances we launch to handle our load.
Next, we'll create a LaunchConfig to tell AWS how to launch a new Mail Ape web worker:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
...
Resources:
LoadBalancer:
...
LaunchConfig:
Type: "AWS::AutoScaling::LaunchConfiguration"
Properties:
ImageId: !Ref WorkerAMI
KeyName: mail_ape_production
SecurityGroups:
- ssh-access
- web-access
InstanceType: t2.micro
IamInstanceProfile: !Ref InstanceProfile
A Launch Config is of the `AWS::AutoScaling::LaunchConfiguration` type and describes the configuration of a new EC2 instance that an Auto Scaling Group should launch. Let's go through all the `Properties` to ensure that we understand what they mean:
* `ImageId`: This is the ID of the AMI we want the instances to run. In our case, we're using the `Ref` function to get the AMI ID from the `WorkerAMI` parameter.
* `KeyName`: This is the name of the SSH key that will be added to this machine. This is useful if we ever need to troubleshoot something live. In our case, we're using the name of the SSH key pair we created earlier in this chapter.
* `SecurityGroups`: This is a list of Security Group names that define what ports AWS is to open. In our case, we're listing the names of the web and SSH groups we created in our infrastructure stack.
* `InstanceType`: This indicates the instance type of our EC2 instances. An instance type defines the computing and memory resources available to our EC2 instance. In our case, we're using a very small affordable instance that is (at the time of writing this book) covered by the AWS Free Tier during the first year.
* `IamInstanceProfile`: This indicates the `InstanceProfile` for our EC2 instances. Here, we're using the `Ref` function to reference the `InstanceProfile` parameter. When we create our stack, we'll use the ARN of the InstanceProfile we created earlier that gives our EC2 instances access to SQS.
Next, we'll define the AutoScaling Group that launches the EC2 instances that the Launch Config describes to serve requests forwarded by the ELB:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
...
Resources:
LoadBalancer:
...
LaunchConfig:
...
WorkerGroup:
Type: "AWS::AutoScaling::AutoScalingGroup"
Properties:
LaunchConfigurationName: !Ref LaunchConfig
MinSize: 1
MaxSize: 3
DesiredCapacity: 1
LoadBalancerNames:
- !Ref LoadBalancer
Our new **Auto Scaling Group** ( **ASG** ) is of the `AWS::AutoScaling::AutoScalingGroup` type. Let's go through its properties:
* `LaunchConfigurationName`: This is the name of the `LaunchConfiguration` this ASG should use when launching new instances. In our case, we use the `Ref` function to reference `LaunchConfig`, the Launch Configuration we created above.
* `MinSize`/`MaxSize`: These required attributes set the minimum and maximum number of instances this group may contain. These values protect us from accidentally deploying too many instances, which may negatively affect either our system or our monthly bill. In our case, we make sure that there is at least one (`1`) instance but no more than three (`3`).
* `DesiredCapacity`: This tells the ASG how many healthy EC2 instances should be running. If an instance fails, bringing the number of healthy instances below the `DesiredCapacity` value, the ASG will use its Launch Configuration to launch more instances.
* `LoadBalancerNames`: This is a list of ELBs that can route requests to the instances launched by this ASG. When a new EC2 instance becomes a part of this ASG, it will also be added to the list of instances the named ELBs route requests to. In our case, we use the `Ref` function to reference the ELB we defined earlier in this template.
These three tools work together to help us make our Django app scale quickly and smoothly. The ASG gives us a way of saying how many Mail Ape EC2 instances we want running. The Launch Config describes how to launch a new Mail Ape EC2 instance. The ELB will then distribute the requests to all the instances that the ASG launched.
Now that we have our resources, let's output some of the most relevant data to make the rest of our deployment easy.
# Outputting resource names
The final section we'll add to our CloudFormation template is `Outputs` to make it easier to note the address of our ELB and the name of our ASG. We'll need the address of our ELB to add a CNAME record to `mailape.example.com`. We'll need the name of our ASG if we need to access our instances (for example, to run our migrations).
Let's update `cloudformation/web_worker.yaml` with an `Outputs` section:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
...
Resources:
LoadBalancer:
...
LaunchConfig:
...
WorkerGroup:
...
Outputs:
LoadBalancerDNS:
Description: Load Balancer DNS name
Value: !GetAtt LoadBalancer.DNSName
AutoScalingGroupName:
Description: Auto Scaling Group name
Value: !Ref WorkerGroup
The value of `LoadBalancerDNS` will be the DNS name of the ELB we created above. For `AutoScalingGroupName`, calling `Ref` on an ASG resource returns the name of the ASG.
Next, let's create a stack for our Mail Ape 1.0 release.
# Creating the Mail Ape 1.0 release stack
Now that we have our Mail Ape web worker CloudFormation template, we can create a CloudFormation stack. When we create the stack, the stack will create its related resources such as the ELB, ASG, and Launch Config. We'll use the AWS CLI to create our stack:
**$ aws cloudformation create-stack \
--stack-name "mail_ape_1_0" \
--template-body "file:///path/to/mailape/cloudformation/web_worker.yaml" \
--parameters \
"ParameterKey=WorkerAMI,ParameterValue=AMI-XXX" \
"ParameterKey=InstanceProfile,ParameterValue=arn:aws:iam::XXX:instance-profile/XXX" \
--region us-west-2**
The preceding command looks very similar to the one we executed to create our infrastructure stack, but there are a couple of differences:
* `--stack-name`: This is the name of the stack we're creating.
* `--template-body "file:///path/..."`: This is a `file://` URL with an absolute path to our CloudFormation template. Since the path prefix ends with two `/` and a Unix path starts with a `/`, we get a weird looking triple `/` here.
* `--parameters`: This template takes two parameters. We can provide them in any order, but we must provide both.
* `"ParameterKey=WorkerAMI, ParameterValue=`: For `WorkerAMI`, we must provide the AMI ID that Packer gave us.
* `"ParameterKey=InstanceProfile,ParameterValue`: For InstanceProfile, we must provide the Instance Profile ARN that our infrastructure stack output.
* `--region us-west-2`: We're doing all our work in the `us-west-2` region.
To take a look at our stack's outputs, we can use the `describe-stacks` command from the AWS CLI:
**$ aws cloudformation describe-stacks \
--stack-name mail_ape_1_0 \
--region us-west-2**
The result is a large JSON object; here is a slightly truncated example version:
{
"Stacks": [
{
"StackId": "arn:aws:cloudformation:us-west-2:XXXX:stack/mail_ape_1_0/XXX",
"StackName": "mail_ape_1_0",
"Description": "Mail Ape web worker",
"Parameters": [
{
"ParameterKey": "InstanceProfile",
"ParameterValue": "arn:aws:iam::XXX:instance-profile/XXX"
},
{
"ParameterKey": "WorkerAMI",
"ParameterValue": "ami-XXX"
}
],
"StackStatus": "CREATE_COMPLETE",
"Outputs": [
{
"OutputKey": "AutoScalingGroupName",
"OutputValue": "mail_ape_1_0-WebServerGroup-XXX",
"Description": "Auto Scaling Group name"
},
{
"OutputKey": "LoadBalancerDNS",
"OutputValue": "MailApeLB-XXX.us-west-2.elb.amazonaws.com",
"Description": "Load Balancer DNS name"
}
],
}
]
}
Our resources (for example, EC2 instances) won't be ready until `StackStatus` is `CREATE_COMPLETE`. It can take a few minutes to create all the relevant resources.
We're particularly interested in the objects in the `Outputs` array:
* The first value gives us the name of our ASG. With the name of our ASG, we'll be able to find the EC2 instances in that ASG in case we need to SSH into one.
* The second value gives us the DNS name of our ELB. We'll use our ELB's DNS name to create a CNAME record for our production domain so that traffic is directed here (for example, creating a CNAME record for `mailape.example.com` that points to our ELB).
Let's look at how to SSH into the EC2 instances that our ASG launched.
# SSHing into a Mail Ape EC2 Instance
The AWS CLI gives us many ways of getting information about our EC2 Instances. Let's find the address of our launched EC2 instance:
**$ aws ec2 describe-instances \
--region=us-west-2 \
--filters='Name=tag:aws:cloudformation:stack-name,Values=mail_ape_1_0'**
The `aws ec2 describe-instances` command will return a lot of information about all our EC2 instances. We can use the `--filters` command to restrict the EC2 instances returned. When we create a stack, many of the related resources are tagged with the stack name. This lets us filter for only those EC2 instances in our `mail_ape_1_0` stack.
The following is a (much) shortened version of the output:
{
"Reservations": [
{
"Groups": [],
"Instances": [
{
"ImageId": "ami-XXX",
"InstanceId": "i-XXX",
"InstanceType": "t2.micro",
"KeyName": "mail_ape_production",
"PublicDnsName": "ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com",
"PublicIpAddress": "XXX",
"State": {
"Name": "running"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::XXX:instance-profile/infrastructure-SQSClientInstance-XXX"
},
"SecurityGroups": [
{
"GroupName": "ssh-access"
},
{
"GroupName": "web-access"
}
],
"Tags": [
{
"Key": "aws:cloudformation:stack-name",
"Value": "mail_ape_1_0"
} ] } ] } ] }
In the preceding output, note the `PublicDnsName` and the `KeyName`. Since we created that key earlier in this chapter, we can SSH into this instance:
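If you only need the public DNS names, a JMESPath `--query` saves you from scanning the full JSON:

**$ aws ec2 describe-instances \
    --region=us-west-2 \
    --filters='Name=tag:aws:cloudformation:stack-name,Values=mail_ape_1_0' \
    --query 'Reservations[].Instances[].PublicDnsName'**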
**$ ssh -i /path/to/saved/ssh/key ubuntu@ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com**
Remember that the `XXX` you see in the preceding output will be replaced by real values in your system.
Now that we can SSH into the system, we can create and migrate our database.
# Creating and migrating our database
For our first release, we need to create our database. We'll do that with a script in `database/make_database.sh`:
#!/usr/bin/env bash
psql -v ON_ERROR_STOP=1 postgresql://$USER:$PASSWORD@$HOST/postgres <<-EOSQL
CREATE DATABASE mailape;
CREATE USER mailape;
GRANT ALL ON DATABASE mailape to "mailape";
ALTER USER mailape PASSWORD '$DJANGO_DB_PASSWORD';
ALTER USER mailape CREATEDB;
EOSQL
This script uses four variables from its environment:
* `$USER`: The Postgres master username. We defined this as `master` in our `cloudformation/infrastructure.yaml`.
* `$PASSWORD`: The Postgres master user's password. We provided this as a parameter when we created the `infrastructure` stack.
* `$HOST`: The address of our RDS instance, which our `infrastructure` stack output as `DatabaseDNS`.
* `$DJANGO_DB_PASSWORD`: This is the password for the Django database user. We provided this as a parameter to Packer when creating our AMI.
Next, we'll execute this script locally, providing the values as environment variables:
**$ export USER=master**
**$ export PASSWORD=...**
**$ export HOST=XXX.XXX.us-west-2.rds.amazonaws.com**
**$ export DJANGO_DB_PASSWORD=...**
**$ bash database/make_database.sh**
Our Mail Ape database is now created.
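As a quick sanity check, using the same environment variables we just exported, you can confirm that the new `mailape` role can log in to its database:

**$ psql "postgresql://mailape:$DJANGO_DB_PASSWORD@$HOST/mailape" -c 'SELECT 1;'**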
Next, let's SSH into our new EC2 instance and run our database migrations:
**$ ssh -i /path/to/saved/ssh/key ubuntu@ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
$ source /mailape/virtualenv/bin/activate
$ cd /mailape/django
$ export DJANGO_DB_NAME=mailape
$ export DJANGO_DB_USER=mailape
$ export DJANGO_DB_PASSWORD=...
$ export DJANGO_DB_HOST=XXX.XXX.us-west-2.rds.amazonaws.com
$ export DJANGO_DB_PORT=5432
$ export DJANGO_LOG_FILE=/var/log/mailape/mailape.log
$ export DJANGO_SECRET_KEY=...**
**$ export DJANGO_SETTINGS_MODULE=config.production_settings
$ python manage.py migrate**
Our `manage.py migrate` command is very similar to what we've used in previous chapters. The main difference here is that we needed to SSH into our production EC2 instance first.
When `migrate` returns success, our database is ready and we're good to release our app.
# Releasing Mail Ape 1.0
Now that we've migrated our database, we're ready to update the DNS records for `mailape.example.com` to point to our ELB's DNS name. Once the DNS change propagates, Mail Ape will be live.
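How you create the CNAME depends on your DNS provider. If your zone happens to be hosted in AWS Route 53 (an assumption; the hosted zone ID below is a placeholder), a sketch with the CLI looks like this:

**$ aws route53 change-resource-record-sets \
    --hosted-zone-id ZXXXXXXXXXXX \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "mailape.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "MailApeLB-XXX.us-west-2.elb.amazonaws.com"}]
        }
      }]
    }'**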
Congratulations!
# Scaling up and down with update-stack
One of the great things about using CloudFormation and Auto Scaling Groups is that it's easy to scale our system up and down. In this section, let's update our system to use two EC2 instances running Mail Ape.
We can update our CloudFormation template in `cloudformation/web_worker.yaml`:
AWSTemplateFormatVersion: "2010-09-09"
Description: Mail Ape web worker
Parameters:
...
Resources:
LoadBalancer:
...
LaunchConfig:
...
WorkerGroup:
Type: "AWS::AutoScaling::AutoScalingGroup"
Properties:
LaunchConfigurationName: !Ref LaunchConfig
MinSize: 1
MaxSize: 3
DesiredCapacity: 2
LoadBalancerNames:
- !Ref LoadBalancer
Outputs:
...
We've updated our `DesiredCapacity` from 1 to 2. Now, instead of creating a new stack, let's update our existing stack:
**$ aws cloudformation update-stack \
--stack-name "mail_ape_1_0" \
--template-body "file:///path/to/mailape/cloudformation/web_worker.yaml" \
--parameters \
"ParameterKey=WorkerAMI,UsePreviousValue=true" \
"ParameterKey=InstanceProfile,UsePreviousValue=true" \
--region us-west-2**
The preceding command looks much like our `create-stack` command. One convenient difference is that we don't need to provide the parameter values again; we can simply pass `UsePreviousValue=true` to tell AWS to reuse the same values as before.
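As with creation and deletion, there is a waiter if you'd rather block than poll by hand:

**$ aws cloudformation wait stack-update-complete \
    --stack-name "mail_ape_1_0" \
    --region us-west-2**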
Again, `describe-stacks` will tell us when our update is complete:
**aws cloudformation describe-stacks \
--stack-name mail_ape_1_0 \
--region us-west-2**
The result is a large JSON object—here is a truncated example version:
{
"Stacks": [
{
"StackId": "arn:aws:cloudformation:us-west-2:XXXX:stack/mail_ape_1_0/XXX",
"StackName": "mail_ape_1_0",
"Description": "Mail Ape web worker",
"StackStatus": "UPDATE_COMPLETE"
}
]
}
Once our `StackStatus` is `UPDATE_COMPLETE`, our ASG will have its new setting. It can take a couple of minutes for the ASG to launch the new EC2 instance, but we can use our previously created `describe-instances` command to look for it:
**$ aws ec2 describe-instances \
--region=us-west-2 \
--filters='Name=tag:aws:cloudformation:stack-name,Values=mail_ape_1_0'**
Eventually, it will return two instances. Here's a highly truncated version of what that output will look like:
{
"Reservations": [
{
"Groups": [],
"Instances": [
{
"ImageId": "ami-XXX",
"InstanceId": "i-XXX",
"PublicDnsName": "ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com",
"State": { "Name": "running" }
},
{
"ImageId": "ami-XXX",
"InstanceId": "i-XXX",
"PublicDnsName": "ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com",
"State": { "Name": "running" }
} ] } ] }
To scale down to one instance, just update your `web_worker.yaml` template and run `update-stack` again.
Congratulations! You now know how to scale Mail Ape up to handle a higher load and then scale back down during off peak periods.
Remember that Amazon charges are based on usage. If you scaled up as part of working through this book, remember to scale back down or you may be charged more than you expect. Ensure that you read up on the limits of the AWS free tier on <https://aws.amazon.com/free/>.
# Summary
In this chapter, we've taken our Mail Ape app and launched it into a production environment in the AWS Cloud. We've used AWS CloudFormation to declare our AWS resources as code, making it as easy to track what we need and what changed as in the rest of our code base. We've built the image that our Mail Ape servers run using Packer, again giving us the ability to track our server configuration as code. Finally, we launched Mail Ape into the cloud and learned how to scale it up and down.
Now that we've come to the end of our journey learning to build Django web applications, let's review some of what we've learned. Over three projects we've seen how Django organizes code into models, views, and templates. We've learned how to do input validation with Django's form class and with Django Rest Framework's Serializer classes. We've examined security best practices, caching, and how to send emails. We've seen how to take our code and deploy into Linux servers, Docker containers, and the AWS Cloud.
You're ready to take your idea and launch it with Django! Go for it!
# Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by Packt:
**Django RESTful Web Services**
Gastón C. Hillar
ISBN: 978-1-78883-392-9
* The best way to build a RESTful Web Service or API with Django and the Django REST Framework
* Develop complex RESTful APIs from scratch with Django and the Django REST Framework
* Work with either SQL or NoSQL data sources
* Design RESTful Web Services based on application requirements
* Use third-party packages and extensions to perform common tasks
* Create automated tests for RESTful web services
* Debug, test, and profile RESTful web services with Django and the Django REST Framework
**Web Development with Django Cookbook**
Aidas Bendoraitis
ISBN: 978-1-78328-689-8
* Get started with the basic configuration necessary to start any Django project
* Build a database structure out of reusable model mixins
* Manage forms and views and get to know some useful patterns that are used to create them
* Create handy template filters and tags that you can reuse in every project
* Integrate your own functionality into the Django CMS
* Manage hierarchical structures with MPTT
* Import data from local sources and external web services as well as exporting your data to third parties
* Implement a multilingual search with Haystack
* Test and deploy your project efficiently
# Leave a review - let other readers know what you think
Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. This is vital so that other potential readers can see and use your unbiased opinion to make purchasing decisions, we can understand what our customers think about our products, and our authors can see your feedback on the title that they have worked with Packt to create. It will only take a few minutes of your time, but is valuable to other potential customers, our authors, and Packt. Thank you!
|
Q:
ggplot2: object 'y' not found with stat="bin"
I'm sorry to ask a question that has been asked before on SO, but I'm trying to plot some simple data in ggplot2 and am having trouble binning the data along the x-axis. My data consists of visual elements in old books (diagrams, engravings, etc.), and I can plot the frequency of each type of visual element in each year:
#this works
df <- read.table("cleaned_estc_visuals.txt",
header = F,
sep = "\t")
ggplot(data=df, aes(x=V1, y=V3)) +
geom_bar(aes(fill=V2),stat="identity") +
labs(title = "Visuals in Early Modern Books",fill="") +
xlab("Year") +
ylab("Titles")
This yields:
To make the data more legible, I want to bin the values along the x-axis by decade, but can't quite seem to get the call right:
#this doesn't
ggplot(data=df, aes(x=V1, y=V3)) +
geom_bar(aes(fill=V2),binwidth=10,stat="bin")
Running the latter code, I get:
Mapping a variable to y and also using stat="bin".
With stat="bin", it will attempt to set the y value to the count of cases in each group.
This can result in unexpected behavior and will not be allowed in a future version of ggplot2.
If you want y to represent counts of cases, use stat="bin" and don't map a variable to y.
If you want y to represent values in the data, use stat="identity".
See ?geom_bar for examples. (Deprecated; last used in version 0.9.2)
Error in pmin(y, 0) : object 'y' not found
Does anyone know how I can bin by decade along the x-axis? I would be grateful for any advice others can offer.
A:
In your situation, I find it easier to do some data manipulation before calling ggplot(). I personally prefer these packages: dplyr for data management and scales for working with graphics, but you could do this using base functions as well.
library(dplyr)
library(scales)
df2 <- df %>%
mutate(decade = floor(V1 / 10) * 10) %>%
group_by(decade, V2) %>%
summarise(V3 = sum(V3)) %>%
filter(decade != 1800)
ggplot(df2, aes(x = decade, y = V3)) +
geom_bar(aes(fill = V2), stat = "identity") +
labs(x = "Decade", y = "Titles", title = "Visuals in Early Modern Books") +
scale_x_continuous(breaks = pretty_breaks(20)) # using scales::pretty_breaks()
|
Fruits of the Cross : Passiontide Music Theater in Habsburg Vienna
Robert L. Kendrick
Published in print: 2018-11-13
This study of some sixty-odd Italian-language music-theater pieces for Holy Week in seventeenth-century Vienna addresses the issues of Habsburg dynastic piety, memory and commemoration, Passion devotion, and political meaning in the works. It further considers some surprising conjunctions of poetic conceptualism in connection with surprising—and theatrical—musical techniques. The pieces were meant to be performed in front of a constructed replica of Christ’s tomb—hence their Italian sobriquet, sepolcri—and often with an additional stage-set. Flourishing during the reign of Emperor Leopold I (1657–1705), the genre was also indebted to the patronage and piety of the women around him, including his stepmother, the Dowager Empress Eleonora, his three wives, and several of his daughters. The libretti, many by the famed Nicolo Minato, show unusual textual strategies in the recollection of Christ’s Passion, as they are imagined to take place after his burial. But they also involve wider realms of the dynastic’s self-image, material possessions, and political ideology. Although both the texts and the music—the latter by a variety of composers, most notably Giovanni Felice Sances and Antonio Draghi, along with Leopold himself—are little studied today, they also combined in performance to provide a sonic enactment of mourning according to the most recent norms of Italian musical dramaturgy.
The La Traviata Affair: Opera in the Age of Apartheid
Hilde Roos
Published in print: 2018-10-23
Opera, race, and politics during apartheid South Africa form the foundation of this historiographic work on the Eoan Group, a so-called colored cultural organization that performed opera in the Cape. The La Traviata Affair: Opera in the Age of Apartheid charts Eoan’s opera activities from its inception in 1933 until the cessation of its work by 1980. By accepting funding from the apartheid government and adhering to apartheid conditions, the group, in time, became politically compromised, resulting in the rejection of the group by their own community and the cessation of opera production. However, their unquestioned acceptance of and commitment to the art of opera led to the most extraordinary of performance trajectories. During apartheid, the Eoan Group provided a space for colored people to perform Western classical art forms in an environment that potentially transgressed racial boundaries and challenged perceptions of racial exclusivity in the genre of opera. This highly significant endeavor and the way it was thwarted at the hands of the apartheid regime is the story that unfolds in this book.
|
Q:
Specifying level 2 style for a tree in Tikz
The idea is to produce org charts automatically: application code reads a database and calculates the node counts and sizes from the names of teammates and their hierarchy.
After reading the PGF manual pp. 319–320, I devised the MWE below in order to draw a sample static org chart:
\documentclass[10pt,landscape,ansibpaper]{article}
\usepackage{tikz}
\usepackage{geometry}
\geometry{
left=2em,
right=2em,
top=2em,
bottom=2em,
}
\usetikzlibrary{trees}
\tikzstyle{every node}=[draw=black, thin, minimum height=3em]
\begin{document}
\footnotesize
\begin{tikzpicture}[
supervisor/.style={%
text centered, text width=12em,
text=black
},
teammate/.style={%
text centered, text width=12em,
text=black
},
subordinate/.style={%
grow=down,
xshift=-3.2em, % Horizontal position of the child node
text centered, text width=12em,
edge from parent path={(\tikzparentnode.205) |- (\tikzchildnode.west)}
},
level1/.style ={level distance=4em,anchor=west},
level2/.style ={level distance=8em,anchor=west},
level3/.style ={level distance=12em,anchor=west},
level4/.style ={level distance=16em,anchor=west},
level 1/.style={edge from parent fork down,sibling distance=14em,level distance=5em}
% level 2/.style={edge from parent fork down,sibling distance=28em,level distance=5em}
]
\node[anchor=south,supervisor](super){Supervisor\\Supervisory position\\Location}[]
child{node [teammate] {Teammate6\\Position4\\Location4}
child{node [teammate] {Teammate61\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate161}}
child[subordinate,level2] {node {Subordinate261}}}
child{node [teammate] {Teammate62\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate162}}
child[subordinate,level2] {node {Subordinate262}}}
child{node [teammate] {Teammate62\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate162}}
child[subordinate,level2] {node {Subordinate262}}}
}
child{node [teammate] {Teammate7\\Position5\\Location5}
child{node [teammate] {Teammate7\\Position5\\Location5}
child[subordinate,level1] {node {First\\Subordinate}}
child[subordinate,level2] {node {Subordinate2}}
child[subordinate,level3] {node {Third\\Teammate}}
child[subordinate,level4] {node {Longtext-\\teammate}}}
child{node [teammate] {Teammate7\\Position5\\Location5}
child[subordinate,level1] {node {First\\Subordinate}}
child[subordinate,level2] {node {Subordinate2}}
child[subordinate,level3] {node {Third\\Teammate}}
child[subordinate,level4] {node {Longtext-\\teammate}}}
};
\end{tikzpicture}
\end{document}
The output is (quite understandably) garbled:
I thought that uncommenting the level 2 line would fix that, but it made matters even worse:
What am I doing wrong here?
A:
There is a missing comma between level 1/.style={...} and level 2/.style={...}, and you have to increase the sibling distance in level 1 and decrease it in level 2.
\documentclass[10pt,landscape,ansibpaper]{article}
\usepackage[margin=2em]{geometry}
\usepackage{tikz}
\usetikzlibrary{trees}
\tikzstyle{every node}=[draw=black, thin, minimum height=3em]
\begin{document}
\footnotesize
\begin{tikzpicture}[
supervisor/.style={%
text centered, text width=12em,
text=black
},
teammate/.style={%
text centered, text width=12em,
text=black
},
subordinate/.style={%
grow=down,
xshift=-3.2em, % Horizontal position of the child node
text centered, text width=12em,
edge from parent path={(\tikzparentnode.205) |- (\tikzchildnode.west)}
},
level1/.style ={level distance=4em,anchor=west},
level2/.style ={level distance=8em,anchor=west},
level3/.style ={level distance=12em,anchor=west},
level4/.style ={level distance=16em,anchor=west},
level 1/.style={edge from parent fork down,sibling distance=45em,level distance=5em},
level 2/.style={edge from parent fork down,sibling distance=18em}
]
\node[anchor=south,supervisor](super){Supervisor\\Supervisory position\\Location}[]
child{node [teammate] {Teammate6\\Position4\\Location4}
child{node [teammate] {Teammate61\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate161}}
child[subordinate,level2] {node {Subordinate261}}}
child{node [teammate] {Teammate62\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate162}}
child[subordinate,level2] {node {Subordinate262}}}
child{node [teammate] {Teammate62\\Position4\\Location4}
child[subordinate,level1] {node {Subordinate162}}
child[subordinate,level2] {node {Subordinate262}}}
}
child{node [teammate] {Teammate7\\Position5\\Location5}
child{node [teammate] {Teammate7\\Position5\\Location5}
child[subordinate,level1] {node {First\\Subordinate}}
child[subordinate,level2] {node {Subordinate2}}
child[subordinate,level3] {node {Third\\Teammate}}
child[subordinate,level4] {node {Longtext-\\teammate}}}
child{node [teammate] {Teammate7\\Position5\\Location5}
child[subordinate,level1] {node {First\\Subordinate}}
child[subordinate,level2] {node {Subordinate2}}
child[subordinate,level3] {node {Third\\Teammate}}
child[subordinate,level4] {node {Longtext-\\teammate}}}
};
\end{tikzpicture}
\end{document}
|
In pictures: Leeds United fans make sure their new head coach feels welcome
Andrew Hutchinson
Leeds United fans welcomed Marcelo Bielsa into the fold as his side kicked off their pre-season with victory.
It was an opportunity for fans to say hello and good luck - as these photos from our snapper showcase.
The backdrop for Bielsa’s maiden game as Leeds’ head coach was the parched countryside between Gloucester and Bristol, at the modest home of Forest Green, but Bielsa did his homework on the League Two club and took his preference for a lean squad to extremes by arriving with 15 players.
There were no 45-minute run-outs or gentle introductions.
The remainder, on their first appearances of the summer, will be put through the same process at York City tomorrow. (July 19) |
Point Piper is a newly constructed house located at Claigan, approximately three miles from Dunvegan on the road to the Coral Beach, an area of great natural beauty.
Point Piper offers beautiful uninterrupted views towards the Little Minch and Outer Hebrides beyond and is the perfect location for a peaceful and relaxing holiday within easy travelling distance of the attractions Skye has to offer.
Islands in Loch Dunvegan
The Isle of Skye is a world-class holiday destination with much historic, cultural and leisure interest, and deserves time to explore.
Within ten minutes of Point Piper is Dunvegan Castle, Seat of Clan MacLeod and their ancestral home for 800 years. We are also close to the only beaches on the Isle of Skye, just the spot for a picnic.
There is a wide range of activities on the Isle of Skye, from walking and climbing to boat trips, horse riding and much, much more. |
9 Gigapixel Image of the Milky Way - onosendai
http://www.eso.org/public/images/eso1242a/
======
chaosmachine
Zoomable: <http://www.gigapan.com/gigapans/117375>
~~~
NiekvdMaas
Official zoomable version:
<http://www.eso.org/public/images/eso1242a/zoomable/>
------
s_henry_paulson
Hundreds of billions of planets, and this is just our one small galaxy.
Just given the sheer scale of the universe, I think we almost have to be
foolish to think that we're the only life forms that exist in the whole thing.
~~~
elorant
It’s also foolish (no pun intended) to think that the Universe was meant to
have life. We tend to believe that life is what gives meaning to all that
vastness, but the Cosmos doesn’t need a reason for its existence; it’s just there.
Furthermore it’s not about just life but intelligent life. Life in form of
microbes could be all around the Universe. But intelligent life could be
extremely rare or it could be just too early and we could be the first of many
species to come. It’s not egoistic to think so; it doesn’t make us feel unique
and special. More likely it makes us feel depressed, thinking that we are the
only ones or the first of many to come.
If you take the Drake equation for example and tweak a couple of pessimistic
numbers you realize it doesn’t take long before you come to the conclusion
that life is extremely rare. A very good implementation you can find here:
[http://www.bbc.com/future/story/20120821-how-many-alien-
worl...](http://www.bbc.com/future/story/20120821-how-many-alien-worlds-exist)
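To see how quickly the numbers collapse, here's a minimal back-of-the-envelope
sketch in Go; every parameter value in it is an illustrative guess, not data:

    package main

    import "fmt"

    // Back-of-the-envelope Drake product: N = R* * fp * ne * fl * fi * fc * L.
    // Every value below is an illustrative guess, not a measurement.
    func main() {
        rStar := 1.5 // new stars formed per year in our galaxy
        fp := 0.9    // fraction of stars with planets
        ne := 0.1    // habitable planets per planet-bearing star (pessimistic)
        fl := 0.01   // fraction of habitable planets that develop life (pessimistic)
        fi := 0.01   // fraction of life-bearing planets that evolve intelligence
        fc := 0.1    // fraction of intelligent species that become detectable
        l := 1000.0  // years a civilization stays detectable
        n := rStar * fp * ne * fl * fi * fc * l
        fmt.Printf("expected detectable civilizations: %.5f\n", n) // prints ~0.00135
    }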
I would also like to point to the Fermi paradox. Given the aforementioned
Drake equation many scientists have made estimations about the number of
civilizations in our galaxy. Estimates vary from a few dozen to the
thousands. But if there were even one advanced civilization in the galaxy, it
should already have made contact somehow. That is the basis of the Fermi
paradox, you can find more at Wikipedia:
<http://en.wikipedia.org/wiki/Fermi_paradox>
~~~
stargazer-3
It is also foolish to take Drake equation seriously or assume that we know
what intelligent life is. Drake equation should be used for demonstration
purposes only. As a side not, it is good to keep in mind that all we did for
ETI search was looking out for a human-like radio signal with a narrow
bandwidth, which is probably not the best way to transmit information through
the Universe.
~~~
elorant
If you read the link I gave for the Fermi paradox you'll see that this is one
of the dozen explanations for why we haven't been contacted yet by an alien
civilization. So, yes, we might be trying to make contact the wrong way.
Actually though it's not exactly wrong; we once managed to capture a
significant signal that could be of alien origin. It's called the Wow! signal
and you can find more here: <http://en.wikipedia.org/wiki/Wow_signal>
The basis of the Fermi paradox though isn't about what we did/do to contact
alien civilizations but the fact that even if one advanced civilization existed
in our galaxy, it should have already found us even if we weren't looking for them.
Which brings us back to the conclusion that there might not be advanced
civilizations around and life could very well be in the beginning.
As for the Drake equation it's not a law of physics. It is just a way to
estimate the number of habitable planets in the galaxy and from there to make
an assumption of the number of alien civilizations able to make interstellar
contact.
------
lhtbws
This is amazing. When you can zoom in on a bright speck and discover that it's
actually a giant cluster of stars, and then continue zooming in on that
cluster until it doesn't seem dense anymore, it actually lends context to the
static photos of space we've all seen before.
------
VorticonCmdr
Very cool. Does anyone know why some areas are somewhat blueish? And what
about the very bright stars?
~~~
wl
The colors are a bit arbitrary. The sensor that takes these images only
records intensity and not color. Different filters are placed over the sensor
to record different wavelengths. Not all of these wavelengths are visible
light. The colors are a mapping of these wavelengths to the visible spectrum.
------
colinwinter
It'd be REALLY cool if someone could turn this into a screensaver, where it
progressively pans and zooms in/out. Then when your mind is just about to be
blown at full-zoom-in, it should rotate like a boss and slap a new perspective
of life into your life.
------
3rd3
Are there some well-known features in the picture?
------
Father
Here's a similar thing, also made from infrared images of the Milky Way
[http://djer.roe.ac.uk/vsa/vvv/iipmooviewer-2.0-beta/vvvgps5....](http://djer.roe.ac.uk/vsa/vvv/iipmooviewer-2.0-beta/vvvgps5.html)
------
andrewcooke
abstract <http://adsabs.harvard.edu/abs/2012A%26A...537A.107S> with link to
paper (i think; still downloading paper).
update: the paper is fairly large and doesn't have pretty pictures (it has
lots of technical plots, but i imagine it's not what most people think of as a
fun read). also, this image is more a "public view" of the data; the paper is
for the underlying survey.
------
aaronmoodie
I'm not sure if this is by the same photographer, but the image used in the
Sky Survey app is pretty incredible as well. <http://skysurvey.org>
I'm really looking forward to being able to combine detailed visuals like
these with the Rift 3D headset or the like.
boom!
------
bajsejohannes
Why is the Milky Way (I assume that's what we call the fat strip) not
centered? Is that just a projection thing? Do the edges of this image wrap?
(I know shamefully little astronomy)
~~~
stargazer-3
What do you mean by 'not centered'? Imagine you are standing on a field of
corn. To you, the field looks like a line encircling you, although it may look
like a square or circle from above. There's your projection thing.
------
eslaught
Ok, where do I get the 9 Gigapixel version? :-)
P.S. Yes, I know I don't really need that much resolution, but still.
~~~
lloeki
> P.S. Yes, I know I don't really need that much resolution, but still.
I call bullshit ;-) as a 15" retina display is already 4 Mpix. In a short time
span we'll have 20~30" retina-class displays, and they could very well be
9~15 Mpix. Of course a billion pixels is way too much, but it means that it
will scale into the future (I'd love to have a wall-screen with this)
Anyway, multiple links are on the lower part of the rightmost column,
available from 1024x768 to full res in a variety of formats.
~~~
pserwylo
How does ~275Gpix sound? [0][1]. I did a stint at UCSD for two months as an
undergrad, and sat next to this monster while they were playing with it.
It might sound stupid having that much resolution, but it really is cool to be
able to see that much information in front of you. It's especially good if you
have a number of people standing around who are interacting with various data
sets.
And if you want one yourself, it is all COTS hardware, and you can start with
just a few screens then add later [2].
[0] <http://www.calit2.net/newsroom/release.php?id=1307> [1]
<http://ucsdnews.ucsd.edu/newsrel/general/07-08HIPerSpace.asp> [2]
[http://optiportal.org/index.php/Main_Page#How_to_build_an_Op...](http://optiportal.org/index.php/Main_Page#How_to_build_an_Optiportal)
------
brandoncapecci
How long until we have a macbook with 108,500 by 81,500 resolution?
------
jordanmoore_
... in a lightbox!
------
maeon3
It's really really big. What gets you is that it's really there, go outside
and look up. All those stars, all that energy running down, without anything
harnessing that energy.
There it is, running down like a forest fire. What will it turn into next? I
see the universe as an egg. And it's designed to become a single super
sentient entity someday that makes our sentience look like inanimate energy.
Our sentience will be the inanimate matter building blocks for something we
can't comprehend.
We will comprehend it as much as a carbon molecule comprehends the human mind.
------
rorrr
If you zoom all the way in, it's blurry (I know, I waited for the tiles to
load). There's no single-pixel detail. That means it's not really a
9-gigapixel image, you can easily reduce it by 2x2, and make it a
2.3-gigapixel image. Save space, bandwidth and time for everyone.
~~~
seandougall
I found the same thing, but see chaosmachine's reply above. The unofficial
version actually works.
|
Monday, March 19, 2018, 9:58 a.m.
Judge commits LR man
A 34-year-old Little Rock man accused of killing his parents and abducting his sister has been committed to the State Hospital, at least temporarily, after state doctors diagnosed him as mentally ill and unfit to stand trial. |
Quantum entanglement
01/28/14
By James Sully, SLAC National Accelerator Laboratory
Through ‘spooky action at a distance,’ the properties of two systems remain correlated even after they are separated.
Quantum entanglement happens when two systems—such as two particles—interact. They develop correlations between their properties that are maintained even after they are separated by large distances in space. An observer measuring one system could perfectly predict the corresponding measurements of a second observer looking at the other system far, far away.
An example of an interaction that generates entanglement is the decay of a particle into two other particles: Due to conservation of momentum, the decay products must be entangled in a state where their momenta are correlated. If we measure the momentum of one particle, we know the momentum of the other.
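To make the bookkeeping concrete, here is a minimal sketch in Go, a classical toy of the decay example with made-up units. It reproduces the "measure one, know the other" correlation, though genuine entanglement is stronger: quantum correlations persist in every measurement basis, which no classical toy can mimic.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Toy version of the decay example above: a parent particle at rest decays
    // into two products, so conservation of momentum forces pB = -pA. Measuring
    // one side predicts the other exactly. (Illustrative only; it does not
    // capture the basis-independent correlations of real entanglement.)
    func main() {
        for i := 0; i < 3; i++ {
            pA := rand.Float64()*2 - 1 // measured momentum of product A (arbitrary units)
            pB := -pA                  // momentum of product B, fixed by conservation
            fmt.Printf("measured A: %+.3f  =>  predicted B: %+.3f\n", pA, pB)
        }
    }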
Albert Einstein disparaged the possibility of this prediction, calling it “spooky action at a distance,” but Erwin Schrödinger, and later John Bell, recognized it as an essential feature of quantum mechanics.
Although entanglement doesn’t allow communication faster than the speed of light, it can be used to “teleport” a perfect quantum copy of an object (although one must destroy the original). Recent experiments have teleported quantum states well over 100 kilometers.
Entanglement isn’t just a feature of esoteric experiments. Typical states of matter we encounter all the time are characterized by a large degree of entanglement. In fact, entanglement between objects and their environment is responsible for the emergence of the familiar classical world from counterintuitive quantum laws. The organization of entanglement in matter can also contain a great deal of interesting physical information.
More speculative recent work has suggested that entanglement might be the thread that stitches together different regions of spacetime in quantum gravity. Entanglement between different objects might even generate a wormhole in spacetime that connects them. |
Profiles of the bacterial community in short-term indwelling urinary catheters by duration of catheterization and subsequent urinary tract infection.
Urinary catheterization, even of short duration, increases the risk of subsequent urinary tract infection (UTI). Whether the bacteria found on the surface of catheters placed for <3 days are associated with UTI risk is unknown. We screened the biofilms found on the extraluminal surface of 127 catheters placed for <3 days in women undergoing elective gynecologic surgery, using targeted quantitative polymerase chain reaction and an untargeted 16S rRNA taxonomic screen. Using quantitative polymerase chain reaction, Enterococcus spp were found on virtually all catheters and lactic acid bacteria in most catheters regardless of duration, but neither genus was associated with UTI development during follow-up. Enterococcus, Streptococcus, and Staphylococcus were the most commonly identified genera in the taxonomic screen but were not associated with subsequent UTIs. Although the most common cause of UTI following catheter removal was Escherichia coli, detectable E coli on the catheter surface was not associated with subsequent UTIs. Our analysis does not suggest that the presence of bacteria on the surface of catheters placed for <3 days leads to subsequent UTIs. Other aspects of catheter care are likely more important than preventing bacterial colonization of the catheter surface for preventing UTIs following short-term catheter placement. |
10 Popular Bible 'Verses' That Aren't Actually in the Bible
The Bible, often called the best-selling book of all time, might also be one of the most quoted texts. But how much of what is cited as coming from the Old and New Testaments is actually in the Bible?
"Spare the rod, spoil the child"
This could very well be a paraphrase of Proverbs 13:24, but the statement doesn't really exist in any translation of the Bible. The Bible verse actually reads: "He who spares the rod hates his son, but he who loves him is careful to discipline him."
Samuel Butler, a 17th-century British poet, actually coined the phrase "spare the rod and spoil the child" in his satirical poem "Hudibras."
"Money is the root of all evil"
This misquote is not too far off from the actual verse, found in 1 Timothy 6:10: "For the love of money is a root of all kinds of evil. Some people, eager for money, have wandered from the faith and pierced themselves with many griefs."
"God don't like ugly"
While some may want to suggest that this phrase could be a colloquial interpretation from the Book of Proverbs to sum up ungodly behavior, they would be wrong. The phrase, as profound as it may be, is not anywhere in Scripture.
"Cleanliness is next to godliness"
No, Jesus did not say this in the Sermon on the Mount nor in any of his teachings recorded in the Gospels. This Bible misquote might have its root in James 4:8: "Draw near to God and He will draw near to you. Cleanse your hands, you sinners; and purify your hearts, you double-minded."
"Money cometh to me now!"
This phrase, made popular by preacher Dr. Leroy Thompson and frequently chanted during his "Money Cometh to You" conferences, is, unfortunately, not in the Bible. The phrase, also picked up by Kenneth Copeland, won't instantaneously attract unexpected income.
"Blessed and highly favored"
Paul, credited with writing many of the New Testament letters, never wrote to the churches in Corinth or Rome declaring Christians to be "blessed and highly favored." As good as the phrase may sound, it's not in the Bible.
"Touch your neighbor"
You ever sat next to somebody in church that was fine and you couldn't wait for the preacher to say, "touch your neighbor"? Y'all lyin! LOL!
This phrase might frequently be heard during sermons, when a preacher has a particular point he or she wants to get across — but, surprisingly, this saying isn't in the Good Book. Christians are admonished throughout Scripture to love their neighbors, but there is nothing in the Bible about turning to your neighbor, high-fiving your neighbor, or touching your neighbor.
"All things work together for good"
This is another passage in which context is key — what things work together for whose good? Romans 8:28 reads in full: "And we know that in all things God works for the good of those who love him, who have been called according to his purpose."
"God moves in mysterious ways"
This might be a universal confession among all Christians, but this phrase is stated nowhere in Scripture. Perhaps the phrase can be linked to Isaiah 55:8: "'For my thoughts are not your thoughts, neither are your ways my ways,' declares the LORD."
"Pride comes before the fall"
This phrase often attributed to the Bible is almost correct. The actual verse, found in Proverbs 16:18, actually reads: "Pride goes before destruction, a haughty spirit before a fall." |
Ask HN: Mental health clinic near Seattle? - mannicken
I might be suffering from depression and would like to consult a doctor. Nothing serious, no suicide attempts yet, but I'd like to be on the safe side. Can anyone give recommendations for a clinic in the Seattle area?
======
joshuarr
Are you physically active? If not, do yourself a favor and exercise. In my
opinion it's the first thing to rule out before taking medicinal measures.
I would also recommend visiting a therapist instead of a GP. Your primary care
physician likely knows everything they know about depression from the
pamphlets and samples the pharmaceutical companies provide. Your GP will have
you fill out a 6 question form asking how sad you feel and then put you on
Zoloft. It's worth eliminating social and environmental issues with a
psychologist before altering your brain chemistry.
[http://therapists.psychologytoday.com/rms/state/WA/Seattle.h...](http://therapists.psychologytoday.com/rms/state/WA/Seattle.html)
Good luck. It's a really easily treatable problem.
~~~
_delirium
Apart from physical activity, regularly seeing some sunshine (i.e. more than
the 2 minutes it takes to walk to a car) is one of the other lifestyle-change
options that can sometimes have surprisingly large effects. It's pretty easy
to try, at least.
------
jacquesm
Wouldn't your GP be the right person to ask? Or the psychiatry department of a
hospital?
The chances that someone on HN has this information are relatively small. It
may work though, who knows!
~~~
joshuarr
It's a big internet, and sometimes you gotta seek help where you're
comfortable.
------
ddemchuk
If you're cognizant of the fact that you aren't at risk immediately but could
be, schedule an appointment with your General Practitioner as soon as you can.
Take some time to try and narrow down what is bringing you down and try to
isolate those stressors for the time being. Go spend some time with those
people who won't ask you questions and will just get it. We all have friends
like that.
Most of all, hang in there.
|
package tests1
// In this test suite, option (1) is always preferred.
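// Convention, as seen throughout this file: a "//=" comment states the
// diagnostic expected for the statement immediately following it; statements
// without such a marker are expected to pass the check cleanly.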
import "strconv"
import "errors"
//= unit import: omit parenthesis in a single-package import
import (
"fmt"
)
var (
_ = fmt.Printf
_ = errors.New
_ = strconv.Atoi
)
// T is an example type.
type T struct {
integer int
}
func zeroValPtrAlloc() {
_ = new(T)
_ = new(map[string]bool)
_ = new([]int)
//= zero value ptr alloc: use new(T) for *T allocation
_ = &T{}
//= zero value ptr alloc: use new(T) for *T allocation
_ = &[]int{}
}
func emptySlice() {
_ = make([]int, 0)
_ = make([]float64, 0)
//= empty slice: use make([]T, 0)
_ = []string{}
}
func emptyMap() {
_ = make(map[T]T)
_ = make(map[*T]*T, 0)
//= empty map: use make(map[K]V)
_ = map[int]int{}
}
func hexLit() {
_ = 0xff
_ = 0xabcdef
//= hex lit: use a-f (lower case) digits
_ = 0xABCD
}
func rangeCheck(x, low, high int) {
_ = x > low && x <= high
_ = x+1 >= low && x+1 < high
_ = x >= low && x <= high
//= range check: use align-left, like in `x >= low && x <= high`
_ = low < x || x < high
}
func andNot(x, y int) {
_ = x &^ y
_ = 123 &^ x
//= and-not: remove a space between & and ^, like in `x &^ y`
_ = (x + 100) & ^(y + 2)
}
func floatLit() {
_ = 0.0
_ = 0.123
_ = 1.0
//= float lit: use explicit int/frac part, like in `1.0` and `0.1`
_ = 0.
//= float lit: use explicit int/frac part, like in `1.0` and `0.1`
_ = .0
}
func labelCase() {
ALL_UPPER:
FOO:
//= label case: use ALL_UPPER
UpperCamelCase:
//= label case: use ALL_UPPER
lowerCamelCase:
goto ALL_UPPER
goto FOO
goto UpperCamelCase
goto lowerCamelCase
}
func untypedConstCoerce() {
const zero = 0
var _ int = zero
var _ int32 = 10
//= untyped const coerce: specify type in LHS, like in `var x T = const`
var _ = int64(zero + 1)
}
func threeArgs(a, b, c int) {}
func argListParens() {
threeArgs(
1,
2,
3)
threeArgs(1,
2,
3)
//= arg list parens: align `)` to a same line with last argument
threeArgs(
1,
2,
3,
)
}
func nonZeroLenTestChecker() {
var (
s string
b []byte
m map[int]int
ch chan int
)
// Strings are ignored.
_ = len(s) > 0
_ = len(s) >= 1
_ = len(s) != 0
_ = len(b) != 0
_ = len(m) != 0
//= non-zero length test: use `len(s) != 0`
_ = len(ch) > 0
//= non-zero length test: use `len(s) != 0`
_ = len(ch) >= 1
}
func defaultCaseOrder(x int, v interface{}) {
switch x {
default:
case 10:
}
switch v.(type) {
default:
case int:
case string:
}
//= default case order: default case should be the first case
switch {
case x > 20:
default:
}
}
|
Q:
.NET Core WebJob Console App CI/CD using Azure DevOps Pipelines
I'm trying to build my console application through Azure DevOps. To do this, I'm following this tutorial.
The following images show what I've already done.
Build Solution Pipeline
Build Solution Pipeline / Publish
Build Solution Pipeline / Artifact
Deploy WebJob Pipeline
Deploy WebJob Pipeline / Variables
When I run the Build Solution pipeline, the zip seems to work, because I can see it.
But when I run the Deploy WebJob Pipeline I get ##[error]D:\a\1\s\***.zip not found. I tried wwwroot/App_Data/jobs/, but still got the same error.
What could I be doing wrong? What's the right way to set the zippedArtifactPath?
A:
You're following the tutorial incorrectly. The tutorial is telling you to create a release. You're using a build pipeline to try to release. Although you can do that, you shouldn't.
You have two options:
If you want to keep using the visual designer, use a release. Look at the "Release" tab for this. Releases can be tied to build artifacts and will handle downloading the build artifact automatically.
If you want to use YAML, refer to the YAML documentation and set up a multi-stage pipeline to release.
|
Once in a blue moon, gamers are rewarded for their choice of console by the superpowers of console gaming. Nintendo have rewarded their gamers with countless jewels in the form of their Metroid Prime and Zelda games. Microsoft blessed its followers with Halo and Mech Assault. So that leaves us with the biggest of the super powers, Sony. It is no secret that Sony's Gran Turismo series has brought great success to the PlayStation, but what about other successes made by its in-house developers? Well, in March 2005, God of War for the PlayStation 2 set foot on North American soil and was well received. Earlier this July, that very same game made its way to the shores of the European market. Thus, I thought it would be appropriate to visit ancient Greece and meet God of War, head on. |
The Tactical Dad Baby Carrier
[UPDATE 9/28/16]: Thank you to everyone that backed our Kickstarter project but unfortunately, we didn't meet our goal. We're all bummed here but as with any hurdle, we're going to learn from this experience, make adjustments and continue moving forward with the baby carrier design and production. If you want to stay updated on our progress and receive an early bird special, be sure to signup using the form below. Thanks again for all your support!
We heard you Tactical Dad Family and we're working our butts off designing the most tactical baby carrier EVER. Read a little bit more about it below and let us know your thoughts.
We released our Dad on Diaper Duty Pack back in 2014 but felt something was missing. It wasn’t long before we realized the existing pack needed a counterpart, a baby carrier.
We wanted a baby carrier that not only functioned well but also worked in conjunction with our existing Dad on Diaper Duty pack. After some brainstorming and initial mockups we started working with local designers, manufacturers and child care experts to design and construct a bag that met all the necessary child safety rules and regulations. To date, we’ve been able to create a working prototype that we’ve used personally with our own children. The next steps are to expand on this prototype, start mass production and get these products out to you, the customer.
Prototype #1
Mockup of prototype 2 - better but not there yet.
Currently, we're working on prototype #3. We're in the middle of finding childcare safety experts to help us address any possible issues with designing a baby carrier. Also, we're in the final stages of the design: more MOLLE webbing! We'll keep you posted on all the developments. Until then, please support our Kickstarter campaign for the baby carrier. |
---
abstract: 'Relativistic heavy-ion collisions lead to a final state which has a higher degree of strangeness saturation than those of elementary collisions. A systematic analysis of this phenomenon, based on the strangeness saturation factor, $\gamma_s$, is made for C+C, Si+Si and Pb+Pb collisions at the CERN SPS collider and for Au+Au collisions at RHIC energies. Strangeness saturation is shown to increase with the number of participants within a colliding system, at both CERN SPS and RHIC energies. The saturation observed in central collisions of lighter nuclei deviates from that seen in peripheral collisions of heavier nuclei with an equivalent participant number, which could be due to the difference in nuclear density.'
address: |
$^a$ Department of Physics, University of Cape Town, Rondebosch 7701, Cape Town, South Africa\
$^b$ Institut für Kern- und Hadronenphysik, Forschungszentrum Rossendorf, PF 510119, D-01314 Dresden, Germany\
author:
- '[J. Cleymans$^a$, B. Kämpfer$^b$, P. Steinberg$^a$[^1], S. Wheaton$^a$]{}'
title: ' Strangeness Saturation: Energy- and System-Size Dependence '
---
Introduction
============
It has been shown that statistical-thermal models are able to reproduce the multiplicities measured in relativistic heavy-ion collisions with remarkable success. This is accomplished with a very small number of parameters: the temperature, the baryon-chemical potential, $\mu_B$, and a factor measuring the degree of strangeness saturation, $\gamma_s$. As is now well known, there is very little difference between the temperatures observed in $p+p$ and relativistic heavy-ion collisions. The extracted strangeness saturation factor, $\gamma_s$, is however very different in $p+p$ and heavy-ion collisions. In this paper we focus on hadron multiplicities and extract the thermal parameters as a function of system-size and energy.
The recent study in [@Bearden] (cf. table III therein) impressively demonstrated that, with increasing system-size at SPS energies, the strangeness saturation increases. In [@we] we have shown that at a beam energy of 158 AGeV, in collisions of lead-on-lead nuclei, the strangeness saturation continuously increases with centrality. However, strangeness (as measured by fully-integrated kaon and antikaon multiplicities) is clearly below saturation. A preliminary analysis [@Hirschegg; @Budapest; @Nantes] of the centrality dependence at RHIC energy of $\sqrt{s}_{NN} = 130$ GeV points to a further increase of strangeness towards saturation for central collisions of gold nuclei. An independent analysis [@NuXu; @Kaneta] confirms this finding.
This paper is divided into several sections. Firstly the system-size dependence of the thermal parameters is determined using 4$\pi$-yields from central C+C and Si+Si collisions [@C_Si], and centrality-binned Pb+Pb collisions [@Sikler; @Blume] at 158 AGeV at the CERN SPS. For comparison, centrality-binned mid-rapidity yields from Au+Au collisions at $\sqrt{s}_{NN} = 130$ GeV [@PHENIX] and Pb+Pb collisions at SPS energy [@Sikler] are analysed, despite the danger in applying the thermal model to yields in a limited rapidity window. Finally, the energy dependence of the thermal parameters is further elucidated by analysis of central Pb+Pb yields measured by NA49 at 40, 80 and 158 AGeV [@NA49_coll1; @Mischke_QM02; @NA49_coll2; @NA49_ksi; @Mischke].
Analyses of Hadron Multiplicities
=================================
In the thermal model, hadron multiplicities can be described [@review; @heavy_ions; @PBM_qd; @abundances_a; @abundances_b] by the grand-canonical partition function ${\cal Z} (V, T, \vec \mu_i) = \mbox{Tr} \{ \mbox{e}^{- \frac{\hat H - \vec \mu_i \vec Q_i}{T}} \}$, where $\hat H$ is the statistical operator of the system, $T$ denotes the temperature, and $\mu_i$ and $Q_i$ represent the chemical potentials and corresponding conserved charges respectively. In the analysis of $4\pi$-data, the net-zero strangeness and the baryon-to-electric charge ratio of the colliding nuclei constrain the components of $\vec \mu_i = (\mu_B, \mu_S, \mu_Q)$. The particle numbers are given, in the Boltzmann approximation, by $$N_i^{\rm prim} = V g_i \gamma_s^{\left|S_i\right|} \int \frac{d^3 p}{(2\pi)^3} \, dm_i \, e^{-\frac{E_i - \vec \mu_i \vec Q_i}{T}} \, \mbox{BW}(m_i),$$ where we include phenomenologically a strangeness saturation factor, $\gamma_s$, with $\left|S_i\right|$ the number of valence strange quarks and anti-quarks in species $i$ [@Rafelski] (i.e. $\gamma_s$ for the kaons and $\gamma_s^2$ for $\phi$) to account for incomplete equilibration in this sector, $E_i = \sqrt{\vec p^{\, 2} + m_i^2}$, and $\mbox{BW}$ is the Breit-Wigner distribution. The particle numbers to be compared with experiment are $N_i = N_i^{\rm prim} + \sum_j \mbox{Br}(j \to i) N_j^{\rm prim}$, due to decays of unstable particles with branching ratios $\mbox{Br}(j \to i)$. For small participant numbers (typically $N_{\rm part}$ below 40), one has to resort to a canonical or micro-canonical formalism [@Redlich_Becattini; @Keranen1].
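For orientation, for a stable species of fixed mass (i.e. dropping the Breit-Wigner smearing) the momentum integral reduces to the standard closed form $$N_i^{\rm prim} = V\, g_i\, \gamma_s^{\left|S_i\right|}\, \frac{m_i^2 T}{2\pi^2}\, K_2\!\left(\frac{m_i}{T}\right) \mbox{e}^{\vec \mu_i \vec Q_i/T},$$ with $K_2$ a modified Bessel function of the second kind; this is a textbook reduction quoted here for the reader's convenience, not an additional assumption of the fits reported below.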
System-size dependence
----------------------
### Analysis of fully-integrated yields
In order to extract the system-size dependence of the thermal parameters we analyse $4\pi$-multiplicities of $\pi^\pm$, $K^\pm$, $\phi$ and $N_{\rm part}$ (taken as the sum over all baryons) in 6 centrality bins in the reaction Pb+Pb [@Sikler; @Blume] (at 158 AGeV) and for central Si+Si and C+C collisions [@C_Si] at the same energy. Our previous analyses [@we; @Hirschegg; @Budapest; @Nantes] of the Pb+Pb system included $\overline{p}$ yields. They are excluded in this analysis in order that the Pb-, C- and Si systems be treated equivalently. No weak feed-down corrections have yet been applied to the peripheral Pb-, or C- and Si systems [@Hoehne_private]. Due to the rather limited data set, the freeze-out temperature was fixed at 165 MeV, independent of centrality and colliding system. This is supported by a variety of fits to both heavy-ion and elementary collision systems in this energy regime [@Bearden; @we; @heavy_ions; @B1; @B2]. Owing to the size of the C- , Si- and peripheral Pb systems, strangeness was treated canonically in all systems. As shown in [@Keranen1], for systems of this size at SPS energy, it is sufficient to treat the baryon- and charge content grand-canonically. The results are displayed in Figs. \[f\_gammas\_sys\_size\] and \[f\_muB\_sys\_size\] with the specifics of each fit explained in the captions.
The strangeness saturation factor, $\gamma_s$, shows an increasing trend with collision centrality in the Pb+Pb system, except possibly over the two most central bins (see Fig. \[f\_gammas\_sys\_size\]). It is also clear that the C+C and Si+Si systems lie above the trend suggested by the Pb+Pb points. This suggests that peripheral Pb+Pb collisions are not equivalent, with respect to strangeness saturation, to central collisions of lighter nuclei with the same participant number. In the C+C and Si+Si systems the baryon chemical potential is also lower than in the peripheral Pb+Pb bins (refer to Fig. \[f\_muB\_sys\_size\]). It should be stressed that the only direct baryon information we have at our disposal in this analysis is the number of participants. It would appear that, in the Pb+Pb system, $\mu_B$ decreases as the collisions become more central. As can be seen in Fig. \[f\_muB\_sys\_size\], the inclusion of $\overline{p}$ yields in the Pb analysis (squares) leads to a roughly centrality-independent baryon chemical potential of approximately 250 MeV. In order to investigate its interplay with $\gamma_s$, we fixed $\mu_B$ at 250 MeV in all collisions. This affected appreciably only the most central Pb+Pb bin, resulting in a monotonic increase in $\gamma_s$ within the Pb system. Thus, the drop in $\gamma_s$ over the most central bins of the Pb system is driven by $N_{\mathrm{part}}$ which, due to its small error relative to other species in the most central bin, is weighted heavily in a $\chi^2$-analysis. Since the errors, particularly in the C and Si systems have not yet been well-established [@Hoehne_private], we repeated the fits minimising the ‘quadratic deviation’ [@PBM_qd] defined by: $$q^2 = \sum_i{\left(M^{exp}_i-M^{model}_i\right)^2\over \left(M^{model}_i\right)^2}$$ where $M^{exp}_i$ and $M^{model}_i$ are the experimental and model-predicted multiplicities of hadron species $i$ respectively. The results of this analysis are shown in Figs. \[f\_gammas\_sys\_size\] and \[f\_muB\_sys\_size\] as triangles. Again $\gamma_s$ in the Pb+Pb system shows a monotonic trend, while the C+C and Si+Si systems still show a sizeable deviation from the peripheral Pb points. This cannot be attributed to rescattering, since it is more dominant in peripheral Pb+Pb than in central C+C and Si+Si reactions [@Bass]. In [@C_Si] it is shown that the strangeness enhancement, as measured by the ratios of strange to non-strange mesons, in these systems scales with $f_2$, the fraction of participants which undergo multiple collisions. As shown in Fig. \[f\_gammas\_f2\], $\gamma_s$ scales with this variable too. The strangeness saturation extracted from p+p collisions at $\sqrt{s} = 19.4$ GeV [@B2] (denoted in Fig. \[f\_gammas\_f2\] by the square) suggests a strong flattening off of $\gamma_s$ for small systems. What is surprising is the approximate equivalence of $f_2$ and $\gamma_s$ for a number of points in Fig. \[f\_gammas\_sys\_size\] ($f_2$ in this figure is denoted by the diamonds).\
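As a minimal numerical illustration of this measure (a hypothetical helper written for this text, not the analysis code used by the authors; the multiplicities in main are made up):

    package main

    import "fmt"

    // quadraticDeviation computes q^2 = sum_i (Mi_exp - Mi_model)^2 / (Mi_model)^2
    // for a set of measured and model-predicted multiplicities.
    func quadraticDeviation(exp, model []float64) float64 {
        q2 := 0.0
        for i := range exp {
            d := exp[i] - model[i]
            q2 += d * d / (model[i] * model[i])
        }
        return q2
    }

    func main() {
        exp := []float64{600, 80, 30}   // illustrative measured multiplicities (made up)
        model := []float64{580, 85, 28} // corresponding model predictions (made up)
        fmt.Printf("q^2 = %.4f\n", quadraticDeviation(exp, model))
    }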
### Mid-rapidity analysis
When applying the thermal model to $4\pi$-data, many dynamical effects cancel out in ratios of the fully-integrated hadron yields [@cr]. In particular, effects due to flow disappear if the freeze-out surface is characterized by a single temperature and chemical potential. When applying the thermal model to mid-rapidity data this is true only when the Bjorken model [@bj] holds. Furthermore, in a limited rapidity window there is no guarantee that the total strangeness should be zero, nor that the baryon-to-charge ratio in this kinematic region should be set by the colliding nuclei. In a grand canonical approach this affects the constraints on the chemical potentials. In the canonical formulation the ‘canonical suppression volume’ (i.e. the volume in which quantum numbers are exactly conserved and that used to calculate the densities) and the ‘normalisation volume’ (i.e. the volume required to convert densities to yields) are not necessarily the same. A further complication arises in the treatment of decays. The experimental yields include feed-down from heavier resonances into stable, final-state particles. With mid-rapidity data, this requires careful consideration of the decay kinematics, as particles in a certain rapidity range will in general decay into particles in different kinematic windows.
Despite these disclaimers we analyse the following mid-rapidity yields measured by the NA49 and PHENIX collaborations:
i) NA49 mid-rapidity yields of $\pi^\pm$, $K^\pm$, $p$ and $\overline{p}$ in the reaction Pb(158 AGeV) + Pb in 6 centrality bins [@Sikler].
ii) PHENIX mid-rapidity densities of $\pi^\pm$, $K^\pm$ and $p^\pm$ in the reaction Au + Au at $\sqrt{s}_{NN} = 130$ GeV in 5 centrality bins [@PHENIX].
The PHENIX yields were not corrected for weak decays. PHENIX estimate the probability for reconstructing protons from $\Lambda$ decays as prompt protons at 32% at $p_T$ = 1 GeV/c [@PHENIX]. The PHENIX analysis was performed with both 0% and 50% feed-down from weak decays, while no weak feeding was included in the SPS analysis. In all cases the grand-canonical formalism was applied, with the total strangeness set to zero but $\mu_Q$ fit as a free parameter.
In Fig. \[f\_mid\_rap\] the system-size dependence of $\gamma_s$ at mid-rapidity is shown for SPS (left panel) and RHIC (right panel). In the SPS plot the results of our earlier analysis of fully-integrated NA49 yields [@Nantes] are included for comparison. It is seen that $\gamma_s$, as extracted from the mid-rapidity NA49 data, is well above that obtained from the analysis of the fully-integrated NA49 yields. In order to exclude the possibility that the difference in the strangeness saturation extracted from the 4$\pi$- and mid-rapidity data analysed here is due solely to different strange hadrons included in the fits, the 4$\pi$ NA49 analysis was repeated with the hidden-strangeness $\phi$ excluded. In this way the two NA49 analyses are equivalent with respect to strange particles. This led to a slight decrease in $\gamma_s$ in the most peripheral bins, and thus an even larger difference between mid-rapidity and fully-integrated results. Thus, certainly at SPS energies, the degree of strangeness saturation is far higher in the central rapidity region. Included in the RHIC plot of Fig. \[f\_mid\_rap\] is $f_2$. This fraction of multiply-struck participants parametrises the system-size dependence of the strangeness saturation factor in Au+Au collisions at RHIC energy remarkably well.\
Energy dependence
-----------------
In order to further investigate the energy dependence of the thermal parameters we analyse the fully-integrated yields of $\pi^\pm$, $K^\pm$, $\Lambda$ and $\overline{\Lambda}$ in central Pb+Pb collisions at 40, 80 and 158 AGeV, supplemented with $K_s^0$, $\Xi^-$, $\overline{\Xi}^+$ and $\phi$ multiplicities at 158 AGeV [@NA49_coll1; @Mischke_QM02; @NA49_coll2; @NA49_ksi]. In Figs. \[f\_gammas\_e\_dep\] and \[f\_wrob\_e\_dep\], $\gamma_s$ and the Wróblewski factor [@Wr], $\lambda_s$, which measures the ratio of newly created $s\bar{s}$ pairs to newly created non-strange valence quark pairs at the primary hadron level: $${\displaystyle \lambda_s = {2\langle s\bar{s}\rangle \over \langle u\bar{u}\rangle + \langle d\bar{d}\rangle}},$$ are displayed as a function of the collision energy. Included in the figures are the results of our earlier system-size analysis of the Pb+Pb system at CERN SPS and the Au+Au system at RHIC [@Nantes]. It should be noted that the centrality cuts on the most central Pb+Pb collisions are slightly different at the various energies (7.2% at 40 and 80 AGeV, and 5% at 158 AGeV). In view of the system-size dependence extracted in the previous section, this will raise the 40 and 80 AGeV points relative to the 158 AGeV point. In Fig. \[f\_wrob\_e\_dep\] one observes that $\lambda_s$ for central collisions decreases with collision energy from 40 AGeV (in agreement with [@AGS_peak]). Within a given collision system, it shows a systematic increase with participant number, while remaining above the typical value of 0.2 seen in $pp$ collisions [@B3]. With respect to $\gamma_s$ there is fairly good agreement between the most peripheral heavy-ion bins and the results from elementary systems of comparable energy [@B2].
For comparison we also show the value of $\gamma_s$ extracted from mid-rapidity Pb+Pb data [@Mischke; @NA49_ksi; @NA49_coll1] at 158 AGeV (open circle) in Fig. \[f\_gammas\_e\_dep\]. As can be seen, the strangeness saturation in the mid-rapidity region is clearly greater than that averaged over 4$\pi$.
Summary
=======
In conclusion, the strangeness saturation factor, $\gamma_s$, has been shown to increase with participant number in the Pb+Pb system at the CERN SPS as well as the Au+Au system at RHIC. Central collisions of C+C and Si+Si at SPS energies deviate, with respect to strangeness saturation, from peripheral Pb+Pb collisions. However, $\gamma_s$ is seen to scale with the fraction of multiply-struck participants, $f_2$. In fact, $f_2$ remarkably tracks the $N_{\rm part}$-dependence of $\gamma_s$ as extracted from mid-rapidity yields in Au+Au collisions at RHIC. Where both mid-rapidity and fully-integrated data were available, the degree of strangeness saturation observed at mid-rapidity was found to be consistently higher than that extracted from 4$\pi$-data.
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge useful correspondence with C. Höhne and M. Gaździcki. One of us (S.W.) acknowledges the financial assistance of the National Research Foundation (NRF) of South Africa.
I.G. Bearden et al. (NA44 collaboration), nucl-ex/0202019.
J. Cleymans, B. Kämpfer, S. Wheaton, Phys. Rev. C 65 (2002) 027901.
B. Kämpfer, J. Cleymans, K. Gallmeister, S. Wheaton, hep-ph/0202134.
B. Kämpfer, J. Cleymans, K. Gallmeister, S. Wheaton, hep-ph/0204227.
J. Cleymans, B. Kämpfer, S. Wheaton, hep-ph/0208247.
N. Xu et al. (STAR collaboration), private communication.
M. Kaneta, contribution presented at Quark Matter 2002 (QM 2002), Nantes, France, 18-24 July 2002.
C. Höhne (NA49 collaboration), nucl-ex/0209018.
F. Sikler (NA49 collaboration), Nucl. Phys. A 661 (1999) 45c.
V. Friese et al. (NA49 collaboration), Nucl. Phys. A 698 (2002) 487c.
K. Adcox et al. (PHENIX collaboration), Phys. Rev. Lett. 88 (2002) 242301.
S.V. Afanasiev et al. (NA49 collaboration), Phys. Rev. C 66 (2002) 054902.
A. Mischke (NA49 collaboration), contribution presented at Quark Matter 2002 (QM 2002), Nantes, France, 18-24 July 2002.
S.V. Afanasiev et al. (NA49 collaboration), Phys. Lett. B 491 (2000) 59.
S.V. Afanasiev et al. (NA49 collaboration), Phys. Lett. B 538 (2002) 275.
A. Mischke (NA49 collaboration), nucl-ex/0209002.
For a general review see e.g. K. Redlich, J. Cleymans, H. Oeschler and A. Tounsi, Acta Physica Polonica B33 (2002) 1609.
F. Becattini, J. Cleymans, A. Keränen, E. Suhonen, K. Redlich, Phys. Rev. C 64 (2001) 024901.
P. Braun-Munzinger, I. Heppe, J. Stachel, Phys. Lett. B 465 (1999) 15.
P. Braun-Munzinger et al., Phys. Lett. B 344 (1995) 43, B 365 (1996) 1, B 465 (1999) 15, B 518 (2001) 415.
J. Cleymans, K. Redlich, Phys. Rev. Lett. 81 (1998) 5284; J. Sollfrank, J. Phys. G: Nucl. Part. Phys. 23 (1997) 1903.
J. Letessier, J. Rafelski and A. Tounsi, Phys. Rev. C 50 (1994) 406; C. Slotta, J. Sollfrank and U. Heinz, in *Proceedings of Strangeness in Hadronic Matter*, Tucson, edited by J. Rafelski, AIP Conf. Proc. No. 340 (AIP, Woodbury, 1995), p. 462.
K. Redlich, Nucl. Phys. A 698 (2002) 94.
A. Keränen and F. Becattini, Phys. Rev. C 65 (2002) 044901.
C. Höhne, private communication.
F. Becattini, Z. Phys. C 69 (1996) 485.
F. Becattini and U. Heinz, Z. Phys. C 76 (1997) 269.
S.A. Bass et al., Prog. Part. Nucl. Phys. 41 (1998) 225.
J. Cleymans and K. Redlich, Phys. Rev. C 60 (1999) 054908.
J.D. Bjorken, Phys. Rev. D 27 (1983) 140.
A. Wróblewski, Acta Physica Polonica B16 (1985) 379.
F. Becattini, hep-ph/0206203.
P. Braun-Munzinger, J. Cleymans, H. Oeschler, K. Redlich, Nucl. Phys. A 697 (2002) 902.
![The system-size dependence of the strangeness saturation factor, $\gamma_s$, as extracted from centrality-binned Pb+Pb [@Sikler; @Blume], and central C+C and Si+Si data [@C_Si] under various fit conditions. The circles with error bars represent the results of our $\chi^2$-analysis assuming 50% feeding from weak decays, while the triangles show the results minimising the quadratic deviation (again assuming 50% feeding). For comparison, the results of our earlier Pb system analysis [@Nantes], with $\overline{p}$’s included in the fit, are included (squares). Also shown are the fraction of participants which underwent multiple collisions, $f_2$, as extracted from a Glauber calculation [@C_Si] (diamonds).[]{data-label="f_gammas_sys_size"}](./gammas_sys_size.eps){width="12cm"}
![The system-size dependence of the baryon chemical potential, $\mu_B$, as extracted from centrality-binned Pb+Pb [@Sikler; @Blume], and central C+C and Si+Si data [@C_Si] under various fit conditions. The circles with error bars represent the results of our $\chi^2$-analysis assuming 50% feeding from weak decays, while the triangles show the results minimising the quadratic deviation (again assuming 50% feeding). For comparison, the results of our earlier Pb system analysis [@Nantes], with $\overline{p}$’s included in the fit, are included (squares). []{data-label="f_muB_sys_size"}](./muB_sys_size_sps.eps){width="12cm"}
![The strangeness saturation factor, $\gamma_s$, as extracted from centrality-binned Pb+Pb [@Sikler; @Blume] (triangles) and central C+C and Si+Si data [@C_Si] (circles), as a function of $f_2$, the fraction of multiply-struck participants. The results shown are those obtained with $\mu_B$ fixed at 250 MeV and $T$ at 165 MeV, assuming 50% weak feed-down. For comparison, the strangeness saturation as extracted from p+p collisions at $\sqrt{s} = 19.4$ GeV [@B2] is included (square). []{data-label="f_gammas_f2"}](./gammas_f2.eps){width="12cm"}
![Left Panel: Comparison of the strangeness saturation factor, $\gamma_s$, extracted from mid-rapidity NA49 data [@Sikler] (up triangles) with the results of our earlier analysis of NA49 4$\pi$-yields (squares) [@Nantes]. Right Panel: The strangeness saturation observed in Au+Au collisions as extracted from PHENIX data [@PHENIX]. The analysis was performed assuming 50% weak feed-down (down triangles) and 0% weak feed-down (up triangles). Also shown are the fraction of multiply-struck participants, $f_2$, obtained from our Glauber calculation (dashed line).[]{data-label="f_mid_rap"}](./gammas_mod.eps){width="12cm"}
![The energy dependence of the strangeness saturation factor, $\gamma_s$, extracted from central Pb+Pb collisions at 40, 80 and 158 AGeV [@NA49_coll1; @Mischke_QM02; @NA49_coll2; @NA49_ksi] (up triangles), together with the results of our earlier analysis of centrality-binned Pb+Pb collisions at 158 AGeV (down triangles) and Au+Au collisions at RHIC (diamonds) [@Nantes]. For comparison we show the results obtained from $pp$ collisions (filled squares) and $p\overline{p}$ collisions (open squares) at various energies [@B2]. The open circle is extracted from mid-rapidity yields [@Mischke; @NA49_ksi; @NA49_coll1].[]{data-label="f_gammas_e_dep"}](./gammas_energy_dep.eps){width="12cm"}
![The energy dependence of the Wróblewski factor, $\lambda_s$, extracted from central Pb+Pb collisions at 40, 80 and 158 AGeV [@NA49_coll1; @Mischke_QM02; @NA49_coll2; @NA49_ksi] (up triangles), together with the results of our earlier analysis of centrality-binned Pb+Pb collisions at 158 AGeV (down triangles) and Au+Au collisions at RHIC (diamonds) [@Nantes]. For reference we show the typical value of 0.2 extracted from $pp$ systems [@B3].[]{data-label="f_wrob_e_dep"}](./wrob_energy_dep.eps){width="12cm"}
[^1]: Visiting Fulbright Professor on leave of absence from the Brookhaven National Laboratory, Upton, NY, USA
|
// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
package codebuild_test
import (
"bytes"
"fmt"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/codebuild"
)
var _ time.Duration
var _ bytes.Buffer
func ExampleCodeBuild_BatchGetBuilds() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.BatchGetBuildsInput{
Ids: []*string{ // Required
aws.String("NonEmptyString"), // Required
// More values...
},
}
resp, err := svc.BatchGetBuilds(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_BatchGetProjects() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.BatchGetProjectsInput{
Names: []*string{ // Required
aws.String("NonEmptyString"), // Required
// More values...
},
}
resp, err := svc.BatchGetProjects(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_CreateProject() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.CreateProjectInput{
Artifacts: &codebuild.ProjectArtifacts{ // Required
Type: aws.String("ArtifactsType"), // Required
Location: aws.String("String"),
Name: aws.String("String"),
NamespaceType: aws.String("ArtifactNamespace"),
Packaging: aws.String("ArtifactPackaging"),
Path: aws.String("String"),
},
Environment: &codebuild.ProjectEnvironment{ // Required
ComputeType: aws.String("ComputeType"), // Required
Image: aws.String("NonEmptyString"), // Required
Type: aws.String("EnvironmentType"), // Required
EnvironmentVariables: []*codebuild.EnvironmentVariable{
{ // Required
Name: aws.String("NonEmptyString"), // Required
Value: aws.String("String"), // Required
},
// More values...
},
},
Name: aws.String("ProjectName"), // Required
Source: &codebuild.ProjectSource{ // Required
Type: aws.String("SourceType"), // Required
Auth: &codebuild.SourceAuth{
Type: aws.String("SourceAuthType"), // Required
Resource: aws.String("String"),
},
Buildspec: aws.String("String"),
Location: aws.String("String"),
},
Description: aws.String("ProjectDescription"),
EncryptionKey: aws.String("NonEmptyString"),
ServiceRole: aws.String("NonEmptyString"),
Tags: []*codebuild.Tag{
{ // Required
Key: aws.String("KeyInput"),
Value: aws.String("ValueInput"),
},
// More values...
},
TimeoutInMinutes: aws.Int64(1),
}
resp, err := svc.CreateProject(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_DeleteProject() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.DeleteProjectInput{
Name: aws.String("NonEmptyString"), // Required
}
resp, err := svc.DeleteProject(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_ListBuilds() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.ListBuildsInput{
NextToken: aws.String("String"),
SortOrder: aws.String("SortOrderType"),
}
resp, err := svc.ListBuilds(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_ListBuildsForProject() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.ListBuildsForProjectInput{
ProjectName: aws.String("NonEmptyString"), // Required
NextToken: aws.String("String"),
SortOrder: aws.String("SortOrderType"),
}
resp, err := svc.ListBuildsForProject(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_ListCuratedEnvironmentImages() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
var params *codebuild.ListCuratedEnvironmentImagesInput
resp, err := svc.ListCuratedEnvironmentImages(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_ListProjects() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.ListProjectsInput{
NextToken: aws.String("NonEmptyString"),
SortBy: aws.String("ProjectSortByType"),
SortOrder: aws.String("SortOrderType"),
}
resp, err := svc.ListProjects(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_StartBuild() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.StartBuildInput{
ProjectName: aws.String("NonEmptyString"), // Required
ArtifactsOverride: &codebuild.ProjectArtifacts{
Type: aws.String("ArtifactsType"), // Required
Location: aws.String("String"),
Name: aws.String("String"),
NamespaceType: aws.String("ArtifactNamespace"),
Packaging: aws.String("ArtifactPackaging"),
Path: aws.String("String"),
},
BuildspecOverride: aws.String("String"),
EnvironmentVariablesOverride: []*codebuild.EnvironmentVariable{
{ // Required
Name: aws.String("NonEmptyString"), // Required
Value: aws.String("String"), // Required
},
// More values...
},
SourceVersion: aws.String("String"),
TimeoutInMinutesOverride: aws.Int64(1),
}
resp, err := svc.StartBuild(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_StopBuild() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.StopBuildInput{
Id: aws.String("NonEmptyString"), // Required
}
resp, err := svc.StopBuild(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
func ExampleCodeBuild_UpdateProject() {
sess := session.Must(session.NewSession())
svc := codebuild.New(sess)
params := &codebuild.UpdateProjectInput{
Name: aws.String("NonEmptyString"), // Required
Artifacts: &codebuild.ProjectArtifacts{
Type: aws.String("ArtifactsType"), // Required
Location: aws.String("String"),
Name: aws.String("String"),
NamespaceType: aws.String("ArtifactNamespace"),
Packaging: aws.String("ArtifactPackaging"),
Path: aws.String("String"),
},
Description: aws.String("ProjectDescription"),
EncryptionKey: aws.String("NonEmptyString"),
Environment: &codebuild.ProjectEnvironment{
ComputeType: aws.String("ComputeType"), // Required
Image: aws.String("NonEmptyString"), // Required
Type: aws.String("EnvironmentType"), // Required
EnvironmentVariables: []*codebuild.EnvironmentVariable{
{ // Required
Name: aws.String("NonEmptyString"), // Required
Value: aws.String("String"), // Required
},
// More values...
},
},
ServiceRole: aws.String("NonEmptyString"),
Source: &codebuild.ProjectSource{
Type: aws.String("SourceType"), // Required
Auth: &codebuild.SourceAuth{
Type: aws.String("SourceAuthType"), // Required
Resource: aws.String("String"),
},
Buildspec: aws.String("String"),
Location: aws.String("String"),
},
Tags: []*codebuild.Tag{
{ // Required
Key: aws.String("KeyInput"),
Value: aws.String("ValueInput"),
},
// More values...
},
TimeoutInMinutes: aws.Int64(1),
}
resp, err := svc.UpdateProject(params)
if err != nil {
// Print the error, cast err to awserr.Error to get the Code and
// Message from an error.
fmt.Println(err.Error())
return
}
// Pretty-print the response data.
fmt.Println(resp)
}
|
Q:
how to redirect a page from flash using php
Hello community, I have the following scenario.
I have a SWF with a button that sends a URL request to a PHP file, which is the following:
<?php
session_start();
if(isset($_SESSION['user'])){
header ('Location: http://mydomain.com/test/reroute.php');
exit();
}
?>
However, the PHP file does not reroute to the desired page. Am I missing something?
Thank you very much,
A:
To go to a page from flash, you need to do navigateToURL as follows:
navigateToURL(new URLRequest("http://mydomain.com/test/reroute.php"), "_self");
EDIT
To softcode the redirect URL, I suggest you do this (I suspect this is more relevant to what you're looking for):
private var loader:URLLoader=new URLLoader();
private function init():void {
//In the class initialize handler
loader.addEventListener(Event.COMPLETE, redirectReceived);
}
private function redirectReceived(e:Event):void {
if(StringUtil.trim(e.target.data).length > 0) {
navigateToURL(new URLRequest(e.target.data), "_self");
}
}
private function buttonClick(e:MouseEvent):void {
loader.load(new URLRequest("http://path_to_the_php_which_tells_you_where_to_redirect"));
}
And the PHP which tells you where to redirect simply echoes the target URL as plain text (a header('Location: ...') redirect would only affect the URLLoader's HTTP response, not the page hosting the SWF):
<?php
session_start();
if(isset($_SESSION['user'])){
    echo('http://mydomain.com/test/reroute.php');
}
?>
|
Cloud 9 (play)
Cloud 9 is a two-act play written by British playwright Caryl Churchill. It was workshopped with the Joint Stock Theatre Company in late 1978 and premiered at Dartington College of Arts, Devon, on 14 February 1979.
The two acts of the play form a contrapuntal structure. Act I is set in British colonial Africa in the Victorian era, and Act II is set in a London park in 1979. However, between the acts only twenty-five years pass for the characters. Each actor plays one role in Act I and a different role in Act II – the characters who appear in both acts are played by different actors in the first and second. Act I parodies the conventional comedy genre and satirizes Victorian society and colonialism. Act II shows what could happen when the restrictions of both the comic genre and Victorian ideology are loosened.
The play uses controversial portrayals of sexuality and obscene language, and establishes a parallel between colonial and sexual oppression. Its humour depends on incongruity and the carnivalesque, and helps to convey Churchill's political message about accepting people who are different and not dominating them or forcing them into particular social roles.
Characters
ROYAL COURT PRODUCTION
Act 1
Clive, a colonial administrator
Betty, his wife, played by a man
Joshua, his black servant, played by a white actor
Edward, his son, played by a woman
Victoria, his daughter, a ventriloquist's dummy
Maud, his mother-in-law
Ellen, Edward's governess
Harry Bagley, an explorer
Mrs. Saunders, a widow (played by the same actress who plays Ellen)
Act 2
Betty, now played by a woman (normally the same actress who plays Edward)
Edward, her son, now played by a man (normally the same actor who plays Betty)
Victoria, her daughter (normally played by the same actress who plays Maud)
Martin, Victoria's husband (normally played by the same actor who plays Harry)
Lin, a lesbian single mother (normally played by the same actress who plays Ellen/Mrs. Saunders)
Cathy, Lin's daughter, age 5, played by a man (normally the same actor who plays Clive)
Gerry, Edward's lover (normally played by the same actor who plays Joshua)
ROYAL COURT & NEW YORK PRODUCTIONS
Act 1
Clive, a colonial administrator
Betty, his wife, played by a man
Joshua, his black servant, played by a white actor
Edward, his son, played by a woman
Victoria, his daughter, a ventriloquist's dummy
Maud, his mother-in-law
Ellen, Edward's governess
Harry Bagley, an explorer
Mrs. Saunders, a widow (played by the same actress who plays Ellen)
Act 2
Betty, now played by a woman (normally the same actress who plays Ellen/Mrs. Saunders)
Edward, her son, now played by a man (normally the same actor who plays Clive)
Victoria, her daughter (normally played by the same actress who plays Edward)
Martin, Victoria's husband (normally played by the same actor who plays Harry)
Lin, a lesbian single mother (normally played by the same actress who plays Maud)
Cathy, Lin's daughter, age 5, played by a man (normally the same actor who plays Joshua)
Gerry, Edward's lover (normally played by the same actor who plays Betty)
Synopsis
Act I
Clive, a British colonial administrator, lives with his family, a governess, and a servant during turbulent times in Africa. The natives are rioting, and Mrs Saunders, a widow, comes to them to seek safety. Her arrival is soon followed by that of Harry Bagley, an explorer. Clive makes passionate advances to Mrs Saunders, while his wife Betty fancies Harry, who secretly has sex with Clive's son, Edward. The governess Ellen, who reveals herself to be a lesbian, is forced into marriage with Harry after his sexuality is discovered and condemned by Clive. Act I ends with the wedding celebrations: in the final scene, Clive gives a speech while Joshua, watched by Edward (who does nothing), aims a rifle at him and fires as the scene cuts to a blackout.
Act II
Although Act II is set in 1979, some of the characters of Act I reappear – for them, only 25 years have passed. Betty has left Clive, her daughter Victoria is now married to the overbearing Martin, and Edward has an openly gay relationship with Gerry. Victoria, upset and distant from Martin, starts a lesbian relationship with Lin. When Gerry leaves Edward, Edward, who discovers he is in fact bisexual, moves in with his sister and Lin. The three of them have a drunken ceremony in which they call up the Goddess, after which characters from Act I begin appearing. Act II has a looser structure, and Churchill played around with the ordering of the scenes. The final scene shows that Victoria has left Martin for a polyamorous relationship with Edward and Lin, with the three sharing the care of her son, Tommy. Gerry and Edward are on good terms again, and Betty becomes friends with Gerry, who tells her about Edward's sexuality.
Interpretations and observations
Act I
Act I of Cloud 9 invites the audience to engage with Britain's colonial past, but does so by challenging 'the preconceived notions held by the audience in terms of gender and sexuality'. Churchill subverts gender and racial stereotypes through cross-gender and cross-racial casting: Betty is played by a man in Act I but by a woman in Act II; Joshua is played by a white actor; and Edward is played by a woman in Act I and by a man in Act II. Churchill deliberately uses this cross-gender, cross-racial, and cross-age casting to unsettle the audience's expectations.
In the introduction to the play, Churchill explains why Betty is played by a man in the first act: "She wants to be what men want her to be ... Betty does not value herself as a woman." Michael Patterson confirms this, writing that "Betty is played by a man in order to show how femininity is an artificial and imposed construct". James Harding suggests that by cross-casting Betty and Edward in Act I, Churchill is also playing it safe: It makes same-sex relationships visibly heterosexual and normative.
The black servant, Joshua, is played by a white man for similar reasons. He says, "My skin is black, but oh my soul is white. I hate my tribe. My master is my light"; Amelia Howe Kritzer argues that "the reversal exposes the rupture in Joshua's identity caused by his internalization of colonial values". Joshua does not identify with his "own" people; in Act I, scene 3, Mrs. Saunders asks if he doesn't mind beating his own people, and Joshua replies that they are not his people and that they are "bad."
Act II
The second act is set in London in 1979, but for the characters only twenty-five years have passed. Churchill explains her reason for this in the introduction: "The first act, like the society it shows, is male-dominated and firmly structured. In the second act, more energy comes from the women and the gays." In Act II, British colonial oppression remains present, this time in the armed presence in Northern Ireland. Michael Patterson writes that "the actors ... established a 'parallel between colonial and sexual oppression,' showing how the British occupation of Africa in the nineteenth century and its post-colonial presence in Northern Ireland relate to the patriarchal values of society." Churchill shows the audience different views of oppression, both colonial and sexual, and throws these social constructs into relief by linking the two periods across an unnatural time gap. Amelia Howe Kritzer argues that "Churchill remained close to the Brechtian spirit of encouraging the audience to actively criticize institutions and ideologies they have previously taken for granted".
There is a great deal of difference between the two acts: Act II contains much more sexual freedom for women, whereas in Act I the men dictate the relationships. Act II "focuses on changes in the structure of power and authority, as they affect sex and relationships," from the male-dominated structure in the first act. Churchill writes that she "explored Genet's idea that colonial oppression and sexual oppression are similar." She essentially uses the play as a social arena to explore "the Victorian origins of contemporary gender definitions and sexual attitudes, recent changes ... and some implications of these changes."
References
External links
Category:1979 plays
Category:Plays by Caryl Churchill
Category:West End plays
Category:Off-Broadway plays
Category:LGBT-related plays |
/*
* Copyright 2002-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.security.config.annotation.web.configurers;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.junit.Rule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.security.config.annotation.ObjectPostProcessor;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.test.SpringTestRule;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.security.web.access.ExceptionTranslationFilter;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.web.accept.ContentNegotiationStrategy;
import org.springframework.web.context.request.NativeWebRequest;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.redirectedUrl;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
/**
* Tests for {@link ExceptionHandlingConfigurer}
*
* @author Rob Winch
* @author Josh Cummings
*/
public class ExceptionHandlingConfigurerTests {
@Rule
public final SpringTestRule spring = new SpringTestRule();
@Autowired
MockMvc mvc;
@Test
public void configureWhenRegisteringObjectPostProcessorThenInvokedOnExceptionTranslationFilter() {
this.spring.register(ObjectPostProcessorConfig.class, DefaultSecurityConfig.class).autowire();
verify(ObjectPostProcessorConfig.objectPostProcessor).postProcess(any(ExceptionTranslationFilter.class));
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsApplicationXhtmlXmlThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.APPLICATION_XHTML_XML))
.andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsImageGifThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.IMAGE_GIF)).andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsImageJpgThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.IMAGE_JPEG)).andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsImagePngThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.IMAGE_PNG)).andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsTextHtmlThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.TEXT_HTML)).andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsTextPlainThenRespondsWith302() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.TEXT_PLAIN)).andExpect(status().isFound());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsApplicationAtomXmlThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.APPLICATION_ATOM_XML))
.andExpect(status().isUnauthorized());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsApplicationFormUrlEncodedThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.APPLICATION_FORM_URLENCODED))
.andExpect(status().isUnauthorized());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsApplicationJsonThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON))
.andExpect(status().isUnauthorized());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsApplicationOctetStreamThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.APPLICATION_OCTET_STREAM))
.andExpect(status().isUnauthorized());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsMultipartFormDataThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.MULTIPART_FORM_DATA))
.andExpect(status().isUnauthorized());
}
// SEC-2199
@Test
public void getWhenAcceptHeaderIsTextXmlThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.TEXT_XML)).andExpect(status().isUnauthorized());
}
// gh-4831
@Test
public void getWhenAcceptIsAnyThenRespondsWith401() throws Exception {
this.spring.register(DefaultSecurityConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, MediaType.ALL)).andExpect(status().isUnauthorized());
}
@Test
public void getWhenAcceptIsChromeThenRespondsWith302() throws Exception {
this.spring.register(DefaultSecurityConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT,
"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"))
.andExpect(status().isFound());
}
@Test
public void getWhenAcceptIsTextPlainAndXRequestedWithIsXHRThenRespondsWith401() throws Exception {
this.spring.register(HttpBasicAndFormLoginEntryPointsConfig.class).autowire();
this.mvc.perform(get("/").header("Accept", MediaType.TEXT_PLAIN).header("X-Requested-With", "XMLHttpRequest"))
.andExpect(status().isUnauthorized());
}
@Test
public void getWhenCustomContentNegotiationStrategyThenStrategyIsUsed() throws Exception {
this.spring.register(OverrideContentNegotiationStrategySharedObjectConfig.class, DefaultSecurityConfig.class)
.autowire();
this.mvc.perform(get("/"));
verify(OverrideContentNegotiationStrategySharedObjectConfig.CNS, atLeastOnce())
.resolveMediaTypes(any(NativeWebRequest.class));
}
@Test
public void getWhenUsingDefaultsAndUnauthenticatedThenRedirectsToLogin() throws Exception {
this.spring.register(DefaultHttpConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, "bogus/type"))
.andExpect(redirectedUrl("http://localhost/login"));
}
@Test
public void getWhenDeclaringHttpBasicBeforeFormLoginThenRespondsWith401() throws Exception {
this.spring.register(BasicAuthenticationEntryPointBeforeFormLoginConfig.class).autowire();
this.mvc.perform(get("/").header(HttpHeaders.ACCEPT, "bogus/type")).andExpect(status().isUnauthorized());
}
@Test
public void getWhenInvokingExceptionHandlingTwiceThenOriginalEntryPointUsed() throws Exception {
this.spring.register(InvokeTwiceDoesNotOverrideConfig.class).autowire();
this.mvc.perform(get("/"));
verify(InvokeTwiceDoesNotOverrideConfig.AEP).commence(any(HttpServletRequest.class),
any(HttpServletResponse.class), any(AuthenticationException.class));
}
@EnableWebSecurity
static class ObjectPostProcessorConfig extends WebSecurityConfigurerAdapter {
static ObjectPostProcessor<Object> objectPostProcessor = spy(ReflectingObjectPostProcessor.class);
@Override
protected void configure(HttpSecurity http) throws Exception {
// @formatter:off
http
.exceptionHandling();
// @formatter:on
}
@Bean
static ObjectPostProcessor<Object> objectPostProcessor() {
return objectPostProcessor;
}
}
static class ReflectingObjectPostProcessor implements ObjectPostProcessor<Object> {
@Override
public <O> O postProcess(O object) {
return object;
}
}
@EnableWebSecurity
static class DefaultSecurityConfig {
@Bean
InMemoryUserDetailsManager userDetailsManager() {
// @formatter:off
return new InMemoryUserDetailsManager(User.withDefaultPasswordEncoder()
.username("user")
.password("password")
.roles("USER")
.build()
);
// @formatter:on
}
}
@EnableWebSecurity
static class HttpBasicAndFormLoginEntryPointsConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth
.inMemoryAuthentication()
.withUser("user").password("password").roles("USER");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
// @formatter:off
http
.authorizeRequests()
.anyRequest().authenticated()
.and()
.httpBasic()
.and()
.formLogin();
// @formatter:on
}
}
@EnableWebSecurity
static class OverrideContentNegotiationStrategySharedObjectConfig extends WebSecurityConfigurerAdapter {
static ContentNegotiationStrategy CNS = mock(ContentNegotiationStrategy.class);
@Bean
static ContentNegotiationStrategy cns() {
return CNS;
}
}
@EnableWebSecurity
static class DefaultHttpConfig extends WebSecurityConfigurerAdapter {
}
@EnableWebSecurity
static class BasicAuthenticationEntryPointBeforeFormLoginConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
// @formatter:off
http
.authorizeRequests()
.anyRequest().authenticated()
.and()
.httpBasic()
.and()
.formLogin();
// @formatter:on
}
}
@EnableWebSecurity
static class InvokeTwiceDoesNotOverrideConfig extends WebSecurityConfigurerAdapter {
static AuthenticationEntryPoint AEP = mock(AuthenticationEntryPoint.class);
@Override
protected void configure(HttpSecurity http) throws Exception {
// @formatter:off
http
.authorizeRequests()
.anyRequest().authenticated()
.and()
.exceptionHandling()
.authenticationEntryPoint(AEP).and()
.exceptionHandling();
// @formatter:on
}
}
}
|
While Israeli politicians flip the finger at the world to score points with right-wing voters
at home, they are alienating Israel’s most important, loyal allies: Progressive U.S. Jews.
Lately, American friends are asking me whether Israeli leaders are thinking straight, whether they realize how
unreasonable their statements sound here in Washington, and how odd some of their policies seem.
The holiday of Shavuot, which begins the night of Tuesday, June 3rd, and continues through Thursday, celebrates the
giving of the Torah at Mount Sinai. Shavuot completes the cycle that celebrates the origin of the Jews as a people,
starting with Passover. The holiday of Shavuot, once an agricultural festival, has come to represent the importance
of the Torah, and our understanding that freedom is not complete until there is rule of law to guide us.
This week, Alpher discusses the significance of the Pope's visit to the Holy Land;
whether the June 10 presidential election is something to get excited about or just another ho-hum affair;
and whether we are about to witness a new Israeli unilateral initiative.
We will be marching with the group of progressive, pro-Israel organizations committed to Israel's core values, as
laid out in its Declaration of Independence 66 years ago. Our organizations support and advance the ideals of
democracy, freedom, justice, peace and human rights and will be marching loud and proud on June 1.
Come out and make your voice heard! Don’t miss this opportunity to show the world that there is a strong and
vibrant progressive voice supporting Israel. Please register at this
link to receive our meeting point for the parade.
This weekend, Pope Francis will be visiting the Middle East on a historic trip. Many people the world over have
been impressed by the humility of this pope, by his loving mien, and his ethical stances. We also are encouraged by
his friendship with Rabbi Abraham Skorka and Omar Abboud - leaders of the Jewish and Muslim communities in
Argentina - and his determination to demonstrate the normalcy of interfaith friendship.
Following the recent increase in “Price Tag” hate-crimes in Israel, including the
desecration of Christian and Muslim houses of worship, we called on American friends of Israel
to join APN in urging the Israeli authorities to seriously confront this ugly phenomenon.
This week, Alpher discusses the new "iNakba" app; last week's visit by PM Binyamin
Netanyahu to Japan and of President Shimon Peres to Norway; what Israeli chief negotiator Tzipi Livni's meeting
in London with Palestinian leader Mahmoud Abbas against a backdrop of surprising progress in the Palestinian
drive to form a Fateh-Hamas unity government tells us about the chances for reviving the peace process; and the
level of intrigue over the Knesset electing a new president to replace Shimon Peres.
Lag B'Omer - the 33rd day between Pesach and Shavuot - is a minor Jewish holiday that celebrates (among other
things) the cessation of a divinely-sent plague that resulted from people not showing one another adequate respect.
It is celebrated with bonfires, and great joy. Let us also look forward to a cessation of all the ills that Israel
suffers from, and look forward to joy, peace and security.
This week, Alpher discusses Martin Indyk's remarks about his impressions from the now suspended Israeli-Palestinian
peace process; accusations of Israeli spying on the United States; and why the Syrian Army's recapturing rebel
strongholds in the city of Homs as rebels withdrew peaceably under a broader agreement is significant. |
An idler's miscellany of compendious amusements
Entertainment
High Noon unfolds in real time — the running time of the story closely parallels the running time of the film itself. Producer Stanley Kramer said that the filmmakers hoped this would “create a sense of urgency as the noon hour approached.” Director Fred Zinnemann wrote the word CLOCK next to many scenes in his script, and he prepared a list of inserts in which clocks would be prominently visible.
An insert for Scene 294 was never shot — it would have started on a pendulum and panned up to show a clock with no hands, superimposed on a closeup of Gary Cooper’s Will Kane. Zinnemann said he’d got the idea from a handless clock he’d seen in front of a funeral home on Sunset Boulevard. He said it “would have intensified the feeling of panic.”
The earliest known film comedy, Louis Lumière’s 1895 L’Arroseur arrosé (“The Waterer Watered”) is also one of the first film narratives of any kind — before this, movies tended simply to demonstrate the medium, depicting a sneeze, for example, or the arrival of a train.
This was also the first film with a dedicated poster — making this simple 45-second story the forerunner of all modern film comedies.
When Elijah Bond, patentee of the Ouija board, died in 1921, he was buried in an unmarked grave, and as time passed its location was forgotten. In 1992, Robert Murch, chairman of the Talking Board Historical Society, set out to find it, and after a 15-year search he did — Bond had been buried with his wife’s family in Baltimore rather than with his own in Dorsey, Md.
Murch got permission to install a new headstone and raised the necessary funds through donations, and today Bond has a headstone with a simple inscription on the front and a Ouija board on the back — in case anyone wants to talk.
Recognize this locomotive? You’ve almost certainly seen it before: Built in 1891, “Sierra No. 3” was adopted by Hollywood in 1948 and became “the most photographed locomotive in the world,” appearing in The Red Glove, The Terror, The Virginian, The Texan, Young Tom Edison, Sierra Passage, Wyoming Mail, High Noon, The Cimarron Kid, Kansas Pacific, The Moonlighter, Apache, Rage at Dawn, The Return of Jack Slade, Texas Lady, The Big Land, Terror in a Texas Town, Man of the West, Face of a Fugitive, The Outrage, The Rare Breed, The Great Race, The Perils of Pauline, Finian’s Rainbow, A Man Called Gannon, The Great Bank Robbery, Joe Hill, The Great Northfield Minnesota Raid, Oklahoma Crude, Nickelodeon, Bound for Glory, The Apple Dumpling Gang Rides Again, The Long Riders, Pale Rider, Blood Red, Back to the Future Part III, Unforgiven, and Bad Girls.
Gary Cooper alone starred in four movies with it, including High Noon; Clint Eastwood, who appeared with it in Rawhide, Pale Rider, and Unforgiven, said it was “like a treasured old friend.” TV shows:
The Lone Ranger, Tales of Wells Fargo, Casey Jones, Rawhide, Overland Trail, Lassie, Death Valley Days, The Raiders, Petticoat Junction, The Wild Wild West, The Big Valley, The Legend of Jesse James, Scalplock, Iron Horse, Cimarron Strip, Dundee and the Culhane, The Man From U.N.C.L.E., Ballad of the Iron Horse, Gunsmoke, Bonanza, The Great Man’s Whiskers, Inventing of America, Little House on the Prairie, Law of the Land, A Woman Called Moses, Lacy and the Mississippi Queen, Kate Bliss and the Ticker Tape Kid, The Night Rider, The Last Ride of the Dalton Gang, Belle Starr, East of Eden, Father Murphy, The A-Team, Bonanza: The Next Generation, The Adventures of Brisco County, Jr., and Doctor Quinn, Medicine Woman.
William L. Withuhn, former transportation history curator at the Smithsonian Institution, wrote, “Sierra Railway No. 3 has appeared in more motion pictures, documentaries, and television productions than any other locomotive. It is undisputedly the image of the archetypal steam locomotive that propelled the USA from the 19th century into the 20th.”
Since much of the Netherlands is below sea level, Dutch farmers needed a way to leap waterways to reach their various plots of land. Over time this evolved into a competitive sport, known as fierljeppen (“far leaping”) in which each contestant sprints to the water, seizes a 10-meter pole, and climbs it as it lurches forward over the channel. The winner is the one who lands farthest from his starting point in the sand bed on the opposite side.
The current record holder is Jaco de Groot of Utrecht, who leapt, clambered, swayed, and fell 22.21 meters in August.
In the secret word code Harry Houdini shared with his wife Bess, each of the first 10 letters of the alphabet is represented by both a word and a number, so BAD, for example, could be represented by “Answer, Pray, Now.” Letters beyond the 10th would be represented with two digits; for example, S, the 19th letter, could be indicated by 1 and 9, “Pray-Look.”
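The scheme is mechanical enough to sketch in a few lines of code. Here is a minimal Python sketch; the full ten-word list is the commonly published version of the Houdinis' code (an assumption on my part, since this account only names "Pray", "Answer", "Now", and "Look"), as is the use of "be quick" for the zero digit.
# A sketch of the Houdini word code described above: each digit maps to a
# word, and each letter maps to its position in the alphabet (A=1 ... Z=26).
# The word list and the zero-digit convention are assumptions; the account
# above only quotes "Pray" (1), "Answer" (2), "Now" (4), and "Look" (9).
WORDS = ["pray", "answer", "say", "now", "tell",
         "please", "speak", "quickly", "look", "be quick"]

def encode_letter(letter):
    """One word for letters A-J, a hyphenated word pair beyond the 10th."""
    n = ord(letter.upper()) - ord("A") + 1  # A=1, B=2, ...
    if n <= 10:
        return WORDS[n - 1]
    tens, units = divmod(n, 10)  # e.g. S=19 -> (1, 9)
    if units == 0:               # assume "be quick" doubles as the zero digit
        units = 10
    return WORDS[tens - 1] + "-" + WORDS[units - 1]

def encode_word(word):
    return ", ".join(encode_letter(c) for c in word)

print(encode_word("BAD"))       # answer, pray, now
print(encode_letter("S"))       # pray-look
print(encode_word("BELIEVE"))   # the word sequence behind the Ford message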
After Houdini died in 1926, Bess waited for a message in this code, according to an agreement between them. In 1929, psychic Arthur Ford claimed to have received it: “Rosabelle … answer, tell, pray-answer, look, tell, answer-answer, tell.”
“Rosabelle” is a song that Bess used to sing. The rest, decoded, spells out BELIEVE. At first Bess took this as a genuine message from her husband, but skeptics pointed out that by this time she had revealed the code to Harold Kellock, who had published it in a biography that had appeared the previous year. So Ford could simply have learned the code and prepared the message himself. Bess repudiated Ford’s claim and in 1936 stopped attending séances. She said, “Ten years is long enough to wait for any man.”
“Houdini never said he could come back,” observed Henry Muller, curator of the Houdini Magical Hall of Fame. “He just thought that if anybody could do it, it would be him.”
“Serious sport has nothing to do with fair play. It is bound up with hatred, jealousy, boastfulness, disregard of all rules and sadistic pleasure in witnessing violence: in other words it is war minus the shooting.”
— George Orwell, “The Sporting Spirit,” 1945
“[It is] to be utterly abjected of al noble men in likewise, footballe, wherein is nothinge but beastly furie and extreme violence whereof procedeth hurte and consequently rancour and malice do remaine with them that be wounded wherefore it is to be put in perpetuell silence.”
— Sir Thomas Elyot, The Governour, 1531
“For as concerning football playing, I protest unto you it may rather be called a freendly kinde of fight, then a play or recreation; A bloody and murthering practise, then a felowly sporte or pastime. … and hereof groweth envie, malice, rancour, cholor, hatred, displeasure, enmitie, and what not els: and sometimes fighting, brawling, contention, quarrel picking, murther, homicide, and great effusion of blood, as experience dayly teacheth.”
— Philip Stubbes, The Anatomie of Abuses, 1583
Chess boxing has evolved from a performance art piece to a serious worldwide professional sport. Two competitors engage in six rounds of chess and five rounds of boxing, switching between the two every three minutes. A player can win by knockout, technical knockout, or checkmate, or if his opponent resigns, exceeds the time limit, or is disqualified. If both contests end in a draw, the player of the black pieces wins.
In football tennis, you have to return the ball over the net without using your hands. Up to three players can play on each side, with corresponding rules regarding the number of touches and bounces allowed on each return. This sport is growing too — the first rules were written in 1940, and it held its 11th world championship in 2014. Now we need a way to combine all four of these.
|
//
// NSObject+BBlock.m
// BBlock
//
// Created by David Keegan on 5/31/12.
// Copyright 2012 David Keegan. All rights reserved.
//
#import "NSObject+BBlock.h"
#import <objc/runtime.h>
static char BBKVOObjectKey;
@interface BBKVOObject : NSObject
@end
@implementation BBKVOObject{
NSObjectBBlock _block;
}
- (instancetype)initWithBlock:(NSObjectBBlock)block{
if((self = [super init])){
_block = [block copy];
}
return self;
}
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context{
if(_block){
_block(keyPath, object, change);
}
}
@end
@implementation NSObject(BBlock)
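// Registers `block` as a key-value observer for `keyPath` on the receiver and
// returns an opaque token string that can later be passed to
// -removeObserverForToken: to unregister the observer.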
- (NSString *)addObserverForKeyPath:(NSString *)keyPath options:(NSKeyValueObservingOptions)options block:(NSObjectBBlock)block{
NSAssert([keyPath length], @"Invalid keyPath");
BBKVOObject *kvoObject = [[BBKVOObject alloc] initWithBlock:block];
[self addObserver:kvoObject forKeyPath:keyPath options:options context:nil];
NSString *identifier = [[NSProcessInfo processInfo] globallyUniqueString];
NSMutableDictionary *observers = objc_getAssociatedObject(self, &BBKVOObjectKey) ?: [NSMutableDictionary dictionary];
[observers setObject:@{@"observer":kvoObject, @"keypath":keyPath} forKey:identifier];
objc_setAssociatedObject(self, &BBKVOObjectKey, observers, OBJC_ASSOCIATION_RETAIN);
return identifier;
}
- (void)removeObserverForToken:(NSString *)identifier{
NSMutableDictionary *observers = objc_getAssociatedObject(self, &BBKVOObjectKey);
if(observers){
if(observers[identifier]){
[self removeObserver:observers[identifier][@"observer"] forKeyPath:observers[identifier][@"keypath"]];
}
[observers removeObjectForKey:identifier];
objc_setAssociatedObject(self, &BBKVOObjectKey, observers, OBJC_ASSOCIATION_RETAIN);
}
}
- (void)removeObserverBlocksForKeyPath:(NSString *)keyPath{
NSMutableDictionary *observers = objc_getAssociatedObject(self, &BBKVOObjectKey);
if(observers){
for(NSString *identifier in [observers allKeys]){
if([observers[identifier][@"keypath"] isEqualToString:keyPath]){
[self removeObserver:observers[identifier][@"observer"] forKeyPath:observers[identifier][@"keypath"]];
[observers removeObjectForKey:identifier];
}
}
objc_setAssociatedObject(self, &BBKVOObjectKey, observers, OBJC_ASSOCIATION_RETAIN);
}
}
- (void)changeValueWithKey:(NSString *)key changeBlock:(void(^)())changeBlock{
[self willChangeValueForKey:key];
changeBlock();
[self didChangeValueForKey:key];
}
@end
|
// Copyright (C) 2019 Joseph Artsimovich <joseph.artsimovich@gmail.com>, 4lex4 <4lex49@zoho.com>
// Use of this source code is governed by the GNU GPLv3 license that can be found in the LICENSE file.
#ifndef SCANTAILOR_PAGE_SPLIT_THUMBNAIL_H_
#define SCANTAILOR_PAGE_SPLIT_THUMBNAIL_H_
#include <QPixmap>
#include <memory>
#include "PageLayout.h"
#include "ThumbnailBase.h"
class QPointF;
class QSizeF;
class QPolygonF;
class ThumbnailPixmapCache;
class ImageId;
class ImageTransformation;
namespace page_split {
class Thumbnail : public ThumbnailBase {
public:
Thumbnail(std::shared_ptr<ThumbnailPixmapCache> thumbnailCache,
const QSizeF& maxSize,
const ImageId& imageId,
const ImageTransformation& xform,
const PageLayout& layout,
bool leftHalfRemoved,
bool rightHalfRemoved);
void prePaintOverImage(QPainter& painter,
const QTransform& imageToDisplay,
const QTransform& thumbToDisplay) override;
private:
QPointF subPageCenter(const QPolygonF& leftPage,
const QPolygonF& rightPage,
const QTransform& imageToDisplay,
int subpageIdx);
PageLayout m_layout;
QPixmap m_trashPixmap;
bool m_leftHalfRemoved;
bool m_rightHalfRemoved;
};
} // namespace page_split
#endif // ifndef SCANTAILOR_PAGE_SPLIT_THUMBNAIL_H_
|
Q:
Simpler notation for enumerated lists
I'm creating documents with many multilevel enumerated lists and I'm looking for ways to simplify the notation.
Ideally I would like to write something like the following:
\documentclass{article}
\usepackage{enumitem}
\begin{document}
Some text here
\i first item
\i second item
more text
\i third item
\par third item continues
\i fourth item
\ii fifth item
\ii sixth item
\ii seventh item
more text
\ii eighth item
\i ninth item
\end{document}
And I would like it to be translated to:
\documentclass{article}
\usepackage{enumitem}
\begin{document}
Some text here
\begin{enumerate}[resume=level1]
\item\setcounter{enumii}{0} first item
\item\setcounter{enumii}{0} second item
\end{enumerate}
more text
\begin{enumerate}[resume=level1]
\item\setcounter{enumii}{0} third item
\par third item continues
\item\setcounter{enumii}{0} fourth item
\begin{enumerate}[resume=level2]
\item fifth item
\item sixth item
\item seventh item
\end{enumerate}
\end{enumerate}
more text
\begin{enumerate}[resume=level1]
\item[]
\begin{enumerate}[resume=level2]
\item eighth item
\end{enumerate}
\item\setcounter{enumii}{0} ninth item
\end{enumerate}
\end{document}
More specifically:
If \i is not preceded by another item, then \begin{enumerate}[resume=level1] is added before.
If \i is followed by a paragraph that does not begin with \i or \ii, then \end{enumerate} is added after.
If a paragraph break is marked with \par instead of two line breaks, then it is included in the previous item and the list is not terminated.
Level 2 items \ii are preceded or followed by \begin{enumerate}[resume=level2] and \end{enumerate} accordingly.
If \ii is not preceded by a level 1 item or a level 2 item, then \begin{enumerate}[resume=level1] and a dummy level 1 item \item[] are also added.
\ii should resume from the previous level 2 numbering if there has been no level 1 item in between.
Would it be possible with reasonable effort to define macros that would achieve this?
A:
Here's a LuaLaTeX-based solution. Its three main working assumptions are (A) there are two levels of enumerated lists, (B) the strings \i and \ii occur at the start of a line, while possibly being preceded by whitespace, and (C) any lines with \par instructions are not preceded by all-blank lines. The third assumption is not (at least not explicitly) in the OP's list of working assumptions; however, without it the already-complicated code would become even more complicated.
The bulk of the work is performed by a Lua function called FancyEnum. Two utility LaTeX macros, called \FancyEnumOn and \FancyEnumOff, serve to activate and deactivate the Lua function's operation on the input stream. For sure, you should run \FancyEnumOff ahead of verbatim material that features the strings \i and \ii.
Update, after receiving more information from the OP about the purpose of the \setcounter{enumii}{0} directives. (Their purpose was to make sure that a "new" level-2 list -- where "new" means that there's been at least one intervening level-1 \item directive -- starts at 0.) The purpose is much more easily achieved by running \begin{enumerate}[series=level2] instead of \begin{enumerate}[resume=level2] whenever a new level-2 list begins. I've updated the Lua code accordingly.
The following screenshot shows the result of the operations of the Lua function on (a slightly modified form of) the OP's suggested example on the left and the corresponding hard-coded enumerated lists on the right.
% !TEX TS-program = lualatex
\documentclass{article}
\usepackage{enumitem}
\usepackage{multicol} % to create two-column output
\usepackage{luacode} % for 'luacode*' environment
\begin{luacode*}
-- Begin by defining 2 Boolean variables. They will be set to 'true' if
-- LaTeX is in a level-1 or level-2 enumerated environment, respectively.
local in_enumi = false
local in_enumii = false
-- The Lua function 'FancyEnum' does most of the work.
function FancyEnum ( s )
-- Input line starts with '\i' (possibly preceded by whitespace):
if s:find ( "^%s-\\i " ) then
if in_enumii==true then -- need to fall back one list level
s = s:gsub ( "^%s-\\i " , "\\end{enumerate}\\item " )
in_enumii=false
elseif in_enumi==true then -- continue at level 1
s = s:gsub ( "^%s-\\i " , "\\item " )
else -- neither in_enumi nor in_enumii are true -- start a level-1 list
s = s:gsub ( "^%s-\\i " , "\\begin{enumerate}[resume=level1] \\item " )
in_enumi=true
end
-- Input line starts with '\ii' (possibly preceded by whitespace):
elseif s:find ( "^%s-\\ii " ) then
if in_enumii==true then -- continue at level 2
s = s:gsub ( "^%s-\\ii " , "\\item " )
elseif in_enumi==true then -- Moving from level 1 to level 2. Hence,
-- we use '[series=level2]' rather than '[resume=level2]'
in_enumii=true
s = s:gsub ( "^%s-\\ii " , "\\begin{enumerate}[series=level2]\\item " )
else -- jumping straight to level-2 list
in_enumi=true
in_enumii=true
s = s:gsub ( "^%s-\\ii " , "\\begin{enumerate}[resume=level1]\\item[]" ..
"\\begin{enumerate}[resume=level2]\\item " )
end
-- Input line is all-blank. Terminate 'enumerate' if 'in_enumi' and/or
-- 'in_enumii' are 'true'.
elseif s=="" then
if in_enumii==true then -- terminate two 'enumerate' levels
s = "\\end{enumerate}\\end{enumerate}"
in_enumii=false
in_enumi=false
elseif in_enumi==true then -- terminate one 'enumerate' level
s = "\\end{enumerate}"
in_enumi=false
end
end
-- Place 's' back on the input stream, for further processing by LaTeX.
return s
end -- end of Lua function
\end{luacode*}
%% Two LaTeX utility macros that activate and deactivate 'FancyEnum':
\newcommand\FancyEnumOn{\directlua{luatexbase.add_to_callback(
"process_input_buffer", FancyEnum, "FancyEnum")}}
\newcommand\FancyEnumOff{\directlua{luatexbase.remove_from_callback(
"process_input_buffer", "FancyEnum")}}
\begin{document}
\begin{multicols}{2}
\FancyEnumOn % activate the Lua function 'FancyEnum'
\verb+\FancyEnum+ on
\i first item
\i second item
more text
\i third item
\par third item continues % note: no blank line before this line
\i fourth item
\ii fifth item
\ii sixth item
\par sixth item continues % note: no blank line before this line
\i seventh item
\ii eighth item % note: no blank line before this line
more text
\ii ninth item
\i tenth item
\FancyEnumOff % deactivate 'FancyEnum'
\columnbreak % optional
\verb+\FancyEnum+ off (explicit lists)
\begin{enumerate}[series=level1] % not [resume=level1]
\item first item
\item second item
\end{enumerate}
more text
\begin{enumerate}[resume=level1]
\item third item
third item continues
\item fourth item
\begin{enumerate}[series=level2]
\item fifth item
\item sixth item
sixth item continues
\end{enumerate}
\item seventh item
\begin{enumerate}[series=level2]
\item eighth item
\end{enumerate}
\end{enumerate}
more text
\begin{enumerate}[resume=level1]
\item[]
\begin{enumerate}[resume=level2]
\item ninth item
\end{enumerate}
\item tenth item
\end{enumerate}
\end{multicols}
\end{document}
A:
I think that you want to use the easylist package. It does not quite give you all of the features that you want but it is pretty close. For example, the code:
\documentclass{article}
\usepackage[sharp]{easylist}% sharp => # is used for \item
\begin{document}
Some text here
\begin{easylist}[articletoc]
# first item
# second item
more text
# third item
\par third item continues
# fourth item
## fifth item
## sixth item
## seventh item
more text
## eighth item
# ninth item
\end{easylist}
\end{document}
produces
There are many extra options that can be controlled using \ListProperties, such as Hang=<n> for setting hanging indents. The manual is very readable.
EDIT
The following might be close enough to what you want. You can disable the counter for the first list level using
\ListProperties(
Hang=true,
Hide1=1,
Hide2=1,
Hide3=1,
Hide4=1,
Progressive*=2em,
Style1*=\theblank,
Indent1=0em,
)
As a result, the following code
\begin{easylist}
# Some text here
## first item
## second item
# more text
## third item
\par third item continues
## fourth item
### fifth item
### sixth item
### seventh item
# more text
### eighth item
## ninth item
\end{easylist}
produces
So every line has at least one #, corresponding to not being inside a list, whereas the lines with ## are inside the first list (level 1), those with ### are at level 2 and so on.
Here is the full code:
\documentclass{article}
\usepackage[sharp]{easylist}% sharp => # is used for \item
\begin{document}
\ListProperties(
Hang=true,
Hide1=1, % hide counter 1 on level 1
Hide2=1, % hide counter 1 on level 2
Hide3=1, % hide counter 1 on level 3
Hide4=1, % hide counter 1 on level 4
Progressive*=2em,
Style1*=\theblank,
Indent1=0em,
)
\begin{easylist}
# Some text here
## first item
## second item
# more text
## third item
\par third item continues
## fourth item
### fifth item
### sixth item
### seventh item
# more text
### eighth item
## ninth item
\end{easylist}
\end{document}
|