We provide `distance` and `radial_velocity` to prepare the data for reflex correction, which we explain below.
```python
type(skycoord)
```

*From `03_motion.ipynb` in `abostroem/AstronomicalData` (MIT license).*
The result is an Astropy `SkyCoord` object ([documentation here](https://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.html#astropy.coordinates.SkyCoord)), which provides `transform_to`, so we can transform the coordinates to other frames.
```python
import gala.coordinates as gc

transformed = skycoord.transform_to(gc.GD1Koposov10)
type(transformed)
```
The result is another `SkyCoord` object, now in the `GD1Koposov10` frame. The next step is to correct the proper motion measurements from Gaia for reflex due to the motion of our solar system around the Galactic center. When we created `skycoord`, we provided `distance` and `radial_velocity` as arguments, which means we...
```python
gd1_coord = gc.reflex_correct(transformed)
type(gd1_coord)
```
The result is a `SkyCoord` object that contains

* The transformed coordinates as attributes named `phi1` and `phi2`, which represent right ascension and declination in the `GD1Koposov10` frame.
* The transformed and corrected proper motions as `pm_phi1_cosphi2` and `pm_phi2`.

We can select the coordinates like this:
```python
phi1 = gd1_coord.phi1
phi2 = gd1_coord.phi2
```
And plot them like this:
```python
plt.plot(phi1, phi2, 'ko', markersize=0.1, alpha=0.2)
plt.xlabel('ra (degree GD1)')
plt.ylabel('dec (degree GD1)');
```
Remember that we started with a rectangle in GD-1 coordinates. When transformed to ICRS, it's a non-rectangular polygon. Now that we have transformed back to GD-1 coordinates, it's a rectangle again.

## Pandas DataFrame

At this point we have three objects containing different subsets of the data.
```python
type(results)
type(gaia_data)
type(gd1_coord)
```
On one hand, this makes sense, since each object provides different capabilities. But working with three different object types can be awkward. It will be more convenient to choose one object and get all of the data into it. We'll use a Pandas `DataFrame`, for two reasons:

1. It provides capabilities that are pretty much...
```python
import pandas as pd

df = results.to_pandas()
df.shape
```
`DataFrame` provides `shape`, which shows the number of rows and columns.It also provides `head`, which displays the first few rows. It is useful for spot-checking large results as you go along.
```python
df.head()
```
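The distinction between `shape` and `head` can be seen on a toy `DataFrame` (standalone example, not the Gaia results):

```python
import pandas as pd

# a small stand-in DataFrame
demo = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, 6.0]})

print(demo.shape)    # attribute, no parentheses -> (3, 2)
print(demo.head(2))  # method call, shows the first two rows
```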
**Python detail:** `shape` is an attribute, so we can display its value without calling it as a function; `head` is a function, so we need the parentheses.

Now we can extract the columns we want from `gd1_coord` and add them as columns in the `DataFrame`. `phi1` and `phi2` contain the transformed coordinates.
```python
df['phi1'] = gd1_coord.phi1
df['phi2'] = gd1_coord.phi2
df.shape
```
`pm_phi1_cosphi2` and `pm_phi2` contain the components of proper motion in the transformed frame.
```python
df['pm_phi1'] = gd1_coord.pm_phi1_cosphi2
df['pm_phi2'] = gd1_coord.pm_phi2
df.shape
```
**Detail:** If you notice that `SkyCoord` has an attribute called `proper_motion`, you might wonder why we are not using it. We could have: `proper_motion` contains the same data as `pm_phi1_cosphi2` and `pm_phi2`, but in a different format.

## Plot proper motion

Now we are ready to replicate one of the panels in Figure 1 ...
```python
phi2 = df['phi2']
type(phi2)
```
The result is a `Series`, which is the structure Pandas uses to represent columns. We can use a comparison operator, `>`, to compare the values in a `Series` to a constant.
```python
phi2_min = -1.0 * u.deg
phi2_max = 1.0 * u.deg

mask = (df['phi2'] > phi2_min)
type(mask)
mask.dtype
```
The result is a `Series` of Boolean values, that is, `True` and `False`.
```python
mask.head()
```
A Boolean `Series` is sometimes called a "mask" because we can use it to mask out some of the rows in a `DataFrame` and select the rest, like this:
```python
subset = df[mask]
type(subset)
```
`subset` is a `DataFrame` that contains only the rows from `df` that correspond to `True` values in `mask`. The previous mask selects all stars where `phi2` exceeds `phi2_min`; now we'll select stars where `phi2` falls between `phi2_min` and `phi2_max`.
```python
phi_mask = ((df['phi2'] > phi2_min) &
            (df['phi2'] < phi2_max))
```
The `&` operator computes "logical AND", which means the result is true where elements from both Boolean `Series` are true. The sum of a Boolean `Series` is the number of `True` values, so we can use `sum` to see how many stars are in the selected region.
```python
phi_mask.sum()
```
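The combination of `&` and `sum` can be seen on a toy `Series` (standalone values, not the Gaia data):

```python
import pandas as pd

s = pd.Series([-1.5, -0.5, 0.3, 1.2])   # stand-in for a phi2 column
in_range = (s > -1.0) & (s < 1.0)       # element-wise logical AND
print(in_range.sum())                   # True counts as 1, so this is 2
```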
And we can use `phi_mask` to select stars near the centerline, which are more likely to be in GD-1.
```python
centerline = df[phi_mask]
len(centerline)
```
Here's a scatter plot of proper motion for the selected stars.
```python
pm1 = centerline['pm_phi1']
pm2 = centerline['pm_phi2']

plt.plot(pm1, pm2, 'ko', markersize=0.1, alpha=0.1)
plt.xlabel('Proper motion phi1 (GD1 frame)')
plt.ylabel('Proper motion phi2 (GD1 frame)');
```
Looking at these results, we see a large cluster around (0, 0), and a smaller cluster near (0, -10). We can use `xlim` and `ylim` to set the limits on the axes and zoom in on the region near (0, 0).
```python
pm1 = centerline['pm_phi1']
pm2 = centerline['pm_phi2']

plt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3)
plt.xlabel('Proper motion phi1 (GD1 frame)')
plt.ylabel('Proper motion phi2 (GD1 frame)')
plt.xlim(-12, 8)
plt.ylim(-10, 10);
```
Now we can see the smaller cluster more clearly. You might notice that our figure is less dense than the one in the paper. That's because we started with a set of stars from a relatively small region. The figure in the paper is based on a region about 10 times bigger. In the next lesson we'll go back and select stars f...
```python
pm1_min = -8.9
pm1_max = -6.9
pm2_min = -2.2
pm2_max = 1.0
```
To draw these bounds, we'll make two lists containing the coordinates of the corners of the rectangle.
```python
pm1_rect = [pm1_min, pm1_min, pm1_max, pm1_max, pm1_min] * u.mas/u.yr
pm2_rect = [pm2_min, pm2_max, pm2_max, pm2_min, pm2_min] * u.mas/u.yr
```
Here's what the plot looks like with the bounds we chose.
```python
plt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3)
plt.plot(pm1_rect, pm2_rect, '-')
plt.xlabel('Proper motion phi1 (GD1 frame)')
plt.ylabel('Proper motion phi2 (GD1 frame)')
plt.xlim(-12, 8)
plt.ylim(-10, 10);
```
To select rows that fall within these bounds, we'll use the following function, which uses Pandas operators to make a mask that selects rows where `series` falls between `low` and `high`.
```python
def between(series, low, high):
    """Make a Boolean Series.

    series: Pandas Series
    low: lower bound
    high: upper bound

    returns: Boolean Series
    """
    return (series > low) & (series < high)
```
The following mask selects stars with proper motion in the region we chose.
```python
pm_mask = (between(df['pm_phi1'], pm1_min, pm1_max) &
           between(df['pm_phi2'], pm2_min, pm2_max))
```
Again, the sum of a Boolean series is the number of `True` values.
```python
pm_mask.sum()
```
Now we can use this mask to select rows from `df`.
```python
selected = df[pm_mask]
len(selected)
```
These are the stars we think are likely to be in GD-1. Let's see what they look like, plotting their coordinates (not their proper motion).
```python
phi1 = selected['phi1']
phi2 = selected['phi2']

plt.plot(phi1, phi2, 'ko', markersize=0.5, alpha=0.5)
plt.xlabel('ra (degree GD1)')
plt.ylabel('dec (degree GD1)');
```
Now that's starting to look like a tidal stream!

## Saving the DataFrame

At this point we have run a successful query and cleaned up the results; this is a good time to save the data. To save a Pandas `DataFrame`, one option is to convert it to an Astropy `Table`, like this:
```python
selected_table = Table.from_pandas(selected)
type(selected_table)
```
Then we could write the `Table` to a FITS file, as we did in the previous lesson. But Pandas provides functions to write DataFrames in other formats; to see what they are, [find the functions here that begin with `to_`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html). One of the best op...
```python
filename = 'gd1_dataframe.hdf5'
df.to_hdf(filename, 'df', mode='w')
```
Because an HDF5 file can contain more than one Dataset, we have to provide a name, or "key", that identifies the Dataset in the file. We could use any string as the key, but in this example I use the variable name `df`.

## Exercise

We're going to need `centerline` and `selected` later as well. Write a line or two of code...
```python
# Solution

centerline.to_hdf(filename, 'centerline')
selected.to_hdf(filename, 'selected')
```
**Detail:** Reading and writing HDF5 tables requires a library called `PyTables` that is not always installed with Pandas. You can install it with pip like this:

```
pip install tables
```

If you install it using Conda, the name of the package is `pytables`.

```
conda install pytables
```

We can use `ls` to confirm that the ...
```python
!ls -lh gd1_dataframe.hdf5
```
```
-rw-rw-r-- 1 downey downey 17M Nov 18 19:06 gd1_dataframe.hdf5
```
If you are using Windows, `ls` might not work; in that case, try:

```
!dir gd1_dataframe.hdf5
```

We can read the file back like this:
```python
read_back_df = pd.read_hdf(filename, 'df')
read_back_df.shape
```
## Preparation for Colab

Make sure you're running a GPU runtime; if not, select "GPU" as the hardware accelerator in Runtime > Change Runtime Type in the menu. The next cells will install the `clip` package and its dependencies, and check if PyTorch 1.7.1 or later is installed.
```python
# ! pip install ftfy regex tqdm
# ! pip install git+https://github.com/openai/CLIP.git

import numpy as np
import torch
import clip
# from tqdm.notebook import tqdm
from tqdm import tqdm

print("Torch version:", torch.__version__)
assert torch.__version__.split(".") >= ["1", "7", "1"], "PyTorch 1.7.1 or later is requir...
```
```
Torch version: 1.9.0
```

*From `notebooks/Prompt_Engineering_for_CIFAR.ipynb` in `KAndHisC/CLIP` (MIT license).*
## Loading the model

Download and instantiate a CLIP model using the `clip` module that we just installed.
```python
print(clip.available_models())
print(os.getcwd())

# model, preprocess = clip.load("ViT-B/32")
model, preprocess = clip.load("../models/200m0.988.pt")

input_resolution = model.visual.input_resolution
context_length = model.context_length
vocab_size = model.vocab_size

print("Model parameters:", f"{np.sum([int(np.prod(p....
```
```
Model parameters: 151,277,313
Input resolution: 224
Context length: 77
Vocab size: 49408
```
## Preparing ImageNet labels and prompts

The following cell contains the class labels (in this modified notebook, the 100 CIFAR-100 classes rather than the original 1,000 ImageNet labels), followed by the text templates we'll use as "prompt engineering".
```python
classes = [
    'apple', 'aquarium fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle',
    'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel',
    'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', ...
```
A subset of these class names is modified from the default ImageNet class names sourced from Anish Athalye's imagenet-simple-labels. These edits were made via trial and error and concentrated on the lowest-performing classes according to top-1 and top-5 accuracy on the ImageNet training set for the RN50, RN101, and RN5...
```python
print(f"{len(classes)} classes, {len(templates)} templates")
```
```
100 classes, 18 templates
```
A similar, intuition-guided trial and error based on the ImageNet training set was used for the templates. This list is pretty haphazard; it was gradually expanded over the course of about a year of the project and revisited or tweaked every few months. A surprising / weird thing was adding templates intended to ...
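The mechanics of "prompt engineering" are simply string formatting: each template has a `{}` slot that is filled with every class name. A minimal sketch, using a hypothetical two-class, two-template subset rather than the real lists:

```python
# hypothetical subset of the real class names and templates
classes = ['apple', 'bear']
templates = ['a photo of a {}.', 'a blurry photo of a {}.']

# every template is instantiated for every class
texts = [template.format(classname)
         for classname in classes
         for template in templates]
print(texts[0])    # 'a photo of a apple.'
print(len(texts))  # 2 classes x 2 templates = 4 prompts
```

In the notebook these prompt strings would then be tokenized and embedded by the CLIP text encoder.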
```python
# ! pip install git+https://github.com/modestyachts/ImageNetV2_pytorch
# from imagenetv2_pytorch import ImageNetV2Dataset
# images = ImageNetV2Dataset(transform=preprocess)

from torchvision.datasets import CIFAR100
import os

cifar100 = CIFAR100(root=os.path.expanduser("./data/cifar100"), download=True, train=True, t...
```
```
Files already downloaded and verified
```
## Creating zero-shot classifier weights
```python
def zeroshot_classifier(classnames, templates):
    with torch.no_grad():
        zeroshot_weights = []
        for classname in tqdm(classnames):
            texts = [template.format(classname) for template in templates]  # format with class
            texts = clip.tokenize(texts).cuda()  # tokenize
            class_emb...
```
```
100%|██████████| 100/100 [00:01<00:00, 74.94it/s]
```
## Zero-shot prediction
```python
def accuracy(output, target, topk=(1,)):
    pred = output.topk(max(topk), 1, True, True)[1].t()
    correct = pred.eq(target.view(1, -1).expand_as(pred))
    return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk]

with torch.no_grad():
    top1, top5, n = 0., 0., 0.
    for i, ...
```
```
100%|██████████| 1563/1563 [00:43<00:00, 35.92it/s]
```
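The idea behind the `accuracy` helper, stripped of the PyTorch plumbing, is just to compare the highest-scoring class per sample against the label. A minimal NumPy sketch with made-up logits (not the notebook's real outputs):

```python
import numpy as np

# hypothetical logits for 4 samples over 3 classes
logits = np.array([[0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1],
                   [0.2, 0.3, 0.5],
                   [0.9, 0.05, 0.05]])
target = np.array([1, 0, 0, 0])

pred = logits.argmax(axis=1)      # top-1 predicted class per sample
top1 = (pred == target).mean()    # fraction of correct predictions
print(top1)  # 0.75
```

Top-5 accuracy works the same way, except the label only needs to appear among the five highest-scoring classes.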
# Car Accidents During COVID Stay-At-Home Periods Analysis

This notebook is using data collected and cleaned in the `car_accidents_eda.ipynb` notebook, and will be used for examining differences state-by-state for severity and frequency of car accidents. Tableau graphs will be used for final graphs, but this notebook gi...
```python
import pickle
import pandas as pd
import numpy as np
```

*From `modeling_and_analysis/accident_covid_analysis.ipynb` in `cecann10/covid-impact-car-accidents` (AAL license).*
## Setup

First import the dataset created and prepared for analysis:
```python
# open our master dataset for analysis
with open('pickle/car_accidents_master.pickle', 'rb') as read_file:
    car_accidents_master = pickle.load(read_file)
```
Note that we are looking at our dataset before it was narrowed for modeling (e.g. only severity 2 & 3, only using mapquest-sourced data). For the analysis we'll be doing here we can look at the full dataset. But first we'll narrow to only the columns we'll be using, which are date, state, whether in period of shutdown,...
```python
accident_covid = car_accidents_master[['Severity', 'Date', 'State',
                                       'Year', 'Month', 'Day',
                                       'Day_Of_Week', 'Shut_Down']]
```
Now we'll look at the number of accidents in each state:
```python
accident_covid.State.value_counts()
```
And then see, if we narrowed the number of states, how many we'd need to keep to cover at least 75% of total accidents:
```python
accident_covid.State.value_counts()[:20].sum() / accident_covid.State.value_counts().sum()
accident_covid.State.value_counts()[:20]
```
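The cumulative-share calculation above can be seen on a toy example (hypothetical per-state counts, not the real dataset):

```python
import pandas as pd

# hypothetical per-state accident counts, sorted descending
counts = pd.Series({'CA': 50, 'TX': 30, 'FL': 20, 'NY': 10})

# share of total accidents covered by the top 2 states
share = counts[:2].sum() / counts.sum()
print(f'{share:.0%}')  # (50 + 30) / 110, about 73%
```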
First we'll focus on the states with the highest number of accidents, as they provide more data to draw conclusions from and are also of greater interest, since they have more need to address car accidents. We'll focus on the 20 states that have the most car accidents, which also represent 86% of total accidents...
```python
def severity_percent(state_data):
    '''
    Takes in state data and returns a list of percentages
    for each severity level to total accidents.
    '''
    percentages = []
    if 1 not in sorted(state_data.Severity.unique()):
        percentages.append(0)
    for x in sorted(state_data.Severity.unique())...
```
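The function above is truncated in this dump; a minimal self-contained sketch of the same idea (a hypothetical re-implementation, assuming severity levels run 1 through 4) looks like this:

```python
import pandas as pd

def severity_percent_sketch(state_data):
    """Hypothetical re-implementation: percent of accidents at each
    severity level 1-4, returning 0 for levels that never occur."""
    counts = state_data['Severity'].value_counts()
    total = len(state_data)
    return [100 * counts.get(level, 0) / total for level in range(1, 5)]

demo = pd.DataFrame({'Severity': [2, 2, 3, 4]})
print(severity_percent_sketch(demo))  # [0.0, 50.0, 25.0, 25.0]
```

Padding absent levels with 0 keeps the lists aligned across periods, which is what the per-state comparisons below rely on.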
## Top 20 States Analysis

### California

First create periods we'll want to compare for the state:
```python
# full state period
ca_ttl = accident_covid[(accident_covid.State == 'CA')]

# shutdown period
ca_sd = accident_covid[(accident_covid.State == 'CA') &
                       (accident_covid.Shut_Down == 1)]

# 2019
# note CA is still in shutdown, so we're comparing to when dataset ended in 2020
# not when their stay-at-home ord...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
ca_freqs = [ca_ttl.shape[0],
            ca_sd.shape[0],
            ca_2019.shape[0],
            ca_2018.shape[0],
            ca_2017.shape[0]]

# then develop a simple dataframe to compare them
ca_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 76.44%
```
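The per-state frequency cells all follow the same recipe; a standalone sketch with made-up counts (not the real CA numbers) shows the comparison table and the shutdown-vs-2019 percent change:

```python
import pandas as pd

# hypothetical accident counts: total, shutdown 2020, then the same
# month-day window in 2019, 2018, 2017
freqs = [120, 30, 25, 28, 22]
freq_data = pd.DataFrame({
    'Timeframe': ['Total', 'Shutdown 2020', '2019', '2018', '2017'],
    'Accidents': freqs,
})

# percent change of the shutdown period relative to 2019
change = 100 * (freqs[1] - freqs[2]) / freqs[2]
print(f'Change Frequency: {change:.2f}%')  # 20.00%
```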
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
ca_sp_ttl = severity_percent(ca_ttl)
ca_sp_sd = severity_percent(ca_sd)
ca_sp_2019 = severity_percent(ca_2019)
ca_sp_2018 = severity_percent(ca_2018)
ca_sp_2017 = severity_percent(ca_2017)

# then again create a simple da...
```
**CA Takeaways:**

- See increase in frequency during shutdown period compared to same periods in past three years
- See increase in 'edges' of severity (1 & 4) of car accidents with lowering of 'middle' severities (2 & 3) for the shutdown period compared to the overall total and prior years' periods

### Texas

First create periods we'll want to compare for the state:
```python
# full state period
tx_ttl = accident_covid[(accident_covid.State == 'TX')]

# shutdown period
tx_sd = accident_covid[(accident_covid.State == 'TX') &
                       (accident_covid.Shut_Down == 1)]

# 2019
tx_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
tx_freqs = [tx_ttl.shape[0],
            tx_sd.shape[0],
            tx_2019.shape[0],
            tx_2018.shape[0],
            tx_2017.shape[0]]

# then develop a simple dataframe to compare them
tx_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: -37.65%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
tx_sp_ttl = severity_percent(tx_ttl)
tx_sp_sd = severity_percent(tx_sd)
tx_sp_2019 = severity_percent(tx_2019)
tx_sp_2018 = severity_percent(tx_2018)
tx_sp_2017 = severity_percent(tx_2017)

# then again create a simple da...
```
**TX Takeaways:**

- Frequency: Unlike CA, TX had a drop in frequency during shutdown compared to past periods
- Severity: Similar to CA, the 'edges' of severity (1 & 4) are still growing, 4 in particular to a greater degree than CA

### Florida

First create periods we'll want to compare for the state:
```python
# full state period
fl_ttl = accident_covid[(accident_covid.State == 'FL')]

# shutdown period
fl_sd = accident_covid[(accident_covid.State == 'FL') &
                       (accident_covid.Shut_Down == 1)]

# 2019
fl_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-03') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
fl_freqs = [fl_ttl.shape[0],
            fl_sd.shape[0],
            fl_2019.shape[0],
            fl_2018.shape[0],
            fl_2017.shape[0]]

# then develop a simple dataframe to compare them
fl_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 6.62%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
fl_sp_ttl = severity_percent(fl_ttl)
fl_sp_sd = severity_percent(fl_sd)
fl_sp_2019 = severity_percent(fl_2019)
fl_sp_2018 = severity_percent(fl_2018)
fl_sp_2017 = severity_percent(fl_2017)

# then again create a simple da...
```
**FL Takeaways:**

- Frequency: Not much change, positively or negatively, for shutdown compared to past periods
- Severity: Don't see much difference in severity 4 as with CA & TX, but do see a significant increase in 1s while 2 & 3 dropped

### South Carolina

First create periods we'll want to compare for the state:
```python
# full state period
sc_ttl = accident_covid[(accident_covid.State == 'SC')]

# shutdown period
sc_sd = accident_covid[(accident_covid.State == 'SC') &
                       (accident_covid.Shut_Down == 1)]

# 2019
sc_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-06') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
sc_freqs = [sc_ttl.shape[0],
            sc_sd.shape[0],
            sc_2019.shape[0],
            sc_2018.shape[0],
            sc_2017.shape[0]]

# then develop a simple dataframe to compare them
sc_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 23.58%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
sc_sp_ttl = severity_percent(sc_ttl)
sc_sp_sd = severity_percent(sc_sd)
sc_sp_2019 = severity_percent(sc_2019)
sc_sp_2018 = severity_percent(sc_2018)
sc_sp_2017 = severity_percent(sc_2017)

# then again create a simple da...
```
**SC Takeaways:**

- Frequency: No significant difference for the shutdown period. 2017 seems low, but this may be due to less data being collected that year.
- Severity: In general, severity seemed to drop. See more 1 & 2 and fewer 3 & 4

### North Carolina

First create periods we'll want to compare for the state:
```python
# full state period
nc_ttl = accident_covid[(accident_covid.State == 'NC')]

# shutdown period
nc_sd = accident_covid[(accident_covid.State == 'NC') &
                       (accident_covid.Shut_Down == 1)]

# 2019
nc_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-30') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
nc_freqs = [nc_ttl.shape[0],
            nc_sd.shape[0],
            nc_2019.shape[0],
            nc_2018.shape[0],
            nc_2017.shape[0]]

# then develop a simple dataframe to compare them
nc_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 22.40%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
nc_sp_ttl = severity_percent(nc_ttl)
nc_sp_sd = severity_percent(nc_sd)
nc_sp_2019 = severity_percent(nc_2019)
nc_sp_2018 = severity_percent(nc_2018)
nc_sp_2017 = severity_percent(nc_2017)

# then again create a simple da...
```
**NC Takeaways:**

- Frequency: No significant change during the shutdown period
- Severity: Do see an increase in level 4 severity, but the greatest change is an increase in severity 1 with a drop in severity 2

### New York

First create periods we'll want to compare for the state:
```python
# full state period
ny_ttl = accident_covid[(accident_covid.State == 'NY')]

# shutdown period
ny_sd = accident_covid[(accident_covid.State == 'NY') &
                       (accident_covid.Shut_Down == 1)]

# 2019
ny_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
ny_freqs = [ny_ttl.shape[0],
            ny_sd.shape[0],
            ny_2019.shape[0],
            ny_2018.shape[0],
            ny_2017.shape[0]]

# then develop a simple dataframe to compare them
ny_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 50.28%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
ny_sp_ttl = severity_percent(ny_ttl)
ny_sp_sd = severity_percent(ny_sd)
ny_sp_2019 = severity_percent(ny_2019)
ny_sp_2018 = severity_percent(ny_2018)
ny_sp_2017 = severity_percent(ny_2017)

# then again create a simple da...
```
**NY Takeaways:**

- Frequency: See an increase in the shutdown period, although there was also an even larger jump from 2018 to 2019
- Severity: See increase in edge severities (1 & 2)

### Pennsylvania

First create periods we'll want to compare for the state:
```python
# full state period
pa_ttl = accident_covid[(accident_covid.State == 'PA')]

# shutdown period
pa_sd = accident_covid[(accident_covid.State == 'PA') &
                       (accident_covid.Shut_Down == 1)]

# 2019
pa_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
pa_freqs = [pa_ttl.shape[0],
            pa_sd.shape[0],
            pa_2019.shape[0],
            pa_2018.shape[0],
            pa_2017.shape[0]]

# then develop a simple dataframe to compare them
pa_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 90.76%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# first find percentages of severity ranks to total accidents for
# total and for each period
pa_sp_ttl = severity_percent(pa_ttl)
pa_sp_sd = severity_percent(pa_sd)
pa_sp_2019 = severity_percent(pa_2019)
pa_sp_2018 = severity_percent(pa_2018)
pa_sp_2017 = severity_percent(pa_2017)

# then again create a simple da...
```
**PA Takeaways:**

- Frequency: See a fairly significant increase in accidents during the shutdown period
- Severity: See an increase in severity 1 and a drop in severity 2, but not that large

### Illinois

First create periods we'll want to compare for the state:
```python
# full state period
il_ttl = accident_covid[(accident_covid.State == 'IL')]

# shutdown period
il_sd = accident_covid[(accident_covid.State == 'IL') &
                       (accident_covid.Shut_Down == 1)]

# 2019
il_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-25') &
                         (car_accidents_mas...
```
### Frequency

Now compare frequencies (total number) of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
```python
# find total frequencies for state total and in each period
il_freqs = [il_ttl.shape[0],
            il_sd.shape[0],
            il_2019.shape[0],
            il_2018.shape[0],
            il_2017.shape[0]]

# then develop a simple dataframe to compare them
il_freq_data = pd.DataFrame({'Timeframe': ['Tota...
```
```
Change Frequency: 24.55%
```
### Severity

Now compare severities of car accidents for the total period and for the shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in the same month-day date range as the shut-down period):
# first find percentages of severity ranks to total accidents for # total and for each period il_sp_ttl = severity_percent(il_ttl) il_sp_sd = severity_percent(il_sd) il_sp_2019 = severity_percent(il_2019) il_sp_2018 = severity_percent(il_2018) il_sp_2017 = severity_percent(il_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
IL Takeaways:- Frequency: Increase during shutdown period- Severity: Large increase in severity 3 and decrease in severity 2. Also increase in severity 1 - VirginiaFirst create periods we'll want to compare for the state >
# full state period va_ttl = accident_covid[(accident_covid.State == 'VA')] # shutdown period va_sd = accident_covid[ (accident_covid.State == 'VA') & (accident_covid.Shut_Down == 1)] # 2019 va_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-30') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period va_freqs = [va_ttl.shape[0], va_sd.shape[0], va_2019.shape[0], va_2018.shape[0], va_2017.shape[0] ] # then develop a simple dataframe to compare them va_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 113.11%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period va_sp_ttl = severity_percent(va_ttl) va_sp_sd = severity_percent(va_sd) va_sp_2019 = severity_percent(va_2019) va_sp_2018 = severity_percent(va_2018) va_sp_2017 = severity_percent(va_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
VA Takeaways:- Frequency: See a large increase in frequency during shutdown- Severity: Significant increase in severity 1 and drop in severity 3 - MichiganFirst create periods we'll want to compare for the state >
# full state period mi_ttl = accident_covid[(accident_covid.State == 'MI')] # shutdown period mi_sd = accident_covid[ (accident_covid.State == 'MI') & (accident_covid.Shut_Down == 1)] # 2019 mi_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-24') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period mi_freqs = [mi_ttl.shape[0], mi_sd.shape[0], mi_2019.shape[0], mi_2018.shape[0], mi_2017.shape[0] ] # then develop a simple dataframe to compare them mi_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: -51.62%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period mi_sp_ttl = severity_percent(mi_ttl) mi_sp_sd = severity_percent(mi_sd) mi_sp_2019 = severity_percent(mi_2019) mi_sp_2018 = severity_percent(mi_2018) mi_sp_2017 = severity_percent(mi_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
MI Takeaways:- Frequency: Drop in number of accidents- Severity: Increase at the extremes, severity 1 & 4 -- most dramatic for severity 4 - GeorgiaFirst create periods we'll want to compare for the state >
# full state period ga_ttl = accident_covid[(accident_covid.State == 'GA')] # shutdown period ga_sd = accident_covid[ (accident_covid.State == 'GA') & (accident_covid.Shut_Down == 1)] # 2019 ga_2019 = accident_covid[(car_accidents_master['Date'] > '2019-04-03') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period ga_freqs = [ga_ttl.shape[0], ga_sd.shape[0], ga_2019.shape[0], ga_2018.shape[0], ga_2017.shape[0] ] # then develop a simple dataframe to compare them ga_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: -13.71%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period ga_sp_ttl = severity_percent(ga_ttl) ga_sp_sd = severity_percent(ga_sd) ga_sp_2019 = severity_percent(ga_2019) ga_sp_2018 = severity_percent(ga_2018) ga_sp_2017 = severity_percent(ga_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- OregonFirst create periods we'll want to compare for the state >
# full state period or_ttl = accident_covid[(accident_covid.State == 'OR')] # shutdown period or_sd = accident_covid[ (accident_covid.State == 'OR') & (accident_covid.Shut_Down == 1)] # 2019 or_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period or_freqs = [or_ttl.shape[0], or_sd.shape[0], or_2019.shape[0], or_2018.shape[0], or_2017.shape[0] ] # then develop a simple dataframe to compare them or_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 187.12%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period or_sp_ttl = severity_percent(or_ttl) or_sp_sd = severity_percent(or_sd) or_sp_2019 = severity_percent(or_2019) or_sp_2018 = severity_percent(or_2018) or_sp_2017 = severity_percent(or_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- MinnesotaFirst create periods we'll want to compare for the state >
# full state period mn_ttl = accident_covid[(accident_covid.State == 'MN')] # shutdown period mn_sd = accident_covid[ (accident_covid.State == 'MN') & (accident_covid.Shut_Down == 1)] # 2019 mn_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-27') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period mn_freqs = [mn_ttl.shape[0], mn_sd.shape[0], mn_2019.shape[0], mn_2018.shape[0], mn_2017.shape[0] ] # then develop a simple dataframe to compare them mn_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 83.18%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period mn_sp_ttl = severity_percent(mn_ttl) mn_sp_sd = severity_percent(mn_sd) mn_sp_2019 = severity_percent(mn_2019) mn_sp_2018 = severity_percent(mn_2018) mn_sp_2017 = severity_percent(mn_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- ArizonaFirst create periods we'll want to compare for the state >
# full state period az_ttl = accident_covid[(accident_covid.State == 'AZ')] # shutdown period az_sd = accident_covid[ (accident_covid.State == 'AZ') & (accident_covid.Shut_Down == 1)] # 2019 az_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period az_freqs = [az_ttl.shape[0], az_sd.shape[0], az_2019.shape[0], az_2018.shape[0], az_2017.shape[0] ] # then develop a simple dataframe to compare them az_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 152.00%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period az_sp_ttl = severity_percent(az_ttl) az_sp_sd = severity_percent(az_sd) az_sp_2019 = severity_percent(az_2019) az_sp_2018 = severity_percent(az_2018) az_sp_2017 = severity_percent(az_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- TennesseeFirst create periods we'll want to compare for the state >
# full state period tn_ttl = accident_covid[(accident_covid.State == 'TN')] # shutdown period tn_sd = accident_covid[ (accident_covid.State == 'TN') & (accident_covid.Shut_Down == 1)] # 2019 tn_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-31') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period tn_freqs = [tn_ttl.shape[0], tn_sd.shape[0], tn_2019.shape[0], tn_2018.shape[0], tn_2017.shape[0] ] # then develop a simple dataframe to compare them tn_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 2.54%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period tn_sp_ttl = severity_percent(tn_ttl) tn_sp_sd = severity_percent(tn_sd) tn_sp_2019 = severity_percent(tn_2019) tn_sp_2018 = severity_percent(tn_2018) tn_sp_2017 = severity_percent(tn_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- WashingtonFirst create periods we'll want to compare for the state >
# full state period wa_ttl = accident_covid[(accident_covid.State == 'WA')] # shutdown period wa_sd = accident_covid[ (accident_covid.State == 'WA') & (accident_covid.Shut_Down == 1)] # 2019 wa_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-23') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period wa_freqs = [wa_ttl.shape[0], wa_sd.shape[0], wa_2019.shape[0], wa_2018.shape[0], wa_2017.shape[0] ] # then develop a simple dataframe to compare them wa_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: -16.54%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period wa_sp_ttl = severity_percent(wa_ttl) wa_sp_sd = severity_percent(wa_sd) wa_sp_2019 = severity_percent(wa_2019) wa_sp_2018 = severity_percent(wa_2018) wa_sp_2017 = severity_percent(wa_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- OhioFirst create periods we'll want to compare for the state >
# full state period oh_ttl = accident_covid[(accident_covid.State == 'OH')] # shutdown period oh_sd = accident_covid[ (accident_covid.State == 'OH') & (accident_covid.Shut_Down == 1)] # 2019 oh_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period oh_freqs = [oh_ttl.shape[0], oh_sd.shape[0], oh_2019.shape[0], oh_2018.shape[0], oh_2017.shape[0] ] # then develop a simple dataframe to compare them oh_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 117.88%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period oh_sp_ttl = severity_percent(oh_ttl) oh_sp_sd = severity_percent(oh_sd) oh_sp_2019 = severity_percent(oh_2019) oh_sp_2018 = severity_percent(oh_2018) oh_sp_2017 = severity_percent(oh_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- LouisianaFirst create periods we'll want to compare for the state >
# full state period la_ttl = accident_covid[(accident_covid.State == 'LA')] # shutdown period la_sd = accident_covid[ (accident_covid.State == 'LA') & (accident_covid.Shut_Down == 1)] # 2019 la_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-22') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
FrequencyNow compare frequencies (total number) of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018, & 2017 being in same month-day date range as shut-down period) >
# find total frequencies for state total and in each period la_freqs = [la_ttl.shape[0], la_sd.shape[0], la_2019.shape[0], la_2018.shape[0], la_2017.shape[0] ] # then develop a simple dataframe to compare them la_freq_data = pd.DataFrame({'Timeframe': ['Tota...
Change Frequency: 13.45%
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
SeverityNow compare severities of car accidents for total, shut-down period in 2020, 2019, 2018, and 2017 (2019, 2018 & 2017 being in same month-day date range as shut-down period) >
# first find percentages of severity ranks to total accidents for # total and for each period la_sp_ttl = severity_percent(la_ttl) la_sp_sd = severity_percent(la_sd) la_sp_2019 = severity_percent(la_2019) la_sp_2018 = severity_percent(la_2018) la_sp_2017 = severity_percent(la_2017) # then again create a simple da...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
- OklahomaFirst create periods we'll want to compare for the state >
# full state period ok_ttl = accident_covid[(accident_covid.State == 'OK')] # shutdown period ok_sd = accident_covid[ (accident_covid.State == 'OK') & (accident_covid.Shut_Down == 1)] # 2019 ok_2019 = accident_covid[(car_accidents_master['Date'] > '2019-03-24') & (car_accidents_mas...
_____no_output_____
AAL
modeling_and_analysis/accident_covid_analysis.ipynb
cecann10/covid-impact-car-accidents
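Each state above repeats the same slicing cell verbatim with only the state code and dates changed. A hypothetical helper (not in the notebook) could collapse that repetition; the function name and the `Timestamp` window arguments are assumptions for illustration:

```python
import pandas as pd

def state_period_slices(df, state, start, end):
    # hypothetical helper (not in the notebook) collapsing the copy-pasted
    # per-state cells: full-state, shutdown, and prior-year comparison slices
    ttl = df[df.State == state]
    sd = ttl[ttl.Shut_Down == 1]
    prior = {yr: ttl[(ttl.Date > start.replace(year=yr))
                     & (ttl.Date < end.replace(year=yr))]
             for yr in (2019, 2018, 2017)}
    return ttl, sd, prior

# toy data; the real notebook would pass accident_covid here
acc = pd.DataFrame({
    'State': ['OK', 'OK', 'OK'],
    'Date': pd.to_datetime(['2019-04-01', '2018-04-01', '2020-04-01']),
    'Shut_Down': [0, 0, 1],
})
ttl, sd, prior = state_period_slices(
    acc, 'OK', pd.Timestamp('2020-03-24'), pd.Timestamp('2020-05-01'))
print(len(ttl), len(sd), len(prior[2019]))  # 3 1 1
```

Keeping the state's shutdown start/end dates as `Timestamp` arguments lets `replace(year=...)` derive the matching month-day window for each prior year, which is exactly the comparison the repeated cells perform by hand.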