Load and Analyse the dataset
# load positive tweets
positive_tweets = twitter_samples.strings('positive_tweets.json')
positive_tweets[:3]

# load negative tweets
negative_tweets = twitter_samples.strings('negative_tweets.json')
negative_tweets[:3]

## total number of pos and neg tweets
print(f"Total No. of Positive tweets: {len(positive_tweets)}"...
nlp/1. NLP Classification - Logistic Classification.ipynb
rishuatgithub/MLPy
apache-2.0
Processing of the data to create word frequencies list
from nltk.corpus import stopwords
import re

def clean_tweet(tweet):
    ''' clean the tweet to tokenise, remove stop words and stem the words '''
    stop_words = stopwords.words('english')
    #print(f'Total stop words in the vocab: {len(stop_words)}')
    tweet = re.sub(r'#','',tweet) ## remove the ...
Model Training
## Generate the vector word frequency for all of the training tweets
train_X = np.zeros((len(train_data),3))
for i in range(len(train_data)):
    train_X[i,:] = extract_features(train_data[i], tweet_freq_vocab)
train_y = train_label

test_X = np.zeros((len(test_data),3))
for i in range(len(test_data)):
    test_X[i,:...
Making your own predictions
my_tweet1 = 'i liked my prediction score. happy with the results'
model.predict(extract_features(my_tweet1,tweet_freq_vocab))

my_tweet2 = 'i am sad with the result of the football match'
model.predict(extract_features(my_tweet2,tweet_freq_vocab))

my_tweet3 = 'shame that i couldnt get an entry to the competition'
mode...
Accessing TensorBoard data as DataFrames

Overview

TensorBoard's main feature is its interactive GUI, but some users want to programmatically read the data logs stored in TensorBoard, for purposes such as post-hoc analyses of log data and creating custom visualizations. TensorBoard 2.3 supports this use case via tensorboard.data.experimental.ExperimentFromDev(), which provides programmatic access to TensorBoard's scalar logs. This page demonstrates the basic usage of this new API. Note: ...
!pip install tensorboard pandas
!pip install matplotlib seaborn

from packaging import version
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from scipy import stats
import tensorboard as tb

major_ver, minor_ver, _ = version.parse(tb.__version__).release
assert major_ver >= 2 and minor...
site/ja/tensorboard/dataframe_api.ipynb
tensorflow/docs-l10n
apache-2.0
Loading TensorBoard scalars as a pandas.DataFrame

Once a TensorBoard logdir has been uploaded to TensorBoard.dev, it becomes an "experiment". Each experiment is assigned a unique ID, which can be found in the experiment's TensorBoard.dev URL. The demo below uses the TensorBoard.dev experiment at https://tensorboard.dev/experiment/c1KCv3X3QvGwaXfgX1c4tg.
experiment_id = "c1KCv3X3QvGwaXfgX1c4tg"
experiment = tb.data.experimental.ExperimentFromDev(experiment_id)
df = experiment.get_scalars()
df
df is a pandas.DataFrame that contains all of the experiment's scalar logs. Its columns are:

- run: each run corresponds to a subdirectory of the original logdir. In this experiment, each run comes from a complete training of a convolutional neural network (CNN) on the MNIST dataset with a given optimizer type (a training hyperparameter). This DataFrame contains multiple such runs, corresponding to repeated training runs under each optimizer type.
- tag: describes what the value in the same row means, that is, which metric the value represents...
print(df["run"].unique())
print(df["tag"].unique())
Obtaining a pivoted (wide-form) DataFrame

In this experiment, the two tags (epoch_loss and epoch_accuracy) are present at the same set of steps in each run. This makes it possible to obtain a "wide-form" DataFrame directly from get_scalars() by using the pivot=True keyword argument. With all tags included as columns of the DataFrame, the wide form can be more convenient to work with in some cases, including this one. Note, however, that if the condition of having uniform sets of step values across all tags in all runs is not met, using pivot=True will raise an error.
dfw = experiment.get_scalars(pivot=True)
dfw
Instead of a single "value" column, the wide-form DataFrame includes the two tags (metrics), epoch_accuracy and epoch_loss, explicitly as columns.

Saving the DataFrame as CSV

pandas.DataFrame has good interoperability with CSV. You can store it as a local CSV file and load it back later. For example:
csv_path = '/tmp/tb_experiment_1.csv'
dfw.to_csv(csv_path, index=False)
dfw_roundtrip = pd.read_csv(csv_path)
pd.testing.assert_frame_equal(dfw_roundtrip, dfw)
Performing custom visualization and statistical analysis
# Filter the DataFrame to only validation data, which is what the subsequent
# analyses and visualization will be focused on.
dfw_validation = dfw[dfw.run.str.endswith("/validation")]

# Get the optimizer value for each row of the validation DataFrame.
optimizer_validation = dfw_validation.run.apply(lambda run: run.spli...
The plots above show the time courses of validation accuracy and validation loss. Each curve shows the average across 5 runs under an optimizer type. Thanks to a built-in feature of seaborn.lineplot(), each curve also displays ±1 standard deviation around the mean, which gives a clear sense of the variability in these curves and of the significance of the differences among the three optimizer types. This visualization of variability is not yet supported in TensorBoard's GUI.

To study the hypothesis that the minimum validation loss differs significantly among the "adam", "rmsprop", and "sgd" optimizers, we extract a DataFrame of the minimum validation loss for each optimizer. Then...
adam_min_val_loss = dfw_validation.loc[optimizer_validation=="adam", :].groupby(
    "run", as_index=False).agg({"epoch_loss": "min"})
rmsprop_min_val_loss = dfw_validation.loc[optimizer_validation=="rmsprop", :].groupby(
    "run", as_index=False).agg({"epoch_loss": "min"})
sgd_min_val_loss = dfw_validation.loc[optimi...
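The truncated cell above ends just before the comparison itself. As a rough, hedged illustration of how such a comparison could be carried out with the scipy.stats module imported earlier, here is a self-contained sketch using made-up loss values (the actual numbers would come from the DataFrames above):

```python
# Illustrative sketch only: made-up per-run minimum validation losses for the
# three optimizers, compared with a one-way ANOVA from scipy.stats.
from scipy import stats

adam_losses = [0.030, 0.028, 0.033, 0.029, 0.031]
rmsprop_losses = [0.046, 0.044, 0.048, 0.045, 0.047]
sgd_losses = [0.091, 0.089, 0.094, 0.090, 0.092]

# f_oneway tests whether the group means differ significantly.
f_stat, p_value = stats.f_oneway(adam_losses, rmsprop_losses, sgd_losses)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```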
Preparing data set sweep

First, we're going to define the data sets that we'll sweep over. As the simulated novel taxa dataset names depend on how the database generation notebook was executed, we must define the variables used to create these datasets. If you modified any variables in that notebook, set these same var...
iterations = 3
data_dir = join(project_dir, "data", analysis_name)

# databases is a list of names given as dictionary keys in the second
# cell of the database generation notebook. Just list the names here.
databases = ['B1-REF', 'F1-REF']

# Generate a list of input directories (dataset_reference_combinations, referen...
ipynb/novel-taxa/taxonomy-assignment.ipynb
nbokulich/short-read-tax-assignment
bsd-3-clause
Preparing the method/parameter combinations and generating commands

Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.

Assignment Using QIIME 1 or ...
method_parameters_combinations = {
    # probabilistic classifiers
    'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
                           0.6, 0.7, 0.8, 0.9, 1.0]},
    # global alignment classifiers
    'uclust': {'min_consensus_fraction': [0.51, 0.76, 1....
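parameter_sweep() is a project helper whose internals are not shown here. As a hedged sketch of the general idea, a mapping of per-method parameter lists expands into one concrete setting per combination, roughly like this (expand_combinations is a hypothetical stand-in, not the project's function):

```python
# Hypothetical sketch of how a method -> {parameter: values} mapping expands
# into concrete method/parameter combinations (all names are illustrative).
from itertools import product

params = {
    'rdp': {'confidence': [0.0, 0.5, 1.0]},
    'uclust': {'min_consensus_fraction': [0.51, 0.76]},
}

def expand_combinations(method_params):
    """Yield (method, {param: value, ...}) for every combination."""
    for method, grid in sorted(method_params.items()):
        keys = sorted(grid)
        for values in product(*(grid[k] for k in keys)):
            yield method, dict(zip(keys, values))

combos = list(expand_combinations(params))
print(len(combos))  # 3 'rdp' settings + 2 'uclust' settings = 5
```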
Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to the following format:

    {0} = output directory
    {1} = input data
    {2} = output destination
    {3} = reference taxonomy
    ...
command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
                           dataset_reference_combinations, ...
As a sanity check, we can look at the first command that was generated and the number of commands generated.
print(len(commands))
commands[0]
Finally, we run our commands.
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
BLAST+
method_parameters_combinations = {
    'blast+': {'p-evalue': [0.001],
               'p-maxaccepts': [1, 10],
               'p-min-id': [0.80, 0.97, 0.99],
               'p-min-consensus': [0.51, 0.99]}
}

command_template = ("mkdir -p {0}; "
                    ...
VSEARCH
method_parameters_combinations = {
    'vsearch': {'p-maxaccepts': [1, 10],
                'p-min-id': [0.80, 0.99],
                'p-min-consensus': [0.51, 0.99]}
}

command_template = ("mkdir -p {0}; "
                    "qiime feature-classifier vsearch --i-query {1}...
Move result files to repository

Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial...
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
<h1 id="tocheading">Table of Contents</h1> <div id="toc"></div>
%%javascript $.getScript('misc/kmahelona_ipython_notebook_toc.js')
Exercises_part_B.ipynb
dblyon/PandasIntro
mit
Getting and Knowing your Data

Task: load the following file as a data frame
fn = r"data/drinks.csv" # Write your answer here
Task: See the first 10 entries
# Write your answer here
Task: Which country has the highest alcohol consumption (total litres of pure alcohol)?
# Write your answer here
Groupby

Task: Which continent drinks the most beer on average?
# Write your answer here
Task: List all unique continents.
# Write your answer here
Task: Which countries have missing values in the continent column?
# Write your answer here
Task: Set "the" missing continent with a name of your choice.
# Write your answer here
Task: For each continent print "the" statistics (summary stats using "df.describe()") for wine consumption.
# Write your answer here
Task: Print the median alcohol consumption per continent for every column
# Write your answer here
Task: Print the mean, min and max values for spirit consumption.
# Write your answer here
Task: GroupBy Continent and create a Boxplot. (Hint: using e.g. figsize=(12, 9), rot=90 might help with legibility.)
# Write your answer here
Concatenate, Merge & Join

Task: Import the datasets cars1 and cars2. Assign each to a variable called cars1 and cars2.
# Write your answer here
Task: It seems our first dataset has some unnamed blank columns, fix cars1.
# Write your answer here
Task: Join cars1 and cars2 into a single DataFrame called cars
# Write your answer here
Apply (interspersed)

Task: Create a function that returns the first word of the string in the "car" column, the manufacturer name. Use the "apply" method to create a new column in the DataFrame.
# Write your answer here
Consider the following DataFrames for the next exercises
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']},
                   index=[0, 1, 2, 3])

df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
                    ...
Task: Concatenate the three DataFrames along the rows.
# Write your answer here
Task: How many missing values (NaNs) are produced if you concatenate along the other axis (appending the columns)?
# Write your answer here
Let's consider another data set to do some more Merge, Join & Concatenate exercises
raw_data_1 = {
    'subject_id': ['1', '2', '3', '4', '5'],
    'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
    'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}

raw_data_2 = {
    'subject_id': ['4', '5', '6', '7', '8', '9', '10'],
    'first_name': ['Alice', 'Ayoung...
Task: Join the two dataframes, data1 and data2, along rows and assign all_data. Make sure that the row index is unique.
# Write your answer here
Task: Join the two dataframes, data1 and data2, along columns and assign to all_data_col.
# Write your answer here
Task: Merge all_data and data3 along the subject_id value.
# Write your answer here
Task: How many test_ids have missing values in the first or last name column?
# Write your answer here
Task: Merge only the data that has the same 'subject_id' in both data1 and data2.
# Write your answer here
Transform

The transform method returns an object that is indexed the same (same size) as the one being grouped.

Task: Given a DataFrame with a column of group IDs, 'groups', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
# Write your answer here
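Separately from the exercise answer, the indexing behaviour described above can be seen in a minimal sketch with made-up data: transform returns one value per original row, not one per group:

```python
# Minimal illustration (made-up data): groupby().transform() broadcasts the
# per-group result back to the shape and index of the original column.
import pandas as pd

df = pd.DataFrame({'groups': ['x', 'x', 'y', 'y'],
                   'vals': [1, 3, 10, 20]})
group_means = df.groupby('groups')['vals'].transform('mean')
print(group_means.tolist())  # [2.0, 2.0, 15.0, 15.0] -- one value per row
```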
Task: Use groupby in conjunction with transform across multiple columns: we want to group by one to n columns and apply a function on these groups across two columns.
1. Calculate the sum of a and b and assign it to a column named e.
2. Group by 'c' and 'd', and calculate the sum of e.
df = pd.DataFrame({'a': [1,2,3,4,5,6],
                   'b': [1,2,3,4,5,6],
                   'c': ['q', 'q', 'q', 'q', 'w', 'w'],
                   'd': ['z','z','z','o','o','o']})
# Write your answer here
Task: Normalize (standardize) the data by calculating the z-score. Group the data by year and calculate the z-score per group. z = (value - mean) / standard_deviation <div style="font-size: 150%;"> $$z=\frac{x-\mu}{\sigma}$$ </div>
index = pd.date_range('10/1/1999', periods=1100)
ser = pd.Series(np.random.normal(0.5, 2, 1100), index=index)
ser = ser.rolling(window=100, min_periods=100).mean().dropna()

# Answer:
key = lambda x: x.year
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ser.groupby(key).transform(zscore)
Task: We would expect the result to now have mean 0 and standard deviation 1 within each group, which we can easily check. Calculate the mean and standard deviation within each group.
# Write your answer here
Task: Visually compare the original and transformed data sets.
# Write your answer here
Pivot

Task: Let's reshape this small example DataFrame of ICD10 codes. Each person has different code-associations. Only positive associations are listed. Transform (reshape) the DataFrame to a wide format (one column per code) that lists positive and negative (missing) associations as Booleans.
df = pd.DataFrame({"Person": ["a", "a", "a", "b", "c", "c"],
                   "Code": ["D99", "E32", "A41", "D99", "D99", "A41"]},
                  columns=["Person", "Code"])
df
# Write your answer here
Combine DataFrames

Task: In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single Dat...
# Write your answer here
GroupBy Titanic data

Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.

Task: Women and children first?
- Use the groupby method to calculate the proportion of passengers that survived by sex.
- Calculate the same proportion, but by class and sex.
- Create age categories: ...
# Write your answer here
Let's plot the number of survivors grouped by sex and passenger class.
# Write your answer here
Task: Let's also look at the deaths (and not only at the survivors) within the groups and create a stacked Barplot of survivors vs. deaths grouped by sex and passenger-class (as before).
1. Convert the "survived" column to boolean values
2. Compute the cross tabulation (a.k.a. contingency table) of passenger-class and ...
# Write your answer here
Task: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group. Create a stacked, horizontal Barplot of the adjusted death counts.
1. Sum the death_counts per passenger-class and sex, and convert to data type float (for Python 2.x division purposes).
2. Com...
# Write your answer here
As seen, when we assign a lambda expression to a label we can use it. Note that by default a lambda expression returns the type of whatever it is handling: if you send in a number and do a numerical operation, you will receive back a number; a string, a string; etc. Below, we want to create more complex lambda ex...
# Even or Odd lambda
even_odd = lambda x: x % 2
even_odd(9)
Functions and Methods/Lambda Expressions.ipynb
mohsinhaider/pythonbootcampacm
mit
As we can see, the lambda expression returns the number we expected. How do we return a True or False value? Note: we can't use if statements or return statements inside a lambda, which limits its power. However, we can still have lambda expressions return True or False. Observe the following syntax.
even_odd = lambda x: True if x % 2 == 0 else False
even_odd(9)
Soon, we will learn about creating our own classes, and eventually data structures. When we learn how to make our own data structures, we'll write what are conventionally known as "magic methods". These methods are not called upon explicitly, but are triggered by some internal action that Python...
# script that "converts" a tuple to a list
some_tup = ("[", 3, 4, "hello", "]")
x = lambda tup: ", ".join(str(item) for item in tup)
print(x(some_tup))
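As a small taste of the magic methods mentioned above (the class name here is made up for illustration), __str__ is never called explicitly; print() triggers it internally to obtain a string form of the object:

```python
# Sketch of a magic method: __str__ runs implicitly whenever Python needs a
# string form of the object (e.g. via print() or str()).
class TupleWrapper:
    def __init__(self, tup):
        self.tup = tup

    def __str__(self):
        # reuse the join trick from above, with real brackets this time
        return "[" + ", ".join(str(item) for item in self.tup) + "]"

wrapped = TupleWrapper((3, 4, "hello"))
print(wrapped)  # -> [3, 4, hello]
```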
Somewhat scrappy, but let's just say it's pretty close to looking like an actual list. Strings are not mutable, so we can't just add a bracket at the beginning or at the end. There are certain ways to make this possible, however. Operations like this are better suited for functions, anyway.

Lambdas as Parameters ...
sorted([4, 2, 8, 5, 2, 9])
However, we have much more power with the key. The key parameter accepts a function that is applied to each element to determine its sort order. We can pass in a lambda expression to change what sorted means to us. What if we wanted the even numbers to be at the end?
sorted([4, 2, 8, 5, 2, 9], key=lambda x: x%2 == 0)
What if we wanted the numbers to be reversed, such as in descending order?
sorted([1, 5, 2, 5, 2, 9, 4], reverse=True)

lst = [1, 3, 4, 5]
lst[::-1]
TO GET STARTED, CLICK "CELL" IN THE MENU BAR ABOVE, THEN SELECT "RUN ALL"
from SQL_support_code import *
Code/SQL/SQL_Intro_DBcopy.ipynb
ky822/Data_Bootcamp
mit
Structure and Formatting

Query Basics:

- Indentations and Returns: mostly arbitrary in SQL; usually for readability
- Capitalization: convention is to put keywords (functions, clauses) in CAPS; consistency is best
- Order of Clauses: very strict. Not all clauses need to be present in a query, but when they are present, ...
describe_differences
These are the names of the tables in our mini SQLite database:

- sales_table
- car_table
- salesman_table
- cust_table

Start by looking at the columns and their data types in the sales_table.
run(''' PRAGMA TABLE_INFO(sales_table) ''')
Rewrite the query to look at the other tables:
run(''' PRAGMA TABLE_INFO(sales_table) ''') #print(describe_cheat)
Different RDBMS have different datatypes available:
- Oracle: http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/03_types.htm
- MySQL:
  - Numeric: http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
  - Date/time: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-overview.html
  - String/text: ht...
run(''' SELECT * FROM sales_table ''')
Write a query to select all columns from the car_table:
run(''' SELECT NULL ''') #print(select_cheat1)
SELECT COLUMN:

    SELECT column_a,  # comma-separate multiple columns
           column_b
    FROM table_name

Instead of using an asterisk for "all columns", you can specify a particular column or columns:
run(''' SELECT model_id, revenue FROM sales_table ''')
Write a query to select model_id and model from the car_table:
run(''' SELECT NULL ''') #print(select_cheat2)
One more quick note on the basics of SELECT - technically you can SELECT a value without using FROM to specify a table. You could just tell the query exactly what you want to see in the result-set. If it's a number, you can write the exact number. If you are using various characters, put them in quotes. See the query b...
run(''' SELECT 4, 5, 7, 'various characters or text' ''')
SELECT DISTINCT VALUES IN COLUMNS:

    SELECT DISTINCT column_a  # returns a list of each unique value in column_a
    FROM table_name

Use DISTINCT to return unique values from a column. More on DISTINCT: http://www.w3schools.com/sql/sql_distinct.asp

The query below pulls each distinct value from the model_id colu...
run(''' SELECT DISTINCT model_id FROM sales_table ''')
Use DISTINCT to select unique values from the salesman_id column in sales_table. Delete DISTINCT and rerun to see the effect.
run(''' SELECT NULL ''') #print(select_cheat3)
WHERE

    SELECT column_a
    FROM table_name
    WHERE column_a = x  # filters the result-set to rows where column_a's value is exactly x

A few more options for the WHERE clause:

    WHERE column_a = 'some_text'  # put text in quotations. CAPITALIZATION IS IMPORTANT
    WHERE column_a != x  # filters the result...
run('''
SELECT *
FROM sales_table
WHERE payment_type = 'cash'
AND model_id = 46
''')
Rewrite the query to return rows where payment_type is NOT cash, and the model_id is either 31 or 36 - Extra: Try changing 'cash' to 'Cash' to see what happens.
run(''' SELECT NULL ''') #print(where_cheat1)
Using BETWEEN, rewrite the query to return rows where the revenue was between 24,000 and 25,000:
run(''' SELECT NULL ''') #print(where_cheat2)
WHERE column LIKE:

    SELECT column_a
    FROM table_name
    WHERE column_a LIKE '%text or number%'  # filters the result-set to rows where that text or value can be found, with % standing in as a wildcard

LIKE lets you avoid issues with capitalization in quotes, and you can use % as a wildcard to stand in for...
run('''
SELECT *
FROM sales_table
WHERE payment_type LIKE 'Cas%'
''').head()
Be careful with LIKE though - it can't deal with extra characters or misspellings:
run('''
SELECT *
FROM sales_table
WHERE payment_type LIKE 'ces%'
LIMIT 5
''')
LIKE and % will also return too much if you're not specific enough. This returns both 'cash' and 'finance' because both have a 'c' with some letters before or after:
run('''
SELECT *
FROM sales_table
WHERE payment_type LIKE '%c%'
LIMIT 5
''')
You can use different wildcards besides % to get more specific. An underscore substitutes for exactly one character, rather than any number of characters. The query below uses 3 underscores after c to get 'cash':
run('''
SELECT *
FROM sales_table
WHERE payment_type LIKE 'c___'
LIMIT 5
''')
Say you can't remember the model of the car you're trying to look up. You know it's "out"...something. Outcast? Outstanding? Write a query to return the model_id and model from the car_table and use LIKE to help you search:
run(''' SELECT NULL ''') #print(where_cheat3)
ORDER BY

    SELECT column_a
    FROM table_name
    WHERE  # optional
        column_a = x
    ORDER BY  # sorts the result-set by column_a
        column_a DESC  # DESC is optional. It sorts results in descending order (100->1) instead of ascending (1->100)

Without an ORDER BY clause, the defa...
run('''
SELECT *
FROM sales_table
ORDER BY revenue DESC
LIMIT 5
''')
Rewrite the query above to look at the sticker_price of cars from the car_table in descending order:
run(''' SELECT NULL ''') #print(order_cheat)
LIMIT

    SELECT column_a
    FROM table_name
    WHERE column_a = x  # optional
    ORDER BY column_a  # optional
    LIMIT N  # limits the result-set to N rows

LIMIT just limits the number of rows in your result set. More on LIMIT: http://www.w3schools.com/sql/sql_top.asp

The ability t...
limit_differences
The query below limits the number of rows to 5 results. Change it to 10 to get a quick sense of what we're doing here:
run(''' SELECT * FROM sales_table LIMIT 5 ''')
ALIASES

    SELECT T.column_a AS alias_a  # creates a nickname for column_a, and states that it's from table_name (whose alias is T)
    FROM table_name AS T  # creates a nickname for table_name
    WHERE alias_a = z  # refer to an alias in the WHERE clause
    ORDER BY alias_a  # ref...
run('''
SELECT model_id AS Model_of_car,
       revenue AS Rev_per_car
FROM sales_table
''')
You can use an alias in the ORDER BY and WHERE clauses now. Write a query to:
- pull the model_id and revenue for each transaction
- give model_id the alias "Model"
- give revenue the alias "Rev"
- limit the results to only include rows where the model_id is 36; use the alias in the WHERE clause
- order the results by ...
run(''' SELECT NULL ''') #print(alias_cheat)
You can also assign an alias to a table, and use the alias to tell SQL which table the column is coming from. This isn't of much use when you're only using one table, but it will come in handy when you start using multiple tables. Below, the sales_table has the alias "S". Read "S.model_id" as "the model_id column from S...
run('''
SELECT S.model_id,
       S.revenue
FROM sales_table AS S
LIMIT 5
''')
JOINS

    SELECT *
    FROM table_x
    JOIN table_y  # use JOIN to add the second table
        ON table_x.column_a = table_y.column_a  # use ON to specify which columns correspond on each table

Joining tables is the most fundamental and useful part ...
run(''' SELECT * FROM sales_table LIMIT 5 ''')
Now the first few rows of the car_table:
run(''' SELECT * FROM car_table LIMIT 5 ''')
These tables are related. There's a column named "model_id" in the sales_table and a "model_id" in the car_table - but the column names don't need to be the same, what's important is that the values in the sales_table's model_id column correspond to the values in the car_table's model_id column. You can join these tab...
run('''
SELECT *
FROM sales_table
JOIN car_table
    ON sales_table.model_id = car_table.model_id
LIMIT 10
''')
Write a query to join the cust_table to the sales_table, using the customer_id columns in both tables as the key:
run(''' SELECT NULL ''') #print(join_cheat1)
Rewrite the query from above, but instead of selecting all columns, specify just the customer gender and the revenue:
run(''' SELECT NULL ''') #print(join_cheat2)
Rewrite the query from above, but this time select the customer_id, gender, and revenue:
- You'll probably hit an error at first. Try to use what you've learned about this structure "table_x.column_a" to fix the issue. Why do you think you need to use this?
run(''' SELECT NULL ''') #print(join_cheat3)
A column with the name customer_id appears in both the cust_table and the sales_table. SQL doesn't know which one you want to see. You have to tell it from which table you want the customer_id. This can be important when columns in different tables have the same names but totally unrelated values. Look at the sales_ta...
run(''' SELECT * FROM sales_table LIMIT 5 ''')
Above, there's a column called "id". Now look at the salesman_table again:
run(''' SELECT * FROM salesman_table LIMIT 5 ''')
There's a column named "id" in the salesman_table too. However, it doesn't look like those IDs correspond to the sales_table IDs. In fact, it's the salesman_id column in the sales_table that corresponds to the id column in the salesman_table. More often than not, your tables will use different names for corresponding c...
run(''' SELECT NULL ''') #print(join_cheat4)
Practice applying this "table_x.column_a" format to all columns in the SELECT clause when you are joining multiple tables, since multiple tables frequently use the same column names even when they don't correspond. It's common to use single-letter aliases for tables to make queries shorter. Take a look at the query bel...
run('''
SELECT S.customer_id,
       C.gender,
       S.revenue
FROM sales_table AS S
JOIN cust_table AS C
    ON S.customer_id = C.customer_id
''')
Join the sales_table (assign it the alias S) and salesman_table (alias SM) again.
- Select the id and salesman_id column from the sales_table
- Also, select the id column from the salesman_table
- Optional: assign aliases to the columns in the SELECT clause to make the result-set easier to read
run(''' SELECT NULL ''') #print(join_cheat5)
Different Types of Joins

There are different types of joins you can do according to your needs. Here's a helpful way to visualize your options: http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins

However, not all types of joins are compatible with SQLite and MySQL. The table below breaks down c...
join_differences
So far, we've just done a simple join, also called an "inner join". To illustrate different types of joins, we're going to use a different "database" for the following lesson. First, let's take a look at each one:
run('''
SELECT *
FROM Dog_Table
''')

run('''
SELECT *
FROM Cat_Table
''')
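The run() helper and the Dog_Table/Cat_Table schemas come from this course's support code, so as a self-contained sketch of what an outer join does, here is a plain-sqlite3 example with made-up tables (the table and column names are illustrative only, not the course database):

```python
# Self-contained LEFT JOIN sketch using Python's built-in sqlite3 module.
# A LEFT JOIN keeps every row of the left table; unmatched rows get NULL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE owners (owner_id INTEGER, name TEXT);
    CREATE TABLE pets   (owner_id INTEGER, pet  TEXT);
    INSERT INTO owners VALUES (1, 'Ann'), (2, 'Ben'), (3, 'Cal');
    INSERT INTO pets   VALUES (1, 'dog'), (2, 'cat');
""")
rows = conn.execute("""
    SELECT owners.name, pets.pet
    FROM owners
    LEFT JOIN pets ON owners.owner_id = pets.owner_id
""").fetchall()
print(rows)  # 3 rows; 'Cal' has no pet, so its pet column is None
```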