Notice that the Owner_Name columns in each table have some corresponding values (Michael, Gilbert, May, Elizabeth, and Donna are in both tables), but each table also has values that don't overlap. JOINS or INNER JOINS SELECT * FROM table_x X JOIN table_y Y ON X.column_a = Y.column_a # Returns rows ...
run(''' SELECT NULL ''') #print(inner_join_cheat)
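The INNER JOIN pattern above can be sketched outside the notebook's run() helper with Python's built-in sqlite3 module. The Cat_Table/Dog_Table rows below are invented stand-ins for the tutorial's data:

```python
import sqlite3

# Minimal runnable sketch of an INNER JOIN; table contents are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Cat_Table (Owner_Name TEXT, Cat_Name TEXT);
    CREATE TABLE Dog_Table (Owner_Name TEXT, Dog_Name TEXT);
    INSERT INTO Cat_Table VALUES ('Michael', 'Whiskers'), ('Donna', 'Mittens'), ('Ann', 'Socks');
    INSERT INTO Dog_Table VALUES ('Michael', 'Rex'), ('Donna', 'Fido'), ('Bob', 'Spot');
""")
rows = con.execute("""
    SELECT C.Owner_Name, C.Cat_Name, D.Dog_Name
    FROM Cat_Table C
    JOIN Dog_Table D ON C.Owner_Name = D.Owner_Name
""").fetchall()
print(rows)  # only Michael and Donna, who appear in both tables
```

Only owners present in both tables survive the join; Ann (cat only) and Bob (dog only) are dropped.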
Code/SQL/SQL_Intro_DBcopy.ipynb
ky822/Data_Bootcamp
mit
Notice that the result-set only includes the names that are in both tables. Think of inner joins as being the overlapping parts of a Venn Diagram. So, essentially we're looking at results only where the pet owner has both a cat and a dog. LEFT JOINS or LEFT OUTER JOINS SELECT * FROM table_x X LEFT JOIN t...
run(''' SELECT NULL ''') #print(left_join_cheat)
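A minimal LEFT JOIN sketch in the same spirit (again with invented sample rows): every row from the left table is kept, and the right table's columns come back as NULL where no match exists.

```python
import sqlite3

# Hedged sketch of a LEFT JOIN with invented data: every dog owner appears,
# and Cat_Name is NULL (Python None) when the owner has no cat.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Cat_Table (Owner_Name TEXT, Cat_Name TEXT);
    CREATE TABLE Dog_Table (Owner_Name TEXT, Dog_Name TEXT);
    INSERT INTO Cat_Table VALUES ('Michael', 'Whiskers'), ('Donna', 'Mittens');
    INSERT INTO Dog_Table VALUES ('Michael', 'Rex'), ('Bob', 'Spot');
""")
rows = con.execute("""
    SELECT D.Owner_Name, D.Dog_Name, C.Cat_Name
    FROM Dog_Table D
    LEFT JOIN Cat_Table C ON D.Owner_Name = C.Owner_Name
""").fetchall()
print(rows)  # Bob is kept, with Cat_Name = None
```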
This time, you're seeing everything from the Dog_Table, but only results from the Cat_Table IF the owner also has a dog. OUTER JOINS or FULL OUTER JOINS: SELECT * FROM table_x X OUTER JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows, regardless of whether values match Outer joins include ...
run(''' SELECT C.Owner_Name, Cat_Name, Dog_Name FROM Cat_Table C LEFT JOIN Dog_Table D ON D.Owner_Name = C.Owner_Name UNION ALL SELECT D.Owner_Name, ' ', Dog_Name FROM Dog_Table D WHERE Owner_Name NOT IN (SE...
Essentially, in Venn Diagram terms, an outer join lets you see all contents of both circles. This join will let you see all pet owners, regardless of whether they own only a cat or only a dog. Using the "WHERE" Clause to Join Tables SELECT * FROM table_x X JOIN table_y Y WHERE X.column_a = Y.column_a ...
run(''' SELECT C.model, S.revenue FROM sales_table S, car_table C WHERE S.model_id = C.model_id LIMIT 5 ''')
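To see that the comma-join + WHERE form is equivalent to an explicit JOIN ... ON, here is a hedged sqlite3 sketch with toy stand-ins for sales_table and car_table (all rows invented):

```python
import sqlite3

# Both join styles below return the same rows; only the syntax differs.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE car_table (model_id INTEGER, model TEXT);
    CREATE TABLE sales_table (model_id INTEGER, revenue INTEGER);
    INSERT INTO car_table VALUES (1, 'Civic'), (2, 'Tundra');
    INSERT INTO sales_table VALUES (1, 18000), (2, 31000), (1, 17500);
""")
where_join = con.execute("""
    SELECT C.model, S.revenue
    FROM sales_table S, car_table C
    WHERE S.model_id = C.model_id
""").fetchall()
explicit_join = con.execute("""
    SELECT C.model, S.revenue
    FROM sales_table S JOIN car_table C ON S.model_id = C.model_id
""").fetchall()
print(sorted(where_join) == sorted(explicit_join))  # True: identical result sets
```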
When the query is longer, this method is messy. Suddenly it's harder to parse out which parts of the "WHERE" clause are actual filters, and which parts are just facilitating the join. Note that we've covered all of these clauses and expressions by now, try to parse out what's going on:
run(''' SELECT C.make, C.model, S.revenue, CUST.gender, SM.first_name FROM sales_table S JOIN car_table C JOIN salesman_table SM JOIN cust_table CUST WHERE S.customer_id = CUST.customer_id AND S.model_id = C.model_id ...
OPERATORS ADDING / SUBTRACTING / MULTIPLYING / DIVIDING SELECT column_a + column_b # adds the values in column_a to the values in column_b FROM table_name Use the standard formats for add, subtract, multiply, and divide: + - * / The query below subtracts cogs (from the car_table) from revenue (fro...
run(''' SELECT S.id, C.model, S.revenue, C.cogs, S.revenue - C.cogs AS gross_profit FROM sales_table S JOIN car_table C on S.model_id = C.model_id LIMIT 5 ''')
Rewrite the query above to return gross margin instead of gross profit. Rename the alias as well. Limit it to 5 results
run(''' SELECT NULL ''') #print(operator_cheat)
CONCATENATING: Concatenating varies by RDBMS:
concat_differences
Here we'll use SQLite and use the concatenating operator || to combine words/values in different columns:
run(''' SELECT last_name, first_name, last_name || ', ' || first_name AS full_name FROM salesman_table ''')
Use || to pull the make and model from the car_table and make it appear in this format: "Model (Make)" - give it an alias to clean up the column header, otherwise it'll look pretty messy
run(''' SELECT NULL ''') #print(concat_cheat)
FUNCTIONS: SELECT SUM(column_a), # sums up the values in column_a AVG(column_a), # averages the values in column_a ROUND(AVG(column_a), 2), # rounds the averaged values in column_a to 2 digits COUNT(column_a), # counts the number of rows in column_a MAX(column_a...
run(''' SELECT SUM(revenue) AS Total_Revenue FROM sales_table ''')
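A compact sketch showing several of the aggregate functions listed above in a single SELECT, over a toy revenue column (values invented):

```python
import sqlite3

# SUM, AVG (rounded), COUNT, MAX and MIN in one pass over invented data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_table (revenue REAL)")
con.executemany("INSERT INTO sales_table VALUES (?)", [(100.0,), (200.0,), (250.0,)])
row = con.execute("""
    SELECT SUM(revenue),
           ROUND(AVG(revenue), 2),
           COUNT(revenue),
           MAX(revenue),
           MIN(revenue)
    FROM sales_table
""").fetchone()
print(row)  # (550.0, 183.33, 3, 250.0, 100.0)
```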
Rewrite the query to return the average cost of goods for a car in the car table. Try rounding it to cents. - If you can't remember the name of the column for cost of goods in the car_table, remember you can use "SELECT * FROM car_table LIMIT 1" to see the first row of all columns, or you can use "PRAGMA TABLE_INFO(ca...
run(''' SELECT NULL ''') #print(avg_cheat)
Using COUNT(*) will return the number of rows in any given table. Rewrite the query to return the number of rows in the car_table: - After you've run the query, try changing it by adding "WHERE make = 'Subaru'" and see what happens
run(''' SELECT NULL ''') #print(count_cheat)
You can apply functions on top of other operators. Below is the sum of gross profits:
run(''' SELECT '$ ' || SUM(S.revenue - C.cogs) total_gross_profit FROM sales_table S JOIN car_table C on S.model_id = C.model_id ''')
Write a query to show the average difference between the sticker_price (in car_table) and the revenue. If you want a challenge, try to join cust_table and limit the query to only look at transactions where the customer's age is over 35
run(''' SELECT NULL ''') #print(avg_cheat2)
GROUP_CONCAT SELECT GROUP_CONCAT(column_a, '[some character separating items]') FROM table_x This function is useful to return comma-separated lists of the values in a column
run(''' SELECT GROUP_CONCAT(model, ', ') as Car_Models FROM car_table ''')
Use GROUP_CONCAT to return a comma-separated list of last names from the salesman_table:
run(''' SELECT NULL ''') #print(concat_cheat)
GROUP BY: SELECT column_a, SUM(column_b) # sums up the values in column_b FROM table_name GROUP BY # creates one group for each unique value in column_a column_a Creates a group for each unique value in the column you specify Extremely helpful when you're using fun...
run(''' SELECT C.model AS Car_Model, SUM(revenue) AS Total_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY Car_Model ''')
Rewrite the query above to return the average gross profit (revenue - cogs) per make (remember that "make" is in the car_table) Extra things to try: - Round average revenue to two decimal points - Order the results by gross profit in descending order - Rename the make column as "Car_Maker" and use the alias in the GROU...
run(''' SELECT NULL ''') #print(group_cheat)
Write a query to make a comma-separated list of models for each car maker:
run(''' SELECT NULL ''') #print(group_cheat1)
GROUP BY, when used with joins and functions, can help you quickly see trends in your data. Parse out what's going on here:
run(''' SELECT C.model AS Car_Model, MIN(S.revenue) || ' - ' || MAX(S.revenue) AS Min_to_Max_Sale, MAX(S.revenue) - MIN(S.revenue) AS Range, ROUND(AVG(S.revenue), 2) AS Average_Sale FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY ...
You can also use GROUP BY with multiple columns to segment out the results further:
run(''' SELECT C.make AS car_maker, payment_type, ROUND(AVG(revenue)) as avg_revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY C.Make, payment_type ''')
Rewrite the query to find the total revenue grouped by each salesperson's first_name and by the customer's gender (gender column in cust_table) - For an extra challenge, use the concatenating operator to use the salesperson's full name instead - Add COUNT(S.id) to the SELECT clause to see the number of transactions in ...
run(''' SELECT NULL ''') #print(group_cheat2)
"HAVING" in GROUP BY statements: SELECT column_a, SUM(column_b) AS alias_b FROM table_name GROUP BY column_a HAVING alias_b > x # only includes groups in column_a when the sum of column_b is greater than x If you've applied a function to a column and want to filter to only show results meeting a ...
run(''' SELECT C.Make as Car_Maker, SUM(revenue) as Total_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY Car_Maker HAVING Total_Revenue > 500000 ''')
Rewrite the query above to look at average revenue per model, and using HAVING to filter your result-set to only include models whose average revenue is less than 18,000:
run(''' SELECT NULL ''') #print(having_cheat)
HAVING vs WHERE: WHERE filters which rows will be included in the function, whereas HAVING filters what's returned after the function has been applied. Take a look at the query below. It might look like the query you just wrote (above) if you'd tried to use WHERE instead of HAVING: SELECT C.model as Car_Model, ...
run(''' SELECT C.model as Car_Model, AVG(S.revenue) as Avg_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id WHERE S.revenue < 18000 GROUP BY Car_Model ''')
All model_ids are returned, but the averages are all much lower than they should be. That's because the query first drops all rows that have revenue greater than 18000, and then averages the remaining rows. When you use HAVING, SQL follows these steps instead (this query should look like the one you wrote in the last c...
run(''' SELECT C.model as Car_Model, AVG(S.revenue) as Avg_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY Car_Model HAVING Avg_Revenue < 18000 ''')
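The WHERE-vs-HAVING distinction can be verified in a few lines of sqlite3, using a single toy model whose two sales average to 19,000 (data invented for illustration):

```python
import sqlite3

# WHERE filters rows before the AVG() runs; HAVING filters the per-group
# averages after aggregation.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (model TEXT, revenue REAL);
    INSERT INTO sales VALUES ('Civic', 17000), ('Civic', 21000);
""")
# The average over ALL Civic rows is 19000, so HAVING < 18000 excludes the group...
having = con.execute("""
    SELECT model, AVG(revenue) FROM sales
    GROUP BY model HAVING AVG(revenue) < 18000
""").fetchall()
# ...but WHERE first discards the 21000 row, then averages what's left (17000).
where = con.execute("""
    SELECT model, AVG(revenue) FROM sales
    WHERE revenue < 18000 GROUP BY model
""").fetchall()
print(having)  # []
print(where)   # [('Civic', 17000.0)]
```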
HAVING & WHERE in the same query: Sometimes, you will want to use WHERE and HAVING in the same query Just be aware of the order of the steps that SQL takes Rule of thumb: if you're applying a function to a column, you probably don't want that column in the WHERE clause This query is only looking at Toyotas whose re...
run(''' SELECT C.model as Car_Model, AVG(S.revenue) as Avg_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id WHERE C.make = 'Toyota' GROUP BY Car_Model HAVING Avg_Revenue < 18000 ''')
Write a query with the following criteria: - SELECT clause: - salesman's last name and average revenue, rounded to the nearest cent - FROM clause: - sales_table joined with the salesman_table and the cust_table - WHERE clause: - only female customers - GROUP BY clause: - only salespeople whose average revenue was ...
run(''' SELECT NULL ''') #print(having_where_cheat)
ROLLUP SELECT column_a, SUM(column_b) FROM table_x GROUP BY ROLLUP(column_a) # adds up all groups' values in a single final row Rollup, used with GROUP BY, provides subtotals and totals for your groups Useful for quick analysis Varies by RDBMS
rollup_differences
Because SQLite doesn't support ROLLUP, the query below is just intended to illustrate how ROLLUP would work. Don't worry about understanding the query itself, just get familiar with what's going on in the result-set:
run(''' SELECT C.model AS Car_Model, SUM(S.revenue) as Sum_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY C.model UNION ALL SELECT 'NULL', SUM(S.revenue) FROM sales_table S ''')
Conditional Expressions: IF & CASE WHEN SELECT CASE WHEN column_a = x THEN some_value WHEN column_a = y THEN some_value2 ELSE some_other_value END some_alias # alias optional after END FROM table_name Conditional expressions let you use IF/THEN logic in SQL I...
conditional_differences
Starting with a simple example, here we'll use CASE WHEN to create a new column on the sales_table:
run(''' SELECT revenue, CASE WHEN revenue > 20000 THEN 'Revenue is more than 20,000' END Conditional_Column FROM sales_table LIMIT 10 ''')
CASE WHEN gives you the value "Revenue is more than 20,000" when revenue in that same row is greater than 20,000. Otherwise, it has no value. Now let's add a level:
run(''' SELECT revenue, CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000' WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000' END Conditional_Column FROM sales_table LIMIT 10 ''')
Now to deal with the blank spaces. You can assign an "ELSE" value to catch anything that's not included in the prior expressions:
run(''' SELECT revenue, CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000' WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000' ELSE 'NEITHER' END Conditional_Column FROM sales_table LIMIT 10 ''')
You can use values from another column as well. Remember this query from the GROUP BY lesson? It's often helpful to look at information broken out by multiple groups, but it's not especially easy to digest:
run(''' SELECT C.Make as car_maker, payment_type, ROUND(AVG(S.revenue)) as avg_revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY C.Make, payment_type ''')
Look at what's going on in that query without the AVG( ) function and the GROUP BY clause:
run(''' SELECT C.Make as Car_Maker, payment_type, S.revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id ''')
The result-set above is essentially what SQL is working with right before it separates the rows into groups and averages the revenue within those groups. Now, we're going to use some CASE WHEN statements to change this a little:
run(''' SELECT C.Make as Car_Maker, payment_type, CASE WHEN payment_type = 'cash' THEN S.revenue END Cash_Revenue, CASE WHEN payment_type = 'finance' THEN S.revenue END Finance_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id ''')
Now let's add back the ROUND() and AVG() functions and the GROUP BY statement:
run(''' SELECT C.Make as Car_Maker, ROUND(AVG(CASE WHEN payment_type = 'cash' THEN S.revenue END)) AS Avg_Cash_Revenue, ROUND(AVG(CASE WHEN payment_type = 'finance' THEN S.revenue END)) AS Avg_Finance_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id...
CASE WHEN makes this same information a lot easier to read by letting you pivot the result set a little. Write a query using CASE WHEN to look at total revenue per gender, grouped by each car model
run(''' SELECT NULL ''') #print(case_cheat)
CASE WHEN also lets you create new groups. Start by looking at the cust_table grouped by age - remember that COUNT(*) tells you how many rows are in each group (which is the same as telling you the number of customers in each group):
run(''' SELECT age, COUNT(*) customers FROM cust_table GROUP BY age ''')
When you want to segment your results, but there are too many different values for GROUP BY to be helpful, use CASE WHEN to make your own groups. GROUP BY the column you created with CASE WHEN to look at your newly created segments.
run(''' SELECT CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years' WHEN age BETWEEN 25 AND 34 THEN '25-34 years' WHEN age BETWEEN 35 AND 44 THEN '35-44 years' WHEN age BETWEEN 45 AND 54 THEN '45-54 years' WHEN age BETWEEN 55 AND 64 THEN '55-64 years' ...
Ta-DA! Useful customer segments! Try to break up the "Customers" column into 2 columns - one for male and one for female. Keep the age segments intact. - Note that COUNT(*) cannot be wrapped around a CASE WHEN expression the way that other functions can. Try to think of a different way to get a count. - Extra challe...
run(''' SELECT NULL ''') #print(case_cheat2)
NESTING Nested queries allow you to put a query within a query Depending on your needs, you might put a nested query in the SELECT clause, the FROM clause, or the WHERE clause Consider the following query. We're using a nested query in the SELECT clause to see the sum of all revenue in the sales_table, and then using...
run(''' SELECT C.model AS Car_Model, SUM(S.revenue) AS Revenue_Per_Model, (SELECT SUM(revenue) FROM sales_table) AS Total_Revenue, SUM(S.revenue) / (SELECT SUM(revenue) FROM sales_table) AS Contribution_to_Revenue FROM sales_table S JOIN car_table C ON C.model_id ...
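A self-contained sketch of the same idea: a scalar subquery in the SELECT clause computes each group's share of total revenue (table name and data invented for illustration):

```python
import sqlite3

# The nested (SELECT SUM(revenue) FROM sales) runs as a scalar subquery and is
# reused in every output row.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (model TEXT, revenue REAL);
    INSERT INTO sales VALUES ('A', 100), ('A', 100), ('B', 200);
""")
rows = con.execute("""
    SELECT model,
           SUM(revenue) AS per_model,
           SUM(revenue) / (SELECT SUM(revenue) FROM sales) AS share
    FROM sales
    GROUP BY model
""").fetchall()
print(sorted(rows))  # [('A', 200.0, 0.5), ('B', 200.0, 0.5)]
```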
Write a query to look at the model name and COGs for each car in car_table, then use a nested query to also look at the average COGs of all car models in a third column - Extra Challenge: add a fourth column using another nested query to return the difference between each car model's COGs and the average COGs
run(''' SELECT NULL ''') #print(nest_cheat1)
UNION & UNION ALL SELECT column_a FROM table_x UNION # or UNION ALL SELECT column_b FROM table_y UNION allows you to run a 2nd query (or 3rd, or 4th); by default, the combined results are sorted together with the results of the first query. UNION ALL ensures that the results in the result set appear i...
run(''' SELECT model FROM car_table WHERE model = 'Tundra' UNION SELECT first_name FROM salesman_table WHERE first_name = 'Jared' ''')
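The de-duplicating behavior of UNION versus UNION ALL can be checked with a toy table (names reused from the tutorial, data invented):

```python
import sqlite3

# UNION removes duplicate rows; UNION ALL keeps them and appends the second
# query's rows after the first query's.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (name TEXT);
    INSERT INTO t VALUES ('Tundra'), ('Jared'), ('Jared');
""")
union_rows = con.execute("""
    SELECT name FROM t WHERE name = 'Tundra'
    UNION
    SELECT name FROM t WHERE name = 'Jared'
""").fetchall()
union_all_rows = con.execute("""
    SELECT name FROM t WHERE name = 'Tundra'
    UNION ALL
    SELECT name FROM t WHERE name = 'Jared'
""").fetchall()
print(union_rows)      # de-duplicated; SQLite typically returns these ascending
print(union_all_rows)  # [('Tundra',), ('Jared',), ('Jared',)]
```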
Some things to note: - Although these queries and their results are unrelated, the column header is dictated by the query that appears first - Even though the query for "Tundra" is first, "Tundra" is second in the results. UNION will sort all results according to the normal default rules, ascending order. - Replace UNI...
run(''' SELECT NULL ''') #print(union_cheat1)
Consider the issue we had before, where SQLite didn't support ROLLUP. We used this query as a workaround. Does it make sense now?
run(''' SELECT C.model AS Car_Model, SUM(S.revenue) as Sum_Revenue FROM sales_table S JOIN car_table C on S.model_id = C.model_id GROUP BY C.model UNION ALL SELECT 'NULL', SUM(S.revenue) FROM sales_table S ''')
Optimization: Non-optimized queries can cause a lot of problems because tables frequently have thousands or millions of rows: If you haven't optimized your query, it might: Take several minutes (or even hours) to return the information you're requesting Crash your computer Muck up the server's processes, and you'll fa...
run(''' SELECT date, revenue FROM sales_table ''').head()
DON'T use an asterisk unless you absolutely have to: This can put a lot of strain on servers. Only use it if you know for certain that you're using a small table
run(''' SELECT * FROM sales_table ''').head()
DO use LIKE on small tables and in simple queries: LIKE is helpful if you know where to find something but you can't quite remember what it's called. Use wildcards sparingly - don't use 2 when 1 will suffice:
run(''' SELECT model_id, model FROM car_table WHERE model LIKE '%undra' ''')
DON'T use LIKE on large tables or when using JOINs:
run(''' SELECT C.model, AVG(revenue) FROM sales_table S JOIN car_table C on S.model_id = C.model_id WHERE C.model LIKE '%undra' ''')
If you want to look at average revenue for car models that are like "%undra", run the LIKE query on the small table (car_table) first to figure out exactly what you're looking for, then use that information to search for the data you need from the sales_table DO dip your toe in by starting with a small data set Use WHER...
run(''' SELECT revenue, date FROM sales_table WHERE date = '1/1/14' ''')
DO use a UNION to look at result-sets that aren't mutually exclusive Let's say you were interested in seeing all Toyotas as well as cars with COGs of more than 13000. Write a query for the first group, then a query for the second group, and unite them with UNION. The result set won't show you repeats - if a row matches...
run(''' SELECT make, model, cogs FROM car_table WHERE make = 'Toyota' UNION SELECT make, model, cogs FROM car_table WHERE cogs > 13000 ''')
DON'T use OR when a UNION will generate the same results Note that we'll get the same results as above, but this query could run MUCH slower on a large table. It's tempting to use OR because it's faster to write, but unless you're dealing with very small tables, avoid the temptation. In 5 years of doing business analyt...
run(''' SELECT make, model, cogs FROM car_table WHERE make = 'Toyota' OR cogs > 13000 ''')
DON'T use negative filters when a positive filter is possible Let's say you want to look at cars made by Toyota and Honda, but you don't care about Subaru. It might be tempting to use a negative filter:
run(''' SELECT * FROM car_table WHERE make != 'Subaru' ''')
On a big table, this will run much more slowly than if you use a positive filter. Try this instead - it might require a little extra typing, but it will run much faster:
run(''' SELECT * FROM car_table WHERE make in ('Toyota', 'Honda') ''')
Wrapping Up: Debugging: If you run into errors when you start writing your own queries, here are some things to make sure your query has: - The right names for columns in the SELECT clause - Columns that can be found in the tables in the FROM clause - Consistent use of aliases throughout (if using aliases) - Joined tab...
run(''' SELECT first_name || ' ' || last_name as Salesperson, COUNT(*) as Cars_Sold FROM sales_table S JOIN salesman_table M ON S.salesman_id = M.id GROUP BY Salesperson ORDER BY Cars_Sold DESC ''')
Add on the average amount of revenue made per sale:
run(''' SELECT first_name || ' ' || last_name as Salesperson, COUNT(*) as Cars_Sold, ROUND(AVG(revenue)) as Revenue_per_Sale FROM sales_table S JOIN salesman_table M ON S.salesman_id = M.id GROUP BY Salesperson ORDER BY Cars_Sold DESC ''')
Make it easier to compare the average revenue of Jared's sales to the average revenue per sale overall by adding a column to see by what percent each salesperson's sales are more or less than average:
run(''' SELECT first_name || ' ' || last_name as Salesperson, COUNT(*) as Cars_Sold, ROUND(AVG(revenue), 2) as Rev_per_Sale, ROUND((((AVG(revenue) - (SELECT AVG(revenue) from sales_table)) /(SELECT AVG(revenue) from sales_table))*100), 1) || ' %' ...
So maybe Jared is just selling cheaper cars. Let's go further and compare the sale price of each car against the sticker price to see how low Jared was willing to negotiate with customers. Sticker price is in another table, but again, that's no problem with SQL:
run(''' SELECT first_name || ' ' || last_name as Salesperson, COUNT(*) as Cars_Sold, '$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale, ROUND((((AVG(revenue) - (SELECT AVG(revenue) from sales_table where salesman_id != 215)) /(SELECT AVG(revenue) from sales_t...
Looks like Jared is letting customers negotiate prices down much more than his peers. But is this a real problem? How much is each salesperson contributing to our gross profits?
run(''' SELECT first_name || ' ' || last_name as Salesperson, COUNT(*) as Cars_Sold, '$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale, ROUND((((AVG(revenue) - (SELECT AVG(revenue) from sales_table where salesman_id != 215)) /(SELECT AVG(revenue) from sales_t...
SQL really lets you dig. Some other quick examples - we could do a gender breakdown of customers per car model and add a total at the bottom:
run(''' SELECT C.model as Car_Model, ROUND(SUM(CASE WHEN CUST.gender = 'female' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Female Customers', ROUND(SUM(CASE WHEN CUST.gender = 'male' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Male Customers' FROM sales_table S JOIN car_table C...
Easily create age groups and see how aggressively each group negotiates (judged by the difference between the actual sale amount and the sticker price):
run(''' SELECT CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years' WHEN age BETWEEN 25 AND 34 THEN '25-34 years' WHEN age BETWEEN 35 AND 44 THEN '35-44 years' WHEN age BETWEEN 45 AND 54 THEN '45-54 years' WHEN age BETWEEN 55 AND 64 THEN '55-64 years' ...
ck will be a ckanapi instance that carries your CKAN account's write permissions, and is able to read all public datasets.
ck = ckanapi.RemoteCKAN(CKAN["dpaw-internal"]["url"], apikey=CKAN["dpaw-internal"]["key"])
Kimberley LCI.ipynb
florianm/biosys-etl
mit
A CKAN resource's URL changes if the file resource changes, but the resource ID will be persistent. The config dict LCI lists resource names (from original data worksheet names) against their CKAN resource ID. A helper function get_data reads all configured datasets (CSV resources in CKAN).
data = h.get_data(ck, LCI)
data
[r for r in data["sites"]][0]
Clean checkpoint folder if exists
try: shutil.rmtree('/tmp/chainer_examples') except OSError: pass
common/notebook/skchainer/iris_save_restore.ipynb
bizreach/common-ml
apache-2.0
Save model, parameters and learned variables.
os.makedirs('/tmp/chainer_examples/') serializers.save_hdf5('/tmp/chainer_examples/iris_custom_model', classifier.model.predictor) serializers.save_hdf5('/tmp/chainer_examples/iris_custom_optimizer', classifier.optimizer) classifier = None
Restore everything
model = Model(X_train.shape[1]) serializers.load_hdf5('/tmp/chainer_examples/iris_custom_model', model) new_classifier = ChainerEstimator(model=SoftmaxCrossEntropyClassifier(model), optimizer=optimizers.AdaGrad(lr=0.1), batch_size=100, ...
Frame 0 displays the initial vertical segment, i.e. the dragon curve as defined in step 0 of the iterative construction process.
alpha = pi/10 # The rotation of 90 degrees is defined as 5 successive rotations of 18 degrees = pi/10 radians n_rot90 = 13 # we have 13 steps frames = [] for k in range(n_rot90): #Record the last point on the dragon, defined in the previous step x0, y0 = X[-1], Y[-1] x = X-x0 #Translation with origin at (x0,...
Animating-the-Dragon-curve-construction.ipynb
empet/Math
bsd-3-clause
Define a button that triggers the animation:
buttonPlay = {'args': [None, {'frame': {'duration': 100, 'redraw': False}, 'transition': {'duration': 0}, 'fromcurrent': True, 'mode': 'immediate'}], ...
What's this PyTorch business? You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized. For the last...
class ChunkSampler(sampler.Sampler): """Samples elements sequentially from some offset. Arguments: num_samples: # of desired datapoints start: offset where we should start selecting from """ def __init__(self, num_samples, start = 0): self.num_samples = num_samples self....
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
dtype = torch.FloatTensor # the CPU datatype # Constant to control how frequently we print train loss print_every = 100 # This is a little utility that we'll use to reset the model # if we want to re-initialize all our parameters def reset(m): if hasattr(m, 'reset_parameters'): m.reset_parameters()
Example Model Some assorted tidbits Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs. We'll provide you with a Flatten function, wh...
class Flatten(nn.Module): def forward(self, x): N, C, H, W = x.size() # read in N, C, H, W return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
The example model itself The first step to training your own model is defining its architecture. Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll...
# Here's where we define the architecture of the model... simple_model = nn.Sequential( nn.Conv2d(3, 32, kernel_size=7, stride=2), nn.ReLU(inplace=True), Flatten(), # see above for explanation nn.Linear(5408, 10), # affine layer ) # Set the...
PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "Batch...
fixed_model_base = nn.Sequential( nn.Conv2d(3, 32, kernel_size=7, stride=1), # N x 32 x 32 x 3 -> N x 26 x 26 x 32 nn.ReLU(inplace=True), nn.BatchNorm2d(32), nn.MaxPool2d(kernel_size=2, stride=2), # N x 26 x 26 x 32 -> N x 13 x 13 x 32 ...
To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size x = torch.randn(64, 3, 32, 32).type(dtype) x_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data ans = fixed_model(x_var) # Feed it through the model! # Check to make ...
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
GPU! Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one. If this returns false, or otherwise fails in a not-graceful way (i.e., with some er...
# Verify that CUDA is properly configured and you have a GPU available
torch.cuda.is_available()

import copy
gpu_dtype = torch.cuda.FloatTensor

fixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)

x_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)
x_var_gpu = Variable(x.type(gpu_dtype))  # Construct a P...
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Run the following cell to evaluate the performance of the forward pass running on the CPU:
%%timeit
ans = fixed_model(x_var)
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
... and now the GPU:
%%timeit
torch.cuda.synchronize()            # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu)    # Feed it through the model!
torch.cuda.synchronize()            # Make sure there are no pending GPU computations
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our ...
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)

# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model_gpu.train()

# Load on...
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
def train(model, loss_fn, optimizer, num_epochs=1):
    for epoch in range(num_epochs):
        print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
        model.train()
        for t, (x, y) in enumerate(loader_train):
            x_var = Variable(x.type(gpu_dtype))
            y_var = Variable(y.type(gpu_dtyp...
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Check the accuracy of the model. Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below. You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be trai...
torch.cuda.random.manual_seed(12345)
fixed_model_gpu.apply(reset)
train(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)
check_accuracy(fixed_model_gpu, loader_val)
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Don't forget the validation set! And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperpara...
# Train your model here, and make sure the output of this cell is the accuracy of your best model on the
# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training
# and validation accuracy on your best model (measured by validation accuracy).
model_base = nn.Sequ...
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Describe what you did In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network. Tell us here! Test set -- run this only once Now that we've gotten a result we're ...
best_model = model
check_accuracy(best_model, loader_test)
CS231n/assignment2/PyTorch.ipynb
ALEXKIRNAS/DataScience
mit
Pre-processing transcriptome data and testing differentially expressed genes with Bioconductor

You need to run the following code in R:

source("http://bioconductor.org/biocLite.R")
biocLite(c("genefilter", "ecoliLeucine"))
library("ecoliLeucine")
library("genefilter")
data("ecoliLeucine")
eset = rma(ecoliLeucine)
r = ro...
import pandas as pd

ttest_df = pd.read_csv('ttest.csv')
ttest_df.head()
advanced/CytoscapeREST_KEGG_f1000.ipynb
muratcemkose/cy-rest-python
mit
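The ttest.csv file produced by the R step above isn't reproduced here, so a self-contained stand-in shows the same loading pattern. The column names and values below are my assumptions for illustration, not the actual rowttests output:

```python
import io
import pandas as pd

# Hypothetical miniature version of ttest.csv (column names assumed, not the real Bioconductor output)
csv_text = "gene,statistic,p.value\nb0001,2.31,0.021\nb0002,-0.45,0.655\n"

ttest_df = pd.read_csv(io.StringIO(csv_text))
ttest_df.head()  # two rows, three columns
```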
Getting the node table from Cytoscape and merging it with ttest.csv
deftable = requests.get('http://localhost:1234/v1/networks/' + str(pathway_suid) + '/tables/defaultnode.tsv')
handle = open('defaultnode.tsv', 'w')
handle.write(deftable.content)
handle.close()

deftable_df = pd.read_table('defaultnode.tsv')
deftable_df.head()

import re
bnum_re = re.compile('b[0-9]{4}')
keggids = []
k...
advanced/CytoscapeREST_KEGG_f1000.ipynb
muratcemkose/cy-rest-python
mit
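The merge itself can be sketched with two in-memory stand-ins. The table contents and the `KEGG_NODE_LABEL` column name here are hypothetical placeholders for the real Cytoscape node table fetched above; only the merge pattern is the point:

```python
import pandas as pd

# Hypothetical miniature tables standing in for the Cytoscape node table and the t-test results
deftable_df = pd.DataFrame({'KEGG_NODE_LABEL': ['b0001', 'b0002']})
ttest_df = pd.DataFrame({'gene': ['b0001', 'b0003'], 'p.value': [0.021, 0.9]})

# Left join keeps every network node; nodes without a t-test result get NaN
merged = deftable_df.merge(ttest_df, left_on='KEGG_NODE_LABEL', right_on='gene', how='left')
```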
Example Market In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
# make a supplier and get the asks
supplier = Supplier()
supplier.set_quantity(60, 10, 30)
ask = supplier.get_ask()

# make a buyer and get the bids (n, l, u)
buyer = Buyer()
buyer.set_quantity(60, 10, 30)
bid = buyer.get_bid()

# make a market where the buyers and suppliers can meet
# the bids and asks are a list of prices ...
backup/Matching Market v1.ipynb
Housebeer/Natural-Gas-Model
mit
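The matching idea — trade as many units as possible, stopping once a swap no longer benefits both sides — can be sketched without the Buyer/Supplier classes. The bids and asks below are made-up numbers, not output of the model:

```python
# Hypothetical bids and asks; in the model these come from the Buyer and Supplier objects
bids = [10, 9, 8, 7, 6]   # buyers' willingness to pay
asks = [5, 6, 7, 8, 9]    # sellers' minimum prices

# Pair the highest bid with the lowest ask, and keep trading while both sides gain
trades = 0
for bid, ask in zip(sorted(bids, reverse=True), sorted(asks)):
    if bid >= ask:
        trades += 1
    else:
        break
trades  # 3 units change hands; the fourth pair (bid 7, ask 8) refuses to trade
```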
Introduction The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is unimodal and continuous. That is, we want to model our system using floating point math (continuous) and to have only one belief repres...
import numpy as np
import kf_book.book_plots as book_plots

belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2])
belief = belief / np.sum(belief)
with book_plots.figsize(y=2):
    book_plots.bar_plot(belief)
print('sum = ', np.sum(belief))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
Each position has a probability between 0 and 1, and the sum of all the probabilities equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason...
x = [1.8, 2.0, 1.7, 1.9, 1.6]
np.mean(x)
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
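The mean is just the sum of the samples divided by their count, so the np.mean result above can be confirmed by hand with plain Python:

```python
x = [1.8, 2.0, 1.7, 1.9, 1.6]
mean = sum(x) / len(x)   # sum of samples divided by their count
mean                     # ≈ 1.8, matching np.mean(x)
```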
As a convenience NumPy arrays provide the method mean().
x = np.array([1.8, 2.0, 1.7, 1.9, 1.6])
x.mean()
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
The mode of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a unimodal set, and if two or more numbers occur most often with equal frequency then the set is multimodal. For example, the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set...
np.median(x)
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
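The modes of the example set {1, 2, 2, 2, 3, 4, 4, 4} can be found with the standard library; this snippet is an illustration of mine, not code from the chapter:

```python
from collections import Counter

data = [1, 2, 2, 2, 3, 4, 4, 4]
counts = Counter(data)                                   # how often each value occurs
top = max(counts.values())                               # the highest frequency
modes = sorted(v for v, c in counts.items() if c == top)
modes  # [2, 4] — two modes, so the set is multimodal
```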
Expected Value of a Random Variable The expected value of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we expect $x$ to have, on average? It...
total = 0
N = 1000000
for r in np.random.rand(N):
    if r <= .80:
        total += 1
    elif r < .95:
        total += 3
    else:
        total += 5
total / N
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
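The simulation above only estimates the expected value; the analytic value it should converge to comes straight from the definition, weighting each value by its probability:

```python
values = [1, 3, 5]
probs = [0.80, 0.15, 0.05]   # the same probabilities used in the simulation above

expected = sum(p * v for p, v in zip(probs, values))
expected  # ≈ 1.5 = 0.8*1 + 0.15*3 + 0.05*5
```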
You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. Exercise What is the expected value of a die roll? Solution Each side is equally likely, so each has a probability of 1/6. Hence $$\begin{aligned} \mathbb E[X...
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
Using NumPy we see that the mean height of each class is the same.
print(np.mean(X), np.mean(Y), np.mean(Z))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class. The mean tells us something about the data, but not the whole story. We want to be able to specify how m...
print("{:.2f} meters squared".format(np.var(X)))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
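The np.var result above can be verified by applying the variance definition directly — average the squared deviations from the mean (population variance, which is what np.var computes by default):

```python
X = [1.8, 2.0, 1.7, 1.9, 1.6]
mu = sum(X) / len(X)                             # the mean, 1.8
var = sum((x - mu)**2 for x in X) / len(X)       # mean of squared deviations
var  # ≈ 0.02 meters squared, matching np.var(X)
```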
This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the standard deviation, which is defined as the square root of the variance: $$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$ It is typical to use...
print('std {:.4f}'.format(np.std(X)))
print('var {:.4f}'.format(np.std(X)**2))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
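The relationship between the two measures is easy to check by hand: squaring the standard deviation recovers the variance exactly, by definition.

```python
import math

X = [1.8, 2.0, 1.7, 1.9, 1.6]
mu = sum(X) / len(X)
var = sum((x - mu)**2 for x in X) / len(X)   # 0.02 from the earlier computation
std = math.sqrt(var)                         # ≈ 0.1414

std**2  # squaring the standard deviation gives back the variance
```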
And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance. What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gau...
from kf_book.gaussian_internal import plot_height_std
import matplotlib.pyplot as plt

plot_height_std(X)
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students. We write one standard deviation as $1\sigma...
from numpy.random import randn

data = 1.8 + randn(100)*.1414
mean, std = data.mean(), data.std()
plot_height_std(data, lw=2)
print('mean = {:.3f}'.format(mean))
print('std = {:.3f}'.format(std))
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code.
np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100.
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
zaqwes8811/micro-apps
mit
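With 100 samples the fraction bounces around from run to run; with a much larger, seeded sample it settles close to the theoretical 68%. This sketch uses NumPy's Generator API rather than the chapter's randn, purely so the result is repeatable:

```python
import numpy as np

rng = np.random.default_rng(42)                    # seeded generator for repeatability
data = 1.8 + rng.standard_normal(100_000) * 0.1414

mean, std = data.mean(), data.std()
frac = np.mean((data > mean - std) & (data < mean + std))
frac  # lands near 0.68, the Gaussian one-sigma fraction
```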