T3: Negative function
f3 = T3[f]
nb.nbshow(f,'original')
plt.plot(T3)
plt.title('T3: negativo')
nb.nbshow(f3,'T3[f]')
nb.nbshow()
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
T4: Threshold function (128)
f4 = T4[f]
nb.nbshow(f,'original')
plt.plot(T4)
plt.title('T4: threshold 128')
nb.nbshow(f4,'T4[f]')
nb.nbshow()
T5: Quantization function
f5 = T5[f]
nb.nbshow(f,'original')
plt.plot(T5)
plt.title('T5: quantização')
nb.nbshow(f5,'T5[f]')
nb.nbshow()
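The lookup tables T3, T4 and T5 are defined outside this excerpt. A plausible reconstruction for 8-bit images, assuming only NumPy, might look like the following — the exact table values here are assumptions, not the tutorial's definitions:

```python
import numpy as np

i = np.arange(256)
T3 = (255 - i).astype(np.uint8)                   # negative: invert intensities
T4 = np.where(i >= 128, 255, 0).astype(np.uint8)  # threshold at 128
T5 = ((i // 64) * 85).astype(np.uint8)            # quantization to 4 gray levels
# applying a table is just fancy indexing: g = T[f]
```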
Observing the histogram of each image after the mapping:
h = ia.histogram(f)
h2 = ia.histogram(f2)  # logarithmic
h3 = ia.histogram(f3)  # negative
h4 = ia.histogram(f4)  # threshold
h5 = ia.histogram(f5)  # quantization
plt.plot(h)
#plt.plot(h2)
#plt.plot(h3)
#plt.plot(h4)
plt.plot(h5)
plt
From an efficiency standpoint, which is better: mapping through the lookup table, or processing the image directly?
f = ia.normalize(np.arange(1000000).reshape(1000,1000))
%timeit g2t = T2[f]
%timeit g2 = ia.normalize(np.log(f+1.))
%timeit g3t = T3[f]
%timeit g3 = 255 - f
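The same comparison can be sketched without the %timeit magic, assuming only NumPy (the nb/ia helpers are not needed for the timing itself):

```python
import time
import numpy as np

f = (np.arange(1_000_000) % 256).astype(np.uint8).reshape(1000, 1000)
T3 = (255 - np.arange(256)).astype(np.uint8)  # negative-transform table

t0 = time.perf_counter()
g_table = T3[f]            # table lookup via fancy indexing
t_table = time.perf_counter() - t0

t0 = time.perf_counter()
g_direct = 255 - f         # direct arithmetic on the whole image
t_direct = time.perf_counter() - t0

# both approaches give identical results; which is faster depends on
# how expensive the per-pixel function replaced by the table is
print(t_table, t_direct)
```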
Exoplanet properties Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets. http://iopscience.iop.org/1402-4896/2008/T130/014001 Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo: https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
!head -n 30 open_exoplanet_catalogue.txt
assignments/assignment04/MatplotlibEx02.ipynb
edwardd1/phys202-2015-work
mit
Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',')
data
#raise NotImplementedError()
assert data.shape==(1993,24)
Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. Pick the number of bins for the histogram appropriately.
np.histogram(data)
#raise NotImplementedError()
assert True # leave for grading
Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data.
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave for grading
In Why Most Published Research Findings Are False, John Ioannidis argues that if most hypotheses we test are false, we end up with more false research findings than true findings, even if we do rigorous hypothesis testing. The argument hinges on a vanilla application of Bayes' rule.

Let's assume that science is "really hard" and that only 50 out of 1000 hypotheses we formulate are in fact true. Say we test our hypotheses at significance level alpha = 0.05 and with power = 0.80. Out of our 950 incorrect hypotheses, our hypothesis testing will lead to 950 × 0.05 = 47.5 false positives, i.e. false research findings. Out of our 50 correct hypotheses, we will correctly identify 50 × 0.80 = 40 true research findings. To our horror, we find that most published findings are false!

Most applications of AB testing involve running multiple repeated experiments in order to optimize a metric. At each iteration, we test a hypothesis: does the new design perform better than the control? If so, we adopt the new design as our control and test the next idea. After many iterations, we expect to have a design that is better than when we started. But Ioannidis' argument about how most research findings could be false should make us wonder: if the chances of generating a better new design are slim, might we adopt bad designs more often than good designs? What effect does this have on our performance in the long run? How can we change our testing strategy so that we still expect to increase performance over time? Conversely, how can we take advantage of a situation where the chance of generating a design that is better than the control is really high?

To investigate these questions, let's simulate the process of repeated AB testing for optimizing some conversion rate (CR) under different scenarios for how hard our optimization problem is.
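The arithmetic above can be checked in a few lines:

```python
# The numbers in Ioannidis' argument:
# 50 true hypotheses out of 1000, alpha = 0.05, power = 0.80.
n_hypotheses = 1000
n_true = 50
alpha = 0.05
power = 0.80

false_findings = (n_hypotheses - n_true) * alpha  # 950 × 0.05 = 47.5
true_findings = n_true * power                    # 50 × 0.80 = 40

# more false findings than true ones
print(false_findings, true_findings)
```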
For example, our CR could be the fraction of users who donate to Wikipedia in response to being shown a particular fundraising banner. I will model the difficulty of the problem using a distribution over the percent lift in conversion rate (CR) that a new idea has over the control. In practice we might expect the mean of this distribution to change with time: as we work on a problem longer, the average idea probably gives a smaller performance increase. For our purposes, I will assume this distribution (call it $I$) is fixed and normally distributed.

We start with a control banner with some fixed conversion rate (CR). At each iteration, we test the control against a new banner whose percent lift over the control is drawn from $I$. If the new banner wins, it becomes the new control. We repeat this step several times to see what the final CR is after running a sequence of tests. I will refer to a single sequence of tests as a campaign. We can simulate several campaigns to characterize the distribution of outcomes we can expect at the end of a campaign.

Code

For those who are interested, this section describes the simulation code. The Test class simulates running a single AB test. The parameters significance, power and mde correspond to the significance, power and minimum effect size of the z-test used to test the hypothesis that the new design and the control have the same CR. The optimistic parameter determines which banner we choose if we fail to reject the null hypothesis that the two designs are the same.
import numpy as np
np.random.seed(seed=0)
from statsmodels.stats.weightstats import ztest
from statsmodels.stats.power import tt_ind_solve_power
from scipy.stats import bernoulli

class Test():
    def __init__(self, significance, power, mde, optimistic):
        self.significance = significance
        self.power = power
        self.mde = mde
        self.optimistic = optimistic

    def compute_sample_size(self, u_hat):
        var_hat = u_hat*(1-u_hat)
        absolute_effect = u_hat - (u_hat*(1+self.mde))
        standardized_effect = absolute_effect / np.sqrt(var_hat)
        sample_size = tt_ind_solve_power(effect_size=standardized_effect,
                                         alpha=self.significance, power=self.power)
        return int(np.ceil(sample_size))  # bernoulli.rvs needs an integer size

    def run(self, control_cr, treatment_cr):
        # run null hypothesis test with a fixed sample size
        N = self.compute_sample_size(control_cr)
        data_control = bernoulli.rvs(control_cr, size=N)
        data_treatment = bernoulli.rvs(treatment_cr, size=N)
        p = ztest(data_control, data_treatment)[1]
        # if p > alpha, no clear winner
        if p > self.significance:
            if self.optimistic:
                return treatment_cr
            else:
                return control_cr
        # otherwise pick the winner
        else:
            if data_control.sum() > data_treatment.sum():
                return control_cr
            else:
                return treatment_cr
ipython/what_if_ab_testing_is_like_science/what_if_ab_testing_is_like_science.ipynb
ewulczyn/ewulczyn.github.io
mit
The Campaign class simulates running num_tests AB tests, starting with a base_rate CR. The parameters mu and sigma characterize $I$, the distribution over the percent gain in performance of a new design compared to the control.
class Campaign():
    def __init__(self, base_rate, num_tests, test, mu, sigma):
        self.num_tests = num_tests
        self.test = test
        self.mu = mu
        self.sigma = sigma
        self.base_rate = base_rate

    def run(self):
        true_rates = [self.base_rate]
        for i in range(self.num_tests):
            # the control of the current test is the winner of the last test
            control_cr = true_rates[-1]
            # create treatment banner with a lift drawn from the lift distribution
            lift = np.random.normal(self.mu, self.sigma)
            treatment_cr = min(0.9, control_cr*(1.0 + lift/100.0))
            winning_cr = self.test.run(control_cr, treatment_cr)
            true_rates.append(winning_cr)
        return true_rates
The expected_campaign_results function implements running many campaigns with the same starting conditions. It generates a plot depicting the expected CR as a function of the number of sequential AB tests.
import matplotlib.pyplot as plt
import pandas as pd

def expected_campaign_results(campaign, sim_runs):
    fig = plt.figure(figsize=(10, 6), dpi=80)
    d = pd.DataFrame()
    for i in range(sim_runs):
        d[i] = campaign.run()
    d2 = pd.DataFrame()
    d2['mean'] = d.mean(axis=1)
    d2['upper'] = d2['mean'] + 2*d.std(axis=1)
    d2['lower'] = d2['mean'] - 2*d.std(axis=1)
    plt.plot(d2.index, d2['mean'], label='CR')
    plt.fill_between(d2.index, d2['lower'], d2['upper'],
                     alpha=0.31, edgecolor='#3F7F4C', facecolor='0.75', linewidth=0)
    plt.xlabel('num tests')
    plt.ylabel('CR')
    # base_rate and num_tests are globals set where the campaign is configured
    plt.plot(d2.index, [base_rate]*(num_tests+1), label='Start CR')
    plt.legend()
Simulations

I will start with a moderately pessimistic scenario: assume the average new design is 5% worse than the control, with a standard deviation sigma of 3. The plot below shows the distribution over percent gains from new designs.
def plot_improvements(mu, sigma):
    plt.figure(figsize=(7, 3))
    x = np.arange(-45.0, 45.0, 0.5)
    plt.xticks(np.arange(-45.0, 45.0, 5))
    plt.plot(x, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sigma**2)))
    plt.xlabel('lift')
    plt.ylabel('probability density')
    plt.title('Distribution over lift in CR of a new design compared to the control')

# distribution over % improvements
mu = -5.0
sigma = 3
plot_improvements(mu, sigma)
Let's start with some standard values of alpha = 0.05, power = 0.8 and mde = 0.10 for the hypothesis tests. The plot below shows the expected CR after simulating a sequence of 30 AB tests 100 times.
# hypothesis test params
significance = 0.05
power = 0.8
mde = 0.10
# campaign params
num_tests = 30
base_rate = 0.2
# number of trials
sim_runs = 100

test = Test(significance, power, mde, optimistic=False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Even though we went through all the work of running those AB tests, we cannot expect to improve our CR. The good news is that although most of our ideas were bad, doing the AB testing prevented us from losing performance. The plot below shows what would happen if we had used the new idea as the control when the hypothesis test could not discern a significant difference.
test = Test(significance, power, mde, optimistic=True)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Impressive. The CR starts tanking at a rapid pace. This is an extreme example, but it spells out a clear warning: if your optimization problem is hard, stick with your control. Now let's imagine a world in which most ideas are neutral but there is still the potential for big wins and big losses. The plot below shows our new distribution over the quality of new ideas.
mu = 0.0
sigma = 5
plot_improvements(mu, sigma)
And here are the results of the new simulation:
test = Test(significance, power, mde, optimistic=False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Now there is huge variance in how things could turn out. In expectation, we get a 2% absolute gain every 10 tests. As you might have guessed, in this scenario it does not matter which banner you choose when the hypothesis test does not detect a significant difference. Let's see if we can reduce the variance in outcomes by decreasing the minimum detectable effect mde to 0.05. This will cost us in terms of runtime for each test, but it should also reduce the variance in the expected results.
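To see the runtime cost, we can compute the per-test sample size the same way the Test class does. This is a sketch assuming statsmodels is available; the numbers are observations per group:

```python
import numpy as np
from statsmodels.stats.power import tt_ind_solve_power

def sample_size(cr, mde, alpha=0.05, power=0.8):
    # same standardized-effect computation as Test.compute_sample_size
    standardized_effect = (cr * mde) / np.sqrt(cr * (1 - cr))
    return tt_ind_solve_power(effect_size=standardized_effect,
                              alpha=alpha, power=power)

n_mde_10 = sample_size(0.2, 0.10)
n_mde_05 = sample_size(0.2, 0.05)
# halving the mde roughly quadruples the required sample size
print(n_mde_10, n_mde_05)
```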
mde = 0.05
test = Test(significance, power, mde, optimistic=False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Now we can expect a 5% absolute gain every 15 tests. Furthermore, it is very unlikely that we have not improved our CR after 30 tests. Finally, let's consider the rosy scenario in which most new ideas are winners.
mu = 5
sigma = 3
plot_improvements(mu, sigma)
Again, here are the results of the new simulation:
mde = 0.10
test = Test(significance, power, mde, optimistic=False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Having good ideas is a recipe for runaway success. You might even decide that it's foolish to choose the control banner when you don't have significance, since chances are that your new idea is better even if you could not detect it. The plot below shows that choosing the new idea over the control leads to even faster growth in performance.
test = Test(significance, power, mde, optimistic=True)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
Use class
a = A()  # create an instance of class A (defined in a previous cell)
print(a)
print(type(a))
notebooks/03 object oriented programming.ipynb
nansencenter/nansat-lectures
gpl-3.0
Definition of a class with attributes (properties)
class Human(object):
    name = ''
    age = 0

human1 = Human()       # create instance of Human
human1.name = 'Anton'  # name him (add data to this object)
human1.age = 39        # set the age (add data to this object)
print(type(human1))
print(human1.name)
print(human1.age)
Definition of a class with constructor
class Human(object):
    name = ''
    age = 0

    def __init__(self, name):
        self.name = name
Create a Human instance and give him a name instantly
h1 = Human('Anton')
print(h1.name)
print(h1.age)
Definition of a class with several methods
class Human(object):
    ''' Human being '''
    name = ''
    age = 0

    def __init__(self, name):
        ''' Create a Human '''
        self.name = name

    def grow(self):
        ''' Grow a Human by one year (in-place) '''
        self.age += 1
Create a Human, give him a name, grow by one year (in-place)
human1 = Human('Adam')
human1.grow()
print(human1.name)
print(human1.age)
Add get_ methods to the class
class Human(object):
    ''' Human being '''
    name = ''
    age = 0

    def __init__(self, name):
        ''' Create a Human '''
        self.name = name

    def grow(self):
        ''' Grow a Human by one year (in-place) '''
        self.age += 1

    def get_name(self):
        ''' Return name of a Human '''
        return self.name

    def get_age(self):
        ''' Return age of a Human '''
        return self.age

h1 = Human('Eva')
print(h1.get_name())
Create a class with Inheritance
class Teacher(Human):
    ''' Teacher of Python '''
    def give_lecture(self):
        ''' Print lecture on the screen '''
        print('bla bla bla')
Create a Teacher with a name, grow him sufficiently, and use him.
t1 = Teacher('Anton')
while t1.get_age() < 50:
    t1.grow()
print(t1.get_name())
print(t1.get_age())
t1.give_lecture()
Import a class definition from a module. Store the class definition in a separate file, e.g.: https://github.com/nansencenter/nansat-lectures/blob/master/human_teacher.py
# add directory scripts to PYTHONPATH (searchable path)
import sys
sys.path.append('scripts')
from human_teacher import Teacher

t1 = Teacher('Morten')
t1.give_lecture()
Practical example
# add scripts to the list of searchable directories
import sys
sys.path.append('scripts')

# import class definition from our module
from ts_profile import Profile

# load data
p = Profile('data/tsprofile.txt')

# work with the object
print(p.get_ts_at_level(5))
print(p.get_ts_at_depth(200))
print(p.get_mixed_layer_depth(.1))
How would it look without OOP? 1. A lot of functions to import
from st_profile import load_profile, get_ts_at_level, get_ts_at_depth
from st_profile import get_mixed_layer_depth, plot_ts
2. A lot of data to unpack and to pass between functions
depth, temp, sal = load_profile('tsprofile.txt')
print(get_ts_at_level(depth, temp, sal))
3. And imagine now we open a satellite image which has: many matrices with data; georeference information (e.g. lon, lat of corners); a description of the data (metadata); and so on... And here comes OOP:
from nansat import Nansat
n = Nansat('satellite_filename.hdf')
For information on how to configure and tune the solver, please see the documentation for the optlang project, and note that model.solver is simply an optlang object of class Model.
type(model.solver)
documentation_builder/solvers.ipynb
opencobra/cobrapy
gpl-2.0
We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
%sql -sampledata
Db2 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
Table of Contents

- Outer Join Operator
- CHAR Datatype Size Increase
- Binary Data Type
- Boolean Data Type
- Synonyms for Data Types
- Function Synonyms
- Netezza Compatibility
- Select Enhancements
- Hexadecimal Functions
- Table Creation with Data

<a id='outer'></a>

Outer Join Operator

Db2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.

Db2 supports standard join syntax for LEFT and RIGHT OUTER JOINs. However, there is proprietary syntax used by Oracle employing the keyword "(+)" to mark the "null-producing" column reference that precedes it in an implicit join notation. That is, (+) appears in the WHERE clause and refers to a column of the inner table in a left outer join. For instance:

SELECT *
  FROM T1, T2
  WHERE T1.C1 = T2.C2 (+)

is the same as:

SELECT *
  FROM T1 LEFT OUTER JOIN T2
    ON T1.C1 = T2.C2

In this example, we get a list of departments and their employees, as well as the names of departments that have no employees. This example uses the standard Db2 syntax.
%%sql -a
SELECT DEPTNAME, LASTNAME
  FROM DEPARTMENT D
  LEFT OUTER JOIN EMPLOYEE E
    ON D.DEPTNO = E.WORKDEPT
This example works in the same manner as the last one, but uses the "(+)" syntax. The format is simpler to remember than the OUTER JOIN syntax, but it is not part of the SQL standard.
%%sql
SELECT DEPTNAME, LASTNAME
  FROM DEPARTMENT D, EMPLOYEE E
  WHERE D.DEPTNO = E.WORKDEPT (+)
Back to Top

<a id='char'></a>

CHAR Datatype Size Increase

The CHAR datatype was limited to 254 characters in prior releases of Db2. In Db2 11, the limit has been increased to 255 characters to bring it in line with other SQL implementations.

First we drop the table if it already exists.
%%sql -q
DROP TABLE LONGER_CHAR;

CREATE TABLE LONGER_CHAR
  (
  NAME CHAR(255)
  );
Back to Top

<a id='binary'></a>

Binary Data Types

Db2 11 introduces two new binary data types: BINARY and VARBINARY. These two data types can contain any combination of characters or binary values and are not affected by the codepage of the server that the values are stored on. A BINARY data type is fixed-length and can have a maximum length of 255 bytes, while a VARBINARY column can contain up to 32672 bytes. Each of these data types is compatible with columns created with the FOR BIT DATA keyword.

The BINARY data type will reduce the amount of conversion required from other databases. Although binary data was supported with the FOR BIT DATA clause on a character column, it required manual DDL changes when migrating a table definition.

This example shows the creation of the three binary data types.
%%sql -q
DROP TABLE HEXEY;

CREATE TABLE HEXEY
  (
  AUDIO_SHORT BINARY(255),
  AUDIO_LONG  VARBINARY(1024),
  AUDIO_CHAR  VARCHAR(255) FOR BIT DATA
  );
Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUES clause. For fixed-length strings you use the X'00' format to specify a binary value, and BX'00' for variable-length binary strings. For instance, the following SQL will insert data into the table that was just created.
%%sql
INSERT INTO HEXEY VALUES
  (BINARY('Hello there'),
   BX'2433A5D5C1',
   VARCHAR_BIT_FORMAT(HEX('Hello there')));

SELECT * FROM HEXEY;
Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.
%%sql
UPDATE HEXEY
  SET AUDIO_CHAR = AUDIO_SHORT
We should have one record that is equal.
%%sql
SELECT COUNT(*) FROM HEXEY
  WHERE AUDIO_SHORT = AUDIO_CHAR
Back to Top

<a id='boolean'></a>

Boolean Data Type

The boolean data type (true/false) has been available in SQLPL and PL/SQL scripts for some time. However, the boolean data type could not be used in a table definition. Db2 11 FP1 now allows you to use this data type in a table definition and use TRUE/FALSE clauses to compare values.

This simple table will be used to demonstrate how BOOLEAN types can be used.
%%sql -q
DROP TABLE TRUEFALSE;

CREATE TABLE TRUEFALSE
  (
  EXAMPLE INT,
  STATE   BOOLEAN
  );
The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values are FALSE, 'false', 'f', 'no', 'n', and '0'.
%%sql
INSERT INTO TRUEFALSE VALUES
  (1, TRUE), (2, FALSE), (3, 0), (4, 't'), (5, 'no')
Now we can check to see what has been inserted into the table.
%sql SELECT * FROM TRUEFALSE
Retrieving the data in a SELECT statement will return an integer value for display purposes: 1 is true and 0 is false. Comparison operators with BOOLEAN data types can use TRUE, FALSE, 1 or 0, or any of the supported keyword values. You also have the choice of using the equal (=) operator or the IS / IS NOT syntax, as shown in the following SQL.
%%sql
SELECT * FROM TRUEFALSE
  WHERE STATE = TRUE OR STATE = 1 OR STATE = 'on' OR STATE IS TRUE
Back to Top

<a id='synonyms'></a>

Synonym Data Types

Db2 has the standard data types that most developers are familiar with, like CHAR, INTEGER, and DECIMAL. Other SQL implementations use different names for these data types, so Db2 11 now allows these names as synonyms for the base types. These data types are:

|Type   |Db2 Equivalent |
|:------|:--------------|
|INT2   |SMALLINT       |
|INT4   |INTEGER        |
|INT8   |BIGINT         |
|FLOAT4 |REAL           |
|FLOAT8 |FLOAT          |

The following SQL will create a table with all of these data types.
%%sql -q
DROP TABLE SYNONYM_EMPLOYEE;

CREATE TABLE SYNONYM_EMPLOYEE
  (
  NAME            VARCHAR(20),
  SALARY          INT4,
  BONUS           INT2,
  COMMISSION      INT8,
  COMMISSION_RATE FLOAT4,
  BONUS_RATE      FLOAT8
  );
When you create a table with these data types, Db2 does not record the synonym types in the catalog; it uses the native Db2 type instead. This means that if you describe the contents of a table, you will see the Db2 types displayed, not the synonym types.
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBNAME='SYNONYM_EMPLOYEE'
    AND TBCREATOR=CURRENT USER
Back to Top

<a id='function'></a>

Function Name Compatibility

Db2 has a wealth of built-in functions that are equivalent to competitive functions, but with a different name. In Db2 11, these alternate function names are mapped to the Db2 function, so that no rewrite of the function name is required.

This first SQL statement generates the data required for the statistical functions.

Generate Linear Data

This command generates X,Y coordinate pairs in the XYCOORDS table based on the function y = 2x + 5. Note that the table creation uses common table expressions and recursion to generate the data!
%%sql -q
DROP TABLE XYCOORDS;

CREATE TABLE XYCOORDS
  (
  X INT,
  Y INT
  );

INSERT INTO XYCOORDS
  WITH TEMP1(X) AS
    (
    VALUES (0)
      UNION ALL
    SELECT X+1 FROM TEMP1 WHERE X < 10
    )
  SELECT X, 2*X + 5 FROM TEMP1;
COVAR_POP is an alias for COVARIANCE
%%sql
SELECT 'COVAR_POP', COVAR_POP(X,Y) FROM XYCOORDS
  UNION ALL
SELECT 'COVARIANCE', COVARIANCE(X,Y) FROM XYCOORDS
STDDEV_POP is an alias for STDDEV
%%sql
SELECT 'STDDEV_POP', STDDEV_POP(X) FROM XYCOORDS
  UNION ALL
SELECT 'STDDEV', STDDEV(X) FROM XYCOORDS
VAR_SAMP is an alias for VARIANCE_SAMP
%%sql
SELECT 'VAR_SAMP', VAR_SAMP(X) FROM XYCOORDS
  UNION ALL
SELECT 'VARIANCE_SAMP', VARIANCE_SAMP(X) FROM XYCOORDS
ISNULL and NOTNULL are aliases for IS NULL and IS NOT NULL
%%sql
WITH EMP(LASTNAME, WORKDEPT) AS
  (
  VALUES ('George','A01'), ('Fred',NULL),
         ('Katrina','B01'), ('Bob',NULL)
  )
SELECT * FROM EMP
  WHERE WORKDEPT ISNULL
LOG is an alias for LN
%%sql
VALUES ('LOG', LOG(10))
  UNION ALL
VALUES ('LN', LN(10))
RANDOM is an alias for RAND. Notice that the two calls generate different random values! This behavior is not the same as with timestamps, where the value is calculated once during the execution of the SQL.
%%sql
VALUES ('RANDOM', RANDOM())
  UNION ALL
VALUES ('RAND', RAND())
STRPOS is an alias for POSSTR
%%sql
VALUES ('POSSTR', POSSTR('Hello There','There'))
  UNION ALL
VALUES ('STRPOS', STRPOS('Hello There','There'))
STRLEFT is an alias for LEFT
%%sql
VALUES ('LEFT', LEFT('Hello There',5))
  UNION ALL
VALUES ('STRLEFT', STRLEFT('Hello There',5))
STRRIGHT is an alias for RIGHT
%%sql
VALUES ('RIGHT', RIGHT('Hello There',5))
  UNION ALL
VALUES ('STRRIGHT', STRRIGHT('Hello There',5))
Additional Synonyms

There are a couple of additional keywords that are synonyms for existing Db2 functions. The list below includes only those features that were introduced in Db2 11.

|Keyword       |Db2 Equivalent                 |
|:-------------|:------------------------------|
|BPCHAR        |VARCHAR (for casting function) |
|DISTRIBUTE ON |DISTRIBUTE BY                  |

Back to Top

<a id='netezza'></a>

Netezza Compatibility

Db2 provides features that enable applications that were written for a Netezza Performance Server (NPS) database to use a Db2 database without having to be rewritten. The SQL_COMPAT global variable is used to activate the following optional NPS compatibility features:

- Double-dot notation - When operating in NPS compatibility mode, you can use double-dot notation to specify a database object.
- TRANSLATE parameter syntax - The syntax of the TRANSLATE parameters depends on whether NPS compatibility mode is being used.
- Operators - Which symbols are used to represent operators in expressions depends on whether NPS compatibility mode is being used.
- Grouping by SELECT clause columns - When operating in NPS compatibility mode, you can specify the ordinal position or exposed name of a SELECT clause column when grouping the results of a query.
- Routines written in NZPLSQL - When operating in NPS compatibility mode, the NZPLSQL language can be used in addition to the SQL PL language.

Special Characters

A quick review of Db2 special characters. Before we change the behavior of Db2, we need to understand what some of the special characters do. The following SQL shows how some of the special characters work. Note that the HASH/POUND sign (#) has no meaning in Db2.
%%sql
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
  (
  VALUES
    (' | ', 'OR        ', '2 | 3 ', 2 | 3),
    (' & ', 'AND       ', '2 & 3 ', 2 & 3),
    (' ^ ', 'XOR       ', '2 ^ 3 ', 2 ^ 3),
    (' ~ ', 'COMPLEMENT', '~2    ', ~2),
    (' # ', 'NONE      ', '      ', 0)
  )
SELECT * FROM SPECIAL
If we turn on NPS compatibility, a couple of special characters change behavior: the ^ operator becomes a "power" operator, and # becomes an XOR operator.
%%sql
SET SQL_COMPAT = 'NPS';

WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
  (
  VALUES
    (' | ', 'OR        ', '2 | 3 ', 2 | 3),
    (' & ', 'AND       ', '2 & 3 ', 2 & 3),
    (' ^ ', 'POWER     ', '2 ^ 3 ', 2 ^ 3),
    (' ~ ', 'COMPLEMENT', '~2    ', ~2),
    (' # ', 'XOR       ', '2 # 3 ', 2 # 3)
  )
SELECT * FROM SPECIAL;
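As a rough illustration, the operator behavior in both modes can be reproduced with Python's native operators. Python has no `#` operator, so NPS `#` (XOR) is modeled with Python's `^`, and NPS `^` (power) with Python's `**`:

```python
# Db2 default mode: | & ^ ~ behave exactly like Python's bitwise operators.
assert (2 | 3, 2 & 3, 2 ^ 3, ~2) == (3, 2, 1, -3)

# NPS mode changes two symbols: ^ becomes power and # becomes XOR.
nps_power = 2 ** 3   # NPS: 2 ^ 3
nps_xor = 2 ^ 3      # NPS: 2 # 3
print(nps_power, nps_xor)  # 8 1
```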
GROUP BY Ordinal Location The GROUP BY command behavior also changes in NPS mode. The following SQL statement groups results using the default Db2 syntax:
%%sql
SET SQL_COMPAT='DB2';
SELECT WORKDEPT, INT(AVG(SALARY)) FROM EMPLOYEE
GROUP BY WORKDEPT;
If you try using the ordinal location (similar to an ORDER BY clause), you will get an error message.
%%sql SELECT WORKDEPT, INT(AVG(SALARY)) FROM EMPLOYEE GROUP BY 1;
If NPS compatibility is turned on, you can use the GROUP BY clause with an ordinal location.
%%sql
SET SQL_COMPAT='NPS';
SELECT WORKDEPT, INT(AVG(SALARY)) FROM EMPLOYEE
GROUP BY 1;
### TRANSLATE Function

The TRANSLATE function syntax in Db2 is:

```sql
TRANSLATE(expression, to_string, from_string, padding)
```

The TRANSLATE function returns a value in which one or more characters in a string expression may have been converted to other characters. The function converts all the characters in char-string-exp in from-string-exp to the corresponding characters in to-string-exp or, if no corresponding characters exist, to the pad character specified by padding. If no parameters are given to the function, the original string is converted to uppercase.

In NPS mode, the TRANSLATE syntax is:

```sql
TRANSLATE(expression, from_string, to_string)
```

If a character is found in the from string and there is no corresponding character in the to string, it is removed. If Db2 syntax were being used, the padding character would be substituted instead.

Note: If ORACLE compatibility is ON, then the behavior of TRANSLATE is identical to NPS mode.

This first example will uppercase the string.
%%sql
SET SQL_COMPAT = 'NPS';
VALUES TRANSLATE('Hello');
In this example, the letter 'o' will be replaced with a '1'.
%sql VALUES TRANSLATE('Hello','o','1')
Note that you could replace more than one character by expanding both the "to" and "from" strings. This example will replace the letter "e" with a "2" as well as the "o" with a "1".
%sql VALUES TRANSLATE('Hello','oe','12')
Translate will also remove a character if it is not in the "to" list.
%sql VALUES TRANSLATE('Hello','oel','12')
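The NPS-mode TRANSLATE semantics above can be sketched in plain Python with `str.translate` (the helper name `nps_translate` is my own, not a Db2 or NPS identifier):

```python
def nps_translate(s, frm="", to=""):
    """Sketch of NPS-mode TRANSLATE: characters in `frm` map to the same
    position in `to`; characters with no counterpart are removed; with no
    from/to arguments the whole string is uppercased."""
    if not frm:
        return s.upper()
    # Map each "from" character to its "to" counterpart, or None (delete).
    table = {ord(f): (to[i] if i < len(to) else None)
             for i, f in enumerate(frm)}
    return s.translate(table)

print(nps_translate("Hello"))               # HELLO
print(nps_translate("Hello", "o", "1"))     # Hell1
print(nps_translate("Hello", "oe", "12"))   # H2ll1
print(nps_translate("Hello", "oel", "12"))  # H21  ('l' has no match, so it is removed)
```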
Reset the behavior back to Db2 mode.
%sql SET SQL_COMPAT='DB2'
Back to Top

<a id='select'></a>
### SELECT Enhancements

Db2 has the ability to limit the amount of data retrieved on a SELECT statement through the use of the FETCH FIRST n ROWS ONLY clause. In Db2 11, the ability to offset the rows before fetching was added to the FETCH FIRST clause.

#### Simple SQL with Fetch First Clause

The FETCH FIRST clause can be used in a variety of locations in a SELECT clause. This first example fetches only 5 rows from the EMPLOYEE table.
%%sql SELECT LASTNAME FROM EMPLOYEE FETCH FIRST 5 ROWS ONLY
You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note that Db2 still needs to process all of the records and do the ORDER/GROUP BY work before limiting the answer set. So you are not getting the first 5 rows "sorted". You are actually getting the entire answer set sorted before retrieving just 5 rows.
%%sql SELECT LASTNAME FROM EMPLOYEE ORDER BY LASTNAME FETCH FIRST 5 ROWS ONLY
Here is an example with the GROUP BY statement. This first SQL statement gives us the total answer set - the count of employees by WORKDEPT.
%%sql SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE GROUP BY WORKDEPT ORDER BY WORKDEPT
Adding the FETCH FIRST clause only reduces the rows returned, not the rows that are used to compute the GROUPing result.
%%sql SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE GROUP BY WORKDEPT ORDER BY WORKDEPT FETCH FIRST 5 ROWS ONLY
#### OFFSET Extension

The FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword allows you to retrieve the answer set after skipping "n" number of rows. The syntax of the OFFSET keyword is:

```sql
OFFSET n ROWS FETCH FIRST x ROWS ONLY
```

The OFFSET n ROWS clause must precede the FETCH FIRST x ROWS ONLY clause. The OFFSET clause can be used to scroll down an answer set without having to hold a cursor. For instance, you could have the first SELECT call request 10 rows by just using the FETCH FIRST clause. After that you could request that the first 10 rows be skipped before retrieving the next 10 rows.

The one thing you must be aware of is that the answer set could change between calls if you use this technique of a "moving" window. If rows are updated or added after your initial query, you may get different results. This is due to the way that Db2 adds rows to a table. If there is a DELETE and then an INSERT, the INSERTed row may end up in the empty slot. There is no guarantee of the order of retrieval. For this reason you are better off using an ORDER BY to force the ordering, although even this won't always prevent rows from changing positions.

Here are the first 10 rows of the employee table (not ordered).
%%sql SELECT LASTNAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY
You can specify a zero offset to begin from the beginning.
%%sql SELECT LASTNAME FROM EMPLOYEE OFFSET 0 ROWS FETCH FIRST 10 ROWS ONLY
Now we can move the answer set ahead by 5 rows and get the remaining 5 rows in the answer set.
%%sql SELECT LASTNAME FROM EMPLOYEE OFFSET 5 ROWS FETCH FIRST 5 ROWS ONLY
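The OFFSET/FETCH FIRST paging pattern is, in effect, list slicing. A minimal sketch (the names are sample data only; remember that without an ORDER BY the real row order is not guaranteed):

```python
rows = ["GEYER", "GOUNOT", "HAAS", "HEMMINGER", "JEFFERSON",
        "JOHN", "JOHNSON", "JONES", "KWAN", "LEE"]

def fetch_first(rows, n, offset=0):
    # OFFSET offset ROWS FETCH FIRST n ROWS ONLY
    return rows[offset:offset + n]

print(fetch_first(rows, 5))            # the first page of 5 rows
print(fetch_first(rows, 5, offset=5))  # the next page of 5 rows
```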
FETCH FIRST and OFFSET in SUBSELECTs The FETCH FIRST/OFFSET clause is not limited to regular SELECT statements. You can also limit the number of rows that are used in a subselect. In this case you are limiting the amount of data that Db2 will scan when determining the answer set. For instance, say you wanted to find the names of the employees who make more than the average salary of the 3rd highest paid department. (By the way, there are multiple ways to do this, but this is one approach). The first step is to determine what the average salary is of all departments.
%%sql SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE GROUP BY WORKDEPT ORDER BY AVG(SALARY) DESC;
We only want one record from this list (the third one), so we can use the FETCH FIRST clause with an OFFSET to get the value we want (Note: we need to skip 2 rows to get to the 3rd one).
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS
FETCH FIRST 1 ROW ONLY
And here is the list of employees that make more than the average salary of the 3rd highest department in the company.
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE SALARY > (
    SELECT AVG(SALARY) FROM EMPLOYEE
    GROUP BY WORKDEPT
    ORDER BY AVG(SALARY) DESC
    OFFSET 2 ROWS
    FETCH FIRST 1 ROW ONLY
    )
ORDER BY SALARY
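The subselect's logic can be mirrored in plain Python to check the reasoning. The department averages and employee salaries below are hypothetical stand-ins, not the actual EMPLOYEE data:

```python
# Hypothetical per-department salary averages.
dept_avg = {"A00": 70850.0, "B01": 94250.0, "C01": 77222.5,
            "D11": 58783.3, "E21": 47086.6}

# ORDER BY AVG(SALARY) DESC OFFSET 2 ROWS FETCH FIRST 1 ROW ONLY:
# sort descending, skip 2, take the next value.
third_highest = sorted(dept_avg.values(), reverse=True)[2]

# WHERE SALARY > (subselect) — hypothetical employee rows.
employees = [("LUTZ", 90000.0), ("BROWN", 57000.0), ("HAAS", 152750.0)]
above = [name for name, salary in employees if salary > third_highest]
print(third_highest, above)  # 70850.0 ['LUTZ', 'HAAS']
```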
#### Alternate Syntax for FETCH FIRST

The FETCH FIRST n ROWS ONLY and OFFSET clauses can also be specified using a simpler LIMIT/OFFSET syntax. The LIMIT clause and the equivalent FETCH FIRST syntax are shown below.

|Syntax |Equivalent |
|:-----------------|:-----------------------------|
|LIMIT x |FETCH FIRST x ROWS ONLY |
|LIMIT x OFFSET y |OFFSET y ROWS FETCH FIRST x ROWS ONLY |
|LIMIT y,x |OFFSET y ROWS FETCH FIRST x ROWS ONLY |

The previous examples are rewritten using the LIMIT clause. We can use the LIMIT clause with an OFFSET to get the value we want from the table.
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 1 OFFSET 2
Here is the list of employees that make more than the average salary of the 3rd highest department in the company. Note that the LIMIT clause specifies only the limit (LIMIT x), or the offset and limit together (LIMIT y,x) when you do not use the OFFSET keyword. One would think that LIMIT x OFFSET y would translate into LIMIT x,y, but in the two-value form the offset comes first. Don't try to figure out the SQL standards reasoning behind the syntax!
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE SALARY > (
    SELECT AVG(SALARY) FROM EMPLOYEE
    GROUP BY WORKDEPT
    ORDER BY AVG(SALARY) DESC
    LIMIT 2,1
    )
ORDER BY SALARY
Back to Top

<a id='hexadecimal'></a>
### Hexadecimal Functions

A number of new HEX manipulation functions have been added to Db2 11. There is a class of functions that manipulate different size integers (SMALLINT, INTEGER, BIGINT) using NOT, OR, AND, and XOR. In addition to these functions, there are a number of functions that display and convert values into hexadecimal values.

#### INTN Functions

The INTN functions are bitwise functions that operate on the "two's complement" representation of the integer value of the input arguments and return the result as a corresponding base 10 integer value. The function names all include the size of the integers that are being manipulated:

- N = 2 (SMALLINT), 4 (INTEGER), 8 (BIGINT)

There are four functions:

- **INTNAND** - Performs a bitwise AND operation; a bit is 1 only if the corresponding bits in both arguments are 1
- **INTNOR** - Performs a bitwise OR operation; a bit is 1 unless the corresponding bits in both arguments are zero
- **INTNXOR** - Performs a bitwise exclusive OR operation; a bit is 1 unless the corresponding bits in both arguments are the same
- **INTNNOT** - Performs a bitwise NOT operation; each bit is the opposite of the corresponding bit in the argument

Six variables will be created to use in the examples. The X/Y values will be set to X=1 (01) and Y=3 (11) with different sizes to show how the functions work.
%%sql -q
DROP VARIABLE XINT2; DROP VARIABLE YINT2;
DROP VARIABLE XINT4; DROP VARIABLE YINT4;
DROP VARIABLE XINT8; DROP VARIABLE YINT8;
CREATE VARIABLE XINT2 INT2 DEFAULT(1);
CREATE VARIABLE YINT2 INT2 DEFAULT(3);
CREATE VARIABLE XINT4 INT4 DEFAULT(1);
CREATE VARIABLE YINT4 INT4 DEFAULT(3);
CREATE VARIABLE XINT8 INT8 DEFAULT(1);
CREATE VARIABLE YINT8 INT8 DEFAULT(3);
This example will show the four functions used against SMALLINT (INT2) data types.
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
  (
  VALUES
     ('INT2AND(X,Y)', XINT2, YINT2, INT2AND(XINT2,YINT2)),
     ('INT2OR(X,Y) ', XINT2, YINT2, INT2OR(XINT2,YINT2)),
     ('INT2XOR(X,Y)', XINT2, YINT2, INT2XOR(XINT2,YINT2)),
     ('INT2NOT(X)  ', XINT2, YINT2, INT2NOT(XINT2))
  )
SELECT * FROM LOGIC
This example will use the 4 byte (INT4) data type.
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
  (
  VALUES
     ('INT4AND(X,Y)', XINT4, YINT4, INT4AND(XINT4,YINT4)),
     ('INT4OR(X,Y) ', XINT4, YINT4, INT4OR(XINT4,YINT4)),
     ('INT4XOR(X,Y)', XINT4, YINT4, INT4XOR(XINT4,YINT4)),
     ('INT4NOT(X)  ', XINT4, YINT4, INT4NOT(XINT4))
  )
SELECT * FROM LOGIC
Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 values in these functions but you may get truncation if the value is too big.
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
  (
  VALUES
     ('INT8AND(X,Y)', XINT8, YINT8, INT8AND(XINT8,YINT8)),
     ('INT8OR(X,Y) ', XINT8, YINT8, INT8OR(XINT8,YINT8)),
     ('INT8XOR(X,Y)', XINT8, YINT8, INT8XOR(XINT8,YINT8)),
     ('INT8NOT(X)  ', XINT8, YINT8, INT8NOT(XINT8))
  )
SELECT * FROM LOGIC
TO_HEX Function The TO_HEX function converts a numeric expression into a character hexadecimal representation. For example, the numeric value 255 represents x'FF'. The value returned from this function is a VARCHAR value and its length depends on the size of the number you supply.
%sql VALUES TO_HEX(255)
RAWTOHEX Function The RAWTOHEX function returns a hexadecimal representation of a value as a character string. The result is a character string itself.
%sql VALUES RAWTOHEX('Hello')
The string "00" converts to a hex representation of x'3030', which is 12336 in decimal. So the TO_HEX function would convert this back to the hex representation.
%sql VALUES TO_HEX(12336)
The string that is returned by the RAWTOHEX function should be the same.
%sql VALUES RAWTOHEX('00');
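Rough Python equivalents of these conversions, useful for checking the round trip in the examples above (note that Python's `format` produces lowercase hex digits, whereas Db2 typically returns uppercase):

```python
to_hex = lambda n: format(n, 'x')                # analogue of TO_HEX
raw_to_hex = lambda s: s.encode('ascii').hex()   # analogue of RAWTOHEX

print(to_hex(255))        # ff
print(raw_to_hex('00'))   # 3030
print(int('3030', 16))    # 12336  — the decimal value mentioned in the text
print(to_hex(12336))      # 3030   — TO_HEX round-trips RAWTOHEX('00')
```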
Back to Top

<a id="create"></a>
### Table Creation Extensions

The CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the data at the same time.

#### Create Table Syntax

The syntax of the CREATE TABLE statement has been extended with the AS (SELECT ...) WITH DATA clause:

```sql
CREATE TABLE <name> AS (SELECT ...) [ WITH DATA | DEFINITION ONLY ]
```

The table definition will be generated based on the SQL statement that you specify. The column names are derived from the columns that are in the SELECT list and can only be changed by specifying the column names as part of the table name: EMP(X,Y,Z,...) AS (...). For example, the following SQL will fail because a column list was not provided:
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
You can name a column in the SELECT list or place it in the table definition.
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;
You can check the SYSTEM catalog to see the table definition.
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='AS_EMP' AND TBCREATOR=CURRENT USER
The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA clause will do an INSERT of rows into the newly created table. If you have a large amount of data to load into the table you may be better off creating the table with DEFINITION ONLY and then using LOAD or other methods to load the data into the table.
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) WITH DATA;
The SELECT statement can be very sophisticated. It can do any type of calculation or limit the data to a subset of information.
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(LAST, PAY) AS
  (
  SELECT LASTNAME, SALARY FROM EMPLOYEE
  WHERE WORKDEPT='D11'
  FETCH FIRST 3 ROWS ONLY
  ) WITH DATA;
You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from the original table.
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(DEPARTMENT, LASTNAME) AS
  (
  SELECT WORKDEPT, LASTNAME FROM EMPLOYEE
  OFFSET 5 ROWS
  FETCH FIRST 10 ROWS ONLY
  ) WITH DATA;
SELECT * FROM AS_EMP;
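The CREATE TABLE ... AS (SELECT ...) WITH DATA pattern has a close analogue in SQLite, which can be used to experiment with the idea outside Db2. The sample rows below are made up, and note that SQLite's CREATE TABLE ... AS SELECT always copies the data (there is no WITH DATA / DEFINITION ONLY keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee(empno TEXT, workdept TEXT, lastname TEXT,
                          salary REAL, bonus REAL);
    INSERT INTO employee VALUES
        ('000010','A00','HAAS',152750,1000),
        ('000020','B01','THOMPSON',94250,800),
        ('000060','D11','STERN',72250,500);
    -- behaves like Db2's CREATE TABLE ... AS (...) WITH DATA
    CREATE TABLE as_emp AS
        SELECT empno, salary + bonus AS pay FROM employee;
""")
print(con.execute("SELECT COUNT(*) FROM as_emp").fetchone()[0])  # 3
```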
### Adding Datasets

Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:
b.add_dataset('mesh', compute_times=[0.75], dataset='mesh01')
2.3/examples/detached_rotstar.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
### Running Compute

Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
b['requiv@primary@component'] = 1.8
Now we'll compute synthetics at the times provided using the default options
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
### Plotting
afig, mplfig = b.plot(model='rochemodel', show=True)
afig, mplfig = b.plot(model='rotstarmodel', show=True)