The flags follow the IOC standard: 1 means good and 4 means bad, while 0 is used when no QC test was applied. For instance, the spike test depends on the previous and following measurements, so the first and last data points of the array will always have a spike flag equal to 0. How could we use tha...
idx = pqc.flags["PSAL"]["global_range"] >= 3
pqc["PSAL"][idx]
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
The flag "overall" combines all criteria: it is the maximum flag value among all the criteria applied, as recommended by the IOC. Therefore, if one measurement is flagged bad (flag=4) by a single test, it will get a flag 4. Likewise, a measurement with flag 1 means that the maximum value from all applied tests was ...
pqc.flags["PSAL"]["overall"]
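The "overall" flag described above is simply the element-wise maximum over the per-test flags. A minimal NumPy sketch of that combination rule (the flag arrays here are made up for illustration; CoTeDe computes this internally):

```python
import numpy as np

# Hypothetical per-test flags for five measurements, IOC convention:
# 0 = test not applied, 1 = good, 4 = bad.
flags = {
    "global_range": np.array([1, 1, 1, 1, 4]),
    "spike":        np.array([0, 1, 4, 1, 0]),  # endpoints cannot be tested
}

# "overall" is the maximum flag among all applied tests, per measurement
overall = np.maximum.reduce(list(flags.values()))
print(overall)  # [1 1 4 1 4]
```

Note how a flag of 0 (not evaluated) never hides a bad flag from another test, since the maximum is taken.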
EuroGOOS automatic QC. Let's visualize what the automatic EuroGOOS procedure can detect for temperature and salinity. The concept is the same for all evaluated variables, i.e. there is an "overall" flag for "TEMP" and another one for "PSAL".
# ToDo: Include a shaded area for unfeasible values
idx_good = pqc.flags["TEMP"]["overall"] <= 2
idx_bad = pqc.flags["TEMP"]["overall"] >= 3
p1 = figure(plot_width=420, plot_height=600, title="QC according to EuroGOOS")
p1.circle(data['TEMP'][idx_good], -data['PRES'][idx_good], size=8, line_color="seagreen", fill_col...
The result from the EuroGOOS recommendations is pretty good, and it is one of my favorite QC setups when considering only the traditional methods. Most of the bad measurements were automatically detected, but if you zoom in below 800m you will notice some questionable measurements that were not flagged. In the following ...
from bokeh.models import ColumnDataSource, CustomJS, Slider

threshold = Slider(title="threshold", value=0.05, start=0.0, end=6.0, step=0.05, orientation="horizontal")
tmp = dict(
    depth=-pqc["PRES"],
    temp=pqc["PSAL"],
    temp_good=pqc["PSAL"].copy(),
    temp_bad=pqc["PSAL"].copy(),
    spike=np.absolute(pqc...
Because the thresholds were wisely defined with tolerant values, the traditional QC procedure does a great job flagging bad values, i.e. there is high confidence that a measurement flagged as bad is indeed a bad one. To avoid the mistake of flagging good measurements as bad ones, some bad measurement...
print("PRES: {}".format(pqc["PRES"][825]))
print("TEMP: {}".format(pqc["TEMP"][825]))
for c in ["gradient", "spike", "woa_normbias"]:
    print("{}: {}".format(c, pqc.features["TEMP"][c][825]))
EuroGOOS thresholds:
- Gradient below 500m: 3.0
- Spike below 500m: 2.0
- Climatology: 6 standard deviations

None of the criteria failed individually. For the climatology comparison we have a scaled value in standard deviations, but how large was the estimated spike? How uncommon was that? Could we combine the information?
pqc.flags["PSAL"]
Let's look at the salinity with respect to the spike and the WOA normalized bias. Near the bottom of the profile there are some bad salinity measurements, which are mostly identified by the spike test. A few measurements aren't critically bad with respect to the spike or the climatology individually. One of the goals of the Anoma...
idx_good = pqc.flags["PSAL"]["spike_depthconditional"] <= 2
idx_bad = pqc.flags["PSAL"]["spike_depthconditional"] >= 3
p1 = figure(plot_width=500, plot_height=600)
p1.circle(pqc.features["PSAL"]["spike"][idx_good], -pqc['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle(pqc...
Check Data API connection: make a Data API request to test that the API key is working.
import requests

# Setup Planet Data API base URL
API_URL = "https://api.planet.com/data/v1"

# Setup the session
session = requests.Session()

# Authenticate (the API key is the username; the password is left empty)
session.auth = (PLANET_API_KEY, "")

# Make a GET request to the Planet Data API
resp = session.get(API_URL)
if not resp.ok:
    print("Something is wrong:", resp.content)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Data API search. In this next part, we will search for items that match a given date range, item_type, and location. Data API quick-search wrapper: make a search function that can take a geojson geometry and give us item_ids.
from datetime import datetime

def get_item_ids(geometry, item_type='PSScene', start_date=None, end_date=None, limit=100):
    """Get Planet Data API item_id values for matching filters.

    Args:
        geometry: geojson geometry dict
        item_type: item_type (see https://developers.planet.com/docs/api/items-asse...
Geometry helper Convert coordinates to geojson geometry format
def coords_to_geometry(lat, lon):
    """Given latitude and longitude floats, construct a geojson geometry dict"""
    # note: geojson ordering is [longitude, latitude]
    return {"type": "Point", "coordinates": [lon, lat]}
Make a geometry dict for coordinates in San Francisco
geom = coords_to_geometry(37.77493, -122.41942)
print(geom)
Try getting item ids
get_item_ids(geom, start_date="2019-01-01T00:00:00.000Z", end_date="2019-10-01T00:00:00.000Z", limit=5)
Getting Webtiles Although we could download images for the item_ids above, we can get a nice visual preview through webtiles. These are 256x256 PNG images on a spatial grid, often used for web maps. Generating tile urls We want to get urls for many tiles over time for a given latitude, longitude, and zoom level. Let's ...
def get_tile_urls(lat, lon, zoom=15, item_type='PSScene',
                  start_date='2019-01-01T00:00:00.000Z',
                  end_date='2019-10-01T00:00:00.000Z', limit=5):
    """Get webtile urls for given coordinates, zoom, and matching filters.

    Args:
        lat: latitude float
        lon: longitude float
        zoom: zoom level int (usua...
Testing tile urls Click the links below to see tile images in your browser
tile_urls = get_tile_urls(37.77493, -122.41942, limit=5)
for url in tile_urls:
    print(url)
    print()
Display a tile
from IPython.display import Image

resp = requests.get(tile_urls[0])
Image(resp.content)
Animate tiles over time
%matplotlib inline
from IPython.display import HTML
import random
import time

def animate(urls, delay=1.0, loops=1):
    """Display an animated loop of images

    Args:
        urls: list of image url strings
        delay: how long in seconds to display each image
        loops: how many times to repeat the image sequ...
Read the data
fname = './Korendijk_data.txt'
with open(fname, 'r') as f:
    data = f.readlines()  # read the data as a list of strings
hdr = data[0].split()  # get the first line, i.e. the header
data = data[1:]  # remove the header line from the data
# split each line (string) into its individual tokens
# each token is still a...
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Same, but using log scale
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('t [min]')
plt.ylabel('dd [m]')
plt.xscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    plt.plot(data[I, -2], data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Drawdown on double log scale
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('t [min]')
plt.ylabel('dd [m]')
plt.xscale('log')
plt.yscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    plt.plot(data[I, -2], data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Drawdown on double log scale using $t/r^2$ on x-axis
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('$t/r^2$ [min/m$^2$]')
plt.ylabel('dd [m]')
plt.xscale('log')
#plt.yscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    tr2 = data[I, -2] / r**2
    plt.plot(tr2, data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Interpretation using the approximation of the Theis solution $$ s = \frac {Q} {4 \pi kD} \ln \left( \frac {2.25 kD t} {r^2 S} \right) $$ or $$ s = \frac {2.3 Q} {4 \pi kD} \log \left( \frac {2.25 kD t} {r^2 S} \right) $$ First determine the drawdown per log cycle from the graph $\approx (1.1 - 0.21) / 3 \approx 0.30 $...
Q = 788  # m3/d
ds = (1.1 - 0.21) / 2  # drawdown increase per log cycle of time
kD = 2.3 * Q / (4 * np.pi * ds)
print('kD = {:.0f} m2/d'.format(kD))
For the storage coefficient, determine the intersection of the straight line with the line of zero drawdown. This is at $t/r^2 = 2 \times 10 ^{-4}$ min/m$^2$. We have to convert to days to get an answer consistent with the transmissivity. Then setting the argument of the logarithm equal to 1, so that the computed drawdown is 0, and ...
tr2 = 2e-4 / (24 * 60)  # convert from min/m2 to d/m2
r = distances[0]
S = 2.25 * kD * tr2
print('S = {:.2e} [-]'.format(S))
Clearly, the result depends somewhat on the exact straight line drawn through the bundle of curves for the observation wells. In the ideal situation, these curves fall onto each other. In this real-world case this is not true, which is due to non-uniformity of the real-world aquifer. There are many real-world pumping te...
A = 7
B = 1.0e7
u = np.logspace(-4, 1, 41)
plt.title('Type curve and $A \times s$ vs $B \times t/r^2$, with $A$={}, $B$={}'.format(A, B))
plt.xlabel('$1/u$ and $B \, t/r^2$')
plt.ylabel('W(u) and $A \, s$')
plt.xscale('log')
plt.yscale('log')
plt.grid()
# the Theis type curve
plt.plot(1/u, exp1(u), label='Theis')
# T...
So $A s = W(u)$ and $s = \frac Q {4 \pi kD} W(u)$, and therefore $A = \frac {4 \pi kD} {Q}$ and $ kD = \frac {A Q} {4 \pi}$
kD = A * Q / 4 / np.pi
print('kD = {:.0f} m2/d'.format(kD))
As one sees, the results obtained this way are consistent with those obtained by the previous method. Directly optimizing $kD$ and $S$ instead of $A$ and $B$. The previous method was inspired by shifting the measurements drawn on double log paper over the Theis type curve, also drawn on double log paper. However, ...
kD = 450
S = 0.0002
u = np.logspace(-4, 1, 41)
plt.title('Direct comparison between computed and measured drawdown, $kD$={:.0f} m$^2$/d, $S$={:.3e} [-]'.format(kD, S))
plt.xlabel('$1/u$')
plt.ylabel('$W[u]$')
plt.xscale('log')
plt.yscale('log')
plt.grid()
# the Theis type curve
plt.plot(1/u, exp1(u), label='Theis')
# ...
Data Source Chicago publishes its crime data in a massive 1.4GB csv. Here's a small sample.
sample = pd.read_csv('clearn/data/fixtures/tinyCrimeSample.csv')
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Data Format Lots of features. And lots of possible discrete values.
sample
Cleaning up the Crimes We wrote a munge module to tame the data.
from clearn import munge
Bin, drop, and reindex Bin crimes into 4 categories. Convert numbers to community area names. Turn timestamp string into pandas time series index.
munge.make_clean_timestamps(sample)
Group by community area and resample by day For each community area, create a series of summaries of each day's criminal activity from 2001 to present.
every_community_area = munge.get_master_dict()
where_wills_sister_lives = every_community_area['Edgewater']
where_wills_sister_lives[-5:]
Extra preprocessing for each model For nonsequential prediction, we added history to each day.
from clearn.predict import NonsequentialPredictor

with_history = NonsequentialPredictor.preprocess(every_community_area)
with_history['Edgewater'][-5:]
Let's predict crime!
from datetime import date

log_reg_predictor = NonsequentialPredictor(with_history['Edgewater'])
log_reg_predictor.predict(date(2015, 4, 3))
Which algorithm performs best?
from clearn.evaluate import evaluate

# Generate a sample of 2500 days to predict
evaluate(2500)
Persistent random walk model In this example, we choose to model the price evolution of SPY as a simple, well-known random walk model: the auto-regressive process of first order. We assume that subsequent log-return values $r_t$ of SPY obey the following recursive instruction: $$ r_t = \rho_t \cdot r_{t-1} + \sqrt{1-\r...
logPrices = np.log(prices)
logReturns = np.diff(logPrices)

plt.figure(figsize=(8, 2))
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xlabel('Nov 28, 2016')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0...
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
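Before fitting, it can help to see the model generatively. A sketch that simulates the scaled AR(1) process with fixed parameters (assuming the usual scaled form $r_t = \rho\, r_{t-1} + \sqrt{1-\rho^2}\,\sigma\,\epsilon_t$, chosen so that the stationary standard deviation equals $\sigma$ regardless of $\rho$):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scaled_ar1(rho, sigma, n):
    """Simulate r_t = rho * r_{t-1} + sqrt(1 - rho**2) * sigma * eps_t.

    The scaling factor sqrt(1 - rho**2) makes the stationary standard
    deviation of the process equal to sigma, independent of rho.
    """
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = rho * r[t - 1] + np.sqrt(1 - rho ** 2) * sigma * rng.normal()
    return r

r = simulate_scaled_ar1(rho=0.3, sigma=0.0005, n=5000)
print(round(float(r.std()), 5))  # close to sigma = 0.0005, by construction
```

This parameterization is what makes `sigma` directly interpretable as the (minute-scale) volatility in the analysis below.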
Online study bayesloop provides the class OnlineStudy to analyze on-going data streams and perform model selection for each new data point. In contrast to other Study types, more than just one transition model can be assigned to an OnlineStudy instance, using the method addTransitionModel. Here, we choose to add two di...
S = bl.OnlineStudy(storeHistory=True)
L = bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 0.006, 400))
S.set(L)
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> **Note:** While the parameter `rho` is naturally constrained to the interval ]-1, 1[, the parameter boundaries of `sigma` have to be specified by the user. Typically, one can review past log-retu...
T1 = bl.tm.CombinedTransitionModel(
    bl.tm.GaussianRandomWalk('s1', bl.cint(0, 1.5e-01, 15), target='rho'),
    bl.tm.GaussianRandomWalk('s2', bl.cint(0, 1.5e-04, 50), target='sigma')
)
T2 = bl.tm.Independent()

S.add('normal', T1)
S.add('chaotic', T2)
Before any data points are passed to the study instance, we further provide prior probabilities for the two scenarios. We expect about one news announcement containing unexpected information per day and set a prior probability of $1/390$ for the chaotic scenario (one normal trading day consists of 390 trading minutes).
S.setTransitionModelPrior([389/390., 1/390.])
Finally, we can supply log-return values to the study instance, data point by data point. We use the step method to infer new parameter estimates and the updated probabilities of the two scenarios. Note that in this example, we use a for loop to feed all data points to the algorithm because all data points are already ...
for r in tqdm_notebook(logReturns):
    S.step(r)
Volatility spikes Before we analyze how the probability values of our two market scenarios change over time, we check whether the inferred temporal evolution of the time-varying parameters is realistic. Below, the log-returns are displayed together with the inferred marginal distribution (shaded red) and mean value (bl...
plt.figure(figsize=(8, 4.5))

# data plot
plt.subplot(211)
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0.0005, 0, 0.0005, 0.001])
plt.xlim([0, 390])

# param...
Note that the volatility estimates of the first few trading minutes are not as accurate as later ones, as we initialize the algorithm with a non-informative prior distribution. One could of course provide a custom prior distribution as a more realistic starting point. Despite this fade-in period, the period of increase...
plt.figure(figsize=(8, 4.5))

# data plot
plt.subplot(211)
plt.plot(prices)
plt.ylabel('price [USD]')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390])

# parameter plot
plt.subplot(212)
S.plot('rho', color='#0000FF')
plt.xticks([28, 88,...
As a correlation coefficient that deviates significantly from zero would be immediately exploitable to predict future price movements, we mostly find correlation values near zero (in accordance with the efficient market hypothesis). However, between 1:15pm and 2:15pm, we find a short period of negative correlation with...
# extract parameter grid values (rho) and corresponding prob. values (p)
rho, p = S.getParameterDistributions('rho')

# evaluate Prob.(rho < 0) for all time steps
P = bl.Parser(S)
p_neg_rho = np.array([P('rho < 0.', t=t, silent=True) for t in range(1, 389)])

# plotting
plt.figure(figsize=(8, 4.5))
plt.subplot(211)
plt...
Automatic tuning One major advantage of the OnlineStudy class is that it not only infers the time-varying parameters of the low-level correlated random walk (the observation model ScaledAR1), but further infers the magnitude (the standard deviation of the transition model GaussianRandomWalk) of the parameter fluctuatio...
plt.figure(figsize=(8, 4.5))

plt.subplot(221)
S.plot('s1', color='green')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([-2, 388])
plt.ylim([0, 0.06])

plt.subplot(222)
S.plot('s1', t=388, facecolor='green', alpha=0.7)...
Real-time model selection Finally, we investigate which of our two market scenarios - normal vs. chaotic - can explain the price movements best. Using the method plot('chaotic'), we obtain the probability values for the chaotic scenario compared to the normal scenario, with respect to all past data points:
plt.figure(figsize=(8, 2))
S.plot('chaotic', lw=2, c='k')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')
As expected, the probability that the chaotic scenario can explain all past log-return values at a given point in time quickly falls off to practically zero. Indeed, a correlated random walk with slowly changing volatility and correlation of subsequent returns is better suited to describe the price fluctuations of SPY ...
plt.figure(figsize=(8, 2))
S.plot('chaotic', local=True, c='k', lw=2)
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')
plt.axvline(58, 0, 1, zorder=1, c='r', lw=1.5, ls='dashed', alpha...
Here, we find clear peaks indicating an increased probability for the chaotic scenario, i.e. that previously gained information about the market dynamics has become useless. Let's assume that we are concerned about market behavior as soon as there is at least a 1% risk that normal market dynamics cannot describe the cu...
p = S.getTransitionModelProbabilities('chaotic', local=True)
np.argwhere(p > 0.01)
If you enter the source code above into a .py file or a jupyter notebook and run it with Python, a file named "linear_algebra_basic_I.ipynb" will be created. You can open it with jupyter notebook, or move to the folder containing the file in a console (cmd) and run it as follows: jupyter notebook linear_algebra_basic_I.ipynb linear_algebra_basic_I.py Code structure: this lab consists of writing 12 functions that perform basic vector and matrix operations. Each function's ba...
def vector_size_check(*vector_variables):
    return None

# execution results
print(vector_size_check([1, 2, 3], [2, 3, 4], [5, 6, 7]))  # Expected value: True
print(vector_size_check([1, 3], [2, 4], [6, 7]))  # Expected value: True
print(vector_size_check([1, 3, 4], [4], [6, 7]))  # Expected value: False
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
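For reference, one possible one-line implementation of the size check (one of many; the assignment intentionally leaves the body blank for the student):

```python
def vector_size_check(*vector_variables):
    # True when every input vector has the same length
    return len({len(v) for v in vector_variables}) == 1

print(vector_size_check([1, 2, 3], [2, 3, 4], [5, 6, 7]))  # True
print(vector_size_check([1, 3, 4], [4], [6, 7]))           # False
```

Collecting the lengths into a set and checking that exactly one distinct length remains is a common idiom for "all equal".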
Problem #2 - vector_addition (one line code available) $$ \left[\begin{array}{rrr} a & b & c \end{array}\right] + \left[\begin{array}{rrr} x & y & z \end{array}\right] = \left[\begin{array}{rrr} a+x & b+y & c+z \end{array}\right] $$
def vector_addition(*vector_variables):
    return None

# execution results
print(vector_addition([1, 3], [2, 4], [6, 7]))  # Expected value: [9, 14]
print(vector_addition([1, 5], [10, 4], [4, 7]))  # Expected value: [15, 16]
print(vector_addition([1, 3, 4], [4], [6, 7]))  # Expected value: ArithmeticError
Problem #3 - vector_subtraction (one line code available) $$ \left[\begin{array}{rrr} x & y & z \end{array}\right] - \left[\begin{array}{rrr} a & b & c \end{array}\right] = \left[\begin{array}{rrr} x-a & y-b & z-c \end{array}\right] $$
def vector_subtraction(*vector_variables):
    if vector_size_check(*vector_variables) == False:
        raise ArithmeticError
    return None

# execution results
print(vector_subtraction([1, 3], [2, 4]))  # Expected value: [-1, -1]
print(vector_subtraction([1, 5], [10, 4], [4, 7]))  # Expected value: [-13, -6]
Problem #4 - scalar_vector_product (one line code available) $$ \alpha \times \left[\begin{array}{rrr} x & y & z \end{array}\right] = \left[\begin{array}{rrr} \alpha \times x & \alpha \times y & \alpha \times z \end{array}\right] $$
def scalar_vector_product(alpha, vector_variable):
    return None

# execution results
print(scalar_vector_product(5, [1, 2, 3]))  # Expected value: [5, 10, 15]
print(scalar_vector_product(3, [2, 2]))  # Expected value: [6, 6]
print(scalar_vector_product(4, [1]))  # Expected value: [4]
Problem #5 - matrix_size_check (one line code available)
def matrix_size_check(*matrix_variables):
    return None

# execution results
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print(matrix_size_check(matrix_x, matrix_y, matrix_z))  # Expected value: False
print(matrix_size_check(matrix_y, matrix_...
Problem #6 - is_matrix_equal (one line code available) If $x=a, y=b, z=c, w=d$, then $$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] = \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] $$
def is_matrix_equal(*matrix_variables):
    return None

# execution results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
print(is_matrix_equal(matrix_x, matrix_y, matrix_y, matrix_y))  # Expected value: False
print(is_matrix_equal(matrix_x, matrix_x))  # Expected value: True
Problem #7 - matrix_addition (one line code available) $$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] + \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] = \left[\begin{array}{rr} x + a & y + b \\ z + c & w + d \end{array}\right] $$
def matrix_addition(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return None

# execution results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print(matrix_addition(matrix_x, matrix_y))  # Expected value: [[4, 7], [4, 3]]
print ...
Problem #8 - matrix_subtraction (one line code available) $$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] - \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] = \left[\begin{array}{rr} x - a & y - b \\ z - c & w - d \end{array}\right] $$
def matrix_subtraction(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return None

# execution results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print(matrix_subtraction(matrix_x, matrix_y))  # Expected value: [[0, -3], [0, 1]]...
Problem #9 - matrix_transpose (one line code available) Let $A = \left[\begin{array}{rr} a & b \\ c & d \\ e & f \end{array}\right]$. Then $A^T = \left[\begin{array}{rrr} a & c & e \\ b & d & f \end{array}\right]$.
def matrix_transpose(matrix_variable):
    return None

# execution results
matrix_w = [[2, 5], [1, 1], [2, 2]]
matrix_transpose(matrix_w)
Problem #10 - scalar_matrix_product (one line code available) $$ \alpha \times \left[\begin{array}{rrr} a & c & d \\ e & f & g \end{array}\right] = \left[\begin{array}{rrr} \alpha \times a & \alpha \times c & \alpha \times d \\ \alpha \times e & \alpha \times f & \alpha \times g \end{array}\right] $$
def scalar_matrix_product(alpha, matrix_variable):
    return None

# execution results
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print(scalar_matrix_product(3, matrix_x))  # Expected value: [[6, 6], [6, 6], [6, 6]]
print(scalar_matrix_product(2,...
Problem #11 - is_product_availability_matrix (one line code available) The matrix product of $A$ and $B$ (written $AB$) is defined if and only if the number of columns in $A$ equals the number of rows in $B$.
def is_product_availability_matrix(matrix_a, matrix_b):
    return None

# execution results
matrix_x = [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(is_product_availability_matrix(matrix_y, matrix_z))  # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_x))  # ...
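One possible one-line implementation of the dimension condition above (illustrative; the assignment leaves the body blank for the student):

```python
def is_product_availability_matrix(matrix_a, matrix_b):
    # AB is defined iff the number of columns of A equals the number of rows of B
    return len(matrix_a[0]) == len(matrix_b)

matrix_y = [[1, 1, 2], [2, 1, 1]]    # 2 x 3
matrix_z = [[2, 4], [5, 3], [1, 3]]  # 3 x 2

print(is_product_availability_matrix(matrix_y, matrix_z))  # True  (3 == 3)
print(is_product_availability_matrix(matrix_y, matrix_y))  # False (3 != 2)
```

With row-of-rows matrices, `len(matrix_a[0])` is the column count of A and `len(matrix_b)` the row count of B, which is exactly the condition stated.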
Problem #12 - matrix_product (one line code available)
def matrix_product(matrix_a, matrix_b):
    if is_product_availability_matrix(matrix_a, matrix_b) == False:
        raise ArithmeticError
    return None

# execution results
matrix_x = [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(matrix_product(matrix_y, matrix_z))  # Expected value:...
Submitting your results: if the homework is submitted without problems, all the results below will be shown as PASS.
import gachon_autograder_client as g_autograder

THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "linear_algebra_basic_I.ipynb"

g_autograder.submit_assignment(THE_TEMLABIO_ID, PASSWORD, ASSIGNMENT_FILE_NAME)
Dataset. We use the familiar MNIST handwritten-digit dataset to train our CGAN, and as before we provide a simplified version of the dataset to speed up training. Unlike last time, this dataset contains all 10 digit classes (0 through 9), with 200 images per class, 2000 images in total. The images are again 28*28 single-channel grayscale images (we resize them to 32*32). Below is the code that loads the mnist dataset.
def load_mnist_data():
    """ load mnist(0,1,2) dataset """
    transform = torchvision.transforms.Compose([
        # transform to 1-channel gray image, since we read images in RGB mode
        transforms.Grayscale(1),
        # resize image from 28 * 28 to 32 * 32
        transforms.Resize(32),
        ...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Next, let's look at the real handwritten-digit data for each class. (Run the next two cells of code; there is no need to understand them.)
def denorm(x):
    # denormalize
    out = (x + 1) / 2
    return out.clamp(0, 1)

from utils import show

""" you can pass code in this cell """
# show mnist real data
train_dataset = load_mnist_data()
images = []
for j in range(5):
    for i in range(10):
        images.append(train_dataset[i * 200 + j][0])
show(torch...
The training code is similar to before. The difference is that, according to the class, we generate y_vec (a one-hot vector, e.g. class 2 corresponds to [0,1,0,0,0,0,0,0,0,0]) and y_fill (y_vec expanded to size (class_num, image_size, image_size), with the channel of the correct class all ones and the other channels all zeros), which are fed into G and D respectively as the conditional variable. The rest of the training process is similar to an ordinary GAN. We can first generate vecs and fills for every class label.
# class number
class_num = 10
# image size and channel
image_size = 32
image_channel = 1

# vecs: one-hot vectors of size(class_num, class_num)
# fills: vecs expand to size(class_num, class_num, image_size, image_size)
vecs = torch.eye(class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_si...
The code for visualize_results and run_gan is not explained in detail.
def visualize_results(G, device, z_dim, class_num, class_result_size=5):
    G.eval()
    z = torch.rand(class_num * class_result_size, z_dim).to(device)
    y = torch.LongTensor([i for i in range(class_num)] * class_result_size)
    y_vec = vecs[y.long()].to(device)
    g_z = G(z, y_vec)
    show(torchv...
Now let's try training our CGAN.
# hyper params
# z dim
latent_dim = 100
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 120
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.dat...
Homework: 1. In D, you can pass the input image and the labels through two different convolutional layers, concatenate the results along dimension 1 (the channel dimension), and feed them into the rest of the network. Part of the network structure is already written in DCDiscriminator; complete the forward function to implement this and train the CGAN again on the same dataset. Compared with the previous results, what differences do you observe?
class DCDiscriminator1(nn.Module):
    def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
        super().__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.class_num = class_num
        self.fc_size = image_size // 8
        # ...
答: 观察两次训练的loss曲线,可以发现给图片和标签加上卷积之后,G的loss值一直稳定在一定的范围内,而没加上卷积处理的网络中,G的loss值一开始很低,后来逐渐升高。从loss曲线上分析,在第一次训练中G的变化更大。因此,第二次训练能得到效果更好的生成器。 从输出的图片上比较,也可以很明显可以看到第二次训练输出的结果比第一次好。 在D中,可以将输入图片通过1个卷积层然后和(尺寸与输入图片一致的)labels在维度1合并(通道上合并),再一起送去接下来的网络结构.网络部分结构已经在DCDiscriminator中写好,请在补充forward函数完成上述功能,并再次使用同样的数据集训练CGAN.与之前的结果对比,说说有什么不同?
class DCDiscriminator2(nn.Module):
    def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
        super().__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.class_num = class_num
        self.fc_size = image_size // 8
        # ...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: The generator's loss curve varies less than in the previous two runs, suggesting that the resulting G is stronger than the previous two. However, the final generated images actually look worse to the eye than the previous two runs, probably because G happened to be weak in the particular epoch chosen for output.

3. What if the class labels are not represented as one-hot vectors? Suppose we first generate a random vector for each class and then use that vector as the class label. Would this change the results? Try running the code below, compare with the previous results, and describe the differences.
vecs = torch.randn(class_num, class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)
print(vecs)
print(fills)
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, out...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: The results are noticeably worse. Because the class labels are randomly generated, the fake images produced by the generator are easily recognized by the discriminator. In the loss curves, G's loss rises quickly and hardly stays within a fixed range, while D's loss also trends downward, so this network performs worse than the previous three. This is probably because the relationship between the generator's output and the class labels is highly random, making it easier for the discriminator to spot fakes, while the generator receives a weaker training signal and thus ends up weaker than in the previous three runs.

Image-to-image translation
Next we introduce pix2pix, a model that uses a CGAN for image-to-image translation.
import os
import numpy as np
import math
import itertools
import time
import datetime
import sys

import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision import datasets

import torch.nn as nn
import torch.nn.functional as F
import torch
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
This experiment uses the Facade dataset. Because of how the dataset is laid out, each image contains two parts, as shown below: the left half is the ground truth and the right half is the outline. We therefore need to write our own dataset class; the cell below reads the dataset. Ultimately, the model should generate the building on the left from the outline on the right. (Optional reading) Below is the dataset code.
import glob
import random
import os
import numpy as np

from torch.utils.data import Dataset
from PIL import Image
import torchvision.transforms as transforms


class ImageDataset(Dataset):
    def __init__(self, root, transforms_=None, mode="train"):
        self.transform = transforms_
        # read image se...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The generator G is an encoder-decoder model based on the U-Net architecture. In a U-Net, the output of layer i is concatenated with layer n-i, which works because the feature maps of layer i and layer n-i have the same spatial size. The discriminator D in pix2pix is implemented as a PatchGAN discriminator: regardless of the size of the generated image, it is split into fixed-size patches, and each patch is judged by D.
import torch.nn as nn
import torch.nn.functional as F
import torch

##############################
#           U-NET
##############################

class UNetDown(nn.Module):
    def __init__(self, in_size, out_size, normalize=True, dropout=0.0):
        super(UNetDown, self).__init__()
        layers = [nn.Conv2d(i...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
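The "patch" idea can be made concrete: instead of one real/fake scalar per image, D outputs a grid of predictions, one per receptive-field patch, and the adversarial labels have that same grid shape. A hedged NumPy sketch (the 64-pixel image size and the /16 downsampling factor mirror the hyperparameters defined later in this notebook):

```python
import numpy as np

img_size = 64
patch_shape = (1, img_size // 16, img_size // 16)  # D downsamples by a factor of 16

batch = 2
# one real/fake target per patch, not per image
real_label = np.ones((batch, *patch_shape))
fake_label = np.zeros((batch, *patch_shape))
print(real_label.shape)  # (2, 1, 4, 4): 16 patch decisions per image
```

This is why the training code later builds its ground-truth tensors with shape `(batch, *patch)` rather than `(batch, 1)`.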
(Optional reading) The function below saves the outline, the generated image, and the ground truth side by side for comparison.
from utils import show

def sample_images(dataloader, G, device):
    """Saves a generated sample from the validation set"""
    imgs = next(iter(dataloader))
    real_A = imgs["A"].to(device)
    real_B = imgs["B"].to(device)
    fake_B = G(real_A)
    img_sample = torch.cat((real_A.data, fake_B.data, real_B.data), -2)...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Next, define some hyperparameters, including lambda_pixel.
# hyper param
n_epochs = 200
batch_size = 2
lr = 0.0002
img_size = 64
channels = 3
device = torch.device('cuda:2')
betas = (0.5, 0.999)
# Loss weight of L1 pixel-wise loss between translated image and real image
lambda_pixel = 1
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The pix2pix loss function combines the CGAN loss with an L1 loss, where the L1 term is weighted by a coefficient lambda that balances the two. Here we define the loss functions and optimizers; MSELoss is used as the GAN loss (as in LSGAN).
from utils import weights_init_normal

# Loss functions
criterion_GAN = torch.nn.MSELoss().to(device)
criterion_pixelwise = torch.nn.L1Loss().to(device)

# Calculate output of image discriminator (PatchGAN)
patch = (1, img_size // 16, img_size // 16)

# Initialize generator and discriminator
G = GeneratorUNet().to(devic...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
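The combination described above, GAN loss plus lambda_pixel times the L1 loss, amounts to the following arithmetic (a hedged NumPy sketch; the small arrays are stand-ins for D's patch output and for the generated/real pixels, not real data):

```python
import numpy as np

lambda_pixel = 1  # weight of the L1 term, as set in the hyperparameter cell

d_out = np.array([0.8, 0.6])   # D's (patch) scores on fake images
target = np.ones_like(d_out)   # adversarial target: label 1
fake_B = np.array([0.2, 0.4])  # generated pixels (stand-in)
real_B = np.array([0.0, 1.0])  # ground-truth pixels (stand-in)

loss_GAN = np.mean((d_out - target) ** 2)  # MSE, as in LSGAN
loss_L1 = np.mean(np.abs(fake_B - real_B))  # pixel-wise L1
loss_G = loss_GAN + lambda_pixel * loss_L1
print(loss_GAN, loss_L1, loss_G)  # 0.1 0.4 0.5
```

Raising lambda_pixel pushes the generator toward per-pixel fidelity; lowering it lets the adversarial term dominate.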
Now we train pix2pix. The training procedure is:

First train G: for each outline image A, use G to generate fakeB (the building), compute the L1 loss between fakeB and realB (the ground truth), and at the same time feed (fakeB, A) into D and compute the MSE loss (with label 1). These two losses together update G.

Then train D: compute the MSE loss on (fakeB, A) and (realB, A) (label 0 for the former, 1 for the latter) and update D.
for epoch in range(n_epochs):
    for i, batch in enumerate(dataloader):
        # G: B -> A
        real_A = batch["A"].to(device)
        real_B = batch["B"].to(device)
        # Adversarial ground truths
        real_label = torch.ones((real_A.size(0), *patch)).to(device)
        fake_label = torch.zeros((real_A.s...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Homework: Train pix2pix using only the L1 loss. Describe how the results differ.
# Loss functions
criterion_pixelwise = torch.nn.L1Loss().to(device)

# Initialize generator and discriminator
G = GeneratorUNet().to(device)
D = Discriminator().to(device)
G.apply(weights_init_normal)
D.apply(weights_init_normal)
optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
optimizer_D = torch.op...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: When trained with only the L1 loss, the network does not produce the colorful noise seen at the start of the first run. Within the first few iterations it quickly recovers the outlines of the building, with far less noise than the first run. After more iterations, however, the images generated with only the L1 loss are blurry, much worse than the first run.

Train pix2pix using only the CGAN loss (fill in the corresponding code in the cell below and run it). Describe how the results differ.
# Loss functions
criterion_GAN = torch.nn.MSELoss().to(device)

# Initialize generator and discriminator
G = GeneratorUNet().to(device)
D = Discriminator().to(device)
G.apply(weights_init_normal)
D.apply(weights_init_normal)
optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
optimizer_D = torch.optim.A...
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
<h3 id="exo1">Exercise 1: create an Excel file</h3>
We want to retrieve the data donnees_enquete_2003_television.txt (source: INSEE).
POIDSLOG: relative individual weighting
POIDSF: individual weighting variable
cLT1FREQ: average number of hours spent watching television
cLT2FREQ: U...
import pandas
from ensae_teaching_cs.data import donnees_enquete_2003_television
df = pandas.read_csv(donnees_enquete_2003_television(), sep="\t", engine="python")
df.head()
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
We remove the empty columns:
df = df[[c for c in df.columns if "Unnamed" not in c]]
df.head()
notnull = df[~df.cLT2FREQ.isnull()]  # equivalent: df[df.cLT2FREQ.notnull()]
print(len(df), len(notnull))
notnull.tail()
notnull.to_excel("data.xlsx")  # question 4
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
To launch Excel, you can simply write this:
%system "data.xlsx"
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
You should see something like this:
from IPython.display import Image
Image("td10exc.png")
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="qu">Questions</h3>
What would change if the parameter how='outer' were added in this case? We want to join two tables A, B, each of which has three distinct keys: $c_1, c_2, c_3$. There are respectively $A_i$ and $B_i$ rows in each table for key $c_i$. How many rows does the final table resulting from merging the two ta...
def delta(x, y):
    return max(x, y) - min(x, y)

delta = lambda x, y: max(x, y) - min(x, y)
delta(4, 5)

import random
df["select"] = df.apply(lambda row: random.randint(1, 10), axis=1)
echantillon = df[df["select"] == 1]
echantillon.shape, df.shape
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
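The first question can be checked directly: with the default how='inner', only keys present in both tables survive, while how='outer' also keeps the unmatched keys, filling the missing side with NaN. For a key $c_i$ present in both tables, the merge produces $A_i \times B_i$ rows for that key. A small hedged example with made-up tables:

```python
import pandas as pd

A = pd.DataFrame({"key": ["c1", "c1", "c2"], "a": [1, 2, 3]})
B = pd.DataFrame({"key": ["c1", "c3"], "b": [10, 20]})

inner = A.merge(B, on="key")               # how='inner' by default
outer = A.merge(B, on="key", how="outer")

# c1 contributes 2*1 = 2 rows; c2 and c3 are dropped by the inner join
print(len(inner))  # 2
# the outer join also keeps c2 and c3, with NaN on the missing side
print(len(outer))  # 4
```

So switching to how='outer' only adds rows for keys that exist in one table but not the other; it never changes the count for shared keys.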
<h3 id="exo2">Exercise 3: averages by group</h3>
Still with the same dataset (marathon.txt), we want to add a row at the end of the pivot table containing the average marathon time in seconds for each city.
from ensae_teaching_cs.data import marathon
import pandas
df = pandas.read_csv(marathon(), sep="\t", names=["ville", "annee", "temps", "secondes"])
df.head()
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
In summary, this gives (I also add the number of marathons run):
import pandas, urllib.request
from ensae_teaching_cs.data import marathon
df = pandas.read_csv(marathon(filename=True), sep="\t",
                     names=["ville", "annee", "temps", "secondes"])
piv = df.pivot("annee", "ville", "secondes")
gr = df[["ville", "secondes"]].groupby("ville", as_index=False).mean()
gr["annee...
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="exo4">Exercise 4: age gap between spouses</h3>
By adding a column and using the group by operation, we want to obtain the distribution of the number of marriages as a function of the age gap between the spouses. If needed, we will change the type of a column or two. We want to draw a scatter plot with, on the x-axis, the ...
import urllib.request
import zipfile
import http.client

def download_and_save(name, root_url):
    try:
        response = urllib.request.urlopen(root_url + name)
    except (TimeoutError, urllib.request.URLError, http.client.BadStatusLine):
        # back up plan
        root_url = "http://www.xavierdupre.fr/enseigneme...
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="exo5">Exercise 5: plotting the distribution with pandas</h3>
The pandas module offers a set of standard, easy-to-obtain plots. We want to represent the distribution as a histogram. It is up to you to choose the best plot from the Visualization page.
df["ANAISH"] = df.apply(lambda r: int(r["ANAISH"]), axis=1)
df["ANAISF"] = df.apply(lambda r: int(r["ANAISF"]), axis=1)
df["differenceHF"] = df.ANAISH - df.ANAISF
df["nb"] = 1
dist = df[["nb", "differenceHF"]].groupby("differenceHF", as_index=False).count()
df["differenceHF"].hist(figsize=(16, 6), bins=50)
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="exo6">Exercise 6: distribution of marriages by day</h3>
We want to obtain a plot containing the histogram of the distribution of the number of marriages per day of the week, plus a second curve, on a secondary axis, showing the cumulative distribution.
df["nb"] = 1
dissem = df[["JSEMAINE", "nb"]].groupby("JSEMAINE", as_index=False).sum()
total = dissem["nb"].sum()
repsem = dissem.cumsum()
repsem["nb"] /= total
ax = dissem["nb"].plot(kind="bar")
repsem["nb"].plot(ax=ax, secondary_y=True)
ax.set_title("distribution des mariages par jour de la semaine")
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
Create a PyZDDE object
l1 = pyz.createLink() # create a DDE link object for communication
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Load an existing lens design file (Cooke 40 degree field) into Zemax's DDE server
zfile = os.path.join(l1.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')
l1.zLoadFile(zfile)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
The following figure is the 2-D Layout plot of the lens.
# Surfaces in the sequential lens data editor
l1.ipzGetLDE()
# note that ipzCaptureWindow() doesn't work in the new OpticStudio because
# the dataitem 'GetMetafile' has become obsolete
l1.ipzCaptureWindow('Lay')
# General System properties
l1.zGetSystem()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
A few DDE data items that return information about the system have duplicate functions with the prefix ipz. They generally produce output better suited to interactive environments and human reading. One example of such a function pair is zGetFirst() and ipzGetFirst():
# Paraxial / first-order properties of the system
l1.zGetFirst()
# duplicate of zGetFirst() for use in the notebook
l1.ipzGetFirst()
# ... another example is zGetSystemAper() that returns information about the aperture.
# The aperture type is returned as a code which we might not always remember ...
l1.zGetSystemA...
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Analysis plots - ray-fan analysis plot as an example To plot the ray-fan analysis graph using ipzCaptureWindow() function, we need to provide the 3-letter button code of the corresponding analysis function. If we don't remember the exact button code, we can use a helper function in pyzdde to get some help:
pyz.findZButtonCode('ray')
l1.ipzCaptureWindow('Ray', gamma=0.4)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Note that there are a few other options for retrieving and plotting analysis windows from Zemax. They are discussed in a separate notebook. However, one is worth mentioning quickly here: you can ask ipzCaptureWindow() to return the image pixel array instead of plotting it directly. Then you can use matplotlib (or any oth...
lay_arr = l1.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
pyz.imshow(lay_arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax)
ax.set_title('Layout plot', fontsize=16)
# Annotate Lens numbers
ax.text(41, 70, "L1", fontsize=12)
ax.text(98, 10...
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Now, let's say we want to find the angular magnification of the above optics, and we want to know whether Zemax provides an operand whose value we can read directly. For that we can use another module-level helper function to find all operands related to angular magnification:
pyz.findZOperand('angular magnification')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Bingo! AMAG is the operand we want. Now we can use it:
l1.zOperandValue('AMAG', 1) # the argument "1" is for the wavelength
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Of course, there is a function in PyZDDE called zGetPupilMagnification() that we can use to get the angular magnification, since the inverse of the pupil magnification is the angular magnification (a consequence of the Lagrange optical invariant).
1.0/l1.zGetPupilMagnification()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
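The claim that the angular magnification is the inverse of the pupil magnification follows from the Lagrange invariant evaluated at the pupils (a sketch, assuming both object and image space are in air, so $n = n' = 1$):

```latex
% Lagrange invariant between the entrance and exit pupils:
n\,u\,y = n'\,u'\,y'
% With n = n' = 1, pupil semi-diameters y, y' and marginal ray angles u, u':
\frac{u'}{u} = \frac{y}{y'} = \frac{1}{m_p},
\qquad m_p \equiv \frac{y'}{y}
% hence the angular magnification m_a = u'/u = 1/m_p
```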
Connecting to another Zemax session simultaneously Now, a second pyzdde object is created to communicate with a second ZEMAX server. Note that the first object is still present.
l2 = pyz.createLink() # create a second DDE communication link object
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Set up lens surfaces in the second ZEMAX DDE server. At the end, zPushLens() is called so that the LDE is updated with the newly created lens.
# Erase all lens data in the LDE (good practice)
l2.zNewLens()
# Wavelength data
wavelengths = (0.48613270, 0.58756180, 0.65627250)  # microns
weights = (1.0, 1.0, 1.0)
l2.zSetWaveTuple((wavelengths, weights))
l2.zSetPrimaryWave(2)  # Set 0.58756180 as primary
# System aperture data, and global reference surface.
aType, sto...
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Layout plot of the second lens
l2.ipzCaptureWindow('L3d')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Spot diagram of the second lens
l2.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Just to demonstrate that the first lens (in the first ZEMAX server) is still available, the Layout plot is rendered again.
l1.ipzCaptureWindow('Lay')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Spot diagram of the first lens
l1.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit