The flags follow the IOC standard: 1 means good and 4 means bad, while 0 is used when no QC test was applied. For instance, the spike test is defined so that it depends on the previous and following measurements, so the first and last data points of the array will always have a spike flag equal to 0. How can we use that? Let's check which salinity measurements are unfeasible, i.e. flagged as bad (flag=4) or probably bad (flag=3) according to the Global Range check.
idx = pqc.flags["PSAL"]["global_range"] >= 3
pqc["PSAL"][idx]
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
The flag "overall" combines all criteria: it is the maximum flag value among all the criteria applied, as recommended by the IOC. Therefore, if a measurement is flagged bad (flag=4) by a single test, its overall flag is 4. Likewise, an overall flag of 1 means that the maximum flag from all applied tests was 1, hence there is no suggestion that the measurement is bad.
pqc.flags["PSAL"]["overall"]
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
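The "overall" flag described above can be sketched in plain NumPy: it is simply the element-wise maximum across the per-test flag arrays. The flag values below are made up for illustration; CoTeDe computes this internally.

```python
import numpy as np

# Hypothetical per-test flags for five measurements (IOC convention:
# 0 = not evaluated, 1 = good, 3 = probably bad, 4 = bad)
flags = {
    "global_range": np.array([1, 1, 1, 4, 1]),
    "spike":        np.array([0, 1, 3, 1, 0]),
}

# The overall flag is the element-wise maximum across all applied tests:
# a single bad test is enough to mark a measurement bad
overall = np.maximum.reduce(list(flags.values()))
print(overall)
```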
EuroGOOS automatic QC

Let's visualize what the automatic EuroGOOS procedure can detect for temperature and salinity. The concept is the same for all variables evaluated, i.e. there is a flag "overall" for "TEMP" and another one for "PSAL".
# ToDo: Include a shaded area for unfeasible values
idx_good = pqc.flags["TEMP"]["overall"] <= 2
idx_bad = pqc.flags["TEMP"]["overall"] >= 3

p1 = figure(plot_width=420, plot_height=600, title="QC according to EuroGOOS")
p1.circle(data['TEMP'][idx_good], -data['PRES'][idx_good], size=8, line_color="seagreen", fill_color="mediumseagreen", fill_alpha=0.3, legend_label="Good values")
p1.triangle(data['TEMP'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3, legend_label="Bad values")
p1.xaxis.axis_label = "Temperature [C]"
p1.yaxis.axis_label = "Depth [m]"
p1.legend.location = "top_right"

idx_good = pqc.flags["PSAL"]["overall"] <= 2
idx_bad = pqc.flags["PSAL"]["overall"] >= 3

p2 = figure(plot_width=420, plot_height=600, title="QC according to EuroGOOS")
p2.y_range = p1.y_range
p2.circle(data['PSAL'][idx_good], -data['PRES'][idx_good], size=8, line_color="seagreen", fill_color="mediumseagreen", fill_alpha=0.3, legend_label="Good values")
p2.triangle(data['PSAL'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3, legend_label="Bad values")
p2.xaxis.axis_label = "Practical Salinity"
p2.yaxis.axis_label = "Depth [m]"
p2.legend.location = "top_right"

p = row(p1, p2)
show(p)
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
The result from the EuroGOOS recommendations is pretty good, and it is one of my favorite QC setups among the traditional methods. Most of the bad measurements were automatically detected, but if you zoom in below 800 m you will notice some questionable measurements that were not flagged. In the following section we will see why that happened and how we can improve it.

Limitations of the traditional unidimensional test

The traditional approach to QC oceanographic data is based on projecting the data into a new dimension and then applying hard thresholds, as in the spike test (see the notebook profile_CTD). To avoid false positives, i.e. flagging good data as bad, those thresholds are usually tolerant enough to accept extreme events. For instance, if we define a gradient threshold too tight, we risk flagging the intense gradients in the thermocline as bad by mistake. In the following figure you have the salinity on the left and the respective "spikeness" on the right. With the slider you can choose the threshold, such that measurements above that value would be flagged as bad (red triangles). Note that with a threshold of 0.05 we would flag some measurements near the surface, but that wouldn't be enough to flag the jump at 826 m depth. For reference, EuroGOOS's recommended threshold for the deep ocean is 0.3. The same issue is observed in the temperature of this profile.
from bokeh.models import ColumnDataSource, CustomJS, Slider

threshold = Slider(title="threshold", value=0.05, start=0.0, end=6.0, step=0.05, orientation="horizontal")

tmp = dict(
    depth=-pqc["PRES"],
    temp=pqc["PSAL"],
    temp_good=pqc["PSAL"].copy(),
    temp_bad=pqc["PSAL"].copy(),
    spike=np.absolute(pqc.features["PSAL"]["spike"]),
    spike_good=np.absolute(pqc.features["PSAL"]["spike"]),
    spike_bad=np.absolute(pqc.features["PSAL"]["spike"]),
)

idx = tmp["spike"] > threshold.value
tmp["temp_good"][idx] = np.nan
tmp["temp_bad"][~idx] = np.nan
tmp["spike_good"][idx] = np.nan
tmp["spike_bad"][~idx] = np.nan

source = ColumnDataSource(data=tmp)

callback = CustomJS(args=dict(source=source), code="""
    var data = source.data;
    var f = cb_obj.value;
    var temp = data['temp'];
    var temp_good = data['temp_good'];
    var temp_bad = data['temp_bad'];
    var spike = data['spike'];
    var spike_good = data['spike_good'];
    var spike_bad = data['spike_bad'];
    for (var i = 0; i < temp.length; i++) {
        if (spike[i] > f) {
            temp_good[i] = "NaN";
            temp_bad[i] = temp[i];
            spike_good[i] = "NaN";
            spike_bad[i] = spike[i];
        } else {
            temp_good[i] = temp[i];
            temp_bad[i] = "NaN";
            spike_good[i] = spike[i];
            spike_bad[i] = "NaN";
        }
    }
    source.change.emit();
""")
threshold.js_on_change('value', callback)

p1 = figure(plot_width=420, plot_height=600)
p1.circle("temp_good", "depth", source=source, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle("temp_bad", "depth", source=source, size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p2 = figure(plot_width=420, plot_height=600)
p2.y_range = p1.y_range
p2.circle("spike_good", "depth", source=source, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p2.triangle("spike_bad", "depth", source=source, size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p = column(threshold, row(p1, p2))
show(p)
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
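The "spikeness" referred to above can be sketched with the spike formula commonly used in GTSPP-style QC: the deviation of a point from the mean of its two neighbours, minus half the neighbour-to-neighbour range. This is a sketch of the general idea; CoTeDe's exact implementation may differ in details. Note that the endpoints stay undefined, which is why the first and last points always keep flag 0.

```python
import numpy as np

def spike(x):
    """Spike magnitude as commonly defined in GTSPP-style QC:
    |x_i - (x_{i-1} + x_{i+1}) / 2| - |(x_{i+1} - x_{i-1}) / 2|.
    The endpoints have no neighbours on both sides, so they stay NaN."""
    x = np.asarray(x, dtype=float)
    s = np.full_like(x, np.nan)
    s[1:-1] = np.abs(x[1:-1] - (x[2:] + x[:-2]) / 2) - np.abs((x[2:] - x[:-2]) / 2)
    return s

# A small synthetic salinity series with one spiky value in the middle
y = np.array([35.0, 35.01, 36.2, 35.02, 35.03])
print(spike(y))
```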
Because the thresholds were wisely defined with tolerant values, the traditional QC procedure does a great job flagging bad values, i.e. there is high confidence that a measurement flagged as bad is indeed bad. The price of avoiding the mistake of flagging good measurements as bad is that some bad measurements end up wrongly flagged as good: the procedure is confident when it detects a bad measurement, but it tends to let a few bad measurements pass as good. It would be nice if we could somehow account for by how much a measurement deviates from each criterion.
print("PRES: {}".format(pqc["PRES"][825]))
print("TEMP: {}".format(pqc["TEMP"][825]))
for c in ["gradient", "spike", "woa_normbias"]:
    print("{}: {}".format(c, pqc.features["TEMP"][c][825]))
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
EuroGOOS
- Gradient below 500 m: 3.0
- Spike below 500 m: 2.0
- Climatology: 6 standard deviations

None of the criteria failed individually. For the climatology comparison we have a value scaled in standard deviations, but how large was the estimated spike? How uncommon was that? Could we combine the information?
pqc.flags["PSAL"]
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
Let's look at the salinity with respect to the spike and the WOA normalized bias. Near the bottom of the profile there are some bad salinity measurements, which are mostly identified by the spike test. A few measurements aren't critically bad with respect to the spike or the climatology individually. One of the goals of Anomaly Detection is to combine multiple features into an overall decision.
idx_good = pqc.flags["PSAL"]["spike_depthconditional"] <= 2
idx_bad = pqc.flags["PSAL"]["spike_depthconditional"] >= 3

p1 = figure(plot_width=500, plot_height=600)
p1.circle(pqc.features["PSAL"]["spike"][idx_good], -pqc['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle(pqc.features["PSAL"]["spike"][idx_bad], -pqc['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p2 = figure(plot_width=500, plot_height=600)
p2.y_range = p1.y_range
p2.circle(pqc['PSAL'][idx_good], -pqc['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p2.line(pqc.features["PSAL"]["woa_mean"] - 6 * pqc.features["PSAL"]["woa_std"], -data['PRES'], line_width=4, line_color="orange", alpha=0.4)
p2.line(pqc.features["PSAL"]["woa_mean"] + 6 * pqc.features["PSAL"]["woa_std"], -data['PRES'], line_width=4, line_color="orange", alpha=0.4)
p2.triangle(data['PSAL'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p = row(p1, p2)
show(p)

pqc = cotede.ProfileQC(data, cfg="cotede")
print(pqc.features["TEMP"].keys())

pqc.features["TEMP"]["anomaly_detection"][824:827]

t_spike = pqc.features["TEMP"]["anomaly_detection"]
idx_good = np.absolute(t_spike) <= 2
idx_bad = np.absolute(t_spike) > 2

p1 = figure(plot_width=420, plot_height=500)
p1.circle(data['TEMP'][idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle(data['TEMP'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p1.xaxis.axis_label = "Temperature [C]"
p1.yaxis.axis_label = "Depth [m]"

p2 = figure(plot_width=420, plot_height=500)
p2.y_range = p1.y_range
p2.circle(t_spike[idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p2.triangle(t_spike[idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p2.xaxis.axis_label = "Spike(T)"
p2.yaxis.axis_label = "Depth [m]"

s_spike = pqc.features["PSAL"]["woa_normbias"]
idx_good = np.absolute(s_spike) <= 2
idx_bad = np.absolute(s_spike) > 2

p3 = figure(plot_width=420, plot_height=500)
p3.y_range = p1.y_range
p3.circle(data['PSAL'][idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p3.triangle(data['PSAL'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p3.xaxis.axis_label = "Salinity"
p3.yaxis.axis_label = "Depth [m]"

p4 = figure(plot_width=420, plot_height=500)
p4.y_range = p1.y_range
p4.circle(s_spike[idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p4.triangle(s_spike[idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p4.xaxis.axis_label = "Spike(S)"
p4.yaxis.axis_label = "Depth [m]"

p = column(row(p1, p2), row(p3, p4))
show(p)

pqc.features["TEMP"]["anomaly_detection"]

y_spike

N = y_spike.size
n_greater = np.array([y_spike[np.absolute(y_spike) >= t].size / N for t in np.absolute(y_spike)])

p = figure(plot_width=840, plot_height=400)
p.circle(np.absolute(y_spike), n_greater, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)

from scipy.stats import exponweib

spike_scale = np.arange(0.0005, 0.2, 1e-3)
param = [1.078231, 0.512053, 0.0004, 0.002574]
tmp = exponweib.sf(spike_scale, *param[:-2], loc=param[-2], scale=param[-1])

p = figure(plot_width=840, plot_height=400)
p.circle(spike_scale, tmp, size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)

N = y_spike.size
n_greater = np.array([y_spike[np.absolute(y_spike) >= t].size / N for t in np.absolute(y_spike)])

p1 = figure(plot_width=420, plot_height=600)
p1.circle(data['TEMP'][idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle(data['TEMP'][idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p2 = figure(plot_width=420, plot_height=600)
p2.circle(SF[idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p2.triangle(SF[idx_bad], -data['PRES'][idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)

p = row(p1, p2)
show(p)

p1 = figure(plot_width=420, plot_height=600)
p1.circle(data['TEMP'][idx_good], -data['PRES'][idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)

pqc_eurogoos = cotede.ProfileQC(data, cfg="eurogoos")
flag_eurogoos = pqc_eurogoos.flags["TEMP"]["overall"]

pqc = cotede.ProfileQC(data, cfg="cotede")
pqc.features["TEMP"].keys()

AD_good = pqc.features["TEMP"]["anomaly_detection"][flag_eurogoos <= 2]
AD_bad = pqc.features["TEMP"]["anomaly_detection"][flag_eurogoos >= 3]
min(AD_good)

x = AD_good
bins = 100
hist, edges = np.histogram(x, density=True, bins=bins)

p = figure(plot_width=750, plot_height=300, background_fill_color="#fafafa")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="navy", line_color="white", alpha=0.5)
p.y_range.start = 0
p.xaxis.axis_label = 'x'
p.yaxis.axis_label = 'Pr(x)'
p.grid.grid_line_color = "white"
show(p)

def draw_histogram(x, bins=50):
    """Plot a histogram

    Create a histogram from the output of numpy.histogram(). We will create
    several histograms in this notebook, so let's save this as a function to
    reuse this code.
    """
    x = x[np.isfinite(x)]
    hist, edges = np.histogram(x, density=True, bins=bins)
    p = figure(plot_width=750, plot_height=300, background_fill_color="#fafafa")
    p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="navy", line_color="white", alpha=0.5)
    p.y_range.start = 0
    p.xaxis.axis_label = 'x'
    p.yaxis.axis_label = 'Pr(x)'
    p.grid.grid_line_color = "white"
    return p

p = draw_histogram(y_spike[idx_good], bins=50)
show(p)

data.attrs

import oceansdb

WOADB = oceansdb.WOA()
woa = WOADB['TEMP'].extract(var=['mean', 'standard_deviation'], doy=data.attrs['datetime'], lat=data.attrs['LATITUDE'], lon=data.attrs['LONGITUDE'], depth=data['PRES'])

pqc.features["TEMP"]["woa_mean"] - 6 * pqc.features["TEMP"]["woa_std"]

pqc = cotede.ProfileQC(data)
pqc.flags['TEMP'].keys()
pqc.features['TEMP']

p = figure(plot_width=500, plot_height=600)
p.circle(y_spike, -data['PRES'], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)

idx_valid

np.percentile(y_tukey53H[np.isfinite(y_tukey53H)], 25)

idx = y_tukey53H[np.absolute(y_tukey53H) < 6]
p = draw_histogram(y_tukey53H[idx & idx_valid])
show(p)

idx = idx_valid & np.isfinite(y_tukey53H)
mu_estimated, sigma_estimated = stats.norm.fit(y_tukey53H[idx])
print("Estimated mean: {:.3f}, and standard deviation: {:.3f}".format(mu_estimated, sigma_estimated))

y_tukey53H_scaled = (y_tukey53H - mu_estimated) / sigma_estimated
p = figure(plot_width=500, plot_height=600)
p.circle(y_tukey53H_scaled, -data['PRES'], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
show(p)

idx_good = pqc.flags["PSAL"]["overall"] <= 2
idx_bad = pqc.flags["PSAL"]["overall"] >= 3

pressure = -pqc["PRES"]
salinity = pqc["PSAL"]
woa_normbias = pqc.features["PSAL"]["woa_normbias"]

p1 = figure(plot_width=420, plot_height=500)
p1.circle(salinity[idx_good], pressure[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p1.triangle(salinity[idx_bad], pressure[idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p1.xaxis.axis_label = "Salinity"
p1.yaxis.axis_label = "Depth [m]"

p2 = figure(plot_width=420, plot_height=500)
p2.y_range = p1.y_range
p2.circle(woa_normbias[idx_good], pressure[idx_good], size=8, line_color="green", fill_color="green", fill_alpha=0.3)
p2.triangle(woa_normbias[idx_bad], pressure[idx_bad], size=8, line_color="red", fill_color="red", fill_alpha=0.3)
p2.xaxis.axis_label = "WOA normalized bias"
p2.yaxis.axis_label = "Depth [m]"

p = row(p1, p2)
show(p)
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
Check Data API connection

Make a Data API request to test that the API key is working.
# Setup Planet Data API base URL
API_URL = "https://api.planet.com/data/v1"

# Setup the session
session = requests.Session()

# Authenticate
session.auth = (PLANET_API_KEY, "")

# Make a GET request to the Planet Data API
resp = session.get(API_URL)
if not resp.ok:
    print("Something is wrong:", resp.content)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Data API Search

In this next part, we will search for items that match a given date range, item_type, and location.

Data API quick-search wrapper

Make a search function that can take a geojson geometry and give us item_ids.
from datetime import datetime

def get_item_ids(geometry, item_type='PSScene', start_date=None, end_date=None, limit=100):
    """Get Planet Data API item_id values for matching filters.

    Args:
        geometry: geojson geometry dict
        item_type: item_type (see https://developers.planet.com/docs/api/items-assets/#item-types)
        start_date: inclusive lower bound ISO 8601 datetime string (include items captured on or after this date)
        end_date: exclusive upper bound ISO 8601 datetime string (include items captured before this date)
        limit: max number of ids to return

    Returns:
        item_ids: list of id strings
    """
    # Data API Geometry Filter
    geometry_filter = {
        "type": "GeometryFilter",
        "field_name": "geometry",
        "config": geometry
    }

    # use a default end_date of the current time
    if not end_date:
        end_date = datetime.utcnow().isoformat() + 'Z'

    date_filter = {
        "type": "DateRangeFilter",  # Type of filter -> Date Range
        "field_name": "acquired",   # The field to filter on: "acquired" -> date on which the image was taken
        "config": {
            "lt": end_date,  # "lt" -> less than
        }
    }

    # start_date is optional
    if start_date:
        # greater than or equal to start date
        date_filter["config"]["gte"] = start_date

    # combine geometry and date filters with an AndFilter
    and_filter = {
        "type": "AndFilter",
        "config": [geometry_filter, date_filter]
    }

    quick_url = "{}/quick-search".format(API_URL)

    # Setup the request
    filter_request = {
        "item_types": [item_type],
        "filter": and_filter
    }

    # get ids from search results
    resp = session.post(quick_url, json=filter_request)
    results = resp.json()
    ids = [f['id'] for f in results['features']]

    # follow pagination links until we hit the limit
    # (use the authenticated session; a bare request would fail)
    while len(ids) < limit and results['_links'].get('next'):
        results = session.get(results['_links'].get('next')).json()
        more_ids = [f['id'] for f in results['features']]
        ids += more_ids

    return ids[:limit]
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Geometry helper Convert coordinates to geojson geometry format
def coords_to_geometry(lat, lon):
    """Given latitude and longitude floats, construct a geojson geometry dict"""
    return {
        "type": "Point",
        "coordinates": [lon, lat]
    }
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Make a geometry dict for coordinates in San Francisco
geom = coords_to_geometry(37.77493, -122.41942)
print(geom)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Try getting item ids
get_item_ids(geom, start_date="2019-01-01T00:00:00.000Z", end_date="2019-10-01T00:00:00.000Z", limit=5)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Getting Webtiles

Although we could download images for the item_ids above, we can get a nice visual preview through webtiles. These are 256x256 PNG images on a spatial grid, often used for web maps.

Generating tile urls

We want to get urls for many tiles over time for a given latitude, longitude, and zoom level. Let's re-use some of the filters we exposed in the Data API search wrapper.
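For reference, the mercantile.tile call used below implements the standard Web Mercator "slippy map" math that maps coordinates to tile indices. A minimal sketch of that conversion:

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """Convert WGS84 coordinates to slippy-map tile indices (x, y),
    following the usual Web Mercator convention (the same one
    mercantile uses)."""
    n = 2 ** zoom  # number of tiles along each axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(latlon_to_tile(37.77493, -122.41942, 15))
```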
def get_tile_urls(lat, lon, zoom=15, item_type='PSScene', start_date='2019-01-01T00:00:00.000Z', end_date='2019-10-01T00:00:00.000Z', limit=5):
    """Get webtile urls for given coordinates, zoom, and matching filters.

    Args:
        lat: latitude float
        lon: longitude float
        zoom: zoom level int (usually between 1 and 15)
        item_type: item_type (see https://developers.planet.com/docs/api/items-assets/#item-types)
        start_date: inclusive lower bound ISO 8601 datetime string (include items captured on or after this date)
        end_date: exclusive upper bound ISO 8601 datetime string (include items captured before this date)
        limit: max number of urls to return

    Returns:
        tile_urls: list of webtile url strings
    """
    geom = coords_to_geometry(lat, lon)
    item_ids = get_item_ids(geom, item_type=item_type, start_date=start_date, end_date=end_date, limit=limit)
    tile = mercantile.tile(lon, lat, zoom)
    tile_url_template = 'https://tiles.planet.com/data/v1/{item_type}/{item_id}/{z}/{x}/{y}.png?api_key={api_key}'
    return [tile_url_template.format(item_type=item_type, item_id=i, x=tile.x, y=tile.y, z=zoom, api_key=PLANET_API_KEY) for i in item_ids]
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Testing tile urls

Click the links below to see tile images in your browser.
tile_urls = get_tile_urls(37.77493, -122.41942, limit=5)
for url in tile_urls:
    print(url)
    print()
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Display a tile
from IPython.display import Image

resp = requests.get(tile_urls[0])
Image(resp.content)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Animate tiles over time
%matplotlib inline
from IPython.display import HTML
import random
import time

def animate(urls, delay=1.0, loops=1):
    """Display an animated loop of images

    Args:
        urls: list of image url strings
        delay: how long in seconds to display each image
        loops: how many times to repeat the image sequence
    """
    disp_id = str(random.random())
    display("placeholder", display_id=disp_id)
    for loop in range(loops):
        for frame_url in urls:
            htmlDisplay = f'<img src="{frame_url}" class="mySlides">'
            display(HTML(htmlDisplay), display_id=disp_id, update=True)
            time.sleep(delay)

animate(tile_urls, delay=0.5, loops=3)

tile_urls = get_tile_urls(37.77493, -122.41942, limit=100)
animate(tile_urls, delay=1, loops=3)
jupyter-notebooks/webtiles/visualize_imagery_over_time.ipynb
planetlabs/notebooks
apache-2.0
Read the data
fname = './Korendijk_data.txt'

with open(fname, 'r') as f:
    data = f.readlines()  # read the data as a list of strings

hdr = data[0].split()  # get the first line, i.e. the header
data = data[1:]  # remove the header line from the data

# split each line (string) into its individual tokens
# each token is still a string, not yet a number
toklist = [d.split() for d in data]

# convert this list of lines with string tokens into a list of lists with numbers
data = []  # start empty
for line in toklist:
    data.append([float(d) for d in line])  # convert this line

# when done, convert this list of lists of numbers into a numpy array
data = np.array(data)
#data  # show what we've got

# get the piezometer distances from the first data column, the unique values
distances = np.unique(data[:, 0])

plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('t [min]')
plt.ylabel('dd [m]')
plt.grid()
for r in distances:
    I = data[:, 0] == r  # boolean array telling which data belong to this observation well
    plt.plot(data[I, -2], data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Same, but using log scale
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('t [min]')
plt.ylabel('dd [m]')
plt.xscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    plt.plot(data[I, -2], data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Drawdown on double log scale
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('t [min]')
plt.ylabel('dd [m]')
plt.xscale('log')
plt.yscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    plt.plot(data[I, -2], data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Drawdown on double log scale using $t/r^2$ on x-axis
plt.title('Korendijk pumping test measured drawdowns')
plt.xlabel('$t/r^2$ [min/m$^2$]')
plt.ylabel('dd [m]')
plt.xscale('log')
#plt.yscale('log')
plt.grid()
for r in distances:
    I = data[:, 0] == r
    tr2 = data[I, -2] / r**2
    plt.plot(tr2, data[I, -1], '.-', label='r={:.0f} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Interpretation using the approximation of the Theis solution

$$ s = \frac {Q} {4 \pi kD} \ln \left( \frac {2.25 kD t} {r^2 S} \right) $$

or

$$ s = \frac {2.3 Q} {4 \pi kD} \log \left( \frac {2.25 kD t} {r^2 S} \right) $$

First determine the drawdown per log cycle from the graph: $\approx (1.1 - 0.21) / 3 \approx 0.30$ m, so

$$ \Delta s = s_{10t} - s_t = 0.30 = \frac {2.3 Q} {4 \pi kD} $$

Notice that it doesn't matter in what unit time is expressed, as it drops out when the drawdown at $10t$ is compared with that at $t$. Therefore, with $Q$ = 788 m$^3$/d, we get
Q = 788  # m3/d
ds = (1.1 - 0.21) / 3  # drawdown increase per log cycle of time (three log cycles in the graph)
kD = 2.3 * Q / (4 * np.pi * ds)
print('kD = {:.0f} m2/d'.format(kD))
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
For the storage coefficient, determine the intersection of the straight line with the line of zero drawdown. This is at $t/r^2 = 2 \times 10^{-4}$ min/m$^2$. We have to convert to days to get an answer consistent with the transmissivity. Then setting the argument of the logarithm equal to 1, so that the computed drawdown is 0, and using the already obtained transmissivity yields the storage coefficient: $S = 2.25\, kD\, (t/r^2)$.
tr2 = 2e-4 / (24 * 60)  # convert from min/m2 to d/m2
S = 2.25 * kD * tr2
print('S = {:.2e} [-]'.format(S))
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Clearly, the result depends somewhat on the exact straight line drawn through the bundle of curves for the observation wells. In the ideal situation, these curves fall onto each other. In this real-world case that is not true, which is due to non-uniformity of the real-world aquifer. There are many real-world pumping tests where the match is much closer. But it is good to keep in mind that the real world is less homogeneous than the analytic solution presumes.

Interpretation using the match on double log scales (classical method)

The classical interpretation plots the measured drawdowns on double log paper (drawdown $s$ versus $t/r^2$) and compares them with the Theis type curve ($W(u)$ versus $1/u$), also drawn on double log paper. Because $1/u = (4 kD t) / (r^2 S)$, it follows that on logarithmic scales $1/u$ and $t/r^2$ differ only by a constant factor, which represents a horizontal shift on the log scale. The drawdown $s$ differs from the well function $W(u)$ only by the constant factor $Q/(4 \pi kD)$, which implies a vertical shift on logarithmic scale. Hence the measured drawdown versus $t/r^2$ on double log scale looks exactly the same as the Theis type curve, only shifted a given distance along the horizontal axis and a given distance along the vertical axis. These two shifts yield the sought transmissivity and storage coefficient. Below we draw the Theis type curve together with the drawdown $s$ multiplied by a factor $A$ and $t/r^2$ multiplied by a factor $B$, choosing $A$ and $B$ interactively until the measured curve and the type curve match best. In this worked-out example, I already optimized the values of $A$ and $B$ by hand. Set them both to 1 and try optimizing them yourself.
A = 7
B = 1.0e7

u = np.logspace(-4, 1, 41)

plt.title('Type curve and $A \times s$ vs $B \times t/r^2$, with $A$={}, $B$={}'.format(A, B))
plt.xlabel('$1/u$ and $B \, t/r^2$')
plt.ylabel('W(u) and $A \, s$')
plt.xscale('log')
plt.yscale('log')
plt.grid()

# the Theis type curve
plt.plot(1/u, exp1(u), label='Theis')

# The measurements
for r in distances:
    I = data[:, 0] == r
    t = data[I, -2] / (24 * 60)
    s = data[I, -1]
    plt.plot(B * t / r**2, A * s, '.', label='$r$= {:.3g} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
So $A s = W(u)$ and $s = \frac{Q}{4 \pi kD} W(u)$, and therefore $A = \frac{4 \pi kD}{Q}$ and $kD = \frac{A Q}{4 \pi}$
kD = A * Q / (4 * np.pi)
print('kD = {:.0f} m2/d'.format(kD))
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
As one sees, the results obtained this way are consistent with those obtained by the previous method.

Directly optimizing $kD$ and $S$ instead of $A$ and $B$

The previous method was inspired by shifting the measurements drawn on double log paper over the Theis type curve, also drawn on double log paper. However, because we now have a computer, we could just as well directly optimize $kD$ and $S$ by trial and error to find the best match between type curve and measurements. It may then be most convenient to leave the type curve as it is and compute $W(u) = \frac{s}{Q/(4 \pi kD)}$ and $\frac{1}{u} = \frac{4 kD t}{r^2 S}$. This is done next.
kD = 450
S = 0.0002

u = np.logspace(-4, 1, 41)

plt.title('Direct comparison between computed and measured drawdown, $kD$={:.0f} m$^2$/d, $S$={:.3e} [-]'.format(kD, S))
plt.xlabel('$1/u$')
plt.ylabel('$W[u]$')
plt.xscale('log')
plt.yscale('log')
plt.grid()

# the Theis type curve
plt.plot(1/u, exp1(u), label='Theis')

# The measurements
for r in distances:
    I = data[:, 0] == r
    t = data[I, -2] / (24 * 60)
    s = data[I, -1]
    plt.plot(4 * kD * t / (S * r**2), s / (Q / (4 * np.pi * kD)), '.', label='$r$= {:.3g} m'.format(r))
plt.legend()
plt.show()
Syllabus_in_notebooks/Sec6_5_Korendijk-pumptest-Theis.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
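The trial-and-error match described above can also be automated with a least-squares fit, since scipy provides the Theis well function as exp1. This sketch fits kD and S to synthetic drawdowns generated from known parameters; it does not use the real Korendijk data file, and the starting values and bounds are illustrative choices.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q = 788.0  # pumping rate [m3/d], as in the example above

def theis(tr2, kD, S):
    """Theis drawdown as a function of t/r^2 [d/m2]."""
    u = S / (4.0 * kD * tr2)
    return Q / (4.0 * np.pi * kD) * exp1(u)

# Synthetic "measurements" generated from known parameters,
# standing in for the real observation-well data
kD_true, S_true = 450.0, 2.0e-4
tr2 = np.logspace(-7, -3, 50)
s_obs = theis(tr2, kD_true, S_true)

# Fit kD and S; bounds keep the optimizer away from unphysical values
(kD_fit, S_fit), _ = curve_fit(theis, tr2, s_obs, p0=(100.0, 1e-3),
                               bounds=([1.0, 1e-6], [1e4, 1e-1]))
print(kD_fit, S_fit)
```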
Data Source

Chicago publishes its crime data in a massive 1.4 GB csv. Here's a small sample.
sample = pd.read_csv('clearn/data/fixtures/tinyCrimeSample.csv')
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
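Since the full file is 1.4 GB, one common way to tame it is to stream it with pandas' chunksize instead of loading everything at once. The tiny in-memory CSV below is a stand-in for the real crime file, and the row-counting function is just an illustration of the pattern.

```python
import io
import pandas as pd

def count_rows(csv_source, chunksize=2):
    """Count rows by streaming the CSV in fixed-size chunks,
    keeping memory use bounded regardless of file size."""
    total = 0
    for chunk in pd.read_csv(csv_source, chunksize=chunksize):
        total += len(chunk)
    return total

# Tiny stand-in for the real 1.4 GB crime file
csv_text = "ID,Primary Type\n1,THEFT\n2,BATTERY\n3,ASSAULT\n"
print(count_rows(io.StringIO(csv_text)))
```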
Data Format

Lots of features. And lots of possible discrete values.
sample
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Cleaning up the Crimes

We wrote a munge module to tame the data.
from clearn import munge
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Bin, drop, and reindex

Bin crimes into 4 categories. Convert numbers to community area names. Turn the timestamp string into a pandas time series index.
munge.make_clean_timestamps(sample)
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Group by community area and resample by day

For each community area, create a series of summaries of each day's criminal activity from 2001 to the present.
every_community_area = munge.get_master_dict()
where_wills_sister_lives = every_community_area['Edgewater']
where_wills_sister_lives[-5:]
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Extra preprocessing for each model For nonsequential prediction, we added history to each day.
from clearn.predict import NonsequentialPredictor

with_history = NonsequentialPredictor.preprocess(every_community_area)
with_history['Edgewater'][-5:]
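"Adding history" can be pictured as appending lagged copies of each day's counts as feature columns, so that a nonsequential classifier sees the recent past as ordinary inputs. A toy sketch (the series name and lag count are hypothetical, not clearn's exact choices):

```python
import pandas as pd

# toy daily counts for one area
s = pd.Series([3, 1, 4, 1, 5], name="Violent Crime",
              index=pd.date_range("2015-03-30", periods=5))
hist_df = s.to_frame()

# lagged copies of the series become features for each day
for lag in (1, 2):
    hist_df[f"lag_{lag}"] = s.shift(lag)

# the first `lag` days have no complete history and are dropped
print(hist_df.dropna()["lag_1"].tolist())  # [1.0, 4.0, 1.0]
```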
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Let's predict crime!
from datetime import date

log_reg_predictor = NonsequentialPredictor(with_history['Edgewater'])
log_reg_predictor.predict(date(2015, 4, 3))
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Which algorithm performs best?
from clearn.evaluate import evaluate

# Generate a sample of 2500 days to predict
evaluate(2500)
clearn/notebooks/CS1675_Presentation.ipynb
chi-learn/chi-learn
mit
Persistent random walk model

In this example, we choose to model the price evolution of SPY as a simple, well-known random walk model: the auto-regressive process of first order. We assume that subsequent log-return values $r_t$ of SPY obey the following recursive instruction:

$$ r_t = \rho_t \cdot r_{t-1} + \sqrt{1-\rho_t^2} \cdot \sigma_t \cdot \epsilon_t $$

with the time-varying correlation coefficient $\rho_t$ and the time-varying volatility parameter $\sigma_t$. Here, $\epsilon_t$ is drawn from a standard normal distribution and represents the driving noise of the process, and the scaling factor $\sqrt{1-\rho_t^2}$ makes sure that $\sigma_t$ is the standard deviation of the process. In bayesloop, we define this observation model as follows:

    bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 0.006, 400))

This implementation of the correlated random walk model will be discussed in detail in the next section. Looking at the log-returns of our example, we find that the magnitude of the fluctuations (i.e. the volatility) is higher after market open and before market close. While these variations happen quite gradually, a large peak around 10:30am (and possibly another one around 12:30pm) represents an abrupt price correction.
logPrices = np.log(prices)
logReturns = np.diff(logPrices)

plt.figure(figsize=(8, 2))
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xlabel('Nov 28, 2016')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0.0005, 0, 0.0005, 0.001])
plt.xlim([0, 390]);
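To build intuition for the observation model, we can simulate the process for fixed $\rho$ and $\sigma$: thanks to the $\sqrt{1-\rho^2}$ scaling factor, the stationary standard deviation of the simulated returns equals $\sigma$. This is a stand-alone sketch, not part of the bayesloop API.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scaled_ar1(rho, sigma, n):
    """Simulate r_t = rho*r_{t-1} + sqrt(1-rho**2)*sigma*eps_t with constant rho, sigma."""
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = rho * r[t-1] + np.sqrt(1 - rho**2) * sigma * rng.standard_normal()
    return r

r_sim = simulate_scaled_ar1(rho=-0.2, sigma=3e-4, n=100000)
print(r_sim.std())  # close to sigma = 3e-4, since sqrt(1-rho^2) rescales the noise
```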
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Online study

bayesloop provides the class OnlineStudy to analyze on-going data streams and perform model selection for each new data point. In contrast to other Study types, more than just one transition model can be assigned to an OnlineStudy instance, using the method addTransitionModel. Here, we choose to add two distinct scenarios:

- normal: Both volatility and correlation of subsequent returns are allowed to vary gradually over time, to account for periods with above-average trading activity after market open and before market close. This scenario therefore represents a smoothly running market.
- chaotic: This scenario assumes that we know nothing about the value of volatility or correlation. The probability that this scenario gets assigned in each minute therefore represents the probability that previously gathered knowledge about market dynamics cannot explain the last price movement.

By evaluating how likely the chaotic scenario explains each new minute close price of SPY compared to the normal scenario, we can identify specific events that lead to extreme fluctuations in intra-day trading.

First, we create a new instance of the OnlineStudy class and set the observation model introduced above. The keyword argument storeHistory is set to True, because we want to access the parameter estimates of all time steps afterwards, not only the estimates of the last time step.
S = bl.OnlineStudy(storeHistory=True)

L = bl.om.ScaledAR1('rho', bl.oint(-1, 1, 100), 'sigma', bl.oint(0, 0.006, 400))
S.set(L)
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> **Note:** While the parameter `rho` is naturally constrained to the interval ]-1, 1[, the parameter boundaries of `sigma` have to be specified by the user. Typically, one can review past log-return data and estimate the upper boundary as a multiple of the standard deviation of past data points. </div> Both scenarios for the dynamic market behavior are implemented via the method add. The normal case consists of a combined transition model that allows both volatility and correlation to perform a Gaussian random walk. As the standard deviation (magnitude) of the parameter fluctuations is a-priori unknown, we supply a wide range of possible values (bl.cint(0, 1.5e-01, 15) for rho, which corresponds to 15 equally spaced values within the closed interval [0, 0.15], and 50 equally spaced values within the interval [0, 1.5$\cdot$10$^{-4}$] for sigma). <div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> Since we have no prior assumptions about the standard deviations of the Gaussian random walks, we let *bayesloop* assign equal probability to all values. If one wants to analyze more than just one trading day, the (hyper-)parameter distributions from the end of one day can be used as the prior distribution for the next day! One might also want to suppress large variations of `rho` or `sigma` with an exponential prior, e.g.: </div> bl.tm.GaussianRandomWalk('s1', bl.cint(0, 1.5e-01, 15), target='rho', prior=stats.Exponential('expon', 1./3.0e-02)) The chaotic case is implemented by the transition model Independent. This model sets a flat prior distribution for the parameters volatility and correlation in each time step. This way, previous knowledge about the parameters is not used when analyzing a new data point.
T1 = bl.tm.CombinedTransitionModel(
    bl.tm.GaussianRandomWalk('s1', bl.cint(0, 1.5e-01, 15), target='rho'),
    bl.tm.GaussianRandomWalk('s2', bl.cint(0, 1.5e-04, 50), target='sigma')
)

T2 = bl.tm.Independent()

S.add('normal', T1)
S.add('chaotic', T2)
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Before any data points are passed to the study instance, we further provide prior probabilities for the two scenarios. We expect about one news announcement containing unexpected information per day and set a prior probability of $1/390$ for the chaotic scenario (one normal trading day consists of 390 trading minutes).
S.setTransitionModelPrior([389/390., 1/390.])
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Finally, we can supply log-return values to the study instance, data point by data point. We use the step method to infer new parameter estimates and the updated probabilities of the two scenarios. Note that in this example, we use a for loop to feed all data points to the algorithm because all data points are already available. In a real application of the OnlineStudy class, one can supply each new data point as it becomes available and analyze it in real-time.
for r in tqdm_notebook(logReturns):
    S.step(r)
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Volatility spikes

Before we analyze how the probability values of our two market scenarios change over time, we check whether the inferred temporal evolution of the time-varying parameters is realistic. Below, the log-returns are displayed together with the inferred marginal distribution (shaded red) and mean value (black line) of the volatility parameter, obtained with the study's plot method.
plt.figure(figsize=(8, 4.5))

# data plot
plt.subplot(211)
plt.plot(np.arange(1, 390), logReturns, c='r')
plt.ylabel('log-returns')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.yticks([-0.001, -0.0005, 0, 0.0005, 0.001])
plt.xlim([0, 390])

# parameter plot
plt.subplot(212)
S.plot('sigma', color='r')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.ylim([0, 0.00075])
plt.xlim([-2, 388]);
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Note that the volatility estimates of the first few trading minutes are not as accurate as later ones, as we initialize the algorithm with a non-informative prior distribution. One could of course provide a custom prior distribution as a more realistic starting point. Despite this fade-in period, the period of increased volatility after market open is captured nicely, as well as the (more subtle) increase in volatility during the last 45 minutes of the trading day. Large individual log-return values also result in volatility spikes (around 10:30am and, more subtly, around 12:30pm).

Islands of stability

The persistent random walk model not only provides information about the magnitude of price fluctuations, but also quantifies whether subsequent log-return values are correlated. A positive correlation coefficient indicates diverging price movements, as a price increase is more likely followed by another increase than by a decrease. In contrast, a negative correlation coefficient indicates islands of stability, i.e. trading periods during which prices do not diffuse randomly (as they would with a correlation coefficient of zero). Below, we plot the price evolution of SPY on November 28, together with the inferred marginal distribution (shaded blue) and the corresponding mean value (black line) of the time-varying correlation coefficient.
plt.figure(figsize=(8, 4.5))

# data plot
plt.subplot(211)
plt.plot(prices)
plt.ylabel('price [USD]')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390])

# parameter plot
plt.subplot(212)
S.plot('rho', color='#0000FF')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.ylim([-0.4, 0.4])
plt.xlim([-2, 388]);
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
As a correlation coefficient that deviates significantly from zero would be immediately exploitable to predict future price movements, we mostly find correlation values near zero (in accordance with the efficient market hypothesis). However, between 1:15pm and 2:15pm, we find a short period of negative correlation with a value around -0.2. During this period, subsequent price movements tend to cancel each other out, resulting in an unusually strong price stability.

Using the Parser sub-module of bayesloop, we can evaluate the probability that subsequent return values are negatively correlated. In the figure below, we tag all time steps with a probability for rho < 0 of 80% or higher and find that this indicator nicely identifies the period of increased market stability!

<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> **Note:** The arbitrary threshold of 80% for our market indicator is of course chosen with hindsight in this example. In a real application, more than one trading day of data needs to be analyzed to create robust indicators! </div>
# extract parameter grid values (rho) and corresponding prob. values (p)
rho, p = S.getParameterDistributions('rho')

# evaluate Prob.(rho < 0) for all time steps
P = bl.Parser(S)
p_neg_rho = np.array([P('rho < 0.', t=t, silent=True) for t in range(1, 389)])

# plotting
plt.figure(figsize=(8, 4.5))

plt.subplot(211)
plt.axhline(y=0.8, lw=1.5, c='g')
plt.plot(p_neg_rho, lw=1.5, c='k')
plt.fill_between(np.arange(len(p_neg_rho)), 0, p_neg_rho > 0.8, lw=0, facecolor='g', alpha=0.5)
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.ylabel('prob. of neg. corr.')
plt.xlim([-2, 388])

plt.subplot(212)
plt.plot(prices)
plt.fill_between(np.arange(2, len(p_neg_rho)+2), 220.2, 220.2 + (p_neg_rho > 0.8)*1.4, lw=0, facecolor='g', alpha=0.5)
plt.ylabel('price [USD]')
plt.xticks([30, 90, 150, 210, 270, 330, 390], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlim([0, 390])
plt.ylim([220.2, 221.6])
plt.xlabel('Nov 28, 2016');
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Automatic tuning

One major advantage of the OnlineStudy class is that it not only infers the time-varying parameters of the low-level correlated random walk (the observation model ScaledAR1), but further infers the magnitude (the standard deviation of the transition model GaussianRandomWalk) of the parameter fluctuations and thereby tunes the parameter dynamics as new data arrives. As we can see below (left sub-figures), both magnitudes - one for rho and one for sigma - start off at a high level. This is due to our choice of a uniform prior, assigning equal probability to all hyper-parameter values before seeing any data. Over time, the algorithm learns that the true parameter fluctuations are less severe than previously assumed and adjusts the hyper-parameters accordingly. This newly gained information, summarized in the hyper-parameter distributions of the last time step (right sub-figures), could then represent the prior distribution for the next trading day.
plt.figure(figsize=(8, 4.5))

plt.subplot(221)
S.plot('s1', color='green')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([-2, 388])
plt.ylim([0, 0.06])

plt.subplot(222)
S.plot('s1', t=388, facecolor='green', alpha=0.7)
plt.yticks([])
plt.xticks([0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08],
           ['0', '1', '2', '3', '4', '5', '6', '7', '8'])
plt.xlabel('s1 ($\cdot 10^{-2}$)')
plt.xlim([-0.005, 0.08])

plt.subplot(223)
S.plot('s2', color='green')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([-2, 388])
plt.ylim([0, 0.0001])

plt.subplot(224)
S.plot('s2', t=388, facecolor='green', alpha=0.7)
plt.yticks([])
plt.xticks([0, 0.00001, 0.00002, 0.00003], ['0', '1', '2', '3'])
plt.xlabel('s2 ($\cdot 10^{-5}$)')
plt.xlim([0, 0.00003])

plt.tight_layout()
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Real-time model selection

Finally, we investigate which of our two market scenarios - normal vs. chaotic - can explain the price movements best. Using the method plot('chaotic'), we obtain the probability values for the chaotic scenario compared to the normal scenario, with respect to all past data points:
plt.figure(figsize=(8, 2))
S.plot('chaotic', lw=2, c='k')
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
As expected, the probability that the chaotic scenario can explain all past log-return values at a given point in time quickly falls off to practically zero. Indeed, a correlated random walk with slowly changing volatility and correlation of subsequent returns is better suited to describe the price fluctuations of SPY in the majority of time steps. However, we may also ask for the probability that each individual log-return value is produced by either of the two market scenarios by using the keyword argument local=True:
plt.figure(figsize=(8, 2))
S.plot('chaotic', local=True, c='k', lw=2)
plt.xticks([28, 88, 148, 208, 268, 328, 388], ['10am', '11am', '12pm', '1pm', '2pm', '3pm', '4pm'])
plt.xlabel('Nov 28, 2016')
plt.xlim([0, 388])
plt.ylabel('p("chaotic")')

plt.axvline(58, 0, 1, zorder=1, c='r', lw=1.5, ls='dashed', alpha=0.7)
plt.axvline(178, 0, 1, zorder=1, c='r', lw=1.5, ls='dashed', alpha=0.7);
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
Here, we find clear peaks indicating an increased probability for the chaotic scenario, i.e. that previously gained information about the market dynamics has become useless. Let's assume that we are concerned about market behavior as soon as there is at least a 1% risk that normal market dynamics cannot describe the current price movement. This leaves us with three distinct events in the following time steps:
p = S.getTransitionModelProbabilities('chaotic', local=True)
np.argwhere(p > 0.01)
docs/source/examples/stockmarketfluctuations.ipynb
christophmark/bayesloop
mit
If you type the source code above into a .py file or a jupyter notebook and run it with Python, a file named "linear_algebra_basic_I.ipynb" will be created. You can open it with jupyter notebook, or move to the folder that contains the file in a console (cmd) and run:

jupyter notebook linear_algebra_basic_I.ipynb

linear_algebra_basic_I.py code structure

In this lab you will write 12 functions that perform basic operations on vectors and matrices. The purpose of each function is described below. The problems are fairly straightforward, so take them on with a smile.

Function name | Description
--------------------|--------------------------
vector_size_check | Checks whether vectors have compatible sizes for addition or subtraction, and returns True or False
vector_addition | Adds vectors and returns the result; the number and size of the input vectors are not fixed
vector_subtraction | Subtracts vectors and returns the result; the number and size of the input vectors are not fixed
scalar_vector_product | Multiplies a vector by a scalar value; the size of the input vector is not fixed
matrix_size_check | Checks whether matrices have compatible sizes for addition or subtraction, and returns True or False
is_matrix_equal | Checks whether the n matrices being compared are all equal, and returns True or False
matrix_addition | Adds matrices and returns the result; the number and size of the input matrices are not fixed
matrix_subtraction | Subtracts matrices and returns the result; the number and size of the input matrices are not fixed
matrix_transpose | Computes the transpose of a matrix and returns the result; the size of the input matrix is not fixed
scalar_matrix_product | Multiplies a matrix by a scalar value; the size of the input matrix is not fixed
is_product_availability_matrix | Given two matrices, checks whether their matrix product is defined, and returns True or False
matrix_product | Computes and returns the product of two matrices whose product is defined

Problem #1 - vector_size_check (one line code available)
def vector_size_check(*vector_variables):
    return None

# results
print(vector_size_check([1, 2, 3], [2, 3, 4], [5, 6, 7])) # Expected value: True
print(vector_size_check([1, 3], [2, 4], [6, 7])) # Expected value: True
print(vector_size_check([1, 3, 4], [4], [6, 7])) # Expected value: False
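All twelve functions receive their operands through Python's variadic `*args`. As a generic illustration of the pattern (not necessarily the graded one-liner), the sizes can be compared by collecting the lengths into a set:

```python
# *vectors collects any number of positional arguments into a tuple;
# a single-element set of lengths means all vectors have the same size
def all_same_length(*vectors):
    return len({len(v) for v in vectors}) == 1

print(all_same_length([1, 2, 3], [2, 3, 4], [5, 6, 7]))  # True
print(all_same_length([1, 3, 4], [4], [6, 7]))           # False
```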
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #2 - vector_addition (one line code available)

$$ \left[\begin{array}{r} a & b & c \end{array}\right] + \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} a+x & b+y & c+z \end{array}\right] $$
def vector_addition(*vector_variables):
    return None

# results
print(vector_addition([1, 3], [2, 4], [6, 7])) # Expected value: [9, 14]
print(vector_addition([1, 5], [10, 4], [4, 7])) # Expected value: [15, 16]
print(vector_addition([1, 3, 4], [4], [6, 7])) # Expected value: ArithmeticError
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #3 - vector_subtraction (one line code available)

$$ \left[\begin{array}{r} a & b & c \end{array}\right] - \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} a-x & b-y & c-z \end{array}\right] $$
def vector_subtraction(*vector_variables):
    if vector_size_check(*vector_variables) == False:
        raise ArithmeticError
    return None

# results
print(vector_subtraction([1, 3], [2, 4])) # Expected value: [-1, -1]
print(vector_subtraction([1, 5], [10, 4], [4, 7])) # Expected value: [-13, -6]
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #4 - scalar_vector_product (one line code available)

$$ \alpha \times \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} \alpha \times x & \alpha \times y & \alpha \times z \end{array}\right] $$
def scalar_vector_product(alpha, vector_variable):
    return None

# results
print(scalar_vector_product(5, [1, 2, 3])) # Expected value: [5, 10, 15]
print(scalar_vector_product(3, [2, 2])) # Expected value: [6, 6]
print(scalar_vector_product(4, [1])) # Expected value: [4]
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #5 - matrix_size_check (one line code available)
def matrix_size_check(*matrix_variables):
    return None

# results
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]

print(matrix_size_check(matrix_x, matrix_y, matrix_z)) # Expected value: False
print(matrix_size_check(matrix_y, matrix_z)) # Expected value: True
print(matrix_size_check(matrix_x, matrix_w)) # Expected value: True
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #6 - is_matrix_equal (one line code available)

if $x=a, y=b, z=c, w=d$ then

$$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] = \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] $$
def is_matrix_equal(*matrix_variables):
    return None

# results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]

print(is_matrix_equal(matrix_x, matrix_y, matrix_y, matrix_y)) # Expected value: False
print(is_matrix_equal(matrix_x, matrix_x)) # Expected value: True
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #7 - matrix_addition (one line code available)

$$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] + \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] = \left[\begin{array}{rr} x + a & y + b \\ z + c & w + d \end{array}\right] $$
def matrix_addition(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return None

# results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]

print(matrix_addition(matrix_x, matrix_y)) # Expected value: [[4, 7], [4, 3]]
print(matrix_addition(matrix_x, matrix_y, matrix_z)) # Expected value: [[6, 11], [9, 6]]
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #8 - matrix_subtraction (one line code available)

$$ \left[\begin{array}{rr} x & y \\ z & w \end{array}\right] - \left[\begin{array}{rr} a & b \\ c & d \end{array}\right] = \left[\begin{array}{rr} x - a & y - b \\ z - c & w - d \end{array}\right] $$
def matrix_subtraction(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return None

# results
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]

print(matrix_subtraction(matrix_x, matrix_y)) # Expected value: [[0, -3], [0, 1]]
print(matrix_subtraction(matrix_x, matrix_y, matrix_z)) # Expected value: [[-2, -7], [-5, -2]]
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #9 - matrix_transpose (one line code available)

Let $A = \left[\begin{array}{rr} a & b \\ c & d \\ e & f \end{array}\right]$. Then $A^T = \left[\begin{array}{rrr} a & c & e \\ b & d & f \end{array}\right]$
def matrix_transpose(matrix_variable):
    return None

# results
matrix_w = [[2, 5], [1, 1], [2, 2]]
matrix_transpose(matrix_w)
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #10 - scalar_matrix_product (one line code available)

$$ \alpha \times \left[\begin{array}{rrr} a & c & d \\ e & f & g \end{array}\right] = \left[\begin{array}{rrr} \alpha \times a & \alpha \times c & \alpha \times d \\ \alpha \times e & \alpha \times f & \alpha \times g \end{array}\right] $$
def scalar_matrix_product(alpha, matrix_variable):
    return None

# results
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]

print(scalar_matrix_product(3, matrix_x)) # Expected value: [[6, 6], [6, 6], [6, 6]]
print(scalar_matrix_product(2, matrix_y)) # Expected value: [[4, 10], [4, 2]]
print(scalar_matrix_product(4, matrix_z)) # Expected value: [[8, 16], [20, 12]]
print(scalar_matrix_product(3, matrix_w)) # Expected value: [[6, 15], [3, 3], [6, 6]]
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #11 - is_product_availability_matrix (one line code available)

The matrix product of $A$ and $B$ (written $AB$) is defined if and only if the number of columns in $A$ equals the number of rows in $B$.
def is_product_availability_matrix(matrix_a, matrix_b):
    return None

# results
matrix_x = [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]

print(is_product_availability_matrix(matrix_y, matrix_z)) # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_x)) # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_w)) # Expected value: False (note: matrix_w is not defined in this cell)
print(is_product_availability_matrix(matrix_x, matrix_x)) # Expected value: True
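The iff-condition above translates directly into a length comparison on nested lists. An illustrative check (again, not necessarily the intended graded one-liner):

```python
# columns of A == rows of B, for matrices stored as lists of row lists
def product_defined(mat_a, mat_b):
    return len(mat_a[0]) == len(mat_b)

ma = [[1, 1, 2], [2, 1, 1]]       # 2 x 3
mb = [[2, 4], [5, 3], [1, 3]]     # 3 x 2
print(product_defined(ma, mb), product_defined(mb, ma))  # True True
print(product_defined(ma, ma))    # False: 3 columns vs. 2 rows
```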
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Problem #12 - matrix_product (one line code available)
def matrix_product(matrix_a, matrix_b):
    if is_product_availability_matrix(matrix_a, matrix_b) == False:
        raise ArithmeticError
    return None

# results
matrix_x = [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]

print(matrix_product(matrix_y, matrix_z)) # Expected value: [[9, 13], [10, 14]]
print(matrix_product(matrix_z, matrix_x)) # Expected value: [[8, 14], [13, 28], [5, 8]]
print(matrix_product(matrix_x, matrix_x)) # Expected value: [[9, 15], [3, 6]]
print(matrix_product(matrix_z, matrix_w)) # Expected value: False (note: matrix_w is not defined in this cell)
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Submitting your results

If the homework is submitted without problems, all of the results below will be shown as PASS.
import gachon_autograder_client as g_autograder

THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "linear_algebra_basic_I.ipynb"

g_autograder.submit_assignment(THE_TEMLABIO_ID, PASSWORD, ASSIGNMENT_FILE_NAME)
assignment/ps1/linear_algebra_basic_I.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Dataset

We use the familiar MNIST handwritten digit dataset to train our CGAN. As before, we provide a simplified version of the dataset to speed up training. Unlike last time, this dataset contains all 10 classes of handwritten digits (0 through 9), with 200 images per class, i.e. 2000 images in total. The images are again 28*28 single-channel grayscale images (which we resize to 32*32). The code for loading the mnist dataset is given below.
def load_mnist_data():
    """
    load the mnist (digits 0-9) dataset
    """
    transform = torchvision.transforms.Compose([
        # transform to 1-channel gray image since we read images in RGB mode
        transforms.Grayscale(1),
        # resize image from 28 * 28 to 32 * 32
        transforms.Resize(32),
        transforms.ToTensor(),
        # normalize with mean=0.5 std=0.5
        transforms.Normalize(mean=(0.5, ), std=(0.5, ))
    ])

    train_dataset = torchvision.datasets.ImageFolder(root='./data/mnist', transform=transform)
    return train_dataset
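The Normalize(mean=0.5, std=0.5) step maps pixel values from [0, 1] to [-1, 1], which matches the tanh output range typically used by DCGAN generators. A quick plain-Python check of the affine map (and its inverse, used by the `denorm` helper below):

```python
# (x - mean) / std with mean = std = 0.5 maps [0, 1] -> [-1, 1]
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

def denormalize(x):
    return (x + 1) / 2   # inverse map, [-1, 1] -> [0, 1]

print([normalize(v) for v in (0.0, 0.5, 1.0)])        # [-1.0, 0.0, 1.0]
print([denormalize(normalize(v)) for v in (0.0, 1.0)]) # [0.0, 1.0]
```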
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Next, let's take a look at the real handwritten digit data for each class. (Just run the next two cells; there is no need to understand the code.)
def denorm(x):
    # denormalize: map values from [-1, 1] back to [0, 1]
    out = (x + 1) / 2
    return out.clamp(0, 1)

from utils import show

# show mnist real data (you can skip over the details of this cell)
train_dataset = load_mnist_data()
images = []
for j in range(5):
    for i in range(10):
        images.append(train_dataset[i * 200 + j][0])
show(torchvision.utils.make_grid(denorm(torch.stack(images)), nrow=10))
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The training code is similar to before. The difference is that, based on the class label, we need to generate y_vec (a one-hot vector, e.g. class 2 corresponds to [0,0,1,0,0,0,0,0,0,0]) and y_fill (y_vec expanded to size (class_num, image_size, image_size), where the channel of the correct class is filled with 1 and all other channels are filled with 0), which are fed to G and D respectively as conditioning variables. The rest of the training process is the same as for an ordinary GAN. We can first generate vecs and fills for every class label.
# class number
class_num = 10
# image size and channel
image_size = 32
image_channel = 1

# vecs: one-hot vectors of size(class_num, class_num)
# fills: vecs expanded to size(class_num, class_num, image_size, image_size)
vecs = torch.eye(class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)
print(vecs)
print(fills)

def train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, z_dim, class_num):
    """
    train a GAN with model G and D in one epoch
    Args:
        trainloader: data loader to train
        G: model Generator
        D: model Discriminator
        G_optimizer: optimizer of G (e.g. Adam, SGD)
        D_optimizer: optimizer of D (e.g. Adam, SGD)
        loss_func: Binary Cross Entropy (BCE) or MSE loss function
        device: cpu or cuda device
        z_dim: the dimension of random noise z
    """
    # set train mode
    D.train()
    G.train()

    D_total_loss = 0
    G_total_loss = 0

    for i, (x, y) in enumerate(trainloader):
        x = x.to(device)
        batch_size_ = x.size(0)
        image_size = x.size(2)

        # real label and fake label
        real_label = torch.ones(batch_size_, 1).to(device)
        fake_label = torch.zeros(batch_size_, 1).to(device)

        # y_vec: (batch_size, class_num) one-hot vector, for example, [0,0,0,0,1,0,0,0,0,0] (label: 4)
        y_vec = vecs[y.long()].to(device)
        # y_fill: (batch_size, class_num, image_size, image_size)
        #         the i-th channel is filled with 1, and the others are filled with 0.
        y_fill = fills[y.long()].to(device)

        z = torch.rand(batch_size_, z_dim).to(device)

        # update D network
        # D optimizer zero grads
        D_optimizer.zero_grad()

        # D real loss from real images
        d_real = D(x, y_fill)
        d_real_loss = loss_func(d_real, real_label)

        # D fake loss from fake images generated by G
        g_z = G(z, y_vec)
        d_fake = D(g_z, y_fill)
        d_fake_loss = loss_func(d_fake, fake_label)

        # D backward and step
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        D_optimizer.step()

        # update G network
        # G optimizer zero grads
        G_optimizer.zero_grad()

        # G loss
        g_z = G(z, y_vec)
        d_fake = D(g_z, y_fill)
        g_loss = loss_func(d_fake, real_label)

        # G backward and step
        g_loss.backward()
        G_optimizer.step()

        D_total_loss += d_loss.item()
        G_total_loss += g_loss.item()

    return D_total_loss / len(trainloader), G_total_loss / len(trainloader)
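The vecs/fills construction can be mirrored in NumPy to make the shapes concrete. This is an illustration only (the training above uses the torch tensors), with small sizes and separate `demo_` names so the real `vecs`/`fills` are untouched:

```python
import numpy as np

n_classes, img_sz = 3, 4
demo_vecs = np.eye(n_classes)
# broadcast each one-hot vector over an img_sz x img_sz grid of "label channels"
demo_fills = np.broadcast_to(demo_vecs[:, :, None, None],
                             (n_classes, n_classes, img_sz, img_sz))

y_demo = np.array([2, 0])                 # a mini-batch of labels
print(demo_vecs[y_demo][1].tolist())      # [1.0, 0.0, 0.0]  -- one-hot for label 0
print(demo_fills[y_demo].shape)           # (2, 3, 4, 4)
print(demo_fills[2, 2].min(), demo_fills[2, 1].max())  # channel 2 all ones, others all zeros
```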
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The code for visualize_results and run_gan is not described in detail again.
def visualize_results(G, device, z_dim, class_num, class_result_size=5):
    G.eval()

    z = torch.rand(class_num * class_result_size, z_dim).to(device)
    y = torch.LongTensor([i for i in range(class_num)] * class_result_size)
    y_vec = vecs[y.long()].to(device)

    g_z = G(z, y_vec)
    show(torchvision.utils.make_grid(denorm(g_z.detach().cpu()), nrow=class_num))

def run_gan(trainloader, G, D, G_optimizer, D_optimizer, loss_func, n_epochs, device, latent_dim, class_num):
    d_loss_hist = []
    g_loss_hist = []

    for epoch in range(n_epochs):
        d_loss, g_loss = train(trainloader, G, D, G_optimizer, D_optimizer, loss_func,
                               device, latent_dim, class_num)
        print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))

        d_loss_hist.append(d_loss)
        g_loss_hist.append(g_loss)

        if epoch == 0 or (epoch + 1) % 10 == 0:
            visualize_results(G, device, latent_dim, class_num)

    return d_loss_hist, g_loss_hist
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Now let's try training our CGAN.
# hyper params
# z dim
latent_dim = 100
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 120
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)

# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)
print(D)
print(G)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
                                   n_epochs, device, latent_dim, class_num)

from utils import loss_plot
loss_plot(d_loss_hist, g_loss_hist)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Exercises

1. In D, the input image and the labels can be passed through two different convolutional layers and then concatenated along dimension 1 (the channel dimension) before being fed into the rest of the network. Part of the network structure has already been written in DCDiscriminator1; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what differences do you observe?
class DCDiscriminator1(nn.Module): def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True): super().__init__() self.image_size = image_size self.input_channel = input_channel self.class_num = class_num self.fc_size = image_size // 8 # model : img -> conv1_1 # labels -> conv1_2 # (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU # Conv2d(3,2,1) -> BN -> LeakyReLU self.conv1_1 = nn.Sequential(nn.Conv2d(input_channel, 64, 3, 2, 1), nn.BatchNorm2d(64)) self.conv1_2 = nn.Sequential(nn.Conv2d(class_num, 64, 3, 2, 1), nn.BatchNorm2d(64)) self.conv = nn.Sequential( nn.LeakyReLU(0.2), nn.Conv2d(128, 256, 3, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2), nn.Conv2d(256, 512, 3, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2), ) # fc: Linear -> Sigmoid self.fc = nn.Sequential( nn.Linear(512 * self.fc_size * self.fc_size, 1), ) if sigmoid: self.fc.add_module('sigmoid', nn.Sigmoid()) initialize_weights(self) def forward(self, img, labels): """ img : input image labels : (batch_size, class_num, image_size, image_size) the i-th channel is filled with 1, and others is filled with 0. 
""" """ To Do """ input_img = self.conv1_1(img) input_labels = self.conv1_2(labels) input_ = torch.cat((input_img, input_labels), dim=1) out = self.conv(input_) out = out.view(out.shape[0], -1) out = self.fc(out) return out # hyper params # device : cpu or cuda:0/1/2/3 device = torch.device('cuda:2') # G and D model G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num) D = DCDiscriminator1(image_size=image_size, input_channel=image_channel, class_num=class_num) G.to(device) D.to(device) # G and D optimizer, use Adam or SGD G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas) D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas) d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, n_epochs, device, latent_dim, class_num) loss_plot(d_loss_hist, g_loss_hist)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: Comparing the loss curves of the two training runs, we can see that after adding separate convolutions for the image and the labels, G's loss stays within a stable range, whereas in the network without this extra convolution, G's loss starts low and then gradually rises. Judging from the loss curves, G changes more in the first run; hence the second run yields a better generator. Comparing the output images also makes it clear that the second run's results are better than the first's.

2. In D, the input image can be passed through one convolutional layer and then concatenated along dimension 1 (the channel dimension) with the labels (whose spatial size matches the input image), before being fed into the rest of the network. Part of the network structure is already written in DCDiscriminator; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what differences do you observe?
class DCDiscriminator2(nn.Module): def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True): super().__init__() self.image_size = image_size self.input_channel = input_channel self.class_num = class_num self.fc_size = image_size // 8 # model : img -> conv1 # labels -> maxpool # (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU # Conv2d(3,2,1) -> BN -> LeakyReLU self.conv1 = nn.Sequential(nn.Conv2d(input_channel, 128, 3, 2, 1), nn.BatchNorm2d(128)) self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2) self.conv = nn.Sequential( nn.LeakyReLU(0.2), nn.Conv2d(128 + class_num, 256, 3, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2), nn.Conv2d(256, 512, 3, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2), ) # fc: Linear -> Sigmoid self.fc = nn.Sequential( nn.Linear(512 * self.fc_size * self.fc_size, 1), ) if sigmoid: self.fc.add_module('sigmoid', nn.Sigmoid()) initialize_weights(self) def forward(self, img, labels): """ img : input image labels : (batch_size, class_num, image_size, image_size) the i-th channel is filled with 1, and others is filled with 0. """ """ To Do """ input_img = self.conv1(img) input_labels = self.maxpool(labels) input_ = torch.cat((input_img, input_labels), dim=1) out = self.conv(input_) out = out.view(out.shape[0], -1) out = self.fc(out) return out # hyper params # device : cpu or cuda:0/1/2/3 device = torch.device('cuda:2') # G and D model G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num) D = DCDiscriminator2(image_size=image_size, input_channel=image_channel, class_num=class_num) G.to(device) D.to(device) # G and D optimizer, use Adam or SGD G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas) D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas) d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, n_epochs, device, latent_dim, class_num) loss_plot(d_loss_hist, g_loss_hist)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: The generator's loss curve varies less than in the previous two runs, which suggests the resulting G is stronger than the ones obtained before. However, in the final output images the visual quality is noticeably worse than in the previous two runs; this is probably because the epoch chosen for the output happened to be one where G performed poorly.

3. If the input class labels are not represented as one-hot vectors, but instead we first generate a random vector for each class and use that vector as the class label, would the results change? Run the code below, compare with the previous results, and describe the differences.
vecs = torch.randn(class_num, class_num) fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size) print(vecs) print(fills) # hyper params # device : cpu or cuda:0/1/2/3 device = torch.device('cuda:2') # G and D model G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num) D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num) G.to(device) D.to(device) # G and D optimizer, use Adam or SGD G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas) D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas) d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, n_epochs, device, latent_dim, class_num) loss_plot(d_loss_hist, g_loss_hist)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: The network's results are rather poor. Because the class labels are randomly generated, the fake images produced by the generator are easily identified as fake by the discriminator. In the loss curves, G's loss rises quickly and does not settle within a fixed range, while D's loss tends to decrease; this generative network clearly performs worse than the three trained previously. The likely reason is that the relation between the generator's output and the class labels is highly random, so the discriminator finds it easier to flag the image as fake; the generator therefore receives weaker training signal and ends up weaker than in the previous three runs.

Image-to-image translation

Next we introduce pix2pix, a model that uses a CGAN for image-to-image translation.
import os import numpy as np import math import itertools import time import datetime import sys import torchvision import torchvision.transforms as transforms from torch.utils.data import DataLoader from torchvision import datasets import torch.nn as nn import torch.nn.functional as F import torch
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
This experiment uses the Facades dataset. Because of how the dataset is organized, each image contains two parts, as shown below: the left half is the ground truth and the right half is the outline. We therefore need to rewrite the dataset-reading class; the cell below does exactly that. In the end, our model should generate the building on the left from the outline on the right. (You may skip reading this.) The dataset code follows.
import glob import random import os import numpy as np from torch.utils.data import Dataset from PIL import Image import torchvision.transforms as transforms class ImageDataset(Dataset): def __init__(self, root, transforms_=None, mode="train"): self.transform = transforms_ # read image self.files = sorted(glob.glob(os.path.join(root, mode) + "/*.*")) def __getitem__(self, index): # crop image,the left half if groundtruth image, and the right half is outline of groundtruth. img = Image.open(self.files[index % len(self.files)]) w, h = img.size img_B = img.crop((0, 0, w / 2, h)) img_A = img.crop((w / 2, 0, w, h)) if np.random.random() < 0.5: # revese the image by 50% img_A = Image.fromarray(np.array(img_A)[:, ::-1, :], "RGB") img_B = Image.fromarray(np.array(img_B)[:, ::-1, :], "RGB") img_A = self.transform(img_A) img_B = self.transform(img_B) return {"A": img_A, "B": img_B} def __len__(self): return len(self.files)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The generator G is an encoder-decoder model borrowing the U-Net structure: the i-th layer is concatenated to the (n-i)-th layer, which works because the feature maps at layers i and n-i have the same spatial size. The discriminator D in pix2pix is implemented as a patch discriminator (PatchGAN): no matter how large the generated image is, it is split into multiple fixed-size patches which are fed into D for judgment.
import torch.nn as nn import torch.nn.functional as F import torch ############################## # U-NET ############################## class UNetDown(nn.Module): def __init__(self, in_size, out_size, normalize=True, dropout=0.0): super(UNetDown, self).__init__() layers = [nn.Conv2d(in_size, out_size, 4, 2, 1, bias=False)] if normalize: # when baych-size is 1, BN is replaced by instance normalization layers.append(nn.InstanceNorm2d(out_size)) layers.append(nn.LeakyReLU(0.2)) if dropout: layers.append(nn.Dropout(dropout)) self.model = nn.Sequential(*layers) def forward(self, x): return self.model(x) class UNetUp(nn.Module): def __init__(self, in_size, out_size, dropout=0.0): super(UNetUp, self).__init__() layers = [ nn.ConvTranspose2d(in_size, out_size, 4, 2, 1, bias=False), # when baych-size is 1, BN is replaced by instance normalization nn.InstanceNorm2d(out_size), nn.ReLU(inplace=True), ] if dropout: layers.append(nn.Dropout(dropout)) self.model = nn.Sequential(*layers) def forward(self, x, skip_input): x = self.model(x) x = torch.cat((x, skip_input), 1) return x class GeneratorUNet(nn.Module): def __init__(self, in_channels=3, out_channels=3): super(GeneratorUNet, self).__init__() self.down1 = UNetDown(in_channels, 64, normalize=False) self.down2 = UNetDown(64, 128) self.down3 = UNetDown(128, 256) self.down4 = UNetDown(256, 256, dropout=0.5) self.down5 = UNetDown(256, 256, dropout=0.5) self.down6 = UNetDown(256, 256, normalize=False, dropout=0.5) self.up1 = UNetUp(256, 256, dropout=0.5) self.up2 = UNetUp(512, 256) self.up3 = UNetUp(512, 256) self.up4 = UNetUp(512, 128) self.up5 = UNetUp(256, 64) self.final = nn.Sequential( nn.Upsample(scale_factor=2), nn.ZeroPad2d((1, 0, 1, 0)), nn.Conv2d(128, out_channels, 4, padding=1), nn.Tanh(), ) def forward(self, x): # U-Net generator with skip connections from encoder to decoder d1 = self.down1(x)# 32x32 d2 = self.down2(d1)#16x16 d3 = self.down3(d2)#8x8 d4 = self.down4(d3)#4x4 d5 = self.down5(d4)#2x2 d6 = 
self.down6(d5)#1x1 u1 = self.up1(d6, d5)#2x2 u2 = self.up2(u1, d4)#4x4 u3 = self.up3(u2, d3)#8x8 u4 = self.up4(u3, d2)#16x16 u5 = self.up5(u4, d1)#32x32 return self.final(u5)#64x64 ############################## # Discriminator ############################## class Discriminator(nn.Module): def __init__(self, in_channels=3): super(Discriminator, self).__init__() def discriminator_block(in_filters, out_filters, normalization=True): """Returns downsampling layers of each discriminator block""" layers = [nn.Conv2d(in_filters, out_filters, 4, stride=2, padding=1)] if normalization: # when baych-size is 1, BN is replaced by instance normalization layers.append(nn.InstanceNorm2d(out_filters)) layers.append(nn.LeakyReLU(0.2, inplace=True)) return layers self.model = nn.Sequential( *discriminator_block(in_channels * 2, 64, normalization=False),#32x32 *discriminator_block(64, 128),#16x16 *discriminator_block(128, 256),#8x8 *discriminator_block(256, 256),#4x4 nn.ZeroPad2d((1, 0, 1, 0)), nn.Conv2d(256, 1, 4, padding=1, bias=False)#4x4 ) def forward(self, img_A, img_B): # Concatenate image and condition image by channels to produce input img_input = torch.cat((img_A, img_B), 1) return self.model(img_input)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
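The shape bookkeeping behind the U-Net skip connections described above can be sketched with plain numpy (a minimal illustration, not the actual torch code; the feature-map sizes below are hypothetical):

```python
import numpy as np

# Hypothetical feature maps in (batch, channels, H, W) layout.
# The down path halves H and W at each step and the up path doubles them back,
# so the i-th encoder feature and the (n-i)-th decoder feature share spatial size.
down_i = np.zeros((1, 128, 16, 16))   # encoder feature at level i
up_ni = np.zeros((1, 128, 16, 16))    # decoder feature at level n-i

# Skip connection: concatenate along the channel axis (axis=1),
# mirroring torch.cat((x, skip_input), 1) in UNetUp.forward.
merged = np.concatenate((up_ni, down_i), axis=1)
print(merged.shape)  # channels add, spatial size unchanged
```

This is why `UNetUp` layers after the first take twice as many input channels as the corresponding `UNetDown` produces.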
(You may skip reading this.) The function below saves the outline, the generated image, and the ground truth side by side for comparison.
from utils import show def sample_images(dataloader, G, device): """Saves a generated sample from the validation set""" imgs = next(iter(dataloader)) real_A = imgs["A"].to(device) real_B = imgs["B"].to(device) fake_B = G(real_A) img_sample = torch.cat((real_A.data, fake_B.data, real_B.data), -2) show(torchvision.utils.make_grid(img_sample.cpu().data, nrow=5, normalize=True))
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Next, we define some hyperparameters, including lambda_pixel.
# hyper param n_epochs = 200 batch_size = 2 lr = 0.0002 img_size = 64 channels = 3 device = torch.device('cuda:2') betas = (0.5, 0.999) # Loss weight of L1 pixel-wise loss between translated image and real image lambda_pixel = 1
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
The pix2pix loss function combines the CGAN loss with an L1 loss, where the L1 term is weighted by a coefficient lambda that balances the two. Here we define the loss functions and the optimizers; MSELoss is used as the GAN loss (LSGAN).
from utils import weights_init_normal # Loss functions criterion_GAN = torch.nn.MSELoss().to(device) criterion_pixelwise = torch.nn.L1Loss().to(device) # Calculate output of image discriminator (PatchGAN) patch = (1, img_size // 16, img_size // 16) # Initialize generator and discriminator G = GeneratorUNet().to(device) D = Discriminator().to(device) G.apply(weights_init_normal) D.apply(weights_init_normal) optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas) optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas) # Configure dataloaders transforms_ = transforms.Compose([ transforms.Resize((img_size, img_size), Image.BICUBIC), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) dataloader = DataLoader( ImageDataset("./data/facades", transforms_=transforms_), batch_size=batch_size, shuffle=True, num_workers=8, ) val_dataloader = DataLoader( ImageDataset("./data/facades", transforms_=transforms_, mode="val"), batch_size=10, shuffle=True, num_workers=1, )
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
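The generator objective described above (adversarial MSE loss plus lambda-weighted L1 pixel loss) can be sketched numerically with numpy stand-ins for the torch tensors; the values below are hypothetical:

```python
import numpy as np

# Hypothetical patch scores from D for generated images, and pixel values.
pred_fake = np.array([0.3, 0.6])       # D's scores on fake images
real_label = np.ones_like(pred_fake)   # the generator wants these judged "real"
fake_B = np.array([0.2, 0.8])          # generated pixels
real_B = np.array([0.25, 0.7])         # ground-truth pixels

lambda_pixel = 1.0
loss_GAN = np.mean((pred_fake - real_label) ** 2)   # MSE, as in LSGAN
loss_pixel = np.mean(np.abs(fake_B - real_B))       # L1 pixel-wise loss
loss_G = loss_GAN + lambda_pixel * loss_pixel
print(loss_G)
```

Raising `lambda_pixel` pushes the generator toward pixel-accurate but blurrier outputs; lowering it favors the adversarial term.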
Now we train pix2pix. The procedure is: first train G: for each image A (outline), use G to generate fakeB (building), compute the L1 loss between fakeB and realB (ground truth), and at the same time have D judge (fakeB, A) and compute the MSE loss (with label 1); update G using both losses together. Then train D: compute the MSE loss on (fakeB, A) and (realB, A) (label 0 for the former, 1 for the latter) and update D.
for epoch in range(n_epochs): for i, batch in enumerate(dataloader): # G:B -> A real_A = batch["A"].to(device) real_B = batch["B"].to(device) # Adversarial ground truths real_label = torch.ones((real_A.size(0), *patch)).to(device) fake_label = torch.zeros((real_A.size(0), *patch)).to(device) # ------------------ # Train Generators # ------------------ optimizer_G.zero_grad() # GAN loss fake_B = G(real_A) pred_fake = D(fake_B, real_A) loss_GAN = criterion_GAN(pred_fake, real_label) # Pixel-wise loss loss_pixel = criterion_pixelwise(fake_B, real_B) # Total loss loss_G = loss_GAN + lambda_pixel * loss_pixel loss_G.backward() optimizer_G.step() # --------------------- # Train Discriminator # --------------------- optimizer_D.zero_grad() # Real loss pred_real = D(real_B, real_A) loss_real = criterion_GAN(pred_real, real_label) # Fake loss pred_fake = D(fake_B.detach(), real_A) loss_fake = criterion_GAN(pred_fake, fake_label) # Total loss loss_D = 0.5 * (loss_real + loss_fake) loss_D.backward() optimizer_D.step() # Print log print( "\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f, pixel: %f, adv: %f]" % ( epoch, n_epochs, i, len(dataloader), loss_D.item(), loss_G.item(), loss_pixel.item(), loss_GAN.item(), ) ) # If at sample interval save image if epoch == 0 or (epoch + 1) % 5 == 0: sample_images(val_dataloader, G, device)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Homework: Train pix2pix using only the L1 loss. What differences do you observe in the results?
# Loss functions criterion_pixelwise = torch.nn.L1Loss().to(device) # Initialize generator and discriminator G = GeneratorUNet().to(device) D = Discriminator().to(device) G.apply(weights_init_normal) D.apply(weights_init_normal) optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas) optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas) for epoch in range(n_epochs): for i, batch in enumerate(dataloader): # G:B -> A real_A = batch["A"].to(device) real_B = batch["B"].to(device) # ------------------ # Train Generators # ------------------ optimizer_G.zero_grad() # GAN loss fake_B = G(real_A) # Pixel-wise loss loss_pixel = criterion_pixelwise(fake_B, real_B) # Total loss loss_G = loss_pixel loss_G.backward() optimizer_G.step() # Print log print( "\r[Epoch %d/%d] [Batch %d/%d] [G loss: %f]" % ( epoch, n_epochs, i, len(dataloader), loss_G.item() ) ) # If at sample interval save image if epoch == 0 or (epoch + 1) % 5 == 0: sample_images(val_dataloader, G, device)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
Answer: When training with only the L1 loss, the network does not start out with the colorful noise seen in the first run; within the first few iterations it quickly produces traces of the building outline, with far less noise than the first run. After more iterations, however, the fake images generated by the L1-only network are blurry, and the results are much worse than the first run's.

Train pix2pix using only the CGAN loss (fill in the corresponding code in the cell below and run it). What differences do you observe in the results?
# Loss functions criterion_GAN = torch.nn.MSELoss().to(device) # Initialize generator and discriminator G = GeneratorUNet().to(device) D = Discriminator().to(device) G.apply(weights_init_normal) D.apply(weights_init_normal) optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas) optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas) for epoch in range(n_epochs): for i, batch in enumerate(dataloader): """ To Do """ # G:B -> A real_A = batch["A"].to(device) real_B = batch["B"].to(device) # Adversarial ground truths real_label = torch.ones((real_A.size(0), *patch)).to(device) fake_label = torch.zeros((real_A.size(0), *patch)).to(device) # ------------------ # Train Generators # ------------------ optimizer_G.zero_grad() # GAN loss fake_B = G(real_A) pred_fake = D(fake_B, real_A) loss_G = criterion_GAN(pred_fake, real_label) loss_G.backward() optimizer_G.step() # --------------------- # Train Discriminator # --------------------- optimizer_D.zero_grad() # Real loss pred_real = D(real_B, real_A) loss_real = criterion_GAN(pred_real, real_label) # Fake loss pred_fake = D(fake_B.detach(), real_A) loss_fake = criterion_GAN(pred_fake, fake_label) # Total loss loss_D = 0.5 * (loss_real + loss_fake) loss_D.backward() optimizer_D.step() # Print log print( "\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]" % ( epoch, n_epochs, i, len(dataloader), loss_D.item(), loss_G.item() ) ) # If at sample interval save image if epoch == 0 or (epoch + 1) % 5 == 0: sample_images(val_dataloader, G, device)
Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb
MegaShow/college-programming
mit
<h3 id="exo1">Exercise 1: create an Excel file</h3>

We want to retrieve the data donnees_enquete_2003_television.txt (source: INSEE).

- POIDSLOG: relative individual weight
- POIDSF: individual weighting variable
- cLT1FREQ: average number of hours spent watching television
- cLT2FREQ: unit of time used to count the number of hours spent watching television; this unit takes the following four values: 0: not applicable, 1: day, 2: week, 3: month

Then, we want to:

- Remove the empty columns
- Get the distinct values of the cLT2FREQ column
- Modify the table to remove the rows for which the time unit (cLT2FREQ) is missing or equal to zero
- Save the result in Excel format.

You may need the following functions: numpy.isnan, DataFrame.apply, DataFrame.fillna or DataFrame.isnull, DataFrame.copy
import pandas from ensae_teaching_cs.data import donnees_enquete_2003_television df = pandas.read_csv(donnees_enquete_2003_television(), sep="\t", engine="python") df.head()
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
We remove the empty columns:
df = df[[c for c in df.columns if "Unnamed" not in c]]
df.head()

notnull = df[~df.cLT2FREQ.isnull()]  # equivalent to df[df.cLT2FREQ.notnull()]
print(len(df), len(notnull))
notnull.tail()

notnull.to_excel("data.xlsx")  # question 4
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
To launch Excel, you can simply write:
%system "data.xlsx"
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
You should see something like this:
from IPython.display import Image Image("td10exc.png")
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="qu">Questions</h3>

What would adding the parameter how='outer' change in this case? We want to join two tables A, B, each of which has the same three distinct keys: $c_1, c_2, c_3$. Each table has respectively $A_i$ and $B_i$ rows for key $c_i$. How many rows will the final merged table contain?

Adding the parameter how='outer' would change nothing in this case, because the two merged tables contain exactly the same keys. The number of rows obtained is $\sum_{i=1}^{3} A_i B_i$: there are three keys, and each row of table A must be associated with every row of table B sharing the same key.

<h3 id="exo3">Exercise 2: lambda function</h3>

Write a lambda function that takes two parameters and is equivalent to the following function:
def delta(x, y):
    return max(x, y) - min(x, y)

delta = lambda x, y: max(x, y) - min(x, y)
delta(4, 5)

import random
df["select"] = df.apply(lambda row: random.randint(1, 10), axis=1)
echantillon = df[df["select"] == 1]
echantillon.shape, df.shape
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
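The row-count formula $\sum_i A_i B_i$ from the merge question above can be checked with a small pandas sketch (toy tables, not the survey data):

```python
import pandas as pd

# Toy tables sharing keys c1, c2, c3: A has 2, 3, 1 rows per key; B has 4, 1, 2.
A = pd.DataFrame({"key": ["c1"] * 2 + ["c2"] * 3 + ["c3"] * 1, "a": range(6)})
B = pd.DataFrame({"key": ["c1"] * 4 + ["c2"] * 1 + ["c3"] * 2, "b": range(7)})

# Inner join: each key contributes A_i * B_i rows, i.e. 2*4 + 3*1 + 1*2 = 13.
merged = A.merge(B, on="key")
print(len(merged))

# Since both tables contain exactly the same keys, how='outer' gives the same count.
outer = A.merge(B, on="key", how="outer")
print(len(outer))
```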
<h3 id="exo2">Exercise 3: averages by group</h3>

Still with the same dataset (marathon.txt), we want to add a row at the end of the pivot table containing, for each city, the average marathon time in seconds.
from ensae_teaching_cs.data import marathon import pandas df = pandas.read_csv(marathon(), sep="\t", names=["ville", "annee", "temps","secondes"]) df.head()
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
In summary, this gives (I also add the number of marathons run):
import pandas, urllib.request from ensae_teaching_cs.data import marathon df = pandas.read_csv(marathon(filename=True), sep="\t", names=["ville", "annee", "temps","secondes"]) piv = df.pivot("annee","ville","secondes") gr = df[["ville","secondes"]].groupby("ville", as_index=False).mean() gr["annee"] = "moyenne" pivmean = gr.pivot("annee","ville","secondes") pandas.concat([piv, pivmean]).tail()
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
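The per-city mean and the number of marathons run, both mentioned above, can be obtained in one groupby call; here is a hedged sketch using a tiny made-up frame standing in for the marathon data:

```python
import pandas as pd

# A hypothetical stand-in for the marathon table (ville, annee, secondes).
df = pd.DataFrame({
    "ville": ["PARIS", "PARIS", "BERLIN"],
    "annee": [2010, 2011, 2010],
    "secondes": [7000.0, 7200.0, 6800.0],
})

# One agg call yields both the mean time and the count of marathons per city.
stats = df.groupby("ville")["secondes"].agg(["mean", "count"])
print(stats)
```

The `count` column could then be pivoted and concatenated just like the mean row in the cell above.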
<h3 id="exo4">Exercise 4: age gap between spouses</h3>

By adding a column and using a group-by operation, we want to obtain the distribution of the number of marriages as a function of the age gap between the spouses. If needed, we will change the type of a column or two. We want to draw a scatter plot with the husband's age on the x-axis and the wife's age on the y-axis. It may help to take a look at the documentation of the plot method.
import urllib.request import zipfile import http.client def download_and_save(name, root_url): try: response = urllib.request.urlopen(root_url+name) except (TimeoutError, urllib.request.URLError, http.client.BadStatusLine): # back up plan root_url = "http://www.xavierdupre.fr/enseignement/complements/" response = urllib.request.urlopen(root_url+name) with open(name, "wb") as outfile: outfile.write(response.read()) def unzip(name): with zipfile.ZipFile(name, "r") as z: z.extractall(".") filenames = ["etatcivil2012_mar2012_dbase.zip", "etatcivil2012_nais2012_dbase.zip", "etatcivil2012_dec2012_dbase.zip", ] root_url = 'http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/' for filename in filenames: download_and_save(filename, root_url) unzip(filename) print("Download of {}: DONE!".format(filename)) import pandas try: from dbfread_ import DBF use_dbfread = True except ImportError as e : use_dbfread = False if use_dbfread: print("use of dbfread") def dBase2df(dbase_filename): table = DBF(dbase_filename, load=True, encoding="cp437") return pandas.DataFrame(table.records) df = dBase2df('mar2012.dbf') else : print("use of zipped version") import pyensae.datasource data = pyensae.datasource.download_data("mar2012.zip") df = pandas.read_csv(data[0], sep="\t", encoding="utf8", low_memory=False) print(df.shape, df.columns) df.head() df["ageH"] = df.apply (lambda r: 2014 - int(r["ANAISH"]), axis=1) df["ageF"] = df.apply (lambda r: 2014 - int(r["ANAISF"]), axis=1) df.head() df.plot(x="ageH",y="ageF", kind="scatter") df.plot(x="ageH",y="ageF", kind="hexbin")
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="exo5">Exercise 5: plotting the distribution with pandas</h3>

The pandas module offers a range of standard plots that are easy to produce. We want to represent the distribution as a histogram. It is up to you to choose the best plot from the Visualization page.
df["ANAISH"] = df.apply (lambda r: int(r["ANAISH"]), axis=1) df["ANAISF"] = df.apply (lambda r: int(r["ANAISF"]), axis=1) df["differenceHF"] = df.ANAISH - df.ANAISF df["nb"] = 1 dist = df[["nb","differenceHF"]].groupby("differenceHF", as_index=False).count() df["differenceHF"].hist(figsize=(16,6), bins=50)
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
<h3 id="exo6">Exercise 6: distribution of marriages by day</h3>

We want a plot containing the histogram of the distribution of the number of marriages per day of the week, together with a second curve, on a secondary axis, showing the cumulative distribution.
df["nb"] = 1 dissem = df[["JSEMAINE","nb"]].groupby("JSEMAINE",as_index=False).sum() total = dissem["nb"].sum() repsem = dissem.cumsum() repsem["nb"] /= total ax = dissem["nb"].plot(kind="bar") repsem["nb"].plot(ax=ax, secondary_y=True) ax.set_title("distribution des mariages par jour de la semaine")
_doc/notebooks/td2a/td2a_correction_session_1.ipynb
sdpython/ensae_teaching_cs
mit
Create PyZDDE object
l1 = pyz.createLink() # create a DDE link object for communication
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Load an existing lens design file (Cooke 40 degree field) into Zemax's DDE server
zfile = os.path.join(l1.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx') l1.zLoadFile(zfile)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
The following figure is the 2-D Layout plot of the lens.
# Surfaces in the sequential lens data editor
l1.ipzGetLDE()

# note that ipzCaptureWindow() doesn't work in the new OpticStudio because
# the data item 'GetMetafile' has become obsolete
l1.ipzCaptureWindow('Lay')

# General system properties
l1.zGetSystem()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
A few DDE data items that return information about the system have duplicate functions with the prefix ipz. These generally produce output better suited to interactive environments and easier for humans to read. One example of such a function pair is zGetFirst() and ipzGetFirst():
# Paraxial / first-order properties of the system
l1.zGetFirst()

# duplicate of zGetFirst() for use in the notebook
l1.ipzGetFirst()

# ... another example is zGetSystemAper(), which returns information about the aperture.
# The aperture type is returned as a code that we might not always remember ...
l1.zGetSystemAper()

# ... with the duplicate, ipzGetSystemAper(), we can immediately see that
# the aperture type is the Entrance Pupil Diameter (EPD)
l1.ipzGetSystemAper()

# information about the field definition
l1.ipzGetFieldData()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Analysis plots - ray-fan analysis plot as an example To plot the ray-fan analysis graph using ipzCaptureWindow() function, we need to provide the 3-letter button code of the corresponding analysis function. If we don't remember the exact button code, we can use a helper function in pyzdde to get some help:
pyz.findZButtonCode('ray')

l1.ipzCaptureWindow('Ray', gamma=0.4)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Note that there are a few other options for retrieving and plotting analysis data from Zemax; they are discussed in a separate notebook. However, one is worth mentioning quickly here: you can ask ipzCaptureWindow() to return the image pixel array instead of plotting it directly, and then do the plotting yourself with matplotlib (or any other plotting library). Here is an example:
lay_arr = l1.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111) pyz.imshow(lay_arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax) ax.set_title('Layout plot', fontsize=16) # Annotate Lens numbers ax.text(41, 70, "L1", fontsize=12) ax.text(98, 105, "L2", fontsize=12) ax.text(149, 89, "L3", fontsize=12) # Annotate the lens with radius of curvature information col = (0.08,0.08,0.08) s1_r = 1.0/l1.zGetSurfaceData(1,2) ax.annotate("{:0.2f}".format(s1_r), (37, 232), (8, 265), fontsize=12, arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5))) s2_r = 1.0/l1.zGetSurfaceData(2,2) ax.annotate("{:0.2f}".format(s2_r), (47, 232), (50, 265), fontsize=12, arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5))) s6_r = 1.0/l1.zGetSurfaceData(6,2) ax.annotate("{:0.2f}".format(s6_r), (156, 218), (160, 251), fontsize=12, arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5))) ax.text(5, 310, "Cooke Triplet, EFL = {} mm, F# = {}, Total track length = {} mm" .format(50, 5, 60.177), fontsize=14) plt.show()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Now, let's say that we want to find the angular magnification of the optics above, and we want to know whether Zemax provides an operand whose value we can read directly. For that we can use another module-level helper function to find all operands related to angular magnification:
pyz.findZOperand('angular magnification')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Bingo! AMAG is the operand we want. Now we can use it:
l1.zOperandValue('AMAG', 1) # the argument "1" is for the wavelength
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Of course, there is a function in PyZDDE called zGetPupilMagnification() that we can use to get the angular magnification, since the inverse of the pupil magnification is the angular magnification (a consequence of the Lagrange optical invariant).
1.0/l1.zGetPupilMagnification()
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Connecting to another Zemax session simultaneously Now, a second pyzdde object is created to communicate with a second ZEMAX server. Note that the first object is still present.
l2 = pyz.createLink() # create a second DDE communication link object
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Set up lens surfaces in the second ZEMAX DDE server. Towards the end, zPushLens() is called so that the LDE is updated with the just-made lens.
# Erase all lens data in the LDE (good practice) l2.zNewLens() # Wavelength data wavelengths = (0.48613270, 0.58756180, 0.65627250) #mm weights = (1.0, 1.0, 1.0) l2.zSetWaveTuple((wavelengths, weights)) l2.zSetPrimaryWave(2) # Set 0.58756180 as primary # System aperture data, and global reference surface. aType, stopSurf, appValue = 0, 1, 100 # EPD,STO is 1st sur, value = 100 l2.zSetSystemAper(aType, stopSurf, appValue) # General data (we need set whatever is really required ... the following # is just shown as an example) unitCode, rayAimingType, globalRefSurf = 0, 0, 1 # mm, off,ref=1st surf useEnvData, temp, pressure = 0, 20, 1 # off, 20C, 1ATM setSystemArg = (unitCode, stopSurf, rayAimingType, useEnvData, temp, pressure, globalRefSurf) l2.zSetSystem(*setSystemArg) # Setup Field data l2.zSetField(0, 0, 3, 1) # number of fields = 3 l2.zSetField(1, 0, 0) # 1st field, on-axis x, on-axis y, weight = 1 (default) l2.zSetField(3, 0, 10, 1.0, 0.0, 0.0, 0.0) # 2nd field l2.zSetField(2,0,5,2.0,0.5,0.5,0.5,0.5, 0.5) # 3rd field #Setup the system, wavelength, (but not the field points) l2.zInsertSurface(2) l2.zInsertSurface(3) #Set surface data, note that by default, all surfaces are Standard type # OBJ: Surface 0 l2.zSetSurfaceData(0,3,500.00) #OBJ thickness = 0.5 m or 500 mm #STO: Surface 1 l2.zSetSurfaceData(1,2,0) #STO Radius = Infinity l2.zSetSurfaceData(1,3,20.00) #STO Thickness = 20 mm l2.zSetSurfaceData(1,5,50.00) #STO Semi-diameter = 50 mm #Surface 2 l2.zSetSurfaceData(2,2,1/150) #Surf2 Radius = 150 mm l2.zSetSurfaceData(2,3,100.0) #Surf2 Thickness = 100 mm l2.zSetSurfaceData(2,4,'BK7') #Surf2 Glass, type = BK7 l2.zSetSurfaceData(2,5,65.00) #Surf2 Semi-diameter = 65.00 mm #Surface 3 l2.zSetSurfaceData(3,2,-1/600) #Surf3 Radius = -600 mm l2.zSetSurfaceData(3,3,300.00) #Surf3 Thickness = 184 mm l2.zSetSurfaceData(3,5,65.00) #Surf3 Semi-diameter = 65.00 mm # Perform Quick Focus l2.zQuickFocus(3,1) # push lens l2.zPushLens(update=1)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Layout plot of the second lens
l2.ipzCaptureWindow('L3d')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Spot diagram of the second lens
l2.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Just to demonstrate that the first lens (in the first ZEMAX server) is still available, the Layout plot is rendered again.
l1.ipzCaptureWindow('Lay')
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit
Spot diagram of the first lens
l1.ipzCaptureWindow('Spt', percent=15, gamma=0.55)
Examples/IPNotebooks/00 Using ZEMAX and PyZDDE with IPython notebook.ipynb
indranilsinharoy/PyZDDE
mit