Decoding in sensor space using a LogisticRegression classifier
clf = LogisticRegression(solver='lbfgs')
scaler = StandardScaler()

# create a linear model with LogisticRegression
model = LinearModel(clf)

# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)

# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', ...
0.18/_downloads/d1b18c3376911723f0257fe5003a8477/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
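The StandardScaler step above simply centers each feature and rescales it to unit variance. A minimal NumPy sketch of that transform, using a toy matrix rather than the MEG data:

```python
import numpy as np

# toy feature matrix: 3 trials x 2 features (stand-in for the MEG data)
X = np.array([[1.0, 10.0],
              [3.0, 20.0],
              [5.0, 30.0]])

# what StandardScaler.fit_transform computes: zero mean, unit variance per column
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```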
Let's do the same on EEG data using a scikit-learn pipeline
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]

# Define a unique pipeline to sequentially:
clf = make_pipeline(
    Vectorizer(),      # 1) vectorize across time and channels
    StandardScaler(),  # 2) normalize features across trials
    LinearModel(
        Logi...
In a real implementation, a layer is quite complex. Here we only introduce the relevant design pattern with a heavily abstracted version: background represents the background RGBA, and content simply represents the layer's content. Besides painting directly, transparency can also be set.
dog_layer = simpleLayer()
dog_layer.paint('Dog')
dog_layer.fillBackground([0, 0, 255, 0])
print('background:', dog_layer.getBackgroud())
print('Painting:', dog_layer.getContent())
DesignPattern/ProtoTypePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Next, suppose we need to create another identical layer, fill it with the same color, and draw the same dog. How should we do it? Do we still follow the sequence of creating a new layer, filling the background, and painting? Perhaps you have already noticed that this can be done by copying, and this copy (clone) action is the essence of the Prototype pattern. Following this idea, we add two new methods to the layer class: clone and deep_clone.
from copy import copy, deepcopy

class simpleLayer(object):
    background = [0, 0, 0, 0]
    content = "blank"

    def getContent(self):
        return self.content

    def getBackgroud(self):
        return self.background

    def paint(self, painting):
        self.content = painting

    def setParent(self, p):
        self.bac...
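A runnable sketch of the clone idea, using Python's copy module as the text describes. The class body is simplified and the method names are illustrative; the point is the shallow versus deep distinction:

```python
from copy import copy, deepcopy

class SimpleLayer(object):
    """Simplified layer: an RGBA background plus some content."""
    def __init__(self):
        self.background = [0, 0, 0, 0]
        self.content = "blank"

    def paint(self, painting):
        self.content = painting

    def fill_background(self, rgba):
        self.background = rgba

    def clone(self):
        return copy(self)      # shallow copy: shares the background list

    def deep_clone(self):
        return deepcopy(self)  # deep copy: fully independent layer

dog_layer = SimpleLayer()
dog_layer.paint('Dog')
dog_layer.fill_background([0, 0, 255, 0])

shallow = dog_layer.clone()
deep = dog_layer.deep_clone()

dog_layer.background[3] = 128  # mutate the original's background in place
print(shallow.background)      # [0, 0, 255, 128] - the shallow clone sees the change
print(deep.background)         # [0, 0, 255, 0]   - the deep clone does not
```

This is why the notebook distinguishes clone from deep_clone: a shallow copy is cheaper but keeps sharing mutable state with its prototype.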
This is a level 1 header cell. This is a level 2 header cell. This is a level 3 header cell. This is a level 4 header cell. This is a level 5 header cell. This is a level 6 header cell. This is a Markdown cell. Markdown is a text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-...
import pandas

class Point(object):
    """
    A class-level docstring.
    """
    def __init__(self, x, y=3):
        """
        Constructor docstring.
        SHIFT+TAB will show you this first line.
        SHIFT + two TABs will show you the entire docstring.
        """
        self.x, self.y = x, y
        lon...
quanto/Quantopian_Meetup_Talk_IPython_Notebook.ipynb
ssanderson/notebooks
apache-2.0
Try it out!
pan  # Pressing TAB here will autocomplete pandas.
pandas.  # Pressing TAB here will show you the top-level attributes of pandas.
pandas.D  # Pressing TAB here will show you DataFrame, DatetimeIndex, and DateOffset.
pandas.DataFrame  # Pressing TAB here will show you methods and attributes o...
Documentation: Typing <expression>? shows the function signature and documentation for that expression. Typing <expression>?? takes you to the source code for the expression. You can also use the pinfo and pinfo2 magics to get the same info. Notebook Only: Press SHIFT+TAB while hovering over an object to o...
pandas.DataFrame?
pandas.DataFrame.plot??
pandas.DataFrame.|plot  # the | marks where to place the cursor before pressing SHIFT+TAB
Cell Magics In addition to all the magics we saw above, there are additional magics that operate at the cell level. Many of these are focused around interoperation with other languages. Javascript
%%javascript
alert('foo')
R Some cell magics are provided by extensions. Here we load the rpy2's cell magic for interacting with R.
%load_ext rpy2.ipython

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df

# Push our DataFrame into R.
%Rpush df

%%R
# Despite also being valid python, this is actually R code!
col_A = df['A']
plot(col_A)

# We can also pull values back out of R!
%Rpull...
Builtin Rich Display Formats IPython supports a wide array of rich display formats, including: * LaTeX * Markdown * HTML * SVG * PNG ...and more
import IPython.display
dir(IPython.display)[:18]
LaTeX
from IPython.display import Math
Math(r'w_A = \frac{\sigma_B - Cov(r_A, r_B)}{\sigma_B^2 + \sigma_A^2 - 2 Cov(r_A, r_B)}')
HTML
from IPython.display import HTML
HTML('''\
To learn more about IPython's rich display capabilities, click
<a href="http://ipython.org/ipython-doc/dev/config/integrating.html">here</a>.
''')
YouTube Video
from IPython.display import YouTubeVideo
YouTubeVideo("B_XiSozs-SE")  # YouTubeVideo expects the video id, not the full URL
Customizing Object Display If a class implements one of the many _repr_*_ methods (such as _repr_html_), IPython will use that method to display the object.
class Table(object):
    """
    A simple table represented as a list of lists.
    """
    def __init__(self, lists):
        self.lists = lists

    def make_row(self, l):
        columns = ''.join('<td>{value}</td>'.format(value=value) for value in l)
        return '<tr>{columns}</tr>'.format(columns=colum...
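As a complete illustration of the _repr_html_ hook (the Table class above is truncated, so this is a hypothetical minimal version of the same idea): when a cell's last expression is a Table, IPython calls _repr_html_ and renders the returned markup.

```python
class Table(object):
    """A simple table represented as a list of lists."""
    def __init__(self, lists):
        self.lists = lists

    def _repr_html_(self):
        # IPython calls this method (if present) to get an HTML rendering.
        rows = ''.join(
            '<tr>{}</tr>'.format(''.join('<td>{}</td>'.format(v) for v in row))
            for row in self.lists)
        return '<table>{}</table>'.format(rows)

t = Table([[1, 2], [3, 4]])
print(t._repr_html_())
# <table><tr><td>1</td><td>2</td></tr><tr><td>3</td><td>4</td></tr></table>
```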
Load up some test data to play with
tips = pd.read_csv('input/tips.csv')
tips['tip_percent'] = (tips['tip'] / tips['total_bill'] * 100)
tips.head()
tips.describe()
Part_3.ipynb
jpwhite3/python-analytics-demo
cc0-1.0
Plotting linear regression http://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html
sns.jointplot("total_bill", "tip_percent", tips, kind='reg');
sns.lmplot(x="total_bill", y="tip_percent", hue="ordered_alc_bev", data=tips)
sns.lmplot(x="total_bill", y="tip_percent", col="day", data=tips, aspect=.5)
sns.lmplot(x="total_bill", y="tip_percent", hue='ordered_alc_bev', col="time", row='gender', size=6,...
Plotting logistic regression http://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html
# Let's add some calculated columns
tips['tip_above_avg'] = np.where(tips['tip_percent'] >= tips['tip_percent'].mean(), 1, 0)
tips.replace({'Yes': 1, 'No': 0}, inplace=True)
tips.head()
sns.lmplot(x="tip_percent", y="ordered_alc_bev", col='gender', data=tips, logistic=True)
sns.lmplot(x="ordered_alc_bev", y="tip_abo...
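The np.where call above is just a vectorized binarization: 1 where the condition holds, 0 elsewhere. Isolated with toy numbers (not the tips data):

```python
import numpy as np

tip_percent = np.array([12.0, 18.5, 20.0, 9.0])  # made-up tip percentages
# 1 where the tip is at or above the mean, else 0
above_avg = np.where(tip_percent >= tip_percent.mean(), 1, 0)

print(tip_percent.mean())  # 14.875
print(above_avg)           # [0 1 1 0]
```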
A Practical Guide to the Machine Learning Workflow: Separating Stars and Galaxies from SDSS Version 0.1 By AA Miller 2017 Jan 22 We will now follow the steps from the machine learning workflow lecture to develop an end-to-end machine learning model using actual astronomical data. As a reminder the workflow is as follo...
import numpy as np
from astropy.table import Table
import matplotlib.pyplot as plt

%matplotlib inline
Sessions/Session02/Day5/PracticalMachLearnWorkflowSolutions.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1) Obtain and Examine Training Data As a reminder, for supervised-learning problems we use a training set, sources with known labels, i.e. they have been confirmed as normal stars, QSOs, or galaxies, to build a model to classify new observations where we do not know the source label. The training set for this e...
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus difficult to parse by eye. Fortunately, we can identify the desired columns using the database itself: select COLUMN_NAME from INFORMATION_SCHEMA.Columns where table_name = 'Pho...
sdss_query = """SELECT TOP 10000
    p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
    p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
    s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary =...
To reiterate a point from above: data-driven models are only as good as the training set. Now that we have a potential training set, it is essential to inspect the data for any peculiarities. Problem 1a Can you easily identify any important properties of the data from the above table? If not - is there a better way to ...
# complete
import seaborn as sns
sns.pairplot(sdss_set.to_pandas(), hue='class', diag_kind='kde')
Solution 1b The visualization confirms our domain knowledge assertion: galaxy PSF measurements differ significantly from the other magnitude measurements. The visualization also reveals the magnitude distribution of the training set, as well as a potential bias: the dip in the distribution at $r' \approx 19$ mag. Ther...
from sklearn.model_selection import train_test_split

rs = 2  # we are in second biggest metropolitan area in the US

# complete
X = np.array(  # complete
y = np.array(  # complete
train_X, test_X, train_y, test_y = train_test_split(X, y,  # complete

from sklearn.model_selection import train_test_split

rs = 2  # we ar...
Problem 2) An Aside on the Importance of Feature Engineering It has been said that all machine learning is an exercise in feature engineering. Feature engineering - the process of creating new features, combining features, removing features, collecting new data to supplement existing features, etc. is essential in the...
bright_query = """SELECT TOP 10000
    p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
    p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
    s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary...
Problem 2a Train a $k$ Nearest Neighbors model with $k = 11$ neighbors on the 10k source training set. Note - for this particular problem, the number of neighbors does not matter much.
from sklearn.neighbors import KNeighborsClassifier

feats = # complete
bright_X = # complete
bright_y = # complete

KNNclf = # complete

from sklearn.neighbors import KNeighborsClassifier

feats = list(bright_set.columns)
feats.remove('class')
bright_X = np.array(bright_set[feats].to_pandas())
bright_y = np.array(br...
Problem 2b Evaluate the accuracy of the model when applied to the sources in the faint test set. Does the model perform well? Hint - you may find sklearn.metrics.accuracy_score useful for this exercise.
from sklearn.metrics import accuracy_score

faint_X = # complete
faint_y = # complete
faint_preds = # complete
print("The raw features produce a KNN model with accuracy ~{:.4f}".format(  # complete

from sklearn.metrics import accuracy_score

faint_X = np.array(faint_set[feats].to_pandas())
faint_y = np.array(faint_se...
Solution 2b Based on the pair plots generated above - stars and galaxies appear highly distinct based on their SDSS $r'$-band measurements, thus, this model likely exhibits poor performance. [we will see if we can confirm this] Leveraging the same domain knowledge discussed above, namely that galaxies cannot be modele...
bright_Xnorm = # complete
KNNclf = # complete
faint_predsNorm = # complete
print("The normalized features produce an accuracy ~{:.4f}".format(  # complete

bright_Xnorm = bright_X[:,0][:,np.newaxis] - bright_X[:,1:]
faint_Xnorm = faint_X[:,0][:,np.newaxis] - faint_X[:,1:]
KNNclf = KNeighborsClassifier(n_neighbors =...
Solution 2c Wow! Normalizing the features produces a huge ($\sim{35}\%$) increase in accuracy. Clearly, we should be using normalized magnitude features moving forward. In addition to demonstrating the importance of feature engineering, this exercise teaches another important lesson: contextual features can be dangero...
import # complete

rs = 626  # area code for Pasadena

train_Xnorm = # complete
RFclf = # complete

from sklearn.ensemble import RandomForestClassifier

rs = 626  # area code for Pasadena

train_Xnorm = train_X[:,0][:,np.newaxis] - train_X[:,1:]
RFclf = RandomForestClassifier(n_estimators = 25, random_state = rs)
RFclf...
scikit-learn really makes it easy to build ML models. Another nice property of RF is that it naturally provides an estimate of the most important features in the model. [Once again - feature engineering comes into play, as it may be necessary to remove correlated features or unimportant features during the model const...
# complete
print("The relative importance of the features is: \n{:s}".format(  # complete

print(RFclf.feature_importances_)  # print the importances

indicies = np.argsort(RFclf.feature_importances_)[::-1]  # sort the features most imp. --> least imp.

# recall that all features are normalized relative to psf...
Solution 3b psfMag_r - deVMag_r is the most important feature. This makes sense based on the separation of stars and galaxies in the psfMag_r-deVMag_r plane (see the visualization results above). Note - the precise ordering of the features can change due to their strong correlation with each other, though the fiberMa...
# complete
print("The SDSS phot model produces an accuracy ~{:.4f}".format(  # complete

phot_y = train_Xnorm[:,6] > 0.145
phot_class = np.empty(len(phot_y), dtype = '|S6')
phot_class[phot_y] = 'GALAXY'
phot_class[phot_y == False] = 'STAR'
print("The SDSS phot model produces an accuracy ~{:.4f}".format(accuracy_score(...
The simple SDSS model sets a high standard! A $\sim{96}\%$ accuracy following a single hard cut is a phenomenal performance. Problem 4b Using 10-fold cross validation, estimate the accuracy of the RF model.
from sklearn.model_selection import # complete

RFpreds = # complete
print("The CV accuracy for the training set is {:.4f}".format(  # complete

from sklearn.model_selection import cross_val_predict

RFpreds = cross_val_predict(RFclf, train_Xnorm, train_y, cv = 10)
print("The CV accuracy for the training set is {:.4f}...
Phew! Our hard work to build a machine learning model has been rewarded, by creating an improved model: $\sim{96.9}\%$ accuracy vs. $\sim{96.4}\%$. [But - was our effort worth only a $0.5\%$ improvement in the model?] Problem 5) Model Optimization While the "off-the-shelf" model provides an improvement over the SDSS ph...
rs = 1936  # year JPL was founded

CVpreds1 = # complete
# complete
# complete
print("The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}".format(  # complete

rs = 1936  # year JPL was founded

CVpreds1 = cross_val_predict(RandomForestClassifier(n_estimators = 1, random_state=rs),...
Solution 5a Using a single tree will produce high-variance results, as the features selected at the top of the tree greatly influence the final classifications. Thus, we expect it to have the lowest accuracy. While (in this case) the effect is small, it is clear that $N_\mathrm{tree}$ affects the model output. Now w...
rs = 64  # average temperature in Los Angeles

from sklearn.model_selection import GridSearchCV

grid_results = # complete

print("The optimal parameters are:")
for key, item in grid_results.best_params_.items():  # warning - slightly different meanings in Py2 & Py3
    print("{}: {}".format(key, item))

rs = 64  # avera...
Now that the model is fully optimized - we are ready for the moment of truth! Problem 5c Using the optimized model parameters, train a RF model and estimate the model's generalization error using the test set. How does this compare to the baseline model?
RFopt_clf = # complete
test_preds = # complete
print('The optimized model produces a generalization error of {:.4f}'.format(  # complete

RFopt_clf = RandomForestClassifier(n_estimators=30, max_features=3, min_samples_leaf=10)
RFopt_clf.fit(train_Xnorm, train_y)
test_Xnorm = test_X[:,0][:,np.newaxis] - test_X[:,1:]
t...
Solution 5c The optimized model provides a $\sim{0.6}\%$ improvement over the baseline model. We will now examine the performance of the model using some alternative metrics. Note - if these metrics are essential for judging the model performance, then they should be incorporated to the workflow in the evaluation stag...
from sklearn.metrics import # complete
# complete

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(test_y, test_preds)
print(cm)
Solution 5d Adopting galaxies as the positive class, the TPR = 96.7%, while the TNR = 97.1%. Thus, yes, there is ~symmetry to the classifications. Problem 5e Calculate and plot the ROC curves for both stars and galaxies. Hint - you'll need probabilities in order to calculate the ROC curve.
from sklearn.metrics import roc_curve

test_preds_proba = # complete
# complete
fpr, tpr, thresholds = roc_curve(  # complete
plt.plot(  # complete
plt.legend()

from sklearn.metrics import roc_curve, roc_auc_score

test_preds_proba = RFopt_clf.predict_proba(test_Xnorm)
test_y_stars = np.zeros(len(test_y), dtype = int)...
Problem 5f Suppose you want a model that only misclassifies 1% of stars as galaxies. What classification threshold should be adopted for this model? What fraction of galaxies does this model miss? Can you think of a reason to adopt such a threshold?
# complete

fpr01_idx = (np.abs(fpr - 0.01)).argmin()
tpr01 = tpr[fpr01_idx]
threshold01 = thresholds[fpr01_idx]

print("To achieve FPR = 0.01, a decision threshold = {:.4f} must be adopted".format(threshold01))
print("This threshold will miss {:.4f} of galaxies".format(1 - tpr01))
Solution 5f When building galaxy 2-point correlation functions it is very important to avoid including stars in the statistics as they will bias the final measurement. Finally - always remember: worry about the data Challenge Problem) Taking the Plunge Applying the model to field data QSOs are unresolved sources that...
QSO_query = """SELECT TOP 10000
    p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
    p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
    s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary =...
Challenge 1 Calculate the accuracy with which the model classifies QSOs based on the 10k QSOs selected with the above command. How does that accuracy compare to that estimated by the test set?
qso_X = np.array(QSO_set[feats].to_pandas())

# we are defining QSOs as stars for this exercise
qso_y = np.empty(len(QSO_set), dtype='|S4')
qso_y[:] = 'STAR'  # fill every entry; slicing [0:-1] would leave the last element uninitialized

qso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]
qso_preds = RFclf.predict(qso_Xnorm)
print("The RF model correctly classifies ~{:.4f} of the QSOs".forma...
Challenge 2 Can you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars? Can you obtain ~0.97 accuracy when classifying QSOs?
# As discussed above, low-z AGN have resolved host galaxies which will confuse the classifier,
# this can be resolved by only selecting high-z QSOs (z > 1.5)

QSO_query = """SELECT TOP 10000
    p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
    p.deVMag_r, p.expMag_r, p.modelMag_r, p.cMode...
Challenge 3 Perform an actual test of the model using "field" sources. The SDSS photometric classifier is nearly perfect for sources brighter than $r = 21$ mag. Download a random sample of $r < 21$ mag photometric sources, and classify them using the optimized RF model. Adopting the photometric classifications as grou...
# complete
First Step: Load Data and disassemble for our purposes
df = pd.read_csv('./insurance-customers-300.csv', sep=';')
y = df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values  # df.as_matrix() was removed in pandas 1.0
df.describe()
notebooks/booster/3-base-line.ipynb
DJCordhose/ai
mit
Second Step: Visualizing Prediction
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html

import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD...
By just randomly guessing, we get approx. 1/3 right, which is what we expect
random_clf.score(X_test_2_dim, y_test)
Third Step: Creating a Baseline We create a naive classifier by hand; how much better is it than random guessing?
class BaseLineClassifier(ClassifierBase):
    def predict_single(self, x):
        try:
            speed, age, km_per_year = x
        except:
            speed, age = x
            km_per_year = 0
        if age < 25:
            if speed > 180:
                return 0
            else:
                return 2
        ...
This is the baseline we have to beat
base_clf.score(X_test_2_dim, y_test)
No overfitting, which is to be expected, as we use general rules rather than inferring from single data points
base_clf.score(X_train_2_dim, y_train)
Installation Install Acme
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
examples/quickstart.ipynb
deepmind/acme
apache-2.0
Install the environment library
if environment_library == 'dm_control':
    import distutils.util
    import subprocess
    if subprocess.run('nvidia-smi').returncode:
        raise RuntimeError(
            'Cannot communicate with GPU. '
            'Make sure you are using a GPU Colab runtime. '
            'Go to the Runtime menu and select Choose runtime type.')
    m...
Install visualization packages
!sudo apt-get install -y xvfb ffmpeg
!pip install imageio
!pip install PILLOW
!pip install pyvirtualdisplay
Import Modules
import IPython

from acme import environment_loop
from acme import specs
from acme import wrappers
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
import numpy as np
import sonnet as snt

# Import the selected environment lib
if environm...
Load an environment We can now load an environment. In what follows we'll create an environment and grab the environment's specifications.
if environment_library == 'dm_control':
    environment = suite.load('cartpole', 'balance')
elif environment_library == 'gym':
    environment = gym.make('MountainCarContinuous-v0')
    environment = wrappers.GymWrapper(environment)  # To dm_env interface.
else:
    raise ValueError(
        "Unknown environment library: {};...
## Create a D4PG agent
#@title Build agent networks

# Get total number of action dimensions from action spec.
num_dimensions = np.prod(environment_spec.actions.shape, dtype=int)

# Create the shared observation network; here simply a state-less operation.
observation_network = tf2_utils.batch_concat

# Create the deterministic policy networ...
Run a training loop
# Run `num_episodes` training episodes.
# Rerun this cell until the agent has learned the given task.
env_loop.run(num_episodes=100)
Visualize an evaluation loop Helper functions for rendering and visualization
# Create a simple helper function to render a frame from the current state of
# the environment.
if environment_library == 'dm_control':
    def render(env):
        return env.physics.render(camera_id=0)
elif environment_library == 'gym':
    def render(env):
        return env.environment.render(mode='rgb_array')
else:
    raise V...
Run and visualize the agent in the environment for an episode
timestep = environment.reset()
frames = [render(environment)]

while not timestep.last():
    # Simple environment loop.
    action = agent.select_action(timestep.observation)
    timestep = environment.step(action)

    # Render the scene and add it to the frame stack.
    frames.append(render(environment))

# Save and display ...
Params
seed = 1
np.random.seed(seed)
lhsmdu.setRandomSeed(seed)

numDimensions = 2
numSamples = 100
numIterations = 100
lhsmdu/benchmark/Comparing LHSMDU and MC sampling.ipynb
sahilm89/lhsmdu
mit
Theoretical values
theoretical_mean = 0.5
theoretical_std = np.sqrt(1./12)
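These values follow from the moments of U(0,1): the mean is 1/2 and the variance is E[x^2] - E[x]^2 = 1/3 - 1/4 = 1/12, so the standard deviation is sqrt(1/12) ~ 0.2887. A quick numerical sanity check with plain NumPy (seed chosen arbitrarily):

```python
import numpy as np

theoretical_mean = 0.5
theoretical_std = np.sqrt(1. / 12)  # Var[U(0,1)] = E[x^2] - E[x]^2 = 1/3 - 1/4

samples = np.random.default_rng(1).random(100_000)
print(abs(samples.mean() - theoretical_mean) < 0.01)  # True
print(abs(samples.std() - theoretical_std) < 0.01)    # True
```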
Empirical mean ($\mu$) and standard deviation ($\sigma$) estimates for 100 samples
mc_Mean, lhs_Mean = [], []
mc_Std, lhs_Std = [], []
for iterate in range(numIterations):
    a = np.random.random((numDimensions, numSamples))
    b = lhsmdu.sample(numDimensions, numSamples)
    mc_Mean.append(np.mean(a))
    lhs_Mean.append(np.mean(b))
    mc_Std.append(np.std(a))
    lhs_Std.append(np.std(b))
Plotting mean estimates
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Mean, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\mu$"...
Plotting standard deviation estimates
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Std, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\sigma$"...
Across different number of samples
mc_Std, lhs_Std = [], []
mc_Mean, lhs_Mean = [], []
numSamples = range(1, numIterations)
for iterate in numSamples:
    a = np.random.random((numDimensions, iterate))
    b = lhsmdu.sample(numDimensions, iterate)
    mc_Mean.append(np.mean(a))
    lhs_Mean.append(np.mean(b))
    mc_Std.append(np.std(a))
    lhs_Std.append...
Plotting mean estimates
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Mean, 'ko', label='numpy')
ax.plot(numSamples, lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\mu$")
ax.legend(fr...
Plotting standard deviation estimates
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Std, 'ko', label='numpy')
ax.plot(numSamples, lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\sigma$")
ax.legend(fr...
Then run the following cell and send some values from pypeoutgoing.ipynb running in another window. They will be sent over the "pype". Watch them printed below as they are received:
for x in pype: print(x)
src/jupyter/python/pypeincoming.ipynb
thalesians/tsa
apache-2.0
Once you have finished experimenting, you can close the pype:
pype.close()
Plotting topographic maps of evoked data Load evoked data and plot topomaps for selected time points using multiple additional options.
# Authors: Christian Brodbeck <christianbrodbeck@nyu.edu>
#          Tal Linzen <linzen@nyu.edu>
#          Denis A. Engemann <denis.engemann@gmail.com>
#          Mikołaj Magnuski <mmagnuski@swps.edu.pl>
#          Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
0.24/_downloads/2dd868e4ea307404d807080fb341eb26/evoked_topomap.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
sphinx_gallery_thumbnail_number = 5
import numpy as np
import matplotlib.pyplot as plt

from mne.datasets import sample
from mne import read_evokeds

print(__doc__)

path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'

# load evoked corresponding to a specific condition
# from the fif file and subtract baseline
condition = 'Left ...
Basic :func:`~mne.viz.plot_topomap` options. We plot evoked topographies using :func:`mne.Evoked.plot_topomap`. The first argument, ``times``, lets us specify the time instants (in seconds!) for which topographies will be shown. We select timepoints from 50 to 150 ms in steps of 20 ms and plot magnetometer data:
times = np.arange(0.05, 0.151, 0.02)
evoked.plot_topomap(times, ch_type='mag', time_unit='s')
If times is set to None, at most 10 regularly spaced topographies will be shown:
evoked.plot_topomap(ch_type='mag', time_unit='s')
We can use the nrows and ncols parameters to create multiline plots with more timepoints.
all_times = np.arange(-0.2, 0.5, 0.03)
evoked.plot_topomap(all_times, ch_type='mag', time_unit='s', ncols=8, nrows='auto')
Instead of showing topographies at specific time points we can compute averages of 50 ms bins centered on these time points to reduce the noise in the topographies:
evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s')
We can plot gradiometer data (plots the RMS for each pair of gradiometers)
evoked.plot_topomap(times, ch_type='grad', time_unit='s')
Additional :func:~mne.viz.plot_topomap options We can also use various :func:mne.viz.plot_topomap arguments that control how the topography is drawn. For example: cmap - to specify the color map res - to control the resolution of the topographies (lower resolution means faster plotting) outlines='skirt' t...
evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32, outlines='skirt', contours=4, time_unit='s')
If you look at the edges of the head circle of a single topomap, you'll see the effect of extrapolation. There are three extrapolation modes: extrapolate='local' extrapolates only to points close to the sensors. extrapolate='head' extrapolates out to the head circle. extrapolate='box' extrapolates to a large box stretc...
extrapolations = ['local', 'head', 'box']
fig, axes = plt.subplots(figsize=(7.5, 4.5), nrows=2, ncols=3)

# Here we look at EEG channels, and use a custom head sphere to get all the
# sensors to be well within the drawn head surface
for axes_row, ch_type in zip(axes, ('mag', 'eeg')):
    for ax, extr in zip(axes_row, e...
More advanced usage Now we plot magnetometer data as a topomap at a single time point: 100 ms post-stimulus, add channel labels and a title, and adjust the plot margins:
evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False,
                    size=6, res=128, title='Auditory response',
                    time_unit='s')
plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88)
We can also highlight specific channels by adding a mask, to e.g. mark channels exceeding a threshold at a given time:
# Define a threshold and create the mask
mask = evoked.data > 1e-13

# Select times and plot
times = (0.09, 0.1, 0.11)
evoked.plot_topomap(times, ch_type='mag', time_unit='s', mask=mask,
                    mask_params=dict(markersize=10, markerfacecolor='y'))
Or by manually picking the channels to highlight at different times:
times = (0.09, 0.1, 0.11)
_times = ((np.abs(evoked.times - t)).argmin() for t in times)
significant_channels = [
    ('MEG 0231', 'MEG 1611', 'MEG 1621', 'MEG 1631', 'MEG 1811'),
    ('MEG 2411', 'MEG 2421'),
    ('MEG 1621')]
_channels = [np.in1d(evoked.ch_names, ch) for ch in significant_channels]

mask = np.zeros(ev...
Animating the topomap Instead of using a still image, we can plot magnetometer data as an animation; it animates properly only in matplotlib interactive mode.
times = np.arange(0.05, 0.151, 0.01)
fig, anim = evoked.animate_topomap(
    times=times, ch_type='mag', frame_rate=2, time_unit='s', blit=False)
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
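As a side note, the leaky ReLU described above is easy to sketch on its own. The helper below is a plain-NumPy illustration (not part of the exercise code); in TensorFlow 1.x the same idea is `tf.maximum(alpha * x, x)`:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Like a normal ReLU, except negative inputs keep a small slope
    # (alpha) instead of being zeroed, so gradients can still flow
    # backwards through the layer.
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))
```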
gan_mnist/Intro_to_GANs_Exercises.ipynb
blua/deep-learning
mit
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph()

# Sizes assumed from the usual MNIST setup if not defined earlier in the
# notebook: 28x28 images flattened, latent vector of length 100
input_size = 784
z_size = 100

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here
# (assumes generator returns both the tanh output and the logits)
g_model, g_logits = generator(input_z, input_size)  # g_model is the generator output

# Discriminator network here
# (reuse=True so the fake pass shares weights with the real pass)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
Create sample data set
scratch_df = pd.DataFrame({'x1': pd.Series(np.random.randn(20))})
scratch_df
02_analytical_data_prep/src/py_part_2_discretization.ipynb
jphall663/GWU_data_mining
apache-2.0
Discretize
scratch_df['x1_discrete'] = pd.DataFrame(pd.cut(scratch_df['x1'], 5))
scratch_df
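If you need integer bin identifiers instead of interval objects, pd.cut can return them directly with labels=False. A quick illustration on made-up data (not the scratch_df above):

```python
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
# labels=False returns the 0-based index of the equal-width bin
# each value falls into, rather than an Interval object
codes = pd.cut(s, 5, labels=False)
print(codes.tolist())
```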
Don't forget to conda install gcsfs zarr fsspec for the docs. We are going to investigate the coordinate transformations using data from an idealized high-resolution model run produced by Dhruv Balwada (to read more about the data, see his paper).
import fsspec

ds = xr.open_zarr(
    fsspec.get_mapper('gcs://pangeo-data/balwada/channel_ridge_resolutions/20km/tracer_10day_snap'),
    consolidated=True)
ds = ds.isel(time=slice(0, 20))
# ds.to_zarr('offline_backup.zarr')
# ds = xr.open_zarr('offline_backup.zarr')
docs/vertical_coords.ipynb
jbusecke/xarrayutils
mit
Adjusting the vertical coordinate orientation All the following steps crucially assume that the depth values are increasing with the depth dimension. The values can be negative, but then they have to progress towards less negative values as you follow the logical depth index. This dataset has negative depth values that decreas...
for dim in ['Z', 'Zp1', 'Zl', 'Zu']:
    ds.coords[dim] = -ds[dim]
ds.Z
Another issue with this dataset is that near the surface the profile is sometimes unstable (density does not strictly increase with depth). This would cause issues when interpolating, so for now we use a time mean of the dataset between ~200-1500 m depth.
# check monotonicity for all profiles
ds = ds.mean('time')
ds = ds.isel(YC=slice(10, -10), Z=slice(28, 60), Zp1=slice(28, 61))
# since we operate on the depth dimension we convert it to a single chunk
# to avoid doing this all the time later
ds = ds.chunk({'Z': -1})
assert (ds.T.diff('Z') < 0).all(['Z']).all()
Transforming to a different (spatially uniform) depth coordinate system In this case we will transform the dataset to different depth coordinates (we leave out the regridding step and manually provide a new depth grid). This example might be useful when you want to convert different depth grids (e.g. from several obser...
# define a new depth array
z_new = np.arange(10, 2000, 20)
ds_z_new = ds.interp(Z=z_new)
ds_z_new.Z
The depth coordinate Z is now regularly spaced instead of having the surface-refined resolution of the original dataset; we 'remapped' all data onto a new vertical grid using linear interpolation. Let's compare the actual data.
ds.PTRACER01.isel(XC=40, YC=40).plot(y='Z', yincrease=False)
ds_z_new.PTRACER01.isel(XC=40, YC=40).plot(ls='--', y='Z', yincrease=False)
Visually that looks pretty good and these results might be sufficient for certain applications. The biggest downside of this approach is that the total amount of tracer is not conserved.
dz_original = ds.drF  # vertical cell thickness of the model grid
tracer_intz_original = (ds.PTRACER01 * dz_original).sum('Z')

dz_new = 20  # This is easy to infer since the grid is uniformly spaced
tracer_intz_new = (ds_z_new.PTRACER01 * dz_new).sum('Z')

print(tracer_intz_original.isel(XC=40, YC=40).load())
print(tracer...
The difference might seem small, but for certain applications (e.g. budget reconstruction) this is not acceptable. However, we can do better by using the conservative_remap function: Using conservative remapping For this method we need more than the cell centers: we need to provide the depths of the vertical b...
from xarrayutils.vertical_coordinates import conservative_remap

# the conservative remapping needs information about the upper and lower
# bounds of the source and target cells.
bounds_original = ds.Zp1  # depth position of vertical bounding surface
bounds_new = xr.DataArray(np.arange(0, 2020, 20), dims=['new_bounds'])  # ...
conservative_remap takes into account every overlap between source and target cells. See for instance the uppermost value of ds_z_cons_new: the low value is due to the fact that there is a tracer amount in the upper half of the first source cell, which is then distributed over the much larger target cell. This ensures t...
tracer_intz_cons_new = (ds_z_cons_new * dz_new).sum('remapped')
np.isclose(tracer_intz_original.isel(XC=40, YC=40).load(),
           tracer_intz_cons_new.isel(XC=40, YC=40).load())
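To build intuition for what conservative_remap does internally, here is a minimal 1-D sketch (a hypothetical re-implementation for illustration, not the xarrayutils code) that distributes each source cell's content over the target cells in proportion to their overlap:

```python
import numpy as np

def conservative_remap_1d(values, src_bnds, tgt_bnds):
    # values: cell-mean tracer in each source cell
    # src_bnds / tgt_bnds: monotonically increasing cell boundaries
    out = np.zeros(len(tgt_bnds) - 1)
    for i, v in enumerate(values):
        for j in range(len(tgt_bnds) - 1):
            # length of the overlap between source cell i and target cell j
            overlap = max(0.0, min(src_bnds[i + 1], tgt_bnds[j + 1])
                          - max(src_bnds[i], tgt_bnds[j]))
            # accumulate as a thickness-weighted mean in the target cell
            out[j] += v * overlap / (tgt_bnds[j + 1] - tgt_bnds[j])
    return out

# two source cells merged into one target cell
remapped = conservative_remap_1d([1.0, 2.0], [0.0, 1.0, 2.0], [0.0, 2.0])
print(remapped)
```

The vertically integrated content (value times cell thickness) is identical before and after, which is exactly the property checked with np.isclose above.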
This is in fact true for every grid position:
np.isclose(tracer_intz_original, tracer_intz_cons_new).all()
OK, this is nice, but it really gets interesting when we define our new depth coordinates. Switching to potential temperature coordinates using only linear interpolation Regridding using linear interpolation The xarray internals can only help us if we want to interpolate on values of a dimension (1D), e.g. if we want to...
from xarrayutils.vertical_coordinates import linear_interpolation_regrid

t_vals = np.arange(0.6, 3, 0.01)
temperature_values = xr.DataArray(t_vals, coords=[('t', t_vals)])  # define the new temperature grid
z_temp_coord = linear_interpolation_regrid(ds.Z, ds.T, temperature_values,
                                           target_value_dim='t')

plt.subplot(2, ...
As you can see in this example, the line of constant temperature is moving deeper with increasing y, and that is reflected in the depth values along a constant depth coordinate in the regridded values. Now we can remap other data values on the corresponding depths. The simplest method is again linear interpolation: Usi...
from xarrayutils.vertical_coordinates import linear_interpolation_remap

# we can't have nans, so just fill them with an out of bounds value
# z_temp_coord = z_temp_coord.fillna(1e8)  # this should be fixed now?
ds_temp_linear = linear_interpolation_remap(ds.Z, ds.T, z_temp_coord)
# this requires me to downgrade dask, s...
As expected, when we remap the temperature field itself, we get a matching horizontal stratification. Now lets do something more interesting and look at the tracer field on a constant temperature surface of 2 deg:
ds_temp_linear = linear_interpolation_remap(ds.Z, ds.PTRACER01, z_temp_coord)
ds_temp_linear.sel(remapped=2, method='nearest').plot(robust=True)
Pretty neat! But again, there is no guarantee that the total tracer content is preserved with linear interpolation. With a few modifications we can implement the conservative remapping here as well. Using conservative remapping
# we need to capture all the tracer cells, so if we don't specify our
# temperature range covering the tracer min and max, the total tracer
# amount will not be conserved.
t_vals = np.hstack([ds.T.min().load().data[np.newaxis],
                    np.arange(0.6, 3, 0.01),
                    ds.T.max().load().data[np.newaxis]])
t_vals

# for now the results are ord...
And now we can use these cell bounding values exactly like we did before.
# now we can remap the tracer data again (at the moment the depth
# dimensions have to be explicitly defined).
ds_temp_cons = conservative_remap(ds.PTRACER01, bounds_original, z_temp_bounds,
                                  z_dim='Z', z_bnd_dim='Zp1',
                                  z_bnd_dim_target='regridded', mask=True)
# the associated depth dimen...
And most importantly, the vertical tracer content is again conserved:
dz_remapped = z_temp_bounds.diff('regridded').rename({'regridded': 'remapped'})
dz_remapped.coords['remapped'] = ds_temp_cons.coords['remapped']
tracer_intz_remapped_temp = (ds_temp_cons * dz_remapped).sum('remapped')
xr.testing.assert_allclose(tracer_intz_original, tracer_intz_remapped_temp)
1.2 Analysis of the Cycle (a) The thermal efficiency The net power developed by the cycle is $\dot{W}_{cycle}=\dot{W}_t-\dot{W}_p$ Mass and energy rate balances for control volumes around the turbine and pump give, respectively $\frac{\dot{W}_t}{\dot{m}}=h_1-h_2$ $\frac{\dot{W}_p}{\dot{m}}=h_4-h_3$ where $\dot{m}$ is the...
# Part(a)
# Mass and energy rate balances for control volumes
# around the turbine and pump give, respectively

# turbine
wtdot = h1 - h2
# pump
wpdot = h4 - h3

# The rate of heat transfer to the working fluid as it passes
# through the boiler is determined using mass and energy rate balances as
qindot = h1 - h4

# therma...
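The formulas above can be checked numerically. The enthalpies below are illustrative placeholder values in kJ/kg (hypothetical, not the tabulated ones computed in the example):

```python
# hypothetical enthalpies in kJ/kg, roughly in the range of a Rankine cycle
h1, h2, h3, h4 = 2758.0, 1795.0, 174.0, 182.0

wtdot = h1 - h2   # turbine work per unit mass
wpdot = h4 - h3   # pump work per unit mass
qindot = h1 - h4  # boiler heat addition per unit mass

# thermal efficiency: eta = (Wt - Wp) / Qin
eta = (wtdot - wpdot) / qindot
print('eta = {:.4f}'.format(eta))
```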
notebook/RankineCycle81-82.ipynb
PySEE/PyRankine
mit
(b) The back work ratio is $bwr=\frac{\dot{W}_p}{\dot{W}_t}=\frac{h_4-h_3}{h_1-h_2}$ (c) The mass flow rate of the steam can be obtained from the expression for the net power given in part (a): $\dot{m}=\frac{\dot{W}_{cycle}}{(h_1-h_2)-(h_4-h_3)}$ (d) With the expression for $\dot{Q}_{in}$ from part (a) and previousl...
# Part(b)
# back work ratio: bwr, defined as the ratio of the pump work input to the work
# developed by the turbine.
bwr = wpdot/wtdot

# Result
print('(b) The back work ratio is {:>.2f}%'.format(bwr*100))

# Part(c)
Wcycledot = 100.00  # the net power output of the cycle in MW
m...
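Parts (c) and (d) follow directly from the formulas above; here is a self-contained sketch with the same illustrative (hypothetical) enthalpies, not the values computed from the steam tables:

```python
# hypothetical enthalpies in kJ/kg and net power output in MW
h1, h2, h3, h4 = 2758.0, 1795.0, 174.0, 182.0
Wcycledot = 100.0  # MW

# (c) mass flow rate: mdot = Wcycle / ((h1 - h2) - (h4 - h3))
# MW -> kW so that kW / (kJ/kg) gives kg/s
mdot = Wcycledot * 1000.0 / ((h1 - h2) - (h4 - h3))

# (d) rate of heat transfer to the working fluid: Qin = mdot * (h1 - h4)
Qindot = mdot * (h1 - h4) / 1000.0  # back to MW
print('mdot = {:.1f} kg/s, Qindot = {:.1f} MW'.format(mdot, Qindot))
```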
2 Example 8.2: Analyzing a Rankine Cycle with Irreversibilities Reconsider the vapor power cycle of Example 8.1, but include in the analysis that the turbine and the pump each have an isentropic efficiency of 85%. Determine for the modified cycle (a) the thermal efficiency, (b) the mass flow rate of steam, in kg/...
from seuif97 import *

# State 1
p1 = 8.0  # in MPa
t1 = px2t(p1, 1)
h1 = px2h(p1, 1)  # h1 = 2758.0 from Table A-3, kJ/kg
s1 = px2s(p1, 1)  # s1 = 5.7432 from Table A-3, kJ/kg.K

# State 2, p2 = 0.008
p2 = 0.008
s2s = s1
h2s = ps2h(p2, s2s)
t2s = ps2t(p2, s2s)
etat_t = 0.85
h2 = h1 - etat_t*(h1 - h2s)
t2 ...