Saving/loading sessions Saving and loading Symca sessions is very simple and works similarly to RateChar. A session is saved with the save_session method, whereas the load_session method loads the saved expressions. As with the save_results method and most other saving and loading functionality, if no file_name...
# saving session sc.save_session() # create new Symca object and load saved results new_sc = psctb.Symca(mod) new_sc.load_session() # display saved results new_sc.cc_results
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Plotting Support
%matplotlib inline import matplotlib.pyplot as plt
examples/nonmarkov-transfer-tensor-method.ipynb
qutip/qutip-notebooks
lgpl-3.0
Jaynes-Cummings model, with the cavity as a non-Markovian bath As a simple example, we consider the Jaynes-Cummings model, and the non-Markovian dynamics of the qubit when the cavity is traced out. In this example, the dynamical maps $\mathcal{E}_k$ are the reduced time-propagators for the qubit, after evolving and trac...
kappa = 1.0 # cavity decay rate wc = 0.0*kappa # cavity frequency wa = 0.0*kappa # qubit frequency g = 10.0*kappa # coupling strength N = 3 # size of cavity basis # intial state psi0c = qt.basis(N,0) rho0c = qt.ket2dm(psi0c) rho0a = qt.ket2dm(qt.basis(2,0)) rho0 = qt.tensor(rho0a,rho0c) rho0avec = qt.operator_to_vecto...
Exact timepropagators to learn from The function dynmap generates an exact timepropagator for the qubit $\mathcal{E}_{k}$ for a time $t_k$. <br>
def dynmap(t): # reduced dynamical map for the qubit at time t Et = qt.mesolve(H, E0, [0.,t], c_ops, []).states[-1] return ptracesuper*(Et*superrho0cav)
Exact time evolution using standard mesolve method
exacttimes = np.arange(0,5,0.01) exactsol = qt.mesolve(H, rho0, exacttimes, c_ops, [])
Approximate solution using the Transfer Tensor Method for different learning times
times = np.arange(0,5,0.1) # total extrapolation time ttmsols = [] maxlearningtimes = [0.5, 2.0] # maximal learning times for T in maxlearningtimes: learningtimes = np.arange(0,T,0.1) learningmaps = [dynmap(t) for t in learningtimes] # generate exact dynamical maps to learn from ttmsols.append(ttm.ttmsolve(...
Visualize results
fig, ax = plt.subplots(figsize=(10,7)) ax.plot(exactsol.times, qt.expect(sz, exactsol.states),'-b',linewidth=3.0) style = ['og','or'] for i,ttmsol in enumerate(ttmsols): ax.plot(ttmsol.times, qt.expect(qt.sigmaz(), ttmsol.states),style[i],linewidth=1.5,) ax.legend(['exact',str(maxlearningtimes[0]),str(maxlearningti...
Discussion The figure above illustrates how the transfer tensor method needs a sufficiently long set of learning times to get good results. The green dots show results for learning times $t_k=0,0.1,\dots,0.5$, which is clearly not sufficient. The red dots show results for $t_k=0,0.1,\dots,2.0$, which gives results that...
version_table()
As we can see, using simple Python expressions we can load the NGO's database into a pandas DataFrame, which will let us manipulate the data very easily. Let's start exploring this dataset in a bit more detail! First of all, we should check whether there are any...
# Checking for null values ONG_data.isnull().any().any()
content/notebooks/MachineLearningPractica.ipynb
relopezbriega/mi-python-blog
gpl-2.0
As we can see, the method returns "True", which indicates that there are null values in our dataset. These values can have a significant influence on our predictive model, so deciding how to handle them is always an important decision. The alternatives we...
# Grouping columns by data type tipos = ONG_data.columns.to_series().groupby(ONG_data.dtypes).groups # Building the list of categorical columns ctext = tipos[np.dtype('object')] len(ctext) # number of columns with categorical data. # Building the list of numeric columns columnas = ONG_data.columns # list of...
We have now separated the 481 columns of our dataset: 68 columns contain categorical data and 413 contain quantitative data. Let's proceed to infer the missing values.
# Filling in missing values for quantitative data for c in cnum: mean = ONG_data[c].mean() ONG_data[c] = ONG_data[c].fillna(mean) # Filling in missing values for categorical data for c in ctext: mode = ONG_data[c].mode()[0] ONG_data[c] = ONG_data[c].fillna(mode) # Checking that there are no...
Perfect! We now have a dataset free of missing values. We are ready to start exploring the data; let's begin by determining the percentage of people who have ever donated to the NGO and are included in the database we are working with.
# Computing the percentage of donors over the whole database porcent_donantes = (ONG_data[ONG_data.DONOR_AMOUNT > 0]['DONOR_AMOUNT'].count() * 1.0 / ONG_data['DONOR_AMOUNT'].count()) * 100.0 print("The percentage of donors in the database is {0:.2f}%" .format(porcen...
Here we can see that the percentage of people who were donors in the past is really very low, only 5% of the whole database (2,423 people). This is an important fact to keep in mind: with such a large imbalance between the classes to be classified, it can considerably affect our ...
# Analyzing donation amounts # Creating amount bins imp_segm = pd.cut(ONG_donantes['DONOR_AMOUNT'], [0, 10, 20, 30, 40, 50, 60, 100, 200]) # Creating the bar chart from pandas plot = pd.value_counts(imp_segm).plot(kind='bar', title='...
This analysis shows us that most donations fall in the 0 to 30 range, with an average donation of 15.60. We can also see that donations above 50 are really infrequent, so they constitute outliers and it would be prudent to remove these cases...
# Chart of donor gender ONG_donantes.groupby('GENDER').size().plot(kind='bar') plt.title('Distribution by gender') plt.show() # Donations by gender ONG_donantes[(ONG_donantes.DONOR_AMOUNT <= 50) & (ONG_donantes.GENDER.isin(['F', 'M']) )][['DONOR_AMOUNT', 'GENDER']].boxp...
Here we see that women tend to be more likely to donate, although their average donation (14.61) is lower than men's (16.82). Let's now look at how donations behave with respect to age.
# Age distribution of donors ONG_donantes['AGE'].hist().set_title('Distribution of donors by age') plt.show() # Grouping age into bins of 10 years AGE2 = pd.cut(ONG_donantes['AGE'], range(0, 100, 10)) ONG_donantes['AGE2'] = AGE2 # Bar chart of donations by age pd.value_counts(AGE2).plot...
Question 2 Which one of the following modules is not part of the Bio.Blast package in Biopython: ParseBlastTable NCBIXML FastaIO Applications
import Bio.Blast help(Bio.Blast)
Python for Genomic Data Science/Lecture 8 Quiz.ipynb
ysh329/Homework
mit
Question 3 Using Biopython find out what species the following unknown DNA sequence comes from: TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTAC AATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCAC CTACGGTAGAG Hint. Identify the al...
from Bio.Blast import NCBIWWW fasta_string = "TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTACAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCACCTACGGTAGAG" result_handle = NCBIWWW.qblast("blastn", "nt", fasta_string) from Bio.B...
Question 4 Seq is a sequence object that can be imported from Biopython using the following statement: ''' from Bio.Seq import Seq ''' If my_seq is a Seq object, what is the correct Biopython code to print the reverse complement of my_seq? Hint. Use the built-in function help you find out the methods of the Seq object...
from Bio.Seq import Seq from Bio.Alphabet import generic_protein help(Seq) my_seq = Seq("MELKI", generic_protein) + "LV" help(my_seq) from Bio.Seq import Seq from Bio.Alphabet import IUPAC my_dna = Seq("CCCCCGATAG", IUPAC.unambiguous_dna) my_dna my_dna.complement() from Bio.Seq import Seq from Bio.Alphabet import I...
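For reference, what the question asks for can be sketched in plain Python (a hypothetical helper for illustration only; in Biopython itself the answer is the `my_seq.reverse_complement()` method of the Seq object):

```python
# Minimal sketch of what Seq.reverse_complement() computes,
# written as a plain-Python helper (hypothetical, not the Biopython API).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(dna):
    # Complement each base, then reverse the whole string
    return "".join(COMPLEMENT[b] for b in reversed(dna))

print(reverse_complement("CCCCCGATAG"))  # CTATCGGGGG
```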
Question 5 Create a Biopython Seq object that represents the following sequence: TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTAC AATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCAC CTACGGTAGAG Its protein translation is: ILASY...
from Bio.Seq import Seq my_seq = Seq("TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTACAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCACCTACGGTAGAG") print(my_seq.translate())
Second Step: Visualizing Prediction
# ignore this, it is just technical code # should come from a lib, consider it to appear magically # http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD...
notebooks/ml/1-classic-code.ipynb
DJCordhose/ai
mit
By just randomly guessing, we get approx. 1/3 right, which is what we expect
random_clf.score(X, y)
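That expectation can be checked with a minimal standalone simulation (hypothetical uniformly distributed 3-class labels, not the notebook's actual X and y):

```python
import random

random.seed(0)
n = 100_000
classes = [0, 1, 2]
# True labels drawn uniformly; a random classifier also guesses uniformly,
# so each guess matches the label with probability 1/3
labels = [random.choice(classes) for _ in range(n)]
guesses = [random.choice(classes) for _ in range(n)]
accuracy = sum(l == g for l, g in zip(labels, guesses)) / n
print(accuracy)  # close to 1/3
```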
Third Step: Creating a Base Line Creating a naive classifier manually, how much better is it?
class BaseLineClassifier(ClassifierBase): def predict_single(self, x): try: speed, age, km_per_year = x except: speed, age = x km_per_year = 0 if age < 25: if speed > 180: return 0 else: return 2 ...
This is the baseline we have to beat
base_clf.score(X, y)
Enter Team Member Names here (double click to edit): Name 1: Name 2: Name 3: In Class Assignment Two In the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the render...
# fetch the images for the dataset # this will take a long time the first run because it needs to download # after the first time, the dataset will be saved to your disk (in sklearn package somewhere) # if this does not run, you may need additional libraries installed on your system (install at your own risk!!) from sk...
In_Class/ICA2_MachineLearning_PartA.ipynb
Tsiems/machine-learning-projects
mit
Question 1: For the faces dataset, describe what the data represents. That is, what is each column? What is each row? What do the unique class values represent? Every column is a pixel location in a 125x94 photograph. Each row is a single image of someone's face. The unique class values are the names of the people in t...
# Enter any scratchwork or calculations here
Question 3: - Part A: Given the number of parameters calculated above, would you expect the model to train quickly using batch optimization techniques? Why or why not? - Part B: Is there a way to reduce training time? - Part C: If we transformed the X data using principle components analysis (PCA) with 100 components...
# Enter any scratchwork or calculations here print('Part C. With 100 features: ', '100')
Signal Processing Scope Signal processing today: Communication, Sensors, Images, Video, ... Analog vs. Digital signals Analog signal Analog signal is a continuous function of time: $$ x : t \mapsto x(t) $$ Digital signal Digital signal is a discrete function of time: $$ t_n = t_0 + n \times \delta t $$ $$ x_n = x(t...
import numpy as np import matplotlib.pyplot as plt def signal(t, f = 1.): return np.cos(2. * np.pi * f * t) D = 3.2 # duration t = np.linspace(0., D, 1000) x = signal(t) fs = 10. # sampling rate tn = np.linspace(0., D, int(fs * D)) xn = signal(tn) plt.plot(t, x, "k-", label = "Analog") plt.plot(tn, xn, "bo--", l...
doc/Traitement_signal/signal_processing.ipynb
lcharleux/numerical_analysis
gpl-2.0
Digital signals Effect of the sampling rate
t = np.linspace(0., D, 1000) x = signal(t) Fs = [1., 2., 10.] # sampling rates plt.plot(t, x, "k-", label = "Analog") for fs in Fs: tn = np.linspace(0., D, int(fs * D)) xn = signal(tn) plt.plot(tn, xn, "o--", label = "fs = {0}".format(fs)) plt.grid() plt.xlabel("Time, $t$") plt.ylabel("Amplitude, $x$") plt.legend(...
Higher sampling rate means better signal description, Lower sampling rate means loss of information, Aliasing
D = 1.5 # duration t = np.linspace(0., D, 1000) fs = 2.5 # sampling rate tn = np.linspace(0., D, int(fs * D)) xn = signal(tn) F = .5 + np.arange(3) * fs # aliased frequencies tn = np.linspace(0., D, int(fs * D)) xn = signal(tn, f = F[0]) plt.plot(tn, xn, "ok", label = "Samples") for f in F: x = signal(t, f = f) ...
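The aliasing statement can also be checked numerically: when sampling at rate fs, a frequency f cannot be distinguished from f + fs, because the two sinusoids coincide at every sample time (a standalone sketch using the same fs = 2.5 as above):

```python
import numpy as np

fs = 2.5                   # sampling rate
k = np.arange(10)
tn = k / fs                # sample times
f0 = 0.5
x0 = np.cos(2 * np.pi * f0 * tn)
# The frequency shifted by one full sampling rate gives identical samples,
# since cos(2*pi*(f0+fs)*k/fs) = cos(2*pi*f0*k/fs + 2*pi*k)
x1 = np.cos(2 * np.pi * (f0 + fs) * tn)
print(np.allclose(x0, x1))  # True: the two frequencies alias to the same samples
```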
BERT Question Answer with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/models/modify/model_maker/question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td>...
!sudo apt -y install libportaudio2 !pip install -q tflite-model-maker-nightly
site/en-snapshot/lite/models/modify/model_maker/question_answer.ipynb
tensorflow/docs-l10n
apache-2.0
Import the required packages.
import numpy as np import os import tensorflow as tf assert tf.__version__.startswith('2') from tflite_model_maker import model_spec from tflite_model_maker import question_answer from tflite_model_maker.config import ExportFormat from tflite_model_maker.question_answer import DataLoader
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answer Each model_spec object represents a specific model for question answer. The Model Maker currently supports...
spec = model_spec.get('mobilebert_qa_squad')
Load Input Data Specific to an On-device ML App and Preprocess the Data TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library. To load the data, convert the TriviaQA datas...
train_data_path = tf.keras.utils.get_file( fname='triviaqa-web-train-8000.json', origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json') validation_data_path = tf.keras.utils.get_file( fname='triviaqa-verified-web-dev.json', origin='https://sto...
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. <img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100"> If...
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True) validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
Customize the TensorFlow Model Create a custom question answer model based on the loaded data. The create function comprises the following steps: Creates the model for question answer according to model_spec. Trains the question answer model. The default epochs and the default batch size are set according to two variab...
model = question_answer.create(train_data, model_spec=spec)
Have a look at the detailed model structure.
model.summary()
Evaluate the Customized Model Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
model.evaluate(validation_data)
Export to TensorFlow Lite Model Convert the trained model to TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The vocab file is embedded in metadata. The default TFLite filename is model.tflite. In many on-device ML applications, the model size is an important factor....
model.export(export_dir='.')
You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following: ExportFormat.TFLITE ExportFormat.VOCAB ExportFormat.SAVED_MODEL By ...
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
model.evaluate_tflite('model.tflite', validation_data)
Advanced Usage The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQASpec class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The create function comprises the following steps: Creates the model for questio...
new_spec = model_spec.get('mobilebert_qa') new_spec.seq_len = 512
Step 1- split Given a list let's split it into two lists right down the middle
def split(input_list): """ Splits a list into two pieces :param input_list: list :return: left and right lists (list, list) """ input_list_len = len(input_list) midpoint = input_list_len // 2 return input_list[:midpoint], input_list[midpoint:] tests_split = [ ({'input_list': [1, 2, ...
algorithms/Merge-Sort.ipynb
amirziai/learning
mit
Step 2- merge sorted lists Given two sorted lists we should be able to "merge" them into a single list as a linear operation
def merge_sorted_lists(list_left, list_right): """ Merge two sorted lists This is a linear operation O(len(list_left) + len(list_right)) :param list_left: list :param list_right: list :return merged list """ # Special case: one or both of lists are empty if len(list_left) == 0: ...
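Since the cell above is cut off, a complete version of the linear merge step might look as follows (an assumed reconstruction, not the author's exact code):

```python
def merge_sorted_lists(list_left, list_right):
    """Merge two sorted lists in O(len(list_left) + len(list_right))."""
    result = []
    i = j = 0
    # Repeatedly take the smaller of the two head elements
    while i < len(list_left) and j < len(list_right):
        if list_left[i] <= list_right[j]:
            result.append(list_left[i])
            i += 1
        else:
            result.append(list_right[j])
            j += 1
    # One list is exhausted; append the remainder of the other
    result.extend(list_left[i:])
    result.extend(list_right[j:])
    return result

print(merge_sorted_lists([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```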
Step 3- merge sort Merge sort only needs to utilize the previous 2 functions We need to split the lists until they have a single element A list with a single element is sorted (duh) Now we can merge these single-element (or empty) lists
def merge_sort(input_list): if len(input_list) <= 1: return input_list else: left, right = split(input_list) # The following line is the most important piece in this whole thing return merge_sorted_lists(merge_sort(left), merge_sort(right)) random_list = [random.randint(1, 1000)...
Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients. Ridge Regression Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\|w\|$. Th...
def polynomial_ridge_regression(data, deg, l2_penalty): model = graphlab.linear_regression.create(polynomial_features(data,deg), target='Y', l2_penalty=l2_penalty, validation_set=None,verbose=False) return model
Overfitting_Ridge_Lasso.ipynb
anilcs13m/MachineLearning_Mastering
gpl-2.0
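The shrinkage behaviour the penalty induces can be illustrated with the closed-form ridge solution $w = (X^TX + \lambda I)^{-1}X^Ty$ on synthetic data (a NumPy sketch, independent of the GraphLab model used above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([3., -2., 1., 0.5, -1.]) + 0.1 * rng.normal(size=50)

def ridge_coefficients(X, y, l2_penalty):
    # Closed-form ridge solution: w = (X^T X + lambda I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + l2_penalty * np.eye(n_features), X.T @ y)

for l2_penalty in [0.0, 1.0, 100.0]:
    w = ridge_coefficients(X, y, l2_penalty)
    print(l2_penalty, np.linalg.norm(w))  # the norm shrinks as the penalty grows
```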
Let's look at fits for a sequence of increasing lambda values
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]: model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty) print('lambda = %.2e' % l2_penalty) print_coefficients(model) print('\n') plt.figure() plot_poly_predictions(data,model) plt.title('Ridge, lambda = %.2e' % l2_penalty)
We can take a look at the first 15 rows of the table:
t[:15]
notebook/using_the_models.ipynb
hyperion-rt/paper-2017-sed-models
bsd-2-clause
The model name is a unique name that identifies each model and the viewing angle is indicated in the suffix (e.g. _01). The value of the inclination is also given in the inclination column. The remaining columns give the parameters for the models (which columns are present depends on the model set). The scattering colu...
from sedfitter.sed import SEDCube seds = SEDCube.read('sp--s-i/flux.fits')
This 'SED cube' is an efficient way to store the models fluxes in a single 3D array, where the three dimensions are the model, the aperture, and the wavelength. The model names can be accessed with:
print(seds.names)
while the apertures, wavelengths, and frequencies can be accessed with:
print(seds.apertures) print(seds.wav) print(seds.nu)
A valid flag is used to indicate models that do not have complete/valid SEDs (for example because the model run did not complete):
print(seds.valid)
The fluxes and errors can be obtained using the val and error attributes. We can check the shape of these arrays to check that they are indeed 3D arrays:
seds.val.shape seds.error.shape
For this model set, there are 90000 models (10000 physical models times 9 inclinations), 20 apertures, and 200 wavelengths. To access a specific SED, you can call seds.get_sed using a particular model name:
sed = seds.get_sed('00p13Elr_03')
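The cube layout can also be illustrated with a scaled-down toy array (9 models instead of 90000; the numbers are synthetic, only the axis order matches the description above):

```python
import numpy as np

# Toy stand-in for seds.val with axes (model, aperture, wavelength),
# scaled down to 9 models to keep the array small
n_models, n_apertures, n_wav = 9, 20, 200
val = np.zeros((n_models, n_apertures, n_wav))

one_sed = val[0]                  # all apertures/wavelengths for one model
largest_aperture = val[:, -1, :]  # every model at the largest aperture
print(one_sed.shape)              # (20, 200), the same shape as a single SED
print(largest_aperture.shape)     # (9, 200)
```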
The wavelength, flux, and error can then be accessed with:
print(sed.wav) print(sed.flux) print(sed.error)
The SED is a 2D array with dimensions the number of apertures (20) and the number of wavelengths (200):
sed.flux.shape
We can use this to visualize the SED:
%matplotlib inline import matplotlib.pyplot as plt _ = plt.loglog(sed.wav, sed.flux.transpose(), 'k-', alpha=0.5) _ = plt.ylim(1e-2, 1e8)
Fitting SEDs to data To fit SEDs to observed data, you can also make use of the sedfitter package. What follows is a very short example - for more information on using the sedfitter package, be sure to read over the documentation. To demonstrate this, we will fit the above models to the data for the NGC2264 source mod...
%cat data_ngc2264_20
We start off by setting up the list of filters/wavelengths and approximate aperture radii used:
from astropy import units as u filters = ['BU', 'BB', 'BV', 'BR', 'BI', '2J', '2H', '2K', 'I1', 'I2', 5.580 * u.micron, 7.650 * u.micron, 9.95 * u.micron, 12.93 * u.micron, 17.72 * u.micron, 24.28 * u.micron, 29.95 * u.micron, 35.06 * u.micron, 'M2', 'M3', 'W1', 'W2'] apert...
We also set up the extinction law used in Robitaille (2017):
from sedfitter.extinction import Extinction extinction = Extinction.from_file('whitney.r550.par')
Finally, we run the fitting:
import sedfitter sedfitter.fit('data_ngc2264_20', filters, apertures, 'sp--s-i', 'output_ngc2264_sp--s-i.fitinfo', extinction_law=extinction, distance_range=[0.869, 0.961] * u.kpc, av_range=[0., 40.], output_format=('F', 3.), output_co...
We now generate the SED plots with the data to examine the fit:
sedfitter.plot('output_ngc2264_sp--s-i.fitinfo', output_dir='plots_sed', format='png', plot_mode='A', select_format=('F', 3.), show_convolved=False, show_sed=True, x_mode='M', x_range=(0.1, 2000), y_mode='M', y_range=(1.e-14, 2e-8...
Dataset
def ground_truth(x): return x * np.sin(x) + np.sin(2 * x) def gen_data(n_samples=200): np.random.seed(13) x = np.random.uniform(0, 10, size=n_samples) x.sort() y = ground_truth(x) + 0.75 * np.random.normal(size=n_samples) train_mask = np.random.randint(0, 2, size=n_samples).astype(bool) ...
notebooks/s2-1/Ensembles.ipynb
amorgun/shad-ml-notebooks
unlicense
RF
n_estimators = 1000 rf = RandomForestRegressor(n_estimators=n_estimators, random_state=30) loss = metrics.mean_squared_error rf.fit(X_train, y_train) rf_errors = [] rf_estimators = rf.estimators_ for n in range(1, n_estimators): rf.estimators_ = rf_estimators[:n] rf_errors.append(loss(y_test, rf.predict(X_test)...
GBT $$\tilde{x}^m = \tilde{x}^{m-1} - \lambda_m \nabla f(\tilde{x}^{m-1})$$ $$\tilde{y}^m = \tilde{y}^{m-1} - \lambda_m \nabla Q(\tilde{y}^{m-1}, y)$$ $$b_i = learn(X, -\nabla Q(\tilde{y}^{m-1}, y))$$ Example $$ Q(\tilde{y}^m, y) = \frac12 \sum_{i=1}^L (\tilde{y}i^m - y_i)^2 $$ $$ -\nabla Q(\tilde{y}^m, y) = -\nabla ...
def get_ensemble_errors(clf): clf.fit(X_train, y_train) train_loss , test_loss= [], [] estimators = clf.estimators_ for n in range(1, n_estimators): clf.estimators_ = estimators[:n] train_loss.append(loss(y_train, clf.predict(X_train))) test_loss.append(loss(y_test, clf.predict(X...
1. Random Variable Definition: let the sample space of a random experiment be S = {e}. X = X(e) is a real-valued, single-valued function defined on the sample space S; X = X(e) is called a random variable. Example: toss a coin three times and observe the pattern of heads and tails; the sample space is S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}. Let X denote the total number of heads H obtained in the three tosses. Then for every sample point e in the sample space S = {e} (writing e for the elements of the sample space, and the sample space itself as {e}), X assigns a number. X is a real-valued single-valued function defined on the sample space S; its domain is the sample space S and its range is the set of real numbers {0, 1, 2, 3}. In function notation, X can be written as $...
# Number of heads in 10 coin tosses; repeated 100 times n, p = 10, .5 np.random.binomial(n, p, 100)
Random Variable and its Distribution.ipynb
reata/ProbabilityAndStatistics
mit
A real-life example: a drilling company explores nine wells, each estimated to have a 0.1 probability of success. What is the probability that all nine wells fail? By the formula, with $n = 9, p = 0.1$: $P\{X = 0\} = \binom{9}{0} \cdot 0.1^{0} \cdot 0.9^{9} \approx 0.3874$. We run 20000 trials of this model and compute the frequency of obtaining 0:
sum(np.random.binomial(9, 0.1, 20000) == 0) / 20000
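The simulated frequency can be compared with the exact binomial value, computed directly with the standard library:

```python
from math import comb

n, p = 9, 0.1
# P(X = 0) = C(9, 0) * 0.1^0 * 0.9^9
p_zero = comb(n, 0) * p**0 * (1 - p)**n
print(round(p_zero, 4))  # 0.3874
```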
Increasing the number of trials produces a simulated result even closer to the exact value. 4. Poisson Distribution Suppose the possible values of a random variable X are 0, 1, 2, ..., with probabilities $$P\{X=k\} = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k=0,1,2,\dots,$$ where $\lambda > 0$ is a constant; then $X$ is said to follow a Poisson distribution with parameter $\lambda$, written $X \sim \pi(\lambda)$. Clearly $P\{X=k\} \geq 0, k=0,1,2,\dots$, and $$ \sum_{k=0}^\infty P\{X=k\} = \sum_{k=0}^\infty \frac{\lam...
lb = 5 s = np.random.poisson(lb, 10000) count, bins, ignored = plt.hist(s, 14, density=True)
5. Uniform Distribution If a continuous random variable X has probability density $$ f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b, \\ 0, & \text{otherwise} \end{cases} $$ then X is said to be uniformly distributed on the interval (a, b), written $X \sim U(a, b)$. The numpy.random.uniform function draws samples from the uniform distribution:
# Take a = -1, b = 0, 10000 samples a, b = -1, 0 s = np.random.uniform(a, b, 10000) # All sample values are >= a np.all(s >= a) # All sample values are < b np.all(s < b) # Plot the sample histogram and the density function count, bins, ignored = plt.hist(s, 15, density=True) plt.plot(bins, np.ones_like(bins) / (b - a), linewidth=2, color='r') plt.show()
6. Exponential Distribution If a continuous random variable X has probability density $$ f(x) = \begin{cases} \frac{1}{\theta}e^{-\frac{x}{\theta}}, & x > 0, \\ 0, & \text{otherwise} \end{cases} $$ where $\theta > 0$ is a constant, then X follows an exponential distribution with parameter $\theta$. The numpy.random.exponential function draws samples from the exponential distribution:
# Take theta = 1; plot the sample histogram and the density function theta = 1 f = lambda x: math.e ** (-x / theta) / theta s = np.random.exponential(theta, 10000) count, bins, ignored = plt.hist(s, 100, density=True) plt.plot(bins, f(bins), linewidth=2, color='r') plt.show()
7. Normal Distribution If the probability density of a continuous random variable X is $$ f(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty < x < \infty $$ where $\mu, \sigma$ ($\sigma > 0$) are constants, then X follows a normal, or Gaussian, distribution with parameters $\mu, \sigma$, written $X \sim N(\mu, \sigma^2)$. The graph of f(x) has the following properties: the curve is symmetric about $x = \mu$, which means that for any $h > 0$, $$ P\{\mu - h < X \leq \mu\} =...
# Take mean 0, standard deviation 0.1 mu, sigma = 0, 0.1 s = np.random.normal(mu, sigma, 1000) # Check the mean abs(mu - np.mean(s)) < 0.01 # Check the standard deviation abs(sigma - np.std(s, ddof=1)) < 0.01 # Plot the sample histogram and the density function count, bins, ignored = plt.hist(s, 30, density=True) plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linew...
<img align=left src="files/images/pyspark-page2.svg" width=500 height=500 />
# print Spark version print("pyspark version:" + str(sc.version))
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.map"> <img align=left src="files/images/pyspark-page3.svg" width=500 height=500 /> </a>
# map x = sc.parallelize([1,2,3]) # sc = spark context, parallelize creates an RDD from the passed object y = x.map(lambda x: (x,x**2)) print(x.collect()) # collect copies RDD elements to a list on the driver print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.flatMap"> <img align=left src="files/images/pyspark-page4.svg" width=500 height=500 /> </a>
# flatMap
x = sc.parallelize([1,2,3])
y = x.flatMap(lambda x: (x, 100*x, x**2))
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapPartitions"> <img align=left src="files/images/pyspark-page5.svg" width=500 height=500 /> </a>
# mapPartitions
x = sc.parallelize([1,2,3], 2)

def f(iterator):
    yield sum(iterator)

y = x.mapPartitions(f)
print(x.glom().collect())  # glom() flattens elements on the same partition
print(y.glom().collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapPartitionsWithIndex"> <img align=left src="files/images/pyspark-page6.svg" width=500 height=500 /> </a>
# mapPartitionsWithIndex
x = sc.parallelize([1,2,3], 2)

def f(partitionIndex, iterator):
    yield (partitionIndex, sum(iterator))

y = x.mapPartitionsWithIndex(f)
print(x.glom().collect())  # glom() flattens elements on the same partition
print(y.glom().collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.getNumPartitions"> <img align=left src="files/images/pyspark-page7.svg" width=500 height=500 /> </a>
# getNumPartitions
x = sc.parallelize([1,2,3], 2)
y = x.getNumPartitions()
print(x.glom().collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.filter"> <img align=left src="files/images/pyspark-page8.svg" width=500 height=500 /> </a>
# filter
x = sc.parallelize([1,2,3])
y = x.filter(lambda x: x%2 == 1)  # filters out even elements
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.distinct"> <img align=left src="files/images/pyspark-page9.svg" width=500 height=500 /> </a>
# distinct
x = sc.parallelize(['A','A','B'])
y = x.distinct()
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sample"> <img align=left src="files/images/pyspark-page10.svg" width=500 height=500 /> </a>
# sample
x = sc.parallelize(range(7))
ylist = [x.sample(withReplacement=False, fraction=0.5) for i in range(5)]  # call 'sample' 5 times
print('x = ' + str(x.collect()))
for cnt, y in zip(range(len(ylist)), ylist):
    print('sample:' + str(cnt) + ' y = ' + str(y.collect()))
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.takeSample"> <img align=left src="files/images/pyspark-page11.svg" width=500 height=500 /> </a>
# takeSample
x = sc.parallelize(range(7))
ylist = [x.takeSample(withReplacement=False, num=3) for i in range(5)]  # call 'takeSample' 5 times
print('x = ' + str(x.collect()))
for cnt, y in zip(range(len(ylist)), ylist):
    print('sample:' + str(cnt) + ' y = ' + str(y))  # no collect on y
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.union"> <img align=left src="files/images/pyspark-page12.svg" width=500 height=500 /> </a>
# union
x = sc.parallelize(['A','A','B'])
y = sc.parallelize(['D','C','A'])
z = x.union(y)
print(x.collect())
print(y.collect())
print(z.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.intersection"> <img align=left src="files/images/pyspark-page13.svg" width=500 height=500 /> </a>
# intersection
x = sc.parallelize(['A','A','B'])
y = sc.parallelize(['A','C','D'])
z = x.intersection(y)
print(x.collect())
print(y.collect())
print(z.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sortByKey"> <img align=left src="files/images/pyspark-page14.svg" width=500 height=500 /> </a>
# sortByKey
x = sc.parallelize([('B',1),('A',2),('C',3)])
y = x.sortByKey()
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sortBy"> <img align=left src="files/images/pyspark-page15.svg" width=500 height=500 /> </a>
# sortBy
x = sc.parallelize(['Cat','Apple','Bat'])

def keyGen(val):
    return val[0]

y = x.sortBy(keyGen)
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.glom"> <img align=left src="files/images/pyspark-page16.svg" width=500 height=500 /> </a>
# glom
x = sc.parallelize(['C','B','A'], 2)
y = x.glom()
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.cartesian"> <img align=left src="files/images/pyspark-page17.svg" width=500 height=500 /> </a>
# cartesian
x = sc.parallelize(['A','B'])
y = sc.parallelize(['C','D'])
z = x.cartesian(y)
print(x.collect())
print(y.collect())
print(z.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupBy"> <img align=left src="files/images/pyspark-page18.svg" width=500 height=500 /> <
# groupBy
x = sc.parallelize([1,2,3])
y = x.groupBy(lambda x: 'A' if (x%2 == 1) else 'B')
print(x.collect())
print([(j[0], [i for i in j[1]]) for j in y.collect()])  # y is nested, this iterates through it
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.pipe"> <img align=left src="files/images/pyspark-page19.svg" width=500 height=500 /> </a>
# pipe
x = sc.parallelize(['A', 'Ba', 'C', 'AD'])
y = x.pipe('grep -i "A"')  # calls out to grep, may fail under Windows
print(x.collect())
print(y.collect())
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foreach"> <img align=left src="files/images/pyspark-page20.svg" width=500 height=500 /> </a>
# foreach
from __future__ import print_function
x = sc.parallelize([1,2,3])

def f(el):
    '''side effect: append the current RDD elements to a file'''
    f1 = open("./foreachExample.txt", 'a+')
    print(el, file=f1)

open('./foreachExample.txt', 'w').close()  # first clear the file contents
y = x.foreach(f)  # writes ...
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foreachPartition"> <img align=left src="files/images/pyspark-page21.svg" width=500 height=500 /> </a>
# foreachPartition
from __future__ import print_function
x = sc.parallelize([1,2,3], 5)

def f(partition):
    '''side effect: append the current RDD partition contents to a file'''
    f1 = open("./foreachPartitionExample.txt", 'a+')
    print([el for el in partition], file=f1)

open('./foreachPartitionExample.txt', 'w').cl...
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.collect"> <img align=left src="files/images/pyspark-page22.svg" width=500 height=500 /> </a>
# collect
x = sc.parallelize([1,2,3])
y = x.collect()
print(x)  # distributed
print(y)  # not distributed
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduce"> <img align=left src="files/images/pyspark-page23.svg" width=500 height=500 /> </a>
# reduce
x = sc.parallelize([1,2,3])
y = x.reduce(lambda obj, accumulated: obj + accumulated)  # computes a cumulative sum
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.fold"> <img align=left src="files/images/pyspark-page24.svg" width=500 height=500 /> </a>
# fold
x = sc.parallelize([1,2,3])
neutral_zero_value = 0  # 0 for sum, 1 for multiplication
y = x.fold(neutral_zero_value, lambda obj, accumulated: accumulated + obj)  # computes cumulative sum
print(x.collect())
print(y)
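One gotcha with `fold`: the zero value is applied once per partition and once more when the partition results are merged, so a non-neutral zero value makes the result depend on the number of partitions. The sketch below simulates this behavior in plain Python (no Spark required); `simulate_fold` is a hypothetical helper, not a PySpark API:

```python
from functools import reduce

# simulate how RDD.fold applies the zero value: once at the start of
# each partition, and once more when combining the partition results
def simulate_fold(partitions, zero, op):
    per_partition = [reduce(op, part, zero) for part in partitions]
    return reduce(op, per_partition, zero)

partitions = [[1], [2, 3]]  # the RDD [1,2,3] split into 2 partitions
print(simulate_fold(partitions, 0, lambda acc, el: acc + el))   # -> 6
print(simulate_fold(partitions, 10, lambda acc, el: acc + el))  # -> 36 (10 added 3 times)
```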
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.aggregate"> <img align=left src="files/images/pyspark-page25.svg" width=500 height=500 /> </a>
# aggregate
x = sc.parallelize([2,3,4])
neutral_zero_value = (0,1)  # sum: x+0 = x, product: 1*x = x
seqOp = (lambda aggregated, el: (aggregated[0] + el, aggregated[1] * el))
combOp = (lambda aggregated, el: (aggregated[0] + el[0], aggregated[1] * el[1]))
y = x.aggregate(neutral_zero_value, seqOp, combOp)  # computes (cu...
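`aggregate` follows the same partition-wise pattern: `seqOp` folds the elements within each partition starting from the zero value, and `combOp` merges the per-partition results. A plain-Python sketch (no Spark required; `simulate_aggregate` is a hypothetical helper, not a PySpark API):

```python
from functools import reduce

# simulate RDD.aggregate: seqOp runs within each partition,
# combOp merges the per-partition results (zero value used in both)
def simulate_aggregate(partitions, zero, seq_op, comb_op):
    per_partition = [reduce(seq_op, part, zero) for part in partitions]
    return reduce(comb_op, per_partition, zero)

partitions = [[2], [3, 4]]  # the RDD [2,3,4] split into 2 partitions
zero = (0, 1)               # (additive identity, multiplicative identity)
seq_op = lambda agg, el: (agg[0] + el, agg[1] * el)
comb_op = lambda a, b: (a[0] + b[0], a[1] * b[1])
print(simulate_aggregate(partitions, zero, seq_op, comb_op))  # -> (9, 24)
```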
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.max"> <img align=left src="files/images/pyspark-page26.svg" width=500 height=500 /> </a>
# max
x = sc.parallelize([1,3,2])
y = x.max()
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.min"> <img align=left src="files/images/pyspark-page27.svg" width=500 height=500 /> </a>
# min
x = sc.parallelize([1,3,2])
y = x.min()
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sum"> <img align=left src="files/images/pyspark-page28.svg" width=500 height=500 /> </a>
# sum
x = sc.parallelize([1,3,2])
y = x.sum()
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.count"> <img align=left src="files/images/pyspark-page29.svg" width=500 height=500 /> </a>
# count
x = sc.parallelize([1,3,2])
y = x.count()
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.histogram"> <img align=left src="files/images/pyspark-page30.svg" width=500 height=500 /> </a>
# histogram (example #1)
x = sc.parallelize([1,3,1,2,3])
y = x.histogram(buckets = 2)
print(x.collect())
print(y)

# histogram (example #2)
x = sc.parallelize([1,3,1,2,3])
y = x.histogram([0,0.5,1,1.5,2,2.5,3,3.5])
print(x.collect())
print(y)
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mean"> <img align=left src="files/images/pyspark-page31.svg" width=500 height=500 /> </a>
# mean
x = sc.parallelize([1,3,2])
y = x.mean()
print(x.collect())
print(y)