The main abstract base class is the following one:

```python
class SerialPort:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def isOpen(self):
        pass

    @abc.abstractmethod
    def readline(self):
        pass

    @abc.abstractmethod
    def close(self):
        pass

    @abc.abstractmethod
    def get_port(self):
        return ""

    @abc.abstractmethod
    def ...
```
import hit.serial.serial_port

port = ""
baud = 0
dummySerialPort = hit.serial.serial_port.DummySerialPort(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
The DummySerialPort is very simple. It just says "Hee" (after a few seconds) when its method "readline()" is called. Port and baud are unused here.
print dummySerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
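Purely as a hedged illustration of the interface above, a concrete subclass along these lines would satisfy the abstract methods; the repo's actual DummySerialPort implementation may differ:

```python
import time

# Hypothetical sketch (not the repo's code): a minimal concrete subclass
# of the SerialPort ABC shown above, mimicking the behaviour described.
class MyDummyPort(SerialPort):
    def __init__(self, port, baud):
        self.port = port  # unused, as noted above
        self.baud = baud  # unused, as noted above

    def isOpen(self):
        return True

    def readline(self):
        time.sleep(2)  # "after a few seconds"
        return "Hee"

    def close(self):
        pass

    def get_port(self):
        return self.port
```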
Let's create a more interesting SerialPort instance.
import hit.serial.serial_port

port = ""
baud = 0
emulatedSerialPort = hit.serial.serial_port.ATTEmulatedSerialPort(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
The ATTEmulatedSerialPort will emulate a real ATT serial port reading. Port and baud are unused here.
print emulatedSerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
Using a Builder

Let's use a builder now. We can choose the builder we want and build as many SerialPorts as we want.
import hit.serial.serial_port_builder

builder = hit.serial.serial_port_builder.ATTEmulatedSerialPortBuilder()
port = ""
baud = 0

emulatedSerialPort1 = builder.build_serial_port(port, baud)
emulatedSerialPort2 = builder.build_serial_port(port, baud)
emulatedSerialPort3 = builder.build_serial_port(port, baud)
emulatedSeri...
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
And call "readline()":
print emulatedSerialPort5.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
There is a special serial port abstraction that is fed from a file. This is useful when we want to "mock" the serial port and give it previously stored readings. This is interesting, for example, in order to reproduce or visualize the repetition of an interesting set of hits in a game. Becaus...
!head -10 train_points_import_data/arduino_raw_data.txt

import hit.serial.serial_port_builder

builder = hit.serial.serial_port_builder.ATTHitsFromFilePortBuilder()
port = "train_points_import_data/arduino_raw_data.txt"
baud = 0
fileSerialPort = builder.build_serial_port(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
And now we will read some lines:
for i in range(20):
    print fileSerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
Use Keras Model Zoo
# https://www.tensorflow.org/api_docs/python/tf/keras/applications/
#from tensorflow.keras.preprocessing import image as keras_preprocessing_image
from tensorflow.keras.preprocessing import image as keras_preprocessing_image
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Architecture Choices: NASNet cell structure. Ensure we have the model loaded.
#from tensorflow.python.keras.applications.nasnet import NASNetLarge, preprocess_input
#model = NASNetLarge(weights='imagenet', include_top=False)  # 343,608,736

from tensorflow.keras.applications.nasnet import NASNetMobile, preprocess_input, decode_predictions
model_imagenet = NASNetMobile(weights='imagenet', includ...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Build the model and select the layers we need; the features are taken from the final network layer, before the softmax nonlinearity.
def image_to_input(model, img_path):
    target_size = model.input_shape[1:]
    img = keras_preprocessing_image.load_img(img_path, target_size=target_size)
    x = keras_preprocessing_image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

def get_single_prediction(img_p...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
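The get_single_prediction cell above is cut off. A plausible completion, under the assumption that it simply feeds the preprocessed input through model_imagenet and decodes the top ImageNet matches, is:

```python
# Hypothetical completion (assumption, not the notebook's exact code):
# run one image through the pretrained model and decode its predictions.
def get_single_prediction(img_path):
    x = image_to_input(model_imagenet, img_path)
    preds = model_imagenet.predict(x)
    return decode_predictions(preds, top=5)[0]
```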
Transfer Learning

Now we'll work with the layer 'just before' the final (ImageNet) classification layer.
#model_logits = NASNetMobile(weights='imagenet', include_top=False, pooling=None)  # 19,993,200 bytes
#logits_layer = model_imagenet.get_layer('global_average_pooling2d_1')
logits_layer = model_imagenet.get_layer('predictions')
model_logits = keras.Model(inputs=model_imagenet.input, output...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Use the Network to create 'features' for the training images

Now go through the input images and feature-ize them at the 'logit level' according to the pretrained network. NB: The pretraining was done on ImageNet - ...
#writer = tf.summary.FileWriter(logdir='../tensorflow.logdir/', graph=tf.get_default_graph())
#writer.flush()
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Handy cropping function
def crop_middle_square_area(np_image):
    h, w, _ = np_image.shape
    h = int(h/2)
    w = int(w/2)
    if h > w:
        return np_image[h-w:h+w, :]
    return np_image[:, w-h:w+h]

im_sq = crop_middle_square_area(im)
im_sq.shape

def get_logits_from_non_top(np_logits):
    # ~ average pooling
    #return np_lo...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Use folder names to imply classes for Training Set
classes = sorted([d for d in os.listdir(CLASS_DIR) if os.path.isdir(os.path.join(CLASS_DIR, d))])
classes  # Sorted for consistency

train = dict(filepath=[], features=[], target=[])

t0 = time.time()
for class_i, directory in enumerate(classes):
    for filename in os.listdir(os.path.join(CLASS_DIR, directory)...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Build an SVM model over the features
from sklearn import svm

classifier = svm.LinearSVC()
classifier.fit(train['features'], train['target'])  # learn from the data
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
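Not part of the original notebook, but a quick sanity check of the fitted classifier can be obtained with scikit-learn's score method:

```python
# Sanity check (addition, not from the notebook): mean accuracy of the SVM
# on its own training features.
print(classifier.score(train['features'], train['target']))
```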
Use the SVM model to classify the test set
test_image_files = [f for f in os.listdir(CLASS_DIR) if not os.path.isdir(os.path.join(CLASS_DIR, f))]

t0 = time.time()
for filename in sorted(test_image_files):
    filepath = os.path.join(CLASS_DIR, filename)
    im = plt.imread(filepath)
    im_sq = crop_middle_square_area(im)
    # This is two ops : one merely lo...
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Algorithm for processing chunks:
- Make a partition given the extent
- Produce a tuple (minx, maxx, miny, maxy) for each element of the partition
- Calculate the semivariogram for each chunk and save it in a dataframe
- Plot everything
- Do the same with a Matern kernel
minx, maxx, miny, maxy = getExtent(new_data)
maxy

## If a fixed number of chunks is preferred
N = 100
xp, dx = np.linspace(minx, maxx, N, retstep=True)
yp, dy = np.linspace(miny, maxy, N, retstep=True)

### Distance interval
print(dx)
print(dy)

## Let's build the partition
## If a fixed chunk size is preferred
ds = 300000  # step siz...
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
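A minimal sketch of the second step of the algorithm above, building one (minx, maxx, miny, maxy) tuple per grid cell from the xp/yp partitions (the chunk_extents name is illustrative):

```python
# Illustrative sketch: one bounding-box tuple per cell of the partition
# defined by the np.linspace calls above.
chunk_extents = [(xp[i], xp[i + 1], yp[j], yp[j + 1])
                 for i in range(len(xp) - 1)
                 for j in range(len(yp) - 1)]
print(len(chunk_extents))  # (N-1)**2 cells for the fixed-N partition
```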
For efficiency, we restrict ourselves to 10 variograms.
smaller_list = chunks_non_empty[:10]
variograms = map(lambda chunk: tools.Variogram(chunk, 'residuals1', using_distance_threshold=200000), smaller_list)
vars = map(lambda v: v.calculateEmpirical(), variograms)
vars = map(lambda v: v.calculateEnvelope(num_iterations=50), variograms)
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
Take an average of the empirical variograms, together with the envelope. We will use the group-by directive on the field lags.
envslow = pd.concat(map(lambda df: df[['envlow']], vars), axis=1)
envhigh = pd.concat(map(lambda df: df[['envhigh']], vars), axis=1)
variogram = pd.concat(map(lambda df: df[['variogram']], vars), axis=1)
lags = vars[0][['lags']]

meanlow = list(envslow.apply(lambda row: np.mean(row), axis=1))
meanhigh = list(envhigh.appl...
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
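The cell above is truncated. A plausible completion, assuming the cut-off lines mirror the meanlow computation for the upper envelope and the empirical variograms and then plot the averages, might read:

```python
import matplotlib.pyplot as plt

# Hypothetical completion (assumption: mirrors the meanlow line above).
meanhigh = list(envhigh.apply(lambda row: np.mean(row), axis=1))
meanvariogram = list(variogram.apply(lambda row: np.mean(row), axis=1))

plt.plot(lags, meanvariogram, label='mean empirical variogram')
plt.plot(lags, meanlow, '--', label='mean envelope (low)')
plt.plot(lags, meanhigh, '--', label='mean envelope (high)')
plt.legend()
plt.show()
```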
Summer 2015

Data model

Task: Write a query that returns the data of all customers, the number of their orders, the number of their trips, and the sum of the route kilometres. The output should be sorted by customer postal code (Kunden-PLZ) in descending order.

Solution
%%sql
select count(*) as AnzahlFahrten from fahrten
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Why does a join not work?
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
       count(distinct a.Au_ID) as AnzAuftrag,
       count(distinct f.f_id) as AnzFahrt,
       sum(distinct ts.ts_strecke) as SumStrecke
from kunde k
left join auftrag a on k.`kd_id` = a.`au_kd_id`
left join fahrten f on a.`au_id` = f.`f_au_id`
left join teilst...
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
The join approach does not work in this form because, at the latest with the 2nd join, the company Trappo is matched with 2 records from the 1st join, which is why the number of trips is doubled as well. The same repeats with the 3rd join. The following query shows, without the aggregate functions, the respective base resul...
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`
from kunde k
left join auftrag a on k.`kd_id` = a.`au_kd_id`
left join fahrten f on a.`au_id` = f.`f_au_id`
left join teilstrecke ts on ts.`ts_f_id` = f.`f_id`
order by k.`kd_plz`
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Winter 2015

Data model. Note: Rechnung additionally contains a field Rechnung.Kd_ID.

Task: Write an SQL query that lists all customers that have a payment term (Zahlungsbedingung) with a cash discount rate greater than 3%, together with the number of their stored invoices from the year 2015. ...
%sql mysql://steinam:steinam@localhost/winter_2015
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
```mysql
select count(rechnung.Rg_ID), kunde.Kd_Name
from rechnung
inner join kunde on rechnung.Rg_KD_ID = kunde.Kd_ID
inner join zahlungsbedingung on kunde.Kd_Zb_ID = zahlungsbedingung.Zb_ID
where zahlungsbedingung.Zb_SkontoProzent > 3.0
  and year(rechnung.Rg_Datum) = 2015
group by Kunde.Kd_Name
```
%%sql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name`
from rechnung
inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID`
inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
  and year(`rechnung`.`Rg_Dat...
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
It also works with a subselect:

```mysql
select kd.Kd_Name,
       (select COUNT(*) from Rechnung as R
        where R.Rg_KD_ID = KD.Kd_ID and year(R.`Rg_Datum`) = 2015)
from Kunde kd
inner join `zahlungsbedingung` on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
  and zahlungsbedingung.Zb_SkontoProzent > 3.0
```
%%sql
select kd.`Kd_Name`,
       (select COUNT(*) from Rechnung as R
        where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl
from Kunde kd
inner join `zahlungsbedingung` on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
  and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Versicherung

For each employee of the "Vertrieb" (sales) department, show the first contract (with some details) that they concluded. The employee should be shown with ID and surname/first name.

Data model: Versicherung
%sql -- your code goes here
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Solution
%sql mysql://steinam:steinam@localhost/versicherung_complete

%%sql
select min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`
from `versicherungsvertrag` vv
inner join mitarbeiter m on vv.`Mitarbeiter_ID` = m.`ID`
where vv.`Mitarbeiter_ID` in (
    select m.`ID`
    from mitarbeiter m
    inner join...
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Data source is http://www.quandl.com. We use blaze to store data.
import Quandl
import blaze as bz

with open('../.quandl_api_key.txt', 'r') as f:
    api_key = f.read()

db = Quandl.get("EOD/DB", authtoken=api_key)
bz.odo(db['Rate'].reset_index(), '../data/db.bcolz')

fx = Quandl.get("CURRFX/EURUSD", authtoken=api_key)
bz.odo(fx['Rate'].reset_index(), '../data/eurusd.bcolz')
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
We can also migrate it to a SQLite database.
bz.odo('../data/db.bcolz', 'sqlite:///osqf.db::db')

%load_ext sql

%%sql sqlite:///osqf.db
select * from db
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
We can perform queries:
d = bz.Data('../data/db.bcolz')
d.Close.max()
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
AutoML SDK: AutoML video classification model

Installation

Install the latest (preview) version of the AutoML SDK.
! pip3 install -U google-cloud-automl --user
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the Google cloud-storage library as well.
! pip3 install google-cloud-storage
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the Kernel

Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
import os

if not os.getenv("AUTORUN"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin

GPU run-time: Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU.

Set up your GCP project

The following steps are required, regardless of your notebook environment. Select or create a GCP project. When you first create a...
PROJECT_ID = "[your-project-id]"  #@param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", ...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region

You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are the regions supported for AutoML; we recommend choosing the region closest to you when possible.

Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1

You cannot use a Multi-Region...
REGION = 'us-central1' #@param {type: "string"}
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, create a timestamp for each instance session and append it to the names of the resources you create in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your GCP account

If you are using AutoML Notebooks, your environment is already authenticated; skip this step. Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on AutoML, then don't execute this code
if not ...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket

The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have sto...
BUCKET_NAME = "[your-bucket-name]"  #@param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION gs://$BUCKET_NAME
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al gs://$BUCKET_NAME
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables

Next, set up some variables used throughout the tutorial.

Import libraries and define constants

Import AutoML SDK

Import the AutoML SDK into our Python environment.
import json
import os
import sys
import time

from google.cloud import automl_v1beta1 as automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
AutoML constants

Set up the following constants for AutoML:

PARENT: The AutoML location root path for dataset, model and endpoint resources.
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients

The AutoML SDK works as a client/server model. On your side (the Python script) you create a client that sends requests to and receives responses from the server (AutoML). You will use several clients in this tutorial, so set them all up upfront.
def automl_client():
    return automl.AutoMlClient()

def prediction_client():
    return automl.PredictionServiceClient()

def operations_client():
    return automl.AutoMlClient()._transport.operations_client

clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["opera...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

TRAIN,gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv
TEST,gs://automl-video-demo-data/hmdb_split1_5classes_test_inf.csv

Create a dataset

projects.locations.datasets.create

Request
dataset = {
    "display_name": "hmdb_" + TIMESTAMP,
    "video_classification_dataset_metadata": {}
}

print(MessageToJson(
    automl.CreateDatasetRequest(
        parent=PARENT,
        dataset=dataset
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "parent": "projects/migration-ucaip-training/locations/us-central1",
  "dataset": {
    "displayName": "hmdb_20210228225744",
    "videoClassificationDatasetMetadata": {}
  }
}

Call
request = clients["automl"].create_dataset( parent=PARENT, dataset=dataset )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request

print(MessageToJson(result.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "projects/116273516712/locations/us-central1/datasets/VCN6574174086275006464",
  "displayName": "hmdb_20210228225744",
  "createTime": "2021-02-28T23:06:43.197904Z",
  "etag": "AB3BwFrtf0Yl4fgnXW4leoEEANTAGQdOngyIqdQSJBT9pKEChgeXom-0OyH7dKtfvA4=",
  "videoClassificationDatasetMetadata": {}
}
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]

print(dataset_id)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.datasets.importData

Request
input_config = {
    "gcs_source": {
        "input_uris": [IMPORT_FILE]
    }
}

print(MessageToJson(
    automl.ImportDataRequest(
        name=dataset_short_id,
        input_config=input_config
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "VCN6574174086275006464",
  "inputConfig": {
    "gcsSource": {
      "inputUris": [
        "gs://automl-video-demo-data/hmdb_split1.csv"
      ]
    }
  }
}

Call
request = clients["automl"].import_data( name=dataset_id, input_config=input_config )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request.result()

print(MessageToJson(result))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{}

Train a model

projects.locations.models.create

Request
model = {
    "display_name": "hmdb_" + TIMESTAMP,
    "dataset_id": dataset_short_id,
    "video_classification_model_metadata": {}
}

print(MessageToJson(
    automl.CreateModelRequest(
        parent=PARENT,
        model=model
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "parent": "projects/migration-ucaip-training/locations/us-central1",
  "model": {
    "displayName": "hmdb_20210228225744",
    "datasetId": "VCN6574174086275006464",
    "videoClassificationModelMetadata": {}
  }
}

Call
request = clients["automl"].create_model( parent=PARENT, model=model )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request.result()

print(MessageToJson(result.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648"
}
# The full unique ID for the model
model_id = result.name
# The short numeric ID for the model
model_short_id = model_id.split('/')[-1]

print(model_short_id)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Evaluate the model

projects.locations.models.modelEvaluations.list

Call
request = clients["automl"].list_model_evaluations( parent=model_id )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
import json

model_evaluations = [
    json.loads(MessageToJson(me.__dict__["_pb"])) for me in request
]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name

print(json.dumps(model_evaluations, indent=2))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

```
[
  {
    "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266",
    "createTime": "2021-03-01T01:02:02.452298Z",
    "evaluatedExampleCount": 150,
    "classificationEvaluationMetrics": {
      "auPrc": 1.0,
      "confidenceMetricsE...
```
request = clients["automl"].get_model_evaluation( name=evaluation_slice )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
print(MessageToJson(request.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

```
{
  "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266",
  "createTime": "2021-03-01T01:02:02.452298Z",
  "evaluatedExampleCount": 150,
  "classificationEvaluationMetrics": {
    "auPrc": 1.0,
    "confidenceMetricsEntry": [
      ...
```
TRAIN_FILES = "gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv"

test_items = ! gsutil cat $TRAIN_FILES | head -n2

cols = str(test_items[0]).split(',')
test_item_1, test_label_1, test_start_1, test_end_1 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])
print(test_item_1, test_label_1)

cols = s...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi cartwheel
gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi cartwheel
import tensorflow as tf
import json

gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
    data = f"{test_item_1}, {test_start_1}, {test_end_1}"
    f.write(data + '\n')
    data = f"{test_item_2}, {test_start_2}, {test_end_2}"
    f.write(data + '\n')

print(gcs_i...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

gs://migration-ucaip-trainingaip-20210228225744/test.csv
gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi, 0.0, inf
gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi, 0.0, inf

projects.locations.models.batchPredict

Reques...
input_config = {
    "gcs_source": {
        "input_uris": [gcs_input_uri]
    }
}

output_config = {
    "gcs_destination": {
        "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
    }
}

batch_prediction = automl.BatchPredictRequest(
    name=model_id,
    input_config=input_config,
    output_config...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648",
  "inputConfig": {
    "gcsSource": {
      "inputUris": [
        "gs://migration-ucaip-trainingaip-20210228225744/test.csv"
      ]
    }
  },
  "outputConfig": {
    "gcsDestination": {
      "outputUriPrefix": "...
request = clients["prediction"].batch_predict( request=batch_prediction )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{}

Cleaning up

To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial.
delete_dataset = True
delete_model = True
delete_bucket = True

# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
    if delete_dataset:
        clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the model using the AutoML fully qualified ...
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Class

A class is a blueprint defining the characteristics and behaviors of an object.

```python
class MyClass:
    ...
    ...
```

For a simple class, one defines an instance method `__init__()` to handle the variables needed when the object is created. Let's try the following example:
class Person:
    def __init__(self, age, salary):
        self.age = age
        self.salary = salary

    def out(self):
        print(self.age)
        print(self.salary)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
This is a basic class definition; the age and salary are needed when creating this object. The new class can be invoked like this:
a = Person(30, 10000)
a.out()
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
The `__init__` method initializes the variables stored in the class. When they are referenced inside the class, we add `self.` in front of the variable name. The `out(self)` method is an arbitrary function that can be used by calling `Yourclass.yourfunction()`. Inputs to such functions can be added after the `self` argument.

Python Conditio...
# make a list
students = ['boy', 'boy', 'girl', 'boy', 'girl', 'girl',
            'boy', 'boy', 'girl', 'girl', 'boy', 'boy']
boys = 0; girls = 0

for s in students:
    if s == 'boy':
        boys = boys + 1
    else:
        girls += 1

print("boys:", boys)
print("girls:", girls)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
The While statement

The while statement reads `while CONDITIONAL:`. CONDITIONAL is a conditional statement, like `i < 100`, or a boolean variable. After this line, the user should indent the loop body, using either spaces or tabs.
def int_sum(n):
    s = 0; i = 1
    while i < n:
        s += i*i
        i += 1
    return s

int_sum(1000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Performance
%timeit int_sum(100000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
<img src="images/numba-blue-horizontal-rgb.svg" alt="numba" style="width: 600px;"/> <img src="images/numba_features.png" alt="numba" style="width: 600px;"/> Numba translates Python functions to optimized machine code at runtime using the LLVM compiler library. Your functions will be translated to c-code during declarat...
import numba

@numba.njit
def int_sum_nb(n):
    s = 0; i = 1
    while i < n:
        s += i*i
        i += 1
    return s

int_sum_nb(1000)

%timeit int_sum_nb(100000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Examples
import random

def monte_carlo_pi(n):
    acc = 0
    for i in range(n):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / n

monte_carlo_pi(1000000)

%timeit monte_carlo_pi(1000000)

@numba.njit
def monte_carlo_pi_nb(n):
    acc = 0
    f...
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Data Clean up

In the last section of looking around, I saw that a lot of rows do not have any values or have garbage values (see the first row of the table above). This can cause errors when computing anything using the values in these rows, hence a clean-up is required. If the columns last_activity and first_login are em...
# Let's check the health of the data set
user_data_to_clean.info()
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
As is visible from the last column's (age_on_platform) data type, Pandas is not recognising it as a date type. This will make things difficult, so I delete this particular column and add a new one, since the data in age_on_platform can be recreated as age_on_platform = last_activity - first_login. But on eyeb...
# Run a loop through the data frame and check each row for this anomaly; if found, drop it.
# This is being done ONLY for selected columns.
import datetime

swapped_count = 0
first_login_count = 0
last_activity_count = 0
email_count = 0
userid_count = 0

for index, row in user_data_to_clean.iterrows():
    if r...
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
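The recreation described above (age_on_platform = last_activity - first_login) could look like the following sketch, assuming both columns parse cleanly as datetimes:

```python
import pandas as pd

# Sketch of the column recreation (assumption: last_activity and
# first_login are parseable as datetimes).
user_data_to_clean['last_activity'] = pd.to_datetime(user_data_to_clean['last_activity'])
user_data_to_clean['first_login'] = pd.to_datetime(user_data_to_clean['first_login'])
user_data_to_clean['age_on_platform'] = (user_data_to_clean['last_activity']
                                         - user_data_to_clean['first_login'])
```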
Validate that each email ID is correctly formatted and that the email address really exists.
from validate_email import validate_email

email_count_invalid = 0
for index, row in user_data_to_clean.iterrows():
    if not validate_email(row.email):  # , verify=True) to check if the email address actually exists
        user_data_to_clean.drop(index, inplace=True)
        email_count_invalid = emai...
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Remove duplicates
user_data_to_deDuplicate = user_data_to_clean.copy()
user_data_deDuplicateD = user_data_to_deDuplicate.loc[~user_data_to_deDuplicate.email.str.strip().duplicated()]
len(user_data_deDuplicateD)

user_data_deDuplicateD.info()

# Now it's time to convert the timedelta64 data type column named age_on_platform to seconds de...
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Clustering using Mean shift

```python
from sklearn.cluster import MeanShift, estimate_bandwidth

x = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]
x = pd.Series(user_data_deDuplicateD_timedelta64_converted['age_on_platform'])
X = np.array(zip(x, np.zeros(len(x))), dtype=np.int)

'''--
bandwidth = estimat...
```
# Clustering using Kmeans, not working
'''
y = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]
y_float = map(float, y)

x = range(len(y))
x_float = map(float, x)

m = np.matrix([x_float, y_float]).transpose()

from scipy.cluster.vq import kmeans
kclust = kmeans(m, 5)

kclust[0][:, 0]
assigned...
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Binning based on age_on_platform: day 1; day 2; week 1; week 2; week 3; week 4; week 6; week 8; week 12; 3 months; 6 months; 1 year.
user_data_binned = user_data_deDuplicateD_timedelta64_converted.copy()

# function to convert age_on_platform in seconds to hours
convert_sec_to_hr = lambda x: x/3600
user_data_binned["age_on_platform"] = user_data_binned['age_on_platform'].map(convert_sec_to_hr).copy()

# filter rows based on first_...
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
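A minimal sketch of the binning step itself, assuming age_on_platform is now in hours and using bin edges that mirror the list above (the edges and labels are illustrative, not the notebook's exact choices):

```python
import numpy as np
import pandas as pd

# Illustrative bin edges in hours, mirroring the named bins above.
edges = [0, 24, 48, 7*24, 14*24, 21*24, 28*24, 42*24, 56*24, 84*24,
         3*30*24, 6*30*24, 365*24, np.inf]
labels = ['day 1', 'day 2', 'week 1', 'week 2', 'week 3', 'week 4',
          'week 6', 'week 8', 'week 12', '3 months', '6 months',
          '1 year', '1 year+']
user_data_binned['bin'] = pd.cut(user_data_binned['age_on_platform'],
                                 bins=edges, labels=labels)
```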
TensorFlow Addons Optimizers: LazyAdam
!pip install -U tensorflow-addons

import tensorflow as tf
import tensorflow_addons as tfa

# Hyperparameters
batch_size = 64
epochs = 10
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Build the Model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare the Data
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Train and Evaluate

Simply replace typical Keras optimizers with the new TFA optimizer.
# Compile the model
model.compile(
    optimizer=tfa.optimizers.LazyAdam(0.001),  # Utilize TFA optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

# Train the network
history = model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs)

# Evaluate the...
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Preprocessing
import os

from skimage import io
from skimage.color import rgb2gray
from skimage import transform

from math import ceil

IMGSIZE = (100, 100)

def load_images(folder, scalefactor=(2, 2), labeldict=None):
    images = []
    labels = []
    files = os.listdir(folder)
    for file in (fname for fname in files if f...
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
Training LeNet

First, we will train a simple CNN with a single hidden fully connected layer as a classifier.
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.backend import image_data_format

def generate(figsize, nr_classes, cunits=[20, 50], fcunits=[500]):
    model = Sequential()
    cunits = list(cunits)
    input_shape = figsize + (1,)
    if image_data_format == '...
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
Training Random Forests

Preprocess to a fixed-size training set, since sklearn doesn't support streaming training sets.
# Same size training set as LeNet
TRAININGSET_SIZE = len(x_train) * 5 * 100

batch_size = len(x_train)
nr_batches = TRAININGSET_SIZE // batch_size + 1

imgit = imggen.flow(x_train, y=y_train, batch_size=batch_size)

x_train_sampled = np.empty((TRAININGSET_SIZE, 1,) + x_train.shape[-2:])
y_train_sampled = np.empty(TRAININ...
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
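The notebook's Random Forest fitting code is cut off. A hedged sketch of fitting scikit-learn's RandomForestClassifier on the sampled raw-pixel set (hyperparameters are placeholders; if y_train_sampled is one-hot encoded, take an argmax first) could be:

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch, not the notebook's actual cut-off code: flatten each
# sampled image to a feature vector and fit a forest on the raw pixels.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
rf.fit(x_train_sampled.reshape(len(x_train_sampled), -1), y_train_sampled)
```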
So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier).
model = load_model('aps_lenet.h5')

enc_layers = it.takewhile(lambda l: not isinstance(l, keras.layers.Flatten), model.layers)
encoder_model = keras.models.Sequential(enc_layers)
encoder_model.add(keras.layers.Flatten())

x_train_sampled_enc = encoder_model.predict(x_train_sampled, verbose=True...
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, a list of all Wikipedia articles, or all tweets by a particular person of interest. After collecting our corpus, there are typically a number of preprocessing steps we want ...
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))

# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in raw_corpus]

# Count word frequencies
from collections im...
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
from gensim import corpora

dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.

Vector

To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach...
print(dictionary.token2id)
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
The first entry in each tuple corresponds to the ID of the token in the dictionary; the second corresponds to the count of this token. Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually ap...
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details. Model Now that we have vectorized our corpus we can begin to transform ...
from gensim import models

# train the model
tfidf = models.TfidfModel(bow_corpus)

# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
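As noted above, gensim accepts any iterator that yields one document vector at a time. A minimal memory-friendly corpus might look like this sketch, where 'mycorpus.txt' is an illustrative file name with one document per line:

```python
# Illustrative sketch of a streaming corpus: yields one bag-of-words
# vector at a time instead of holding the whole corpus in memory.
class StreamingCorpus(object):
    def __init__(self, path, dictionary):
        self.path = path            # one document per line (assumption)
        self.dictionary = dictionary

    def __iter__(self):
        for line in open(self.path):
            yield self.dictionary.doc2bow(line.lower().split())

# Usage sketch: stream_corpus = StreamingCorpus('mycorpus.txt', dictionary)
```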
💣 Note: The version available on PyPI might not be the latest version of DPPy. Please consider forking or cloning DPPy using
# !rm -r DPPy
# !git clone https://github.com/guilgautier/DPPy.git
# !pip install scipy --upgrade
# Then
# !pip install DPPy/.
# OR
# !pip install DPPy/.['zonotope','trees','docs'] to perform a full installation.
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
💣 If you have chosen to clone the repo and now wish to interact with the source code while running this notebook, you can uncomment the following cell.
%load_ext autoreload
%autoreload 2

import os
import sys
sys.path.insert(0, os.path.abspath('..'))

import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'

from dppy.beta_ensemble_polynomial_potential import BetaEnsemblePolynomialPotential
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
💻 You can play with the various parameters, e.g., $N, \beta, V$, nb_gibbs_passes 💻

$V(x) = g_{2m} x^{2m}$

We first consider even monomial potentials whose equilibrium distribution can be derived from [Dei00, Proposition 6.156].

$V(x) = \frac{1}{2} x^2$ (Hermite ensemble)

This is the potential associated to the Hermite...
beta, V = 2, np.poly1d([0.5, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x2 = be.sample_mcmc(N=1000,
                          nb_gibbs_passes=1,
                          sample_exact_cond=True)

be.hist(sampl_x2)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = \frac{1}{4} x^4$

To depart from the classical quadratic potential, we consider the quartic potential, which has been sampled by [LiMe13], [OlNaTr15], [ChFe19, Section 3.1].
beta, V = 2, np.poly1d([1/4, 0, 0, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x4 = be.sample_mcmc(N=200,
                          nb_gibbs_passes=10,
                          sample_exact_cond=True)
                          # sample_exact_cond=False,
                          # nb_mala_steps=100)

be.hist(sampl_x4)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = \frac{1}{6} x^6$

This is the first time the sextic ensemble is (approximately) sampled, to the best of our knowledge. In this case, the conditionals associated to the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but perform a few steps (100 by default) of MALA. For this reason, w...
beta, V = 2, np.poly1d([1/6, 0, 0, 0, 0, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x6 = be.sample_mcmc(N=200,
                          nb_gibbs_passes=10,
                          sample_exact_cond=False,
                          nb_mala_steps=100)

be.hist(sampl_x6)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = g_2 x^2 + g_4 x^4$

We consider quartic potentials where $g_2$ varies to reveal equilibrium distributions whose support is connected, about to be disconnected, or fully disconnected. We refer to [DuKu06, p.2-3], [Molinari, Example 3.3] and [LiMe13, Section 2] for the exact shape of the corresponding equilibr...
beta, V = 2, np.poly1d([1/4, 0, 1/2, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x4_x2 = be.sample_mcmc(N=1000,
                             nb_gibbs_passes=10,
                             sample_exact_cond=True)
                             # sample_exact_cond=False,
                             # nb_mala_steps=100)

be.hist(samp...
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x)= \frac{1}{4} x^4 - x^2$ (onset of two-cut solution)

This case reveals an equilibrium density with support which is about to be disconnected. The conditionals associated to the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but perform a few steps (100 by default) of MALA. For this re...
beta, V = 2, np.poly1d([1/4, 0, -1, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x4_x2_onset_2cut = be.sample_mcmc(N=1000,
                                        nb_gibbs_passes=10,
                                        sample_exact_cond=False,
                                        nb_mala_steps=100)

be.hist(sampl_x4_x2_onset_2cut)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x)= \frac{1}{4} x^4 - \frac{5}{4} x^2$ (Two-cut eigenvalue distribution)

This case reveals an equilibrium density with support having two connected components. The conditionals associated to the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but perform a few steps (100 by default) of ...
beta, V = 2, np.poly1d([1/4, 0, -1.25, 0, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x4_x2_2cut = be.sample_mcmc(N=200,
                                 nb_gibbs_passes=10,
                                 sample_exact_cond=False,
                                 nb_mala_steps=100)

be.hist(sampl_x4_x2_2cut)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = \frac{1}{20} x^4 - \frac{4}{15}x^3 + \frac{1}{5}x^2 + \frac{8}{5}x$

This case reveals a singular behavior at the right edge of the support of the equilibrium density. The conditionals associated to the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but perform a few steps (100 by de...
beta, V = 2, np.poly1d([1/20, -4/15, 1/5, 8/5, 0])
be = BetaEnsemblePolynomialPotential(beta, V)

sampl_x4_x3_x2_x = be.sample_mcmc(N=200,
                                  nb_gibbs_passes=10,
                                  sample_exact_cond=False,
                                  nb_mala_steps=100)

be.hist(sampl_x4_x3_x2_x)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit