**Conclusion:**

- Classification accuracy is the **easiest classification metric to understand**
- But, it does not tell you the **underlying distribution** of response values
- And, it does not tell you what **"types" of errors** your classifier is making

5. Confusion matrix

A table that describes the performance of a classification model
# IMPORTANT: first argument is true values, second argument is predicted values
print(metrics.confusion_matrix(y_test, y_pred_class))
[[114  16]
 [ 46  16]]
BSD-3-Clause
Day04/1-classification/classificationV2.ipynb
kxu08/Bootcamp2019
![Small confusion matrix](09_confusion_matrix_1.png)

- Every observation in the testing set is represented in **exactly one box**
- It's a 2x2 matrix because there are **2 response classes**
- The format shown here is **not** universal

**Basic terminology**

- **True Positives (TP):** we *correctly* predicted that they *do* have diabetes
- **True Negatives (TN):** we *correctly* predicted that they *don't* have diabetes
- **False Positives (FP):** we *incorrectly* predicted that they *do* have diabetes (a "Type I error")
- **False Negatives (FN):** we *incorrectly* predicted that they *don't* have diabetes (a "Type II error")
# print the first 25 true and predicted responses
print('True:', y_test.values[0:25])
print('Pred:', y_pred_class[0:25])

# save confusion matrix and slice into four pieces
confusion = metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
![Large confusion matrix](09_confusion_matrix_2.png)

Metrics computed from a confusion matrix

**Classification Accuracy:** Overall, how often is the classifier correct?
print((TP + TN) / float(TP + TN + FP + FN))
print(metrics.accuracy_score(y_test, y_pred_class))
0.6770833333333334
0.6770833333333334
**Classification Error:** Overall, how often is the classifier incorrect?

- Also known as "Misclassification Rate"
print((FP + FN) / float(TP + TN + FP + FN))
print(1 - metrics.accuracy_score(y_test, y_pred_class))
0.3229166666666667
0.32291666666666663
**Sensitivity:** When the actual value is positive, how often is the prediction correct?

- How "sensitive" is the classifier to detecting positive instances?
- Also known as "True Positive Rate" or "Recall"
print(TP / float(TP + FN))
print(metrics.recall_score(y_test, y_pred_class))
0.25806451612903225
0.25806451612903225
**Specificity:** When the actual value is negative, how often is the prediction correct?

- How "specific" (or "selective") is the classifier in predicting positive instances?
print(TN / float(TN + FP))
0.8769230769230769
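scikit-learn does not ship a dedicated specificity function, which is why the cell above computes it by hand. Specificity is simply recall of the negative class, so the manual formula can be cross-checked with `recall_score(..., pos_label=0)`. The sketch below uses small synthetic labels (not the diabetes data) purely for illustration.

```python
# Sketch with synthetic labels: specificity == recall of the negative class.
import numpy as np
from sklearn import metrics

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 0])

# sklearn's confusion_matrix ravels as (tn, fp, fn, tp) for binary labels
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()
manual_specificity = tn / (tn + fp)
sklearn_specificity = metrics.recall_score(y_true, y_pred, pos_label=0)

print(manual_specificity, sklearn_specificity)  # both 0.8 here
```

Both numbers agree, so either form can be used in the notebook.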
**False Positive Rate:** When the actual value is negative, how often is the prediction incorrect?
print(FP / float(TN + FP))
0.12307692307692308
**Precision:** When a positive value is predicted, how often is the prediction correct?

- How "precise" is the classifier when predicting positive instances?
print(TP / float(TP + FP))
print(metrics.precision_score(y_test, y_pred_class))
0.5
0.5
Many other metrics can be computed: F1 score, Matthews correlation coefficient, etc.

**Conclusion:**

- Confusion matrix gives you a **more complete picture** of how your classifier is performing
- Also allows you to compute various **classification metrics**, and these metrics can guide your model selection

**Which metrics should you focus on?**

- Choice of metric depends on your **business objective**
- **Spam filter** (positive class is "spam"): Optimize for **precision or specificity** because false negatives (spam goes to the inbox) are more acceptable than false positives (non-spam is caught by the spam filter)
- **Fraudulent transaction detector** (positive class is "fraud"): Optimize for **sensitivity** because false positives (normal transactions that are flagged as possible fraud) are more acceptable than false negatives (fraudulent transactions that are not detected)

6. Adjusting the classification threshold
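As an illustration not in the original notebook, the extra metrics mentioned above (F1, Matthews correlation coefficient) can be computed directly from the confusion-matrix counts printed earlier (TP=16, TN=114, FP=16, FN=46):

```python
# Sketch: F1 and MCC from the confusion-matrix counts reported above.
import math

TP, TN, FP, FN = 16, 114, 16, 46

precision = TP / (TP + FP)
recall = TP / (TP + FN)
# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)

# MCC balances all four cells of the confusion matrix
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

print(round(f1, 4))   # 0.3404
print(round(mcc, 4))  # 0.1694
```

The same values can be cross-checked with `metrics.f1_score` and `metrics.matthews_corrcoef` on the raw predictions.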
# print the first 10 predicted responses
logreg.predict(X_test)[0:10]

# print the first 10 predicted probabilities of class membership
logreg.predict_proba(X_test)[0:10, :]

# print the first 10 predicted probabilities for class 1
logreg.predict_proba(X_test)[0:10, 1]

# store the predicted probabilities for class 1
y_pred_prob = logreg.predict_proba(X_test)[:, 1]

# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt

# histogram of predicted probabilities
plt.hist(y_pred_prob, bins=8)
plt.xlim(0, 1)
plt.title('Histogram of predicted probabilities')
plt.xlabel('Predicted probability of diabetes')
plt.ylabel('Frequency')
**Decrease the threshold** for predicting diabetes in order to **increase the sensitivity** of the classifier
# predict diabetes if the predicted probability is greater than 0.3
from sklearn.preprocessing import binarize
y_pred_class = binarize([y_pred_prob], threshold=0.3)[0]

# print the first 10 predicted probabilities
y_pred_prob[0:10]

# print the first 10 predicted classes with the lower threshold
y_pred_class[0:10]

# previous confusion matrix (default threshold of 0.5)
print(confusion)

# new confusion matrix (threshold of 0.3)
print(metrics.confusion_matrix(y_test, y_pred_class))

# sensitivity has increased (used to be 0.26)
print(46 / float(46 + 16))

# specificity has decreased (used to be 0.88)
print(80 / float(80 + 50))
0.6153846153846154
Tutorial: Creating Tasks

Pre-requisite

Before we start, we need to install EvoJAX and import some libraries.

**Note** In our [paper](https://arxiv.org/abs/2202.05008), we ran the experiments on NVIDIA V100 GPU(s). Your results can be different from ours.
from IPython.display import clear_output, Image

!pip install evojax
!pip install torchvision  # We use torchvision.datasets.MNIST in this tutorial.
clear_output()

import os
import numpy as np
import jax
import jax.numpy as jnp

from evojax.task.cartpole import CartPoleSwingUp
from evojax.policy.mlp import MLPPolicy
from evojax.algo import PGPE
from evojax import Trainer
from evojax.util import create_logger

# Let's create a directory to save logs and models.
log_dir = './log'
logger = create_logger(name='EvoJAX', log_dir=log_dir)

logger.info('Welcome to the tutorial on Task creation!')
logger.info('Jax backend: {}'.format(jax.local_devices()))
!nvidia-smi --query-gpu=name --format=csv,noheader
EvoJAX: 2022-02-12 05:53:28,121 [INFO] Welcome to the tutorial on Task creation!
absl: 2022-02-12 05:53:28,133 [INFO] Starting the local TPU driver.
absl: 2022-02-12 05:53:28,135 [INFO] Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
absl: 2022-02-12 05:53:28,519 [INFO] Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
EvoJAX: 2022-02-12 05:53:28,520 [INFO] Jax backend: [GpuDevice(id=0, process_index=0)]
Apache-2.0
examples/notebooks/TutorialTaskImplementation.ipynb
jamesbut/evojax
Introduction

EvoJAX has three major components: the *task*, the *policy network* and the *neuroevolution algorithm*. Once these components are implemented and instantiated, we can use a trainer to start the training process. The following code snippet provides an example of how we use EvoJAX.
seed = 42  # Wish me luck!

# We use the classic cart-pole swing up as our tasks, see
# https://github.com/google/evojax/tree/main/evojax/task for more example tasks.
# The test flag provides the opportunity for a user to
# 1. Return different signals as rewards. For example, in our MNIST example,
#    we use negative cross-entropy loss as the reward in training tasks, and the
#    classification accuracy as the reward in test tasks.
# 2. Perform reward shaping. It is common for RL practitioners to modify the
#    rewards during training so that the agent learns more efficiently. But this
#    modification should not be allowed in tests for fair evaluations.
hard = False
train_task = CartPoleSwingUp(harder=hard, test=False)
test_task = CartPoleSwingUp(harder=hard, test=True)

# We use a feedforward network as our policy.
# By default, MLPPolicy uses "tanh" as its activation function for the output.
policy = MLPPolicy(
    input_dim=train_task.obs_shape[0],
    hidden_dims=[64, 64],
    output_dim=train_task.act_shape[0],
    logger=logger,
)

# We use PGPE as our evolution algorithm.
# If you want to know more about the algorithm, please take a look at the paper:
# https://people.idsia.ch/~juergen/nn2010.pdf
solver = PGPE(
    pop_size=64,
    param_size=policy.num_params,
    optimizer='adam',
    center_learning_rate=0.05,
    seed=seed,
)

# Now that we have all the three components instantiated, we can create a
# trainer and start the training process.
trainer = Trainer(
    policy=policy,
    solver=solver,
    train_task=train_task,
    test_task=test_task,
    max_iter=600,
    log_interval=100,
    test_interval=200,
    n_repeats=5,
    n_evaluations=128,
    seed=seed,
    log_dir=log_dir,
    logger=logger,
)
_ = trainer.run()

# Let's visualize the learned policy.
def render(task, algo, policy):
    """Render the learned policy."""
    task_reset_fn = jax.jit(test_task.reset)
    policy_reset_fn = jax.jit(policy.reset)
    step_fn = jax.jit(test_task.step)
    act_fn = jax.jit(policy.get_actions)

    params = algo.best_params[None, :]
    task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
    policy_s = policy_reset_fn(task_s)

    images = [CartPoleSwingUp.render(task_s, 0)]
    done = False
    step = 0
    reward = 0
    while not done:
        act, policy_s = act_fn(task_s, params, policy_s)
        task_s, r, d = step_fn(task_s, act)
        step += 1
        reward = reward + r
        done = bool(d[0])
        if step % 3 == 0:
            images.append(CartPoleSwingUp.render(task_s, 0))
    print('reward={}'.format(reward))
    return images


imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'cartpole.gif')
imgs[0].save(
    gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file, 'rb').read())
reward=[934.1182]
Including the three major components, EvoJAX implements the entire training pipeline in JAX. In the first release, we have created several [demo tasks](https://github.com/google/evojax/tree/main/evojax/task) to showcase EvoJAX's capacity. And we encourage the users to bring their own tasks. To this end, we will walk you through the process of creating EvoJAX tasks in this tutorial.

To contribute a task implementation to EvoJAX, all you need to do is to implement the `VectorizedTask` interface. The interface is defined as the following and you can see the related Python file [here](https://github.com/google/evojax/blob/main/evojax/task/base.py):

```python
class TaskState(ABC):
    """A template of the task state."""
    obs: jnp.ndarray


class VectorizedTask(ABC):
    """Interface for all the EvoJAX tasks."""

    max_steps: int
    obs_shape: Tuple
    act_shape: Tuple
    test: bool
    multi_agent_training: bool = False

    @abstractmethod
    def reset(self, key: jnp.array) -> TaskState:
        """This resets the vectorized task.

        Args:
            key - A jax random key.
        Returns:
            TaskState. Initial task state.
        """
        raise NotImplementedError()

    @abstractmethod
    def step(self,
             state: TaskState,
             action: jnp.ndarray) -> Tuple[TaskState, jnp.ndarray, jnp.ndarray]:
        """This steps once the simulation.

        Args:
            state - System internal states of shape (num_tasks, *).
            action - Vectorized actions of shape (num_tasks, action_size).
        Returns:
            TaskState. Task states.
            jnp.ndarray. Reward.
            jnp.ndarray. Task termination flag: 1 for done, 0 otherwise.
        """
        raise NotImplementedError()
```

MNIST classification

While one would obviously use gradient descent for MNIST in practice, the point is to show that neuroevolution can also solve them to some degree of accuracy within a short amount of time, which will be useful when these models are adapted within a more complicated task where gradient-based approaches may not work.

The following code snippet shows how we wrap the dataset and treat it as a one-step `VectorizedTask`.
from torchvision import datasets
from flax.struct import dataclass

from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask


# This state contains the information we wish to carry over to the next step.
# The state will be used in `VectorizedTask.step` method.
# In supervised learning tasks, we want to store the data and the labels so that
# we can calculate the loss or the accuracy and use that as the reward signal.
@dataclass
class State(TaskState):
    obs: jnp.ndarray
    labels: jnp.ndarray


def sample_batch(key, data, labels, batch_size):
    ix = jax.random.choice(
        key=key, a=data.shape[0], shape=(batch_size,), replace=False)
    return (jnp.take(data, indices=ix, axis=0),
            jnp.take(labels, indices=ix, axis=0))


def loss(prediction, target):
    target = jax.nn.one_hot(target, 10)
    return -jnp.mean(jnp.sum(prediction * target, axis=1))


def accuracy(prediction, target):
    predicted_class = jnp.argmax(prediction, axis=1)
    return jnp.mean(predicted_class == target)


class MNIST(VectorizedTask):
    """MNIST classification task.

    We model the classification as an one-step task, i.e.,
    `MNIST.reset` returns a batch of data to the agent, the agent outputs
    predictions, `MNIST.step` returns the reward (loss or accuracy) and
    terminates the rollout.
    """

    def __init__(self, batch_size, test):
        self.max_steps = 1

        # These are similar to OpenAI Gym environment's
        # observation_space and action_space.
        # They are helpful for initializing the policy networks.
        self.obs_shape = tuple([28, 28, 1])
        self.act_shape = tuple([10, ])

        # We download the dataset and normalize the value.
        dataset = datasets.MNIST('./data', train=not test, download=True)
        data = np.expand_dims(dataset.data.numpy() / 255., axis=-1)
        labels = dataset.targets.numpy()

        def reset_fn(key):
            if test:
                # In the test mode, we want to test on the entire test set.
                batch_data, batch_labels = data, labels
            else:
                # In the training mode, we only sample a batch of training data.
                batch_data, batch_labels = sample_batch(
                    key, data, labels, batch_size)
            return State(obs=batch_data, labels=batch_labels)
        # We use jax.vmap for auto-vectorization.
        self._reset_fn = jax.jit(jax.vmap(reset_fn))

        def step_fn(state, action):
            if test:
                # In the test mode, we report the classification accuracy.
                reward = accuracy(action, state.labels)
            else:
                # In the training mode, we return the negative loss as the
                # reward signal. It is legitimate to return accuracy as the
                # reward signal in training too, but we find the performance is
                # not as good as when we use the negative loss.
                reward = -loss(action, state.labels)
            # This is an one-step task, so that last return value (the `done`
            # flag) is one.
            return state, reward, jnp.ones(())
        # We use jax.vmap for auto-vectorization.
        self._step_fn = jax.jit(jax.vmap(step_fn))

    def reset(self, key):
        return self._reset_fn(key)

    def step(self, state, action):
        return self._step_fn(state, action)


# Okay, let's test out the task with a ConvNet policy.
from evojax.policy.convnet import ConvNetPolicy

batch_size = 1024
train_task = MNIST(batch_size=batch_size, test=False)
test_task = MNIST(batch_size=batch_size, test=True)

policy = ConvNetPolicy(logger=logger)

solver = PGPE(
    pop_size=64,
    param_size=policy.num_params,
    optimizer='adam',
    center_learning_rate=0.006,
    stdev_learning_rate=0.09,
    init_stdev=0.04,
    logger=logger,
    seed=seed,
)

trainer = Trainer(
    policy=policy,
    solver=solver,
    train_task=train_task,
    test_task=test_task,
    max_iter=5000,
    log_interval=100,
    test_interval=1000,
    n_repeats=1,
    n_evaluations=1,
    seed=seed,
    log_dir=log_dir,
    logger=logger,
)
_ = trainer.run()
EvoJAX: 2022-02-12 05:54:41,285 [INFO] ConvNetPolicy.num_params = 11274
EvoJAX: 2022-02-12 05:54:41,435 [INFO] Start to train for 5000 iterations.
EvoJAX: 2022-02-12 05:54:52,635 [INFO] Iter=100, size=64, max=-0.8691, avg=-1.0259, min=-1.4128, std=0.1188
EvoJAX: 2022-02-12 05:54:56,730 [INFO] Iter=200, size=64, max=-0.5346, avg=-0.6686, min=-1.2417, std=0.1188
EvoJAX: 2022-02-12 05:55:00,824 [INFO] Iter=300, size=64, max=-0.3925, avg=-0.4791, min=-0.5902, std=0.0456
EvoJAX: 2022-02-12 05:55:04,917 [INFO] Iter=400, size=64, max=-0.3357, avg=-0.3918, min=-0.5241, std=0.0388
EvoJAX: 2022-02-12 05:55:09,010 [INFO] Iter=500, size=64, max=-0.2708, avg=-0.3235, min=-0.4797, std=0.0317
EvoJAX: 2022-02-12 05:55:13,104 [INFO] Iter=600, size=64, max=-0.1965, avg=-0.2417, min=-0.3119, std=0.0238
EvoJAX: 2022-02-12 05:55:17,198 [INFO] Iter=700, size=64, max=-0.1784, avg=-0.2177, min=-0.3148, std=0.0268
EvoJAX: 2022-02-12 05:55:21,292 [INFO] Iter=800, size=64, max=-0.1797, avg=-0.2105, min=-0.2762, std=0.0222
EvoJAX: 2022-02-12 05:55:25,386 [INFO] Iter=900, size=64, max=-0.1803, avg=-0.2379, min=-0.3923, std=0.0330
EvoJAX: 2022-02-12 05:55:29,478 [INFO] Iter=1000, size=64, max=-0.1535, avg=-0.1856, min=-0.2457, std=0.0225
EvoJAX: 2022-02-12 05:55:31,071 [INFO] [TEST] Iter=1000, #tests=1, max=0.9627 avg=0.9627, min=0.9627, std=0.0000
EvoJAX: 2022-02-12 05:55:35,170 [INFO] Iter=1100, size=64, max=-0.1150, avg=-0.1438, min=-0.1971, std=0.0153
EvoJAX: 2022-02-12 05:55:39,263 [INFO] Iter=1200, size=64, max=-0.1278, avg=-0.1571, min=-0.2458, std=0.0193
EvoJAX: 2022-02-12 05:55:43,358 [INFO] Iter=1300, size=64, max=-0.1323, avg=-0.1641, min=-0.2089, std=0.0164
EvoJAX: 2022-02-12 05:55:47,453 [INFO] Iter=1400, size=64, max=-0.1331, avg=-0.1573, min=-0.2085, std=0.0163
EvoJAX: 2022-02-12 05:55:51,547 [INFO] Iter=1500, size=64, max=-0.1709, avg=-0.2142, min=-0.2950, std=0.0197
EvoJAX: 2022-02-12 05:55:55,640 [INFO] Iter=1600, size=64, max=-0.1052, avg=-0.1410, min=-0.2766, std=0.0279
EvoJAX: 2022-02-12 05:55:59,735 [INFO] Iter=1700, size=64, max=-0.0897, avg=-0.1184, min=-0.1591, std=0.0144
EvoJAX: 2022-02-12 05:56:03,828 [INFO] Iter=1800, size=64, max=-0.0777, avg=-0.1029, min=-0.1509, std=0.0165
EvoJAX: 2022-02-12 05:56:07,922 [INFO] Iter=1900, size=64, max=-0.0935, avg=-0.1285, min=-0.1682, std=0.0151
EvoJAX: 2022-02-12 05:56:12,015 [INFO] Iter=2000, size=64, max=-0.1158, avg=-0.1439, min=-0.2054, std=0.0155
EvoJAX: 2022-02-12 05:56:12,026 [INFO] [TEST] Iter=2000, #tests=1, max=0.9740 avg=0.9740, min=0.9740, std=0.0000
EvoJAX: 2022-02-12 05:56:16,121 [INFO] Iter=2100, size=64, max=-0.1054, avg=-0.1248, min=-0.1524, std=0.0101
EvoJAX: 2022-02-12 05:56:20,213 [INFO] Iter=2200, size=64, max=-0.1092, avg=-0.1363, min=-0.1774, std=0.0146
EvoJAX: 2022-02-12 05:56:24,306 [INFO] Iter=2300, size=64, max=-0.1079, avg=-0.1298, min=-0.1929, std=0.0158
EvoJAX: 2022-02-12 05:56:28,398 [INFO] Iter=2400, size=64, max=-0.1129, avg=-0.1352, min=-0.1870, std=0.0145
EvoJAX: 2022-02-12 05:56:32,491 [INFO] Iter=2500, size=64, max=-0.0790, avg=-0.0955, min=-0.1291, std=0.0113
EvoJAX: 2022-02-12 05:56:36,584 [INFO] Iter=2600, size=64, max=-0.1299, avg=-0.1537, min=-0.1947, std=0.0128
EvoJAX: 2022-02-12 05:56:40,675 [INFO] Iter=2700, size=64, max=-0.0801, avg=-0.0983, min=-0.1301, std=0.0094
EvoJAX: 2022-02-12 05:56:44,767 [INFO] Iter=2800, size=64, max=-0.0849, avg=-0.1014, min=-0.1511, std=0.0116
EvoJAX: 2022-02-12 05:56:48,859 [INFO] Iter=2900, size=64, max=-0.0669, avg=-0.0796, min=-0.1111, std=0.0090
EvoJAX: 2022-02-12 05:56:52,950 [INFO] Iter=3000, size=64, max=-0.0782, avg=-0.0975, min=-0.1304, std=0.0123
EvoJAX: 2022-02-12 05:56:52,960 [INFO] [TEST] Iter=3000, #tests=1, max=0.9768 avg=0.9768, min=0.9768, std=0.0000
EvoJAX: 2022-02-12 05:56:57,056 [INFO] Iter=3100, size=64, max=-0.0857, avg=-0.1029, min=-0.1421, std=0.0092
EvoJAX: 2022-02-12 05:57:01,149 [INFO] Iter=3200, size=64, max=-0.0769, avg=-0.0964, min=-0.1279, std=0.0120
EvoJAX: 2022-02-12 05:57:05,242 [INFO] Iter=3300, size=64, max=-0.0805, avg=-0.1021, min=-0.1200, std=0.0088
EvoJAX: 2022-02-12 05:57:09,335 [INFO] Iter=3400, size=64, max=-0.0642, avg=-0.0774, min=-0.0972, std=0.0080
EvoJAX: 2022-02-12 05:57:13,428 [INFO] Iter=3500, size=64, max=-0.0601, avg=-0.0771, min=-0.1074, std=0.0080
EvoJAX: 2022-02-12 05:57:17,522 [INFO] Iter=3600, size=64, max=-0.0558, avg=-0.0709, min=-0.1082, std=0.0094
EvoJAX: 2022-02-12 05:57:21,615 [INFO] Iter=3700, size=64, max=-0.0915, avg=-0.1048, min=-0.1519, std=0.0100
EvoJAX: 2022-02-12 05:57:25,709 [INFO] Iter=3800, size=64, max=-0.0525, avg=-0.0667, min=-0.0823, std=0.0069
EvoJAX: 2022-02-12 05:57:29,801 [INFO] Iter=3900, size=64, max=-0.0983, avg=-0.1150, min=-0.1447, std=0.0105
EvoJAX: 2022-02-12 05:57:33,895 [INFO] Iter=4000, size=64, max=-0.0759, avg=-0.0954, min=-0.1293, std=0.0114
EvoJAX: 2022-02-12 05:57:33,909 [INFO] [TEST] Iter=4000, #tests=1, max=0.9800 avg=0.9800, min=0.9800, std=0.0000
EvoJAX: 2022-02-12 05:57:38,004 [INFO] Iter=4100, size=64, max=-0.0811, avg=-0.0957, min=-0.1184, std=0.0086
EvoJAX: 2022-02-12 05:57:42,095 [INFO] Iter=4200, size=64, max=-0.0806, avg=-0.0960, min=-0.1313, std=0.0096
EvoJAX: 2022-02-12 05:57:46,187 [INFO] Iter=4300, size=64, max=-0.0698, avg=-0.0908, min=-0.1158, std=0.0100
EvoJAX: 2022-02-12 05:57:50,278 [INFO] Iter=4400, size=64, max=-0.0754, avg=-0.0930, min=-0.1202, std=0.0104
EvoJAX: 2022-02-12 05:57:54,368 [INFO] Iter=4500, size=64, max=-0.0708, avg=-0.0877, min=-0.1107, std=0.0088
EvoJAX: 2022-02-12 05:57:58,459 [INFO] Iter=4600, size=64, max=-0.0610, avg=-0.0773, min=-0.1032, std=0.0076
EvoJAX: 2022-02-12 05:58:02,550 [INFO] Iter=4700, size=64, max=-0.0704, avg=-0.0881, min=-0.1299, std=0.0110
EvoJAX: 2022-02-12 05:58:06,640 [INFO] Iter=4800, size=64, max=-0.0651, avg=-0.0812, min=-0.1042, std=0.0080
EvoJAX: 2022-02-12 05:58:10,732 [INFO] Iter=4900, size=64, max=-0.0588, avg=-0.0712, min=-0.1096, std=0.0081
EvoJAX: 2022-02-12 05:58:14,795 [INFO] [TEST] Iter=5000, #tests=1, max=0.9822, avg=0.9822, min=0.9822, std=0.0000
EvoJAX: 2022-02-12 05:58:14,800 [INFO] Training done, best_score=0.9822
Okay! Our implementation of the classification task is successful and EvoJAX achieved $>98\%$ test accuracy within 5 min on a V100 GPU.

As mentioned before, MNIST is a simple one-step task; we used it to get you familiar with the interfaces. Next, we will build the classic cart-pole task from scratch.

Cart-pole swing up

In our cart-pole swing up task, the agent applies an action $a \in [-1, 1]$ on the cart, and we maintain 4 states:

1. cart position $x$
2. cart velocity $\dot{x}$
3. the angle between the cart and the pole $\theta$
4. the pole's angular velocity $\dot{\theta}$

We randomly sample the initial states and will use the forward Euler integration to update them: $\mathbf{x}(t + \Delta t) = \mathbf{x}(t) + \Delta t \mathbf{v}(t)$ and $\mathbf{v}(t + \Delta t) = \mathbf{v}(t) + \Delta t f(a, \mathbf{x}(t), \mathbf{v}(t))$, where $\mathbf{x}(t) = [x, \theta]^{\intercal}$, $\mathbf{v}(t) = [\dot{x}, \dot{\theta}]^{\intercal}$ and $f(\cdot)$ is a function that represents the physical model.

Thanks to `jax.vmap`, we are able to write the task as if it is designed to deal with non-batch inputs; in the training process JAX will automatically vectorize the task for us.
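The forward Euler update above can be sketched in a few lines of plain Python. This toy example (a unit-mass spring with force $f = -x$, not the cart-pole model) only illustrates the position/velocity update order used later:

```python
# Minimal sketch of forward Euler: x(t+dt) = x + dt*v, v(t+dt) = v + dt*f(x, v).
# Toy system: unit-mass spring, f = -x (exact solution: x = cos(t), v = -sin(t)).
DELTA_T = 0.01

def euler_step(x, v, force_fn, dt=DELTA_T):
    # Both updates use the values from time t, as in the formulas above.
    return x + dt * v, v + dt * force_fn(x, v)

x, v = 1.0, 0.0
for _ in range(100):  # integrate one simulated second
    x, v = euler_step(x, v, lambda x, v: -x)
print(x, v)  # roughly cos(1) and -sin(1), up to Euler's O(dt) error
```

The same pattern, with $f$ replaced by the cart-pole dynamics, appears inside `update_state` in the next cell.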
from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask
import PIL

# Define some physics metrics.
GRAVITY = 9.82
CART_MASS = 0.5
POLE_MASS = 0.5
POLE_LEN = 0.6
FRICTION = 0.1
FORCE_SCALING = 10.0
DELTA_T = 0.01
CART_X_LIMIT = 2.4

# Define some constants for visualization.
SCREEN_W = 600
SCREEN_H = 600
CART_W = 40
CART_H = 20
VIZ_SCALE = 100
WHEEL_RAD = 5


@dataclass
class State(TaskState):
    obs: jnp.ndarray    # This is the tuple (x, x_dot, theta, theta_dot)
    state: jnp.ndarray  # This maintains the system's state.
    steps: jnp.int32    # This tracks the rollout length.
    key: jnp.ndarray    # This serves as a random seed.


class CartPole(VectorizedTask):
    """A quick implementation of the cart-pole task."""

    def __init__(self, max_steps=1000, test=False):
        self.max_steps = max_steps
        self.obs_shape = tuple([4, ])
        self.act_shape = tuple([1, ])

        def sample_init_state(sample_key):
            return (
                jax.random.normal(sample_key, shape=(4,)) * 0.2 +
                jnp.array([0, 0, jnp.pi, 0])
            )

        def get_reward(x, x_dot, theta, theta_dot):
            # We encourage
            #   the pole to be held upward (i.e., theta is close to 0) and
            #   the cart to be at the origin (i.e., x is close to 0).
            reward_theta = (jnp.cos(theta) + 1.0) / 2.0
            reward_x = jnp.cos((x / CART_X_LIMIT) * (jnp.pi / 2.0))
            return reward_theta * reward_x

        def update_state(action, x, x_dot, theta, theta_dot):
            action = jnp.clip(action, -1.0, 1.0)[0] * FORCE_SCALING
            s = jnp.sin(theta)
            c = jnp.cos(theta)
            total_m = CART_MASS + POLE_MASS
            m_p_l = POLE_MASS * POLE_LEN
            # This is the physical model: f-function.
            x_dot_update = (
                (-2 * m_p_l * (theta_dot ** 2) * s +
                 3 * POLE_MASS * GRAVITY * s * c +
                 4 * action - 4 * FRICTION * x_dot) /
                (4 * total_m - 3 * POLE_MASS * c ** 2)
            )
            theta_dot_update = (
                (-3 * m_p_l * (theta_dot ** 2) * s * c +
                 6 * total_m * GRAVITY * s +
                 6 * (action - FRICTION * x_dot) * c) /
                (4 * POLE_LEN * total_m - 3 * m_p_l * c ** 2)
            )
            # This is the forward Euler integration.
            x = x + x_dot * DELTA_T
            theta = theta + theta_dot * DELTA_T
            x_dot = x_dot + x_dot_update * DELTA_T
            theta_dot = theta_dot + theta_dot_update * DELTA_T
            return jnp.array([x, x_dot, theta, theta_dot])

        def out_of_screen(x):
            """We terminate the rollout if the cart is out of the screen."""
            beyond_boundary_l = jnp.where(x < -CART_X_LIMIT, 1, 0)
            beyond_boundary_r = jnp.where(x > CART_X_LIMIT, 1, 0)
            return jnp.bitwise_or(beyond_boundary_l, beyond_boundary_r)

        def reset_fn(key):
            next_key, key = jax.random.split(key)
            state = sample_init_state(key)
            return State(
                obs=state,  # We make the task fully-observable.
                state=state,
                steps=jnp.zeros((), dtype=int),
                key=next_key,
            )
        self._reset_fn = jax.jit(jax.vmap(reset_fn))

        def step_fn(state, action):
            current_state = update_state(action, *state.state)
            reward = get_reward(*current_state)
            steps = state.steps + 1
            done = jnp.bitwise_or(
                out_of_screen(current_state[0]), steps >= max_steps)
            # We reset the step counter to zero if the rollout has ended.
            steps = jnp.where(done, jnp.zeros((), jnp.int32), steps)
            # We automatically reset the states if the rollout has ended.
            next_key, key = jax.random.split(state.key)
            # current_state = jnp.where(
            #     done, sample_init_state(key), current_state)
            return State(
                state=current_state,
                obs=current_state,
                steps=steps,
                key=next_key), reward, done
        self._step_fn = jax.jit(jax.vmap(step_fn))

    def reset(self, key):
        return self._reset_fn(key)

    def step(self, state, action):
        return self._step_fn(state, action)

    # Optionally, we can implement a render method to visualize the task.
    @staticmethod
    def render(state, task_id):
        """Render a specified task."""
        img = PIL.Image.new('RGB', (SCREEN_W, SCREEN_H), (255, 255, 255))
        draw = PIL.ImageDraw.Draw(img)
        x, _, theta, _ = np.array(state.state[task_id])
        cart_y = SCREEN_H // 2 + 100
        cart_x = x * VIZ_SCALE + SCREEN_W // 2
        # Draw the horizon.
        draw.line(
            (0, cart_y + CART_H // 2 + WHEEL_RAD,
             SCREEN_W, cart_y + CART_H // 2 + WHEEL_RAD),
            fill=(0, 0, 0), width=1)
        # Draw the cart.
        draw.rectangle(
            (cart_x - CART_W // 2, cart_y - CART_H // 2,
             cart_x + CART_W // 2, cart_y + CART_H // 2),
            fill=(255, 0, 0), outline=(0, 0, 0))
        # Draw the wheels.
        draw.ellipse(
            (cart_x - CART_W // 2 - WHEEL_RAD,
             cart_y + CART_H // 2 - WHEEL_RAD,
             cart_x - CART_W // 2 + WHEEL_RAD,
             cart_y + CART_H // 2 + WHEEL_RAD),
            fill=(220, 220, 220), outline=(0, 0, 0))
        draw.ellipse(
            (cart_x + CART_W // 2 - WHEEL_RAD,
             cart_y + CART_H // 2 - WHEEL_RAD,
             cart_x + CART_W // 2 + WHEEL_RAD,
             cart_y + CART_H // 2 + WHEEL_RAD),
            fill=(220, 220, 220), outline=(0, 0, 0))
        # Draw the pole.
        draw.line(
            (cart_x, cart_y,
             cart_x + POLE_LEN * VIZ_SCALE * np.cos(theta - np.pi / 2),
             cart_y + POLE_LEN * VIZ_SCALE * np.sin(theta - np.pi / 2)),
            fill=(0, 0, 255), width=6)
        return img


# Okay, let's test this simple cart-pole implementation.
rollout_key = jax.random.PRNGKey(seed=seed)
reset_key, rollout_key = jax.random.split(rollout_key, 2)
reset_key = reset_key[None, :]  # Expand dim, the leading is the batch dim.

# Initialize the task.
cart_pole_task = CartPole()
t_state = cart_pole_task.reset(reset_key)
task_screens = [CartPole.render(t_state, 0)]

# Rollout with random actions.
done = False
step_cnt = 0
total_reward = 0
while not done:
    action_key, rollout_key = jax.random.split(rollout_key, 2)
    action = jax.random.uniform(
        action_key, shape=(1, 1), minval=-1., maxval=1.)
    t_state, reward, done = cart_pole_task.step(t_state, action)
    total_reward = total_reward + reward
    step_cnt += 1
    if step_cnt % 4 == 0:
        task_screens.append(CartPole.render(t_state, 0))
print('reward={}, steps={}'.format(total_reward, step_cnt))

# Visualize the rollout.
gif_file = os.path.join(log_dir, 'rand_cartpole.gif')
task_screens[0].save(
    gif_file, save_all=True, append_images=task_screens[1:], loop=0)
Image(open(gif_file, 'rb').read())
reward=[4.687451], steps=221
The random policy does not solve the cart-pole task, but our implementation seems to be correct. Let's now plug in this task to EvoJAX.
train_task = CartPole(test=False)
test_task = CartPole(test=True)

# We use the same policy and solver to solve this "new" task.
policy = MLPPolicy(
    input_dim=train_task.obs_shape[0],
    hidden_dims=[64, 64],
    output_dim=train_task.act_shape[0],
    logger=logger,
)
solver = PGPE(
    pop_size=64,
    param_size=policy.num_params,
    optimizer='adam',
    center_learning_rate=0.05,
    seed=seed,
)
trainer = Trainer(
    policy=policy,
    solver=solver,
    train_task=train_task,
    test_task=test_task,
    max_iter=600,
    log_interval=100,
    test_interval=200,
    n_repeats=5,
    n_evaluations=128,
    seed=seed,
    log_dir=log_dir,
    logger=logger,
)
_ = trainer.run()


# Let's visualize the learned policy.
def render(task, algo, policy):
    """Render the learned policy."""
    task_reset_fn = jax.jit(test_task.reset)
    policy_reset_fn = jax.jit(policy.reset)
    step_fn = jax.jit(test_task.step)
    act_fn = jax.jit(policy.get_actions)

    params = algo.best_params[None, :]
    task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
    policy_s = policy_reset_fn(task_s)

    images = [CartPole.render(task_s, 0)]
    done = False
    step = 0
    reward = 0
    while not done:
        act, policy_s = act_fn(task_s, params, policy_s)
        task_s, r, d = step_fn(task_s, act)
        step += 1
        reward = reward + r
        done = bool(d[0])
        if step % 3 == 0:
            images.append(CartPole.render(task_s, 0))
    print('reward={}'.format(reward))
    return images


imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'trained_cartpole.gif')
imgs[0].save(
    gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file, 'rb').read())
reward=[923.1105]
Nice! EvoJAX is able to solve the new cart-pole task within a minute.

In this tutorial, we walked you through the process of creating tasks from scratch. The two examples we used are simple and are supposed to help you understand the interfaces. If you are interested in learning more, please check out our GitHub [repo](https://github.com/google/evojax/tree/main/evojax/task).

Please let us (evojax-dev@google.com) know if you have any problems or suggestions, thanks!
Provider Table Mapping

This is an attempt at mapping FHIR to OMOP using the following guide: https://build.fhir.org/ig/HL7/cdmh/profiles.html (section: omop-to-fhir-mappings)

In this notebook we are mapping FHIR to the OMOP Provider Table.

Load Data Frame from Parquet Catalog File
from pyspark.sql import SparkSession from pyspark.sql.functions import dayofmonth,month,year,to_date,trunc,split,explode,array # Create a local Spark session spark = SparkSession.builder.appName('etl').getOrCreate() # Reads file df = spark.read.parquet('data/catalog.parquet')
_____no_output_____
MIT
notebooks/Provider.ipynb
spe-uob/HealthcareLakeETL
Data Frame schema
#df.printSchema()
_____no_output_____
MIT
notebooks/Provider.ipynb
spe-uob/HealthcareLakeETL
Practitioner Mapping Filter By Practitioner Resource type
filtered = df.filter(df['resourceType'] == 'Practitioner') #filtered.printSchema()
_____no_output_____
MIT
notebooks/Provider.ipynb
spe-uob/HealthcareLakeETL
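Independently of Spark, the field mapping itself can be illustrated on a single FHIR Practitioner resource. The FHIR paths and OMOP-style column names below are illustrative assumptions, not the notebook's exact schema:

```python
# A framework-free sketch of the field mapping this notebook performs with
# Spark. The FHIR paths and OMOP-style column names are illustrative
# assumptions, not the notebook's exact schema.

def practitioner_to_provider(resource):
    """Map a FHIR Practitioner resource (as a dict) to an OMOP-style provider row."""
    name = (resource.get('name') or [{}])[0]
    full_name = ' '.join(name.get('given', []) + [name.get('family', '')]).strip()
    # pick the first identifier whose system mentions NPI
    npi = next((i.get('value') for i in resource.get('identifier', [])
                if 'npi' in (i.get('system') or '').lower()), None)
    return {
        'provider_id': resource.get('id'),
        'provider_name': full_name or None,
        'npi': npi,
        'gender_source_value': resource.get('gender'),
    }

example = {
    'resourceType': 'Practitioner',
    'id': 'prac-001',
    'name': [{'family': 'Doe', 'given': ['Jane']}],
    'gender': 'female',
    'identifier': [{'system': 'http://hl7.org/fhir/sid/us-npi', 'value': '1234567890'}],
}
print(practitioner_to_provider(example))
```

In the Spark pipeline the same logic would be expressed as column selections on the filtered DataFrame.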
Categorical encodersExamples of how to use the different categorical encoders using the Titanic dataset.
import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from feature_engine import categorical_encoders as ce from feature_engine.missing_data_imputers import CategoricalVariableImputer pd.set_option('display.max_columns', None) # Load titanic dataset from OpenML def load_titanic(): data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl') data = data.replace('?', np.nan) data['cabin'] = data['cabin'].astype(str).str[0] data['pclass'] = data['pclass'].astype('O') data['age'] = data['age'].astype('float') data['fare'] = data['fare'].astype('float') data['embarked'].fillna('C', inplace=True) data.drop(labels=['boat', 'body', 'home.dest'], axis=1, inplace=True) return data # load data data = load_titanic() data.head() data.isnull().sum() # we will encode the below variables, they have no missing values data[['cabin', 'pclass', 'embarked']].isnull().sum() data[['cabin', 'pclass', 'embarked']].dtypes data[['cabin', 'pclass', 'embarked']].dtypes # let's separate into training and testing set X_train, X_test, y_train, y_test = train_test_split( data.drop(['survived', 'name', 'ticket'], axis=1), data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
CountFrequencyCategoricalEncoder. The CountFrequencyCategoricalEncoder replaces the categories by the count or frequency of the observations in the train set for that category. If we select "count" as the encoding_method, then for the variable colour, if there are 10 observations in the train set with colour blue, blue will be replaced by 10. Alternatively, if we select "frequency" as the encoding_method, and 10% of the observations in the train set show the colour blue, then blue will be replaced by 0.1. Frequency: labels are replaced by the fraction of the observations that show that label in the train set.
count_enc = ce.CountFrequencyCategoricalEncoder( encoding_method='frequency', variables=['cabin', 'pclass', 'embarked']) count_enc.fit(X_train) # we can explore the encoder_dict_ to find out the category replacements. count_enc.encoder_dict_ # transform the data: see the change in the head view train_t = count_enc.transform(X_train) test_t = count_enc.transform(X_test) test_t.head() test_t['pclass'].value_counts().plot.bar()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
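The frequency replacement can be reproduced with plain pandas; a minimal sketch on hypothetical toy data (not the Titanic set), learning the mapping on the train split only:

```python
import pandas as pd

train = pd.DataFrame({'colour': ['blue'] * 6 + ['red'] * 3 + ['green'] * 1})
test = pd.DataFrame({'colour': ['red', 'blue', 'green']})

# learn the category -> frequency mapping on the train set only
freq_map = train['colour'].value_counts(normalize=True).to_dict()
# {'blue': 0.6, 'red': 0.3, 'green': 0.1}

# apply the same mapping to both sets; unseen labels would map to NaN
train['colour_enc'] = train['colour'].map(freq_map)
test['colour_enc'] = test['colour'].map(freq_map)
print(test)
```

This is essentially what the encoder stores in its `encoder_dict_` for each variable.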
Count: labels are replaced by the number of observations that show that label in the train set.
# this time we encode only 1 variable count_enc = ce.CountFrequencyCategoricalEncoder(encoding_method='count', variables='cabin') count_enc.fit(X_train) # we can find the mappings in the encoder_dict_ attribute. count_enc.encoder_dict_ # transform the data: see the change in the head view for Cabin train_t = count_enc.transform(X_train) test_t = count_enc.transform(X_test) test_t.head() test_t['pclass'].value_counts().plot.bar()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Select categorical variables automaticallyIf we don't indicate which variables we want to encode, the encoder will find all categorical variables
# this time we omit the argument for variable count_enc = ce.CountFrequencyCategoricalEncoder(encoding_method = 'count') count_enc.fit(X_train) # we can see that the encoder selected automatically all the categorical variables count_enc.variables # transform the data: see the change in the head view train_t = count_enc.transform(X_train) test_t = count_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Note that if there are labels in the test set that were not present in the train set, the transformer will introduce NaN, and raise a warning. MeanCategoricalEncoderThe MeanCategoricalEncoder replaces the labels of the variables by the mean value of the target for that label. For example, in the variable colour, if the mean value of the binary target is 0.5 for the label blue, then blue is replaced by 0.5
# we will transform 3 variables mean_enc = ce.MeanCategoricalEncoder(variables=['cabin', 'pclass', 'embarked']) # Note: the MeanCategoricalEncoder needs the target to fit mean_enc.fit(X_train, y_train) # see the dictionary with the mappings per variable mean_enc.encoder_dict_ mean_enc.variables # we can see the transformed variables in the head view train_t = mean_enc.transform(X_train) test_t = mean_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
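Mean (target) encoding boils down to a groupby on the target; a pandas-only sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'colour': ['blue', 'blue', 'red', 'red', 'red', 'green'],
                   'target': [1, 0, 1, 1, 0, 0]})

# mean of the binary target per category, learned on the train data
mean_map = df.groupby('colour')['target'].mean().to_dict()

df['colour_enc'] = df['colour'].map(mean_map)
print(mean_map)  # blue -> 0.5, red -> 2/3, green -> 0.0
```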
Automatically select the variablesThis encoder will select all categorical variables to encode, when no variables are specified when calling the encoder
mean_enc = ce.MeanCategoricalEncoder() mean_enc.fit(X_train, y_train) mean_enc.variables # we can see the transformed variables in the head view train_t = count_enc.transform(X_train) test_t = count_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
WoERatioCategoricalEncoderThis encoder replaces the labels by the weight of evidence or the ratio of probabilities. It only works for binary classification. The weight of evidence is given by: np.log( p(1) / p(0) ) The target probability ratio is given by: p(1) / p(0) Weight of evidence
## Rare value encoder first to reduce the cardinality # see below for more details on this encoder rare_encoder = ce.RareLabelCategoricalEncoder( tol=0.03, n_categories=2, variables=['cabin', 'pclass', 'embarked']) rare_encoder.fit(X_train) # transform train_t = rare_encoder.transform(X_train) test_t = rare_encoder.transform(X_test) woe_enc = ce.WoERatioCategoricalEncoder( encoding_method='woe', variables=['cabin', 'pclass', 'embarked']) # to fit you need to pass the target y woe_enc.fit(train_t, y_train) woe_enc.encoder_dict_ # transform and visualise the data train_t = woe_enc.transform(train_t) test_t = woe_enc.transform(test_t) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
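As a sanity check of the formula, here is a numpy/pandas sketch that computes np.log(p(1)/p(0)) per category, interpreting p(1) and p(0) as each category's share of the positive and negative class respectively. That interpretation is an assumption about the formula's reading, and the data is made up:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'cabin': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'],
                   'survived': [1, 1, 0, 1, 0, 0, 0, 0]})

# share of all positives falling in each category, and likewise for negatives
pos = df.loc[df['survived'] == 1, 'cabin'].value_counts(normalize=True)
neg = df.loc[df['survived'] == 0, 'cabin'].value_counts(normalize=True)
woe = np.log(pos / neg)  # one weight-of-evidence value per label
print(woe.to_dict())
```

For the "ratio" variant, drop the logarithm and keep pos / neg.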
RatioSimilarly, it is recommended to remove rare labels and high cardinality before using this encoder.
# rare label encoder first: transform train_t = rare_encoder.transform(X_train) test_t = rare_encoder.transform(X_test) ratio_enc = ce.WoERatioCategoricalEncoder( encoding_method='ratio', variables=['cabin', 'pclass', 'embarked']) # to fit we need to pass the target y ratio_enc.fit(train_t, y_train) ratio_enc.encoder_dict_ # transform and visualise the data train_t = ratio_enc.transform(train_t) test_t = ratio_enc.transform(test_t) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
OrdinalCategoricalEncoderThe OrdinalCategoricalEncoder will replace the variable labels by digits, from 1 to the number of different labels. If we select "arbitrary", then the encoder will assign numbers as the labels appear in the variable (first come first served). If we select "ordered", the encoder will assign numbers following the mean of the target value for that label. So labels for which the mean of the target is higher will get the number 1, and those where the mean of the target is smallest will get the number n. Ordered
# we will encode 3 variables: ordinal_enc = ce.OrdinalCategoricalEncoder( encoding_method='ordered', variables=['pclass', 'cabin', 'embarked']) # for this encoder, we need to pass the target as argument # if encoding_method='ordered' ordinal_enc.fit(X_train, y_train) # here we can see the mappings ordinal_enc.encoder_dict_ # transform and visualise the data train_t = ordinal_enc.transform(X_train) test_t = ordinal_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
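The "ordered" scheme can be sketched with pandas alone. Note that the numbering direction (ascending here, 0-based) is a convention, and the library's exact convention may differ:

```python
import pandas as pd

df = pd.DataFrame({'cabin': ['A', 'B', 'B', 'C', 'C', 'C'],
                   'survived': [0, 1, 0, 1, 1, 1]})

# rank categories by the mean of the target (ascending, 0-based here)
order = df.groupby('cabin')['survived'].mean().sort_values().index
mapping = {cat: i for i, cat in enumerate(order)}
df['cabin_enc'] = df['cabin'].map(mapping)
print(mapping)  # {'A': 0, 'B': 1, 'C': 2}
```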
Arbitrary
ordinal_enc = ce.OrdinalCategoricalEncoder(encoding_method='arbitrary', variables='cabin') # for this encoder we don't need to add the target. You can leave it or remove it. ordinal_enc.fit(X_train, y_train) ordinal_enc.encoder_dict_
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Note that the ordering of the different labels is not the same when we select "arbitrary" or "ordered"
# transform: see the numerical values in the former categorical variables train_t = ordinal_enc.transform(X_train) test_t = ordinal_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Automatically select categorical variables. This encoder also selects all the categorical variables if None is passed to the variables argument when calling the encoder.
ordinal_enc = ce.OrdinalCategoricalEncoder(encoding_method = 'arbitrary') # for this encoder we don't need to add the target. You can leave it or remove it. ordinal_enc.fit(X_train) ordinal_enc.variables # transform: see the numerical values in the former categorical variables train_t = ordinal_enc.transform(X_train) test_t = ordinal_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
OneHotCategoricalEncoder. Performs one-hot encoding. The encoder can select how many different labels per variable to encode into binary variables. When top_categories is set to None, all the categories will be transformed into binary variables. However, when top_categories is set to an integer, for example 10, then only the 10 most popular categories will be transformed into binary variables, and the rest will be discarded. The encoder can also either create binary variables for all categories (drop_last = False) or remove the binary variable for the last category (drop_last = True), as is customary for linear models. All binary, no top_categories
ohe_enc = ce.OneHotCategoricalEncoder( top_categories=None, variables=['pclass', 'cabin', 'embarked'], drop_last=False) ohe_enc.fit(X_train) ohe_enc.drop_last ohe_enc.encoder_dict_ train_t = ohe_enc.transform(X_train) test_t = ohe_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Dropping the last category for linear models
ohe_enc = ce.OneHotCategoricalEncoder( top_categories=None, variables=['pclass', 'cabin', 'embarked'], drop_last=True) ohe_enc.fit(X_train) ohe_enc.encoder_dict_ train_t = ohe_enc.transform(X_train) test_t = ohe_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Selecting top_categories to encode
ohe_enc = ce.OneHotCategoricalEncoder( top_categories=2, variables=['pclass', 'cabin', 'embarked'], drop_last=False) ohe_enc.fit(X_train) ohe_enc.encoder_dict_ train_t = ohe_enc.transform(X_train) test_t = ohe_enc.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
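Keeping binaries only for the top_categories most frequent labels can be sketched with pandas (toy data, not the Titanic variables):

```python
import pandas as pd

train = pd.DataFrame({'cabin': list('AAAABBBCCD')})

top = 2  # keep binary indicators only for the `top` most frequent labels
top_cats = train['cabin'].value_counts().head(top).index.tolist()
for cat in top_cats:
    train['cabin_' + cat] = (train['cabin'] == cat).astype(int)
print(top_cats)  # ['A', 'B']
```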
RareLabelCategoricalEncoderThe RareLabelCategoricalEncoder groups labels that show a small number of observations in the dataset into a new category called 'Rare'. This helps to avoid overfitting.The argument tol indicates the percentage of observations that the label needs to have in order not to be re-grouped into the "Rare" label. The argument n_categories indicates the minimum number of distinct categories that a variable needs to have for any of the labels to be re-grouped into rare. If the number of labels is smaller than n_categories, then the encoder will not group the labels for that variable.
## Rare value encoder rare_encoder = ce.RareLabelCategoricalEncoder( tol=0.03, n_categories=5, variables=['cabin', 'pclass', 'embarked']) rare_encoder.fit(X_train) # the encoder_dict_ contains a dictionary of the {variable: frequent labels} pair rare_encoder.encoder_dict_ train_t = rare_encoder.transform(X_train) test_t = rare_encoder.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
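The tol-based grouping can be sketched with pandas alone; the n_categories check is omitted here for brevity, and the data is made up:

```python
import pandas as pd

s = pd.Series(['A'] * 50 + ['B'] * 40 + ['C'] * 6 + ['D'] * 3 + ['E'] * 1)

tol = 0.03  # minimum share a label needs in order to keep its own name
freq = s.value_counts(normalize=True)
frequent = freq[freq >= tol].index
grouped = s.where(s.isin(frequent), other='Rare')
print(grouped.value_counts().to_dict())
```

Only 'E' (share 0.01) falls below the threshold here, so it is regrouped into 'Rare'.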
Automatically select all categorical variablesIf no variable list is passed as argument, it selects all the categorical variables.
## Rare value encoder rare_encoder = ce.RareLabelCategoricalEncoder(tol = 0.03, n_categories=5) rare_encoder.fit(X_train) rare_encoder.encoder_dict_ train_t = rare_encoder.transform(X_train) test_t = rare_encoder.transform(X_test) test_t.head()
_____no_output_____
BSD-3-Clause
examples/categorical-encoders.ipynb
iahsanujunda/feature_engine
Simple Linear Regression Importing all libraries required
import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import r2_score from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split %matplotlib inline # Reading data from remote link url = "http://bit.ly/w-data" data = pd.read_csv(url) print("Data imported successfully") data.head(10) # The shape of dataset data.shape # check the info of data data.info() # check the description of student_score data data.describe()
_____no_output_____
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Data Visualization
# Plotting the distribution of scores data.plot(x='Hours', y='Scores', style='o') plt.title('Hours vs Percentage') plt.xlabel('Hours Studied') plt.ylabel('Percentage Score') plt.show()
_____no_output_____
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Linear Regression Model
X = data.iloc[:, :-1].values y = data.iloc[:, 1].values X_train, X_test, y_train, y_test = train_test_split(X, y,train_size=0.80,test_size=0.20,random_state=42)
_____no_output_____
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Training the model
from sklearn.linear_model import LinearRegression linearRegressor= LinearRegression() linearRegressor.fit(X_train, y_train) y_predict= linearRegressor.predict(X_test) # predict on the held-out test set, to compare with y_test
_____no_output_____
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Training the Algorithm
regressor = LinearRegression() regressor.fit(X_train, y_train) print("Training complete.") # Plotting the regression line line = regressor.coef_*X+regressor.intercept_ # Plotting for the test data plt.scatter(X, y) plt.plot(X, line); plt.title('Hours vs Percentage') plt.xlabel('Hours Studied') plt.ylabel('Percentage Score') plt.show()
_____no_output_____
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Checking the accuracy scores for training and test set
print('Test Score') print(regressor.score(X_test, y_test)) print('Training Score') print(regressor.score(X_train, y_train)) y_test y_predict y_predict[:5] data= pd.DataFrame({'Actual': y_test,'Predicted': y_predict[:5]}) data #Let's predict the score for 9.25 hours print('Score of student who studied for 9.25 hours a day', regressor.predict([[9.25]]))
Score of student who studied for 9.25 hours a day [92.38611528]
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
Model Evaluation Metrics
#Checking the efficiency of model mean_squ_error = mean_squared_error(y_test, y_predict[:5]) mean_abs_error = mean_absolute_error(y_test, y_predict[:5]) print("Mean Squred Error:",mean_squ_error) print("Mean absolute Error:",mean_abs_error)
Mean Squred Error: 1404.2200673968694 Mean absolute Error: 33.80918778157651
MIT
Task 1-Prediction using Supervised ML/Prediction using Supervised ML.ipynb
Divyakathirvel26/GRIP-Internship-Tasks
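The two metrics above (plus R²) are simple numpy one-liners; a sketch with made-up numbers, not the notebook's actual predictions:

```python
import numpy as np

y_true = np.array([20, 27, 69, 30, 62])
y_pred = np.array([17, 34, 75, 27, 60])

mse = np.mean((y_true - y_pred) ** 2)   # mean squared error
mae = np.mean(np.abs(y_true - y_pred))  # mean absolute error
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                # coefficient of determination
print(mse, mae, r2)
```

These match sklearn's mean_squared_error, mean_absolute_error and r2_score on the same arrays.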
How to use the Data Lab *Store Client* ServiceThis notebook documents how to use the Data Lab virtual storage system via the store client service. This can be done either from a Python script (e.g. within this notebook) or from the command line using the datalab command. The storage manager service interfaceThe store client service simplifies access to the Data Lab virtual storage system. This section describes the store client service interface in case we want to write our own code against that rather than using one of the provided tools. The store client service accepts an HTTP GET call to the appropriate endpoint for the particular operation:| Endpoint | Description | Req'd Parameters ||----------|-------------|------------|| /get | Retrieve a file | name || /put | Upload a file | name || /load | Load a file to vospace | name, endpoint || /cp | Copy a file/directory | from, to || /ln | Link a file/directory | from, to || /lock | Lock a node from write updates | name || /ls | Get a file/directory listing | name || /access | Determine file accessability | name || /stat | File status info | name,verbose || /mkdir | Create a directory | name || /mv | Move/rename a file/directory | from, to || /rm | Delete a file | name || /rmdir | Delete a directory | name || /tag | Annotate a file/directory | name, tag |For example, a call to http://datalab.noirlab.edu/storage/get?name=vos://mag.csv will retrieve the file '_mag.csv_' from the root directory of the user's virtual storage. Likewise, a python call using the _storeClient_ interface such as "_storeClient.get('vos://mag.csv')_" would get the same file. Virtual storage identifiersFiles in the virtual storage are usually identified via the prefix "_vos://_". This shorthand identifier is resolved to a user's home directory of the storage space in the service. As a convenience, the prefix may optionally be omitted when the parameter refers to a node in the virtual storage. 
Navigation above a user's home directory is not supported, however, subdirectories within the space may be created and used as needed. AuthenticationThe storage manager service requires a DataLab security token. This needs to be passed as the value of the header keyword "X-DL-AuthToken" in any HTTP GET call to the service. If the token is not supplied anonymous access is assumed but provides access only to public storage spaces. From Python codeThe store client service can be called from Python code using the datalab module. This provides methods to access the various functions in the storeClient subpackage. InitializationThis is the setup that is required to use the store client. The first thing to do is import the relevant Python modules and also retrieve our DataLab security token.
# Standard notebook imports from getpass import getpass from dl import authClient, storeClient
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
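The raw HTTP interface from the endpoint table above can also be exercised without the dl module. The sketch below only builds the request (the token value is a placeholder) and does not send it:

```python
import urllib.request

# Build (but do not send) the GET call implied by the endpoint table:
# /ls with a `name` parameter, authenticated via the X-DL-AuthToken header.
base = 'http://datalab.noirlab.edu/storage'
req = urllib.request.Request(
    base + '/ls?name=vos://',
    headers={'X-DL-AuthToken': '<your-token>'},  # placeholder token
)
print(req.full_url)
```

In practice the storeClient calls used below wrap exactly this kind of request for you.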
Comment out and run the cell below if you need to login to Data Lab:
## Get the authentication token for the user #token = authClient.login(input("Enter user name: (+ENTER) "),getpass("Enter password: (+ENTER) ")) #if not authClient.isValidToken(token): # raise Exception('Token is not valid. Please check your usename/password and execute this cell again.')
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Listing a file/directoryWe can see all the files that are in a specific directory or get a full listing for a specific file. In this case, we'll list the default virtual storage directory to use as a basis for changes we'll make below.
listing = storeClient.ls (name = 'vos://') print (listing)
cutout.fits,public,results,tmp
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
The *public* directory shown here is visible to all Data Lab users and provides a means of sharing data without having to setup special access. Similarly, the *tmp* directory is read-protected and provides a convenient temporary directory to be used in a workflow. File Existence and InfoAside from simply listing files, it's possible to test whether a named file already exists or to determine more information about it.
# A simple file existence test: if storeClient.access ('vos://public'): print ('User "public" directory exists') if storeClient.access ('vos://public', mode='w'): print ('User "public" directory is group/world writable') else: print ('User "public" directory is not group/world writable') if storeClient.access ('vos://tmp'): print ('User "tmp" directory exists') if storeClient.access ('vos://tmp', mode='w'): print ('User "tmp" directory is group/world writable') else: print ('User "tmp" directory is not group/world writable')
User "public" directory exists User "public" directory is not group/world writable User "tmp" directory exists User "tmp" directory is not group/world writable
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Uploading a fileNow we want to upload a new data file from our local disk to the virtual storage:
storeClient.put (to = 'vos://newmags.csv', fr = './newmags.csv') print(storeClient.ls (name='vos://'))
(1 / 1) ./newmags.csv -> vos://newmags.csv cutout.fits,newmags.csv,public,results,tmp
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Downloading a fileLet's say we want to download a file from our virtual storage space, in this case a query result that we saved to it in the "How to use the Data Lab query manager service" notebook:
storeClient.get (fr = 'vos://newmags.csv', to = './mymags.csv')
(1/1) [====================] [ 142B] newmags.csv
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
It is also possible to get the contents of a remote file directly into your notebook by specifying the location as an empty string:
data = storeClient.get (fr = 'vos://newmags.csv', to = '') print (data)
id,g,r,i 001,22.3,12.4,21.5 002,22.3,12.4,21.5 003,22.3,12.4,21.5 004,22.3,12.4,21.5 005,22.3,12.4,21.5 006,22.3,12.4,21.5 007,22.3,12.4,21.5
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Loading a file from a remote URL. It is possible to load a file directly to virtual storage from a remote URL (e.g. an "accessURL" for an image cutout, a remote data file, etc.) using the "storeClient.load()" method:
url = "http://datalab.noirlab.edu/svc/cutout?col=&siaRef=c4d_161005_022804_ooi_g_v1.fits.fz&extn=31&POS=335.0,0.0&SIZE=0.1" storeClient.load('vos://cutout.fits',url)
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Creating a directoryWe can create a directory on the remote storage to be used for saving data later:
storeClient.mkdir ('vos://results')
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Copying a file/directoryWe want to put a copy of the file in a remote work directory:
storeClient.mkdir ('vos://temp') print ("Before: " + storeClient.ls (name='vos://temp/')) storeClient.cp (fr = 'vos://newmags.csv', to = 'vos://temp/newmags.csv',verbose=True) print ("After: " + storeClient.ls (name='vos://temp/')) print(storeClient.ls('vos://',format='long'))
-rw-rw-r-x demo01 2963520 22 Nov 2021 14:22 cutout.fits -rw-rw-r-x demo01 142 30 Nov 2021 14:58 newmags.csv drwxrwxr-x demo01 0 14 Jul 2020 10:01 public/ drwxrwxr-x demo01 0 22 Nov 2021 14:22 results/ drwxrwxr-x demo01 0 30 Nov 2021 14:58 temp/ drwxrwx--- demo01 0 14 Jul 2020 10:01 tmp/
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Notice that in the *ls()* call we append the directory name with a trailing '/' to list the contents of the directory rather than the directory itself. Linking to a file/directory**WARNING**: Linking is currently **not** working in the Data Lab storage manager. This notebook will be updated when the problem has been resolved.Sometimes we want to create a link to a file or directory. In this case, the link named by the *'fr'* parameter is created and points to the file/container named by the *'target'* parameter.
storeClient.ln ('vos://mags.csv', 'vos://temp/newmags.csv') print ("Root dir: " + storeClient.ls (name='vos://')) print ("Temp dir: " + storeClient.ls (name='vos://temp/'))
Root dir: cutout.fits,newmags.csv,public,results,temp,tmp Temp dir: newmags.csv
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Moving/renaming a file/directoryWe can move a file or directory:
storeClient.mv(fr = 'vos://temp/newmags.csv', to = 'vos://results') print ("Results dir: " + storeClient.ls (name='vos://results/'))
Results dir: newmags.csv
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Deleting a fileWe can delete a file:
print ("Before: " + storeClient.ls (name='vos://')) storeClient.rm (name = 'vos://mags.csv') print ("After: " + storeClient.ls (name='vos://'))
Before: cutout.fits,newmags.csv,public,results,temp,tmp After: cutout.fits,newmags.csv,public,results,temp,tmp
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Deleting a directoryWe can also delete a directory, doing so also deletes the contents of that directory:
storeClient.rmdir(name = 'vos://temp')
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Tagging a file/directory**Warning**: Tagging is currently **not** working in the Data Lab storage manager. This notebook will be updated when the problem has been resolved.We can tag any file or directory with arbitrary metadata:
storeClient.tag('vos://results', 'The results from my analysis')
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Cleanup the demo directory of remaining files
storeClient.rm (name = 'vos://newmags.csv') storeClient.rm (name = 'vos://results') storeClient.ls (name = 'vos://')
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Using the datalab command. The datalab command provides an alternate, command-line way to work with the storage manager, which is especially useful if you want to interact with your virtual storage from your local computer. Please have the `datalab` command line utility installed first (for install instructions see https://github.com/astro-datalab/datalab ). The cells below are commented out. Copy and paste any of them (without the comment sign) and run locally. Log in once
#!datalab login
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
and enter the credentials as prompted. Downloading a fileLet's say we want to download a file from our virtual storage space:
#!datalab get fr="vos://mags.csv" to="./mags.csv"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Uploading a fileNow we want to upload a new data file from our local disk:
#!datalab put fr="./newmags.csv" to="vos://newmags.csv"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Copying a file/directoryWe want to put a copy of the file in a remote work directory:
#!datalab cp fr="vos://newmags.csv" to="vos://temp/newmags.csv"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Linking to a file/directorySometimes we want to create a link to a file or directory:
#!datalab ln fr="vos://temp/mags.csv" to="vos://mags.csv"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Listing a file/directoryWe can see all the files that are in a specific directory or get a full listing for a specific file:
#!datalab ls name="vos://temp"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Creating a directoryWe can create a directory:
#!datalab mkdir name="vos://results"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Moving/renaming a file/directoryWe can move a file or directory:
#!datalab mv fr="vos://temp/newmags.csv" to="vos://results"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Deleting a fileWe can delete a file:
#!datalab rm name="vos://temp/mags.csv"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Deleting a directoryWe can also delete a directory:
#!datalab rmdir name="vos://temp"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
Tagging a file/directoryWe can tag any file or directory with arbitrary metadata:
#!datalab tag name="vos://results" tag="The results from my analysis"
_____no_output_____
BSD-3-Clause
04_HowTos/StoreClient/How_to_use_the_Data_Lab_StoreClient.ipynb
noaodatalab/notebooks_default
*Bosonic statistics and the Bose-Einstein condensation* `Doruk Efe Gökmen -- 30/08/2018 -- Ankara` Non-interacting ideal bosons. The non-interacting Bose gas is the only system in physics that can undergo a phase transition without mutual interactions between its components. Let us enumerate the energy eigenstates of a single 3D boson in a harmonic trap with the following program.
Emax = 30 States = [] for E_x in range(Emax): for E_y in range(Emax): for E_z in range(Emax): States.append(((E_x + E_y + E_z), (E_x, E_y, E_z))) States.sort() for k in range(Emax): print '%3d' % k, States[k][0], States[k][1]
0 0 (0, 0, 0) 1 1 (0, 0, 1) 2 1 (0, 1, 0) 3 1 (1, 0, 0) 4 2 (0, 0, 2) 5 2 (0, 1, 1) 6 2 (0, 2, 0) 7 2 (1, 0, 1) 8 2 (1, 1, 0) 9 2 (2, 0, 0) 10 3 (0, 0, 3) 11 3 (0, 1, 2) 12 3 (0, 2, 1) 13 3 (0, 3, 0) 14 3 (1, 0, 2) 15 3 (1, 1, 1) 16 3 (1, 2, 0) 17 3 (2, 0, 1) 18 3 (2, 1, 0) 19 3 (3, 0, 0) 20 4 (0, 0, 4) 21 4 (0, 1, 3) 22 4 (0, 2, 2) 23 4 (0, 3, 1) 24 4 (0, 4, 0) 25 4 (1, 0, 3) 26 4 (1, 1, 2) 27 4 (1, 2, 1) 28 4 (1, 3, 0) 29 4 (2, 0, 2)
MIT
8. Bose-Einstein condensation.ipynb
cphysics/simulation
Here it can be perceived that the degeneracy at an energy level $E_n$, which we denote by $\mathcal{N}(E_n)$, is $\frac{(n+1)(n+2)}{2}$. Alternatively, we may use a more systematic approach. We can calculate the number of states at the $n$th energy level as $\mathcal{N}(E_n)=\sum_{E_x=0}^{E_n}\sum_{E_y=0}^{E_n}\sum_{E_z=0}^{E_n}\delta_{(E_x+E_y+E_z),E_n}$, where $\delta_{j,k}$ is the Kronecker delta. For integers $j$ and $k$, the Kronecker delta admits the integral representation $\delta_{j,k}=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{i(j-k)\lambda}$. (1) If we insert this representation into the above expression, we get $\mathcal{N}(E_n)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-iE_n\lambda}\left(\sum_{E_x=0}^{E_n}e^{iE_x\lambda}\right)^3$. The geometric sum can be evaluated, hence we have the integral $\mathcal{N}(E_n)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-iE_n\lambda}\left[\frac{1-e^{i\lambda (n+1)}}{1-e^{i\lambda}}\right]^3$. The integration range corresponds to a circular contour $\mathcal{C}$ of radius 1 centered at 0 in the complex plane. If we define $z=e^{i\lambda}$, the integral transforms into $\mathcal{N}(E_n)=\frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{n+1}}\left[\frac{1-z^{n+1}}{1-z}\right]^3$. Using the residue theorem, this integral can be evaluated by determining the coefficient of the $z^{-1}$ term in the Laurent series of $\frac{1}{z^{n+1}}\left[\frac{1-z^{n+1}}{1-z}\right]^3$, which is $(n+1)(n+2)/2$. Hence we recover the previous result. Five boson bounded trap model. Consider 5 bosons in the harmonic trap, but with a cutoff on the single-particle energies: $E_\sigma\leq 4$. There are $35$ possible single-particle energy states (labelled 0 to 34). For this model, the above naive enumeration of these energy states still works. We can label the state of each of the 5 particles by $\sigma_i$, so that $\{\text{5-particle state}\}=\{\sigma_1,\cdots,\sigma_5\}$.
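The degeneracy formula can be checked by brute force, counting the triples of non-negative integers $(E_x, E_y, E_z)$ that sum to $n$:

```python
# Brute-force check of the degeneracy N(E_n) = (n + 1)(n + 2) / 2:
# count the triples of non-negative integers (E_x, E_y, E_z) summing to n.
def degeneracy(n):
    return sum(1 for ex in range(n + 1) for ey in range(n + 1)
               for ez in range(n + 1) if ex + ey + ez == n)

print([degeneracy(n) for n in range(5)])  # [1, 3, 6, 10, 15]
```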
The partition function for this system is given by $Z(\beta)=\sum_{0\leq\sigma_1\leq\cdots\leq\sigma_5\leq 34}e^{-\beta E(\sigma_1,\cdots,\sigma_5)}$. In the following program, the average occupation number of the ground state per particle (the condensate fraction) is calculated at different temperatures. However, due to the nested for loops, this method quickly becomes impractical for larger particle numbers.
%pylab inline
import math, numpy as np, pylab as plt

# Calculate the partition function for 5 bosons by stacking the bosons in the
# n_states_1p possible single-particle states and counting each ordered multiset
# of states only once (the bosons are indistinguishable).
def bosons_bounded_harmonic(beta, N):
    Energy = []        # energy values, with enumeration
    n_states_1p = 0    # total number of states of a single trapped boson
    for n in range(N + 1):
        degeneracy = (n + 1) * (n + 2) // 2   # degeneracy of level n in the 3D harmonic oscillator
        Energy += [float(n)] * degeneracy
        n_states_1p += degeneracy
    n_states_5p = 0    # total number of states of 5 trapped bosons
    Z = 0.0            # partition function
    N0_mean = 0.0
    E_mean = 0.0
    for s_0 in range(n_states_1p):
        for s_1 in range(s_0, n_states_1p):   # impose s_0 <= s_1 <= ... to avoid overcounting
            for s_2 in range(s_1, n_states_1p):
                for s_3 in range(s_2, n_states_1p):
                    for s_4 in range(s_3, n_states_1p):
                        n_states_5p += 1
                        state = [s_0, s_1, s_2, s_3, s_4]   # the 5-boson state
                        E = sum(Energy[s] for s in state)   # total energy from the enumeration
                        Z += math.exp(-beta * E)            # canonical partition function
                        E_mean += E * math.exp(-beta * E)   # avg. total energy
                        N0_mean += state.count(0) * math.exp(-beta * E)  # avg. ground-level occupation number
    return n_states_5p, Z, E_mean, N0_mean

N = 4        # energy cutoff for each boson
beta = 1.0   # inverse temperature
n_states_5p, Z, E_mean, N0_mean = bosons_bounded_harmonic(beta, N)
print 'Temperature:', 1 / beta, '| Total number of possible states:', n_states_5p, \
      '| Partition function:', Z, \
      '| Average energy per particle:', E_mean / Z / 5.0, \
      '| Condensate fraction (ground state occupation per particle):', N0_mean / Z / 5.0
cond_frac = []
temperature = []
for T in np.linspace(0.1, 1.0, 10):
    n_states_5p, Z, E_mean, N0_mean = bosons_bounded_harmonic(1.0 / T, N)
    cond_frac.append(N0_mean / Z / 5.0)
    temperature.append(T)
plt.plot(temperature, cond_frac)
plt.title('Condensate fraction for the $N=5$ bosons bounded trap model ($N_{bound}=%i$)' % N, fontsize=14)
plt.xlabel('$T$', fontsize=14)
plt.ylabel('$\\langle N_0 \\rangle / N$', fontsize=14)
plt.grid()
Populating the interactive namespace from numpy and matplotlib Temperature: 1.0 Total number of possible states: 575757 | Partition function: 17.3732972183 | Average energy per particle: 1.03133265311 | Condensate fraction (ground state occupation per particle): 0.446969501933
Here we see that all particles are in the ground state at very low temperatures; this is a simple consequence of Boltzmann statistics: at zero temperature all the particles populate the ground state. Bose-Einstein condensation is something else. It means that a finite fraction of the system is in the ground state at temperatures much higher than the gap between the ground state and the first excited state (which is one in our system). Bose-Einstein condensation occurs when, all of a sudden, a finite fraction of the particles populates the single-particle ground state. In a trap, this happens at higher and higher temperatures as we increase the particle number. Alternatively, we can characterise each single-particle state $\sigma=0,\cdots,34$ by an occupation number $n_\sigma$. In this occupation-number representation, the energy is given by $E=n_0E_0+\cdots + n_{34}E_{34}$, and the partition function is $Z(\beta)=\sum^{N=5}_{n_0=0}\cdots\sum^{N=5}_{n_{34}=0}e^{-\beta(n_0E_0+\cdots + n_{34}E_{34})}\delta_{(n_0+\cdots + n_{34}),N=5}$. Using the integral representation of the Kronecker delta given in (1), and evaluating the resulting geometric sums, we have $Z(\beta)=\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}$, where $f_E(\beta,\lambda)=\sum_{n=0}^{N}e^{n(i\lambda-\beta E)}$. (2) The bosonic density matrix **Distinguishable particles:** The partition function of $N$ distinguishable particles is given by $Z^D(\beta)=\int \text{d}\mathbf{x}\,\rho(\mathbf{x},\mathbf{x},\beta)$, where $\mathbf{x}=\{x_0,\cdots,x_{N-1}\}$ collects the positions of the particles, and $\rho$ is the $N$-distinguishable-particle density matrix. 
If the particles are non-interacting (ideal), then the density matrix can simply be decomposed into $N$ single particle density matrices as $\rho^{D,\text{ideal}}(\mathbf{x},\mathbf{x}',\beta)=\Pi_{i=0}^{N-1}\rho(x_i,x_i',\beta)$, (3) with the single particle density matrix $\rho(x_i,x_i',\beta)=\sum_{\lambda_i=0}^{\infty}\psi_{\lambda_i}(x_i)\psi_{\lambda_i}^{*}(x'_i)e^{-\beta E_{\lambda_i}}$, where $\lambda_i$ labels the energy eigenstates of the $i$th particle. This means that the quantum statistical paths of the particles are independent. More generally, the interacting many-distinguishable-particle density matrix is $\rho^{D}(\mathbf{x},\mathbf{x}',\beta)=\sum_{\sigma}\Psi_{\sigma}(\mathbf{x})\Psi_{\sigma}^{*}(\mathbf{x}')e^{-\beta E_{\sigma}}$, (4) where the sum runs over all possible $N$ particle states $\sigma=\{\lambda_0,\cdots,\lambda_{N-1}\}$. The interacting paths are described by paths whose weights are modified through the Trotter decomposition, which *correlates* those paths. **Indistinguishable particles:** The particles $\{0,\cdots,N-1\}$ are indistinguishable if and only if $\Psi_{\sigma_\text{id}}(\mathbf{x})=\xi^\mathcal{P}\Psi_{\sigma_\text{id}}(\mathcal{P}\mathbf{x})$ for every permutation $\mathcal{P}$, (5) where they are in an indistinguishable state ${\sigma_\text{id}}$, $\mathcal{P}$ is any $N$ particle permutation and the *species factor* $\xi$ is $-1$ (antisymmetric) for fermions, and $1$ (symmetric) for bosons. Here we focus on the bosonic case. Since there are $N!$ such permutations, if the particles are indistinguishable bosons, using (5) we get $\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}\mathbf{x})=\Psi_\sigma(\mathbf{x})$, i.e. $\Psi_\sigma(\mathbf{x})=\Psi_{\sigma_\text{id}}(\mathbf{x})$. Furthermore, from a group theory argument it follows that $\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}\mathbf{x})=0$ otherwise (fermionic or distinguishable). 
This can be expressed in a more compact form as$\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}x)=\delta_{{\sigma_\text{id}},\sigma}\Psi_\sigma(x)$. (6)By definition, the bosonic density matrix should be $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\sum_{\sigma=\{\sigma_\text{id}\}}\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathbf{x}')e^{-\beta E_\sigma}=\sum_{\sigma}\delta_{{\sigma_\text{id}},\sigma}\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathbf{x}')e^{-\beta E_\sigma}$, i.e. a sum over all $N$ particle states which are symmetric. If we insert Eqn. (6) here in the latter equality, we get $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\sigma\Psi_\sigma(\mathbf{x})\sum_\mathcal{P}\Psi^{*}_\sigma(\mathcal{P}\mathbf{x}')e^{-\beta E_\sigma}$. Exchanging the sums, we get $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\sum_\sigma\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathcal{P}\mathbf{x}')e^{-\beta E_\sigma}$. In other words, we simply have $\boxed{\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x}',\beta)}$, (7)that is the average of the distinguishable density matrices over all permutations of $N$ particles.For ideal bosons, we have $\boxed{\rho^\text{bose, ideal}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\rho(x_0,\mathcal{P}x_0',\beta)\rho(x_1,\mathcal{P}x_1',\beta)\cdots\rho(x_{N-1},\mathcal{P}x_{N-1}',\beta)}$. (8)The partition function is therefore $Z^\text{bose}(\beta)=\frac{1}{N!}\int \text{d}x_0\cdots\text{d}x_{N-1}\sum_\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x},\beta)=\frac{1}{N!}\sum_\mathcal{P}Z_\mathcal{P}$, (9)i.e. an integral over paths and an average over all permutations. We should therefore sample both positions and permutations. 
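Equations (8) and (9) can be checked for the smallest nontrivial case. For two ideal bosons in a one-dimensional harmonic trap (energies $E_n=n$, with the ground-state energy set to zero), the direct sum over symmetric two-particle states $n\leq m$ must equal the permutation average $\frac{1}{2!}\left[z(\beta)^2+z(2\beta)\right]$ with $z(\beta)=1/(1-e^{-\beta})$. A standalone Python 3 sketch; the truncation `n_max` is an assumption, chosen so the neglected tail is negligible:

```python
import math

def Z_bose_2_direct(beta, n_max=200):
    # direct sum over the symmetric two-particle states n <= m of the
    # 1d harmonic oscillator (energies E_n = n)
    return sum(math.exp(-beta * (n + m))
               for n in range(n_max) for m in range(n, n_max))

def Z_bose_2_permutations(beta):
    # eq. (9): average over the two permutations of S_2,
    # Z = (z(beta)^2 + z(2*beta)) / 2!  with  z(beta) = 1/(1 - exp(-beta))
    z = lambda b: 1.0 / (1.0 - math.exp(-b))
    return (z(beta) ** 2 + z(2.0 * beta)) / 2.0

for beta in [0.5, 1.0, 2.0]:
    assert abs(Z_bose_2_direct(beta) - Z_bose_2_permutations(beta)) < 1e-9
```

The identity permutation contributes $z(\beta)^2$ and the transposition contributes $z(2\beta)$, which is the $N=2$ instance of the cycle structure discussed below.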
For fermions, the sum over permutations $\mathcal{P}$ involves a weighting factor $(-1)^{\mathcal{P}}$: $\rho^\text{fermi}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}(-1)^\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x}',\beta)$. Therefore, for fermions, the corresponding path integrals are nontrivial, and they involve Grassmann variables (see e.g. Negele & Orland https://www.amazon.com/Quantum-Many-particle-Systems-Advanced-Classics/dp/0738200522 ).  Sampling permutations The following Markov-chain algorithm samples permutations of $N$ elements of a list $L$. The partition function for uniformly distributed permutations $\mathcal{P}$ is $Y_N=\sum_\mathcal{P}1=N!$.
import random

N = 3             # length of the list
statistics = {}
L = range(N)      # initialise the list
nsteps = 10
for step in range(nsteps):
    i = random.randint(0, N - 1)   # pick two random indices i and j from the list L
    j = random.randint(0, N - 1)
    L[i], L[j] = L[j], L[i]        # exchange the i-th and j-th elements
    if tuple(L) in statistics:
        statistics[tuple(L)] += 1  # if this permutation has appeared before, add 1 to its count
    else:
        statistics[tuple(L)] = 1   # if it appears for the first time, give it a count of 1
print L
print range(N)
print
for item in statistics:
    print item, statistics[item]
Let us look at the permutation cycles and their frequency of occurrence:
import random

N = 20                 # length of the list
stats = [0] * (N + 1)  # stats[k] counts the observed cycles of length k
L = range(N)           # initialise the list
nsteps = 1000000       # number of steps
for step in range(nsteps):
    i = random.randint(0, N - 1)   # pick two random indices i and j from the list L
    j = random.randint(0, N - 1)
    L[i], L[j] = L[j], L[i]        # exchange the i-th and j-th elements in the list L
    # every 100 steps, calculate the lengths of the permutation cycles in the list L
    if step % 100 == 0:
        cycle_dict = {}                # the permutation cycle dictionary:
        for k in range(N):             # keys (k) are the particles,
            cycle_dict[k] = L[k]       # values (L[k]) are their successors in the permutation
        while cycle_dict != {}:        # while the cycle dictionary is not empty
            starting_element = cycle_dict.keys()[0]   # first remaining element starts a cycle
            cycle_length = 0
            old_element = starting_element
            while True:
                cycle_length += 1
                new_element = cycle_dict.pop(old_element)   # successor in the permutation cycle
                if new_element == starting_element:
                    break              # cycle complete
                else:
                    old_element = new_element   # move on to the next successor
            stats[cycle_length] += 1   # one more cycle of this length observed
for k in range(1, N + 1):              # print the cycle lengths and their numbers of occurrences
    print k, stats[k]
 1 10130
 2 5008
 3 3395
 4 2438
 5 1969
 6 1659
 7 1403
 8 1260
 9 1118
10 949
11 943
12 833
13 778
14 745
15 642
16 618
17 610
18 553
19 530
20 492
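The counts above fall off roughly as $1/k$: for uniformly distributed permutations, the expected number of cycles of length $k$ per permutation is $1/k$, so with $10^6/100=10^4$ sampled permutations one expects about $10^4/k$ cycles of length $k$, as observed. The underlying identity (a total of $N!/k$ cycles of length $k$ over all $N!$ permutations) can be checked exactly for small $N$ with a standalone Python 3 sketch:

```python
import itertools, math

def total_cycles_of_length(N, k):
    # over all N! permutations of N elements, count the cycles of length k
    total = 0
    for perm in itertools.permutations(range(N)):
        seen = [False] * N
        for start in range(N):
            if not seen[start]:
                length, j = 0, start
                while not seen[j]:        # follow successors until the cycle closes
                    seen[j] = True
                    j = perm[j]
                    length += 1
                if length == k:
                    total += 1
    return total

N = 5
for k in range(1, N + 1):
    assert total_cycles_of_length(N, k) == math.factorial(N) // k
```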
The partition function of permutations $\mathcal{P}$ on a list of length $N$ is $Y_N=\sum_\mathcal{P}\text{weight}(\mathcal{P})$. Let $z_n$ be the weight of a permutation cycle of length $n$. Then, the permutation $[0,1,2,3]\rightarrow[0,1,2,3]$, which can be represented as $(0)(1)(2)(3)$, has the weight $z_1^4$; similarly, $(0)(12)(3)$ would have $z_1^2z_2$, etc. Generally, the cycle $\{n_1,\cdots,n_{k-1},\text{last element}\}$, i.e. the cycle containing the last element, has length $k$ and weight $z_k$. The remaining $N-k$ elements have the partition function $Y_{N-k}$. Hence, the total partition function is given by $Y_N=\sum_{k=1}^N z_k\,\#\{\text{choices for } \{n_1,\cdots,n_{k-1}\}\}\cdot\#\{\text{cyclic orders of} \{n_1,\cdots,n_{k}\}\}\cdot Y_{N-k}$ $\implies Y_N=\sum_{k=1}^N z_k{{N-1}\choose{k-1}}(k-1)!\,Y_{N-k}$, which leads to the following recursion formula $\boxed{Y_N=\frac{1}{N}\sum_{k=1}^N z_k\frac{N!}{(N-k)!}Y_{N-k}, (\text{with }Y_0=1)}$. (10) ***Using the convolution property, we can regard the $l$ bosons in a permutation cycle of length $l$ at temperature $1/\beta$ as a single boson at a temperature $1/(l\beta)$.*** *Example 1:* Consider the permutation $[0,3,1,2]\rightarrow[0,1,2,3]$, which consists of the permutation cycle $1\rightarrow 2 \rightarrow 3 \rightarrow 1$ of length 3 ($\mathcal{P}=(132)$). This corresponds to the partition function $Z^\text{bose}_{(0)(132)}(\beta)=\int \text{d}x_0\rho(x_0,x_0,\beta)\int\text{d}x_1\int\text{d}x_2\int\text{d}x_3\rho(x_1,x_3,\beta)\rho(x_3,x_2,\beta)\rho(x_2,x_1,\beta)$. Using the convolution property, we have: $\int\text{d}x_3\rho(x_1,x_3,\beta)\rho(x_3,x_2,\beta)=\rho(x_1,x_2,2\beta)\implies\int\text{d}x_2\rho(x_1,x_2,2\beta)\rho(x_2,x_1,\beta)=\rho(x_1,x_1,3\beta)$. 
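Recursion (10) can be sanity-checked in a few lines: with all cycle weights $z_k=1$ it must reproduce $Y_N=N!$, the number of permutations. A standalone Python 3 sketch:

```python
import math

def Y(N, z):
    # Y_M = (1/M) * sum_{k=1}^{M} z_k * M!/(M-k)! * Y_{M-k},  with Y_0 = 1
    Y_list = [1.0]
    for M in range(1, N + 1):
        Y_list.append(sum(z(k) * math.factorial(M) / math.factorial(M - k) * Y_list[M - k]
                          for k in range(1, M + 1)) / M)
    return Y_list[N]

# with all cycle weights z_k = 1, the recursion must reproduce Y_N = N!
for N in range(9):
    assert abs(Y(N, lambda k: 1.0) - math.factorial(N)) < 1e-6
```

The same function with $z_k=z(k\beta)$ gives the bosonic partition functions used below.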
The single particle partition function is defined as $z(\beta)=\int\text{d}\mathbf{x}\,\rho(\mathbf{x},\mathbf{x},\beta) =\left[ \int\text{d}x\,\rho(x,x,\beta)\right]^3$.$\implies Z^\text{bose}_{(0)(132)}(\beta)=\int \text{d}x_0\,\rho(x_0,x_0,\beta)\int\text{d}x_1\,\rho(x_1,x_1,3\beta)=z(\beta)z(3\beta)$. *Example 2:* $Z^\text{bose}_{(0)(1)(2)(3)}(\beta)=z(\beta)^4$. Simulation of bosons in a harmonic trap: (carefully note that there are no intermediate slices in the sampled paths, since the paths are sampled from the exact distribution).
import random, math, pylab, mpl_toolkits.mplot3d

# 3-dimensional Levy algorithm, used for resampling the positions of
# an entire permutation cycle of bosons
def levy_harmonic_path(k, beta):
    # direct-sample (rejection-free) the three coordinates of the final point,
    # using the diagonal density matrix; k is the length of the permutation cycle
    xk = tuple([random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0))) for d in range(3)])
    x = [xk]
    for j in range(1, k):   # loop through the permutation cycle
        # Levy sampling (sample a point given the latest sample and the final point)
        Upsilon_1 = (1.0 / math.tanh(beta) + 1.0 / math.tanh((k - j) * beta))
        Upsilon_2 = [x[j - 1][d] / math.sinh(beta) + xk[d] / math.sinh((k - j) * beta) for d in range(3)]
        x_mean = [Upsilon_2[d] / Upsilon_1 for d in range(3)]
        sigma = 1.0 / math.sqrt(Upsilon_1)
        dummy = [random.gauss(x_mean[d], sigma) for d in range(3)]   # direct-sample the j-th point
        x.append(tuple(dummy))   # construct the 3d path (permutation cycle)
    return x

# (non-diagonal) harmonic-oscillator density matrix,
# used for organising the exchange of two elements (to sample permutations)
def rho_harm(x, xp, beta):
    Upsilon_1 = sum((x[d] + xp[d]) ** 2 / 4.0 * math.tanh(beta / 2.0) for d in range(3))
    Upsilon_2 = sum((x[d] - xp[d]) ** 2 / 4.0 / math.tanh(beta / 2.0) for d in range(3))
    return math.exp(- Upsilon_1 - Upsilon_2)

N = 256       # number of bosons
T_star = 0.3
beta = 1.0 / (T_star * N ** (1.0 / 3.0))   # T* = T / N^(1/3) is the rescaled temperature
nsteps = 1000000
positions = {}   # position dictionary; initial permutation is the identity (k = 1)
for j in range(N):
    a = levy_harmonic_path(1, beta)   # initial position (a single 3d point)
    positions[a[0]] = a[0]            # each particle is initially its own successor
for step in range(nsteps):
    # SAMPLE POSITIONS: resample one entire permutation cycle
    boson_a = random.choice(positions.keys())   # randomly pick a boson "a"
    perm_cycle = []
    while True:   # compute the permutation cycle containing boson "a"
        perm_cycle.append(boson_a)
        boson_b = positions.pop(boson_a)   # successor of "a"
        if boson_b == perm_cycle[0]:
            break                          # the cycle is complete
        else:
            boson_a = boson_b
    k = len(perm_cycle)
    perm_cycle = levy_harmonic_path(k, beta)    # resample the positions in this cycle
    positions[perm_cycle[-1]] = perm_cycle[0]   # the last point maps to the first (a "cycle")
    for j in range(len(perm_cycle) - 1):
        positions[perm_cycle[j]] = perm_cycle[j + 1]
    # SAMPLE PERMUTATION CYCLES: attempt an exchange of two particles (Metropolis)
    a_1 = random.choice(positions.keys())
    b_1 = positions.pop(a_1)
    a_2 = random.choice(positions.keys())
    b_2 = positions.pop(a_2)
    weight_new = rho_harm(a_1, b_2, beta) * rho_harm(a_2, b_1, beta)
    weight_old = rho_harm(a_1, b_1, beta) * rho_harm(a_2, b_2, beta)
    if random.uniform(0.0, 1.0) < weight_new / weight_old:
        positions[a_1] = b_2   # accept the exchange
        positions[a_2] = b_1
    else:
        positions[a_1] = b_1   # reject the exchange
        positions[a_2] = b_2
# Figure output: find and plot the permutation cycles
fig = pylab.figure()
ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
ax.set_aspect('equal')
list_colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
n_colors = len(list_colors)
dict_colors = {}
i_color = 0
while positions:
    x, y, z = [], [], []
    starting_boson = positions.keys()[0]
    boson_old = starting_boson
    while True:
        x.append(boson_old[0])
        y.append(boson_old[1])
        z.append(boson_old[2])
        boson_new = positions.pop(boson_old)
        if boson_new == starting_boson:
            break
        else:
            boson_old = boson_new
    len_cycle = len(x)
    if len_cycle > 2:
        x.append(x[0])
        y.append(y[0])
        z.append(z[0])
    if len_cycle in dict_colors:
        color = dict_colors[len_cycle]
        ax.plot(x, y, z, color + '+-', lw=0.75)
    else:
        color = list_colors[i_color]
        i_color = (i_color + 1) % n_colors
        dict_colors[len_cycle] = color
        ax.plot(x, y, z, color + '+-', label='k=%i' % len_cycle, lw=0.75)
# finalize plot
pylab.title('$N=%i$, $T^*=%s$' % (N, T_star))
pylab.legend()
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_zlabel('$z$', fontsize=16)
ax.set_xlim3d([-8, 8])
ax.set_ylim3d([-8, 8])
ax.set_zlim3d([-8, 8])
pylab.savefig('snapshot_bosons_3d_N%04i_Tstar%04.2f.png' % (N, T_star))
pylab.show()
![caption](BEC.gif) But we do know that for the harmonic trap, the single 3-dimensional particle partition function is given by $z(\beta)=\left(\frac{1}{1-e^{-\beta}}\right)^3$. The permutation cycle of length $k$ corresponds to $z_k=z(k\beta)=\left(\frac{1}{1-e^{-k\beta}}\right)^3$. Hence, using (9) and (10), we have that $Z^\text{bose}_N=Y_N/{N!}=\frac{1}{N}\sum_{k=1}^N z_k Z^\text{bose}_{N-k}, (\text{with }Z^\text{bose}_0=1)$. (11)(Due to Landsberg, 1961 http://store.doverpublications.com/0486664937.html)This recursion relation relates the partition function of a system of $N$ ideal bosons to the partition function of a single particle and the partition functions of systems with fewer particles.
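Recursion (11) can be checked against the explicit permutation averages for small $N$: averaging over the two permutations of $S_2$ gives $Z_2=\frac{1}{2}[z(\beta)^2+z(2\beta)]$, and over the six permutations of $S_3$ gives $Z_3=\frac{1}{6}[z(\beta)^3+3z(\beta)z(2\beta)+2z(3\beta)]$. A standalone Python 3 sketch:

```python
import math

def z(k, beta):
    # single 3d particle in a harmonic trap, at the cycle temperature 1/(k*beta)
    return 1.0 / (1.0 - math.exp(-k * beta)) ** 3

def canonic_recursion(N, beta):
    # Landsberg recursion: Z_M = (1/M) * sum_{k=1}^{M} z(k) * Z_{M-k}
    Z = [1.0]
    for M in range(1, N + 1):
        Z.append(sum(Z[k] * z(M - k, beta) for k in range(M)) / M)
    return Z

beta = 1.3
Z = canonic_recursion(3, beta)
# closed forms from averaging over the permutations of S_2 and S_3:
Z2 = (z(1, beta) ** 2 + z(2, beta)) / 2.0
Z3 = (z(1, beta) ** 3 + 3.0 * z(1, beta) * z(2, beta) + 2.0 * z(3, beta)) / 6.0
assert abs(Z[2] - Z2) < 1e-12
assert abs(Z[3] - Z3) < 1e-12
```

The coefficients 1, 3, 2 count the permutations of $S_3$ with the corresponding cycle structures.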
import math, pylab

def z(k, beta):   # partition function of a single particle in a harmonic trap
    return 1.0 / (1.0 - math.exp(- k * beta)) ** 3

def canonic_recursion(N, beta):   # Landsberg recursion relations for the partition function of N bosons
    Z = [1.0]                     # Z_0 = 1
    for M in range(1, N + 1):
        Z.append(sum(Z[k] * z(M - k, beta) \
                     for k in range(M)) / M)
    return Z                      # list of partition functions for boson numbers up to N

N = 256        # number of bosons
T_star = 0.5   # temperature
beta = 1.0 / N ** (1.0 / 3.0) / T_star
Z = canonic_recursion(N, beta)   # partition function
pi_k = [(z(k, beta) * Z[N - k] / Z[-1]) / float(N) for k in range(1, N + 1)]   # probability of a cycle of length k
# graphics output
pylab.plot(range(1, N + 1), pi_k, 'b-', lw=2.5)
pylab.ylim(0.0, 0.01)
pylab.xlabel('cycle length $k$', fontsize=16)
pylab.ylabel('cycle probability $\pi_k$', fontsize=16)
pylab.title('Cycle length distribution ($N=%i$, $T^*=%s$)' % (N, T_star), fontsize=16)
pylab.savefig('plot-prob_cycle_length.png')
Since we have an analytical solution to the problem, we can now implement a rejection-free direct sampling algorithm for the permutations.
import math, random

def z(k, beta):   # partition function of a single particle in a harmonic trap
    return (1.0 - math.exp(- k * beta)) ** (-3)

def canonic_recursion(N, beta):   # Landsberg recursion relation for N bosons in a harmonic trap
    Z = [1.0]
    for M in range(1, N + 1):
        Z.append(sum(Z[k] * z(M - k, beta) for k in range(M)) / M)
    return Z

def make_pi_list(Z, M):
    # cumulative probabilities for a boson to be in a permutation cycle of length 1..M
    pi_list = [0.0] + [z(k, beta) * Z[M - k] / Z[M] / M for k in range(1, M + 1)]
    pi_cumulative = [0.0]
    for k in range(1, M + 1):
        pi_cumulative.append(pi_cumulative[k - 1] + pi_list[k])
    return pi_cumulative

def naive_tower_sample(pi_cumulative):
    eta = random.uniform(0.0, 1.0)
    for k in range(len(pi_cumulative)):
        if eta < pi_cumulative[k]:
            break
    return k

def levy_harmonic_path(dtau, N):   # path sampling (positions within a permutation cycle)
    beta = N * dtau
    x_N = random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0)))
    x = [x_N]
    for k in range(1, N):
        dtau_prime = (N - k) * dtau
        Upsilon_1 = 1.0 / math.tanh(dtau) + 1.0 / math.tanh(dtau_prime)
        Upsilon_2 = x[k - 1] / math.sinh(dtau) + x_N / math.sinh(dtau_prime)
        x_mean = Upsilon_2 / Upsilon_1
        sigma = 1.0 / math.sqrt(Upsilon_1)
        x.append(random.gauss(x_mean, sigma))
    return x

### main program starts here ###
N = 8          # number of bosons
T_star = 0.1   # temperature
beta = 1.0 / N ** (1.0 / 3.0) / T_star
n_steps = 1000
Z = canonic_recursion(N, beta)   # partition functions for up to N bosons
for step in range(n_steps):
    N_tmp = N
    x_config, y_config, z_config = [], [], []   # configurations in the three directions
    while N_tmp > 0:   # distribute the remaining particles over permutation cycles
        pi_sum = make_pi_list(Z, N_tmp)
        k = naive_tower_sample(pi_sum)
        x_config += levy_harmonic_path(beta, k)
        y_config += levy_harmonic_path(beta, k)
        z_config += levy_harmonic_path(beta, k)
        N_tmp -= k   # k particles are now placed in a cycle of length k
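The tower of probabilities used here is automatically normalized: by the recursion, $\sum_{k=1}^{M} z_k Z_{M-k} = M Z_M$, so the cumulative list always ends at 1 and the tower sampling never falls off the end. A standalone Python 3 check (here `make_pi_list` takes `beta` explicitly instead of relying on a global, which is an assumption of this sketch):

```python
import math

def z(k, beta):
    return (1.0 - math.exp(-k * beta)) ** (-3)

def canonic_recursion(N, beta):
    Z = [1.0]
    for M in range(1, N + 1):
        Z.append(sum(Z[k] * z(M - k, beta) for k in range(M)) / M)
    return Z

def make_pi_list(Z, M, beta):
    pi_list = [0.0] + [z(k, beta) * Z[M - k] / Z[M] / M for k in range(1, M + 1)]
    pi_cumulative = [0.0]
    for k in range(1, M + 1):
        pi_cumulative.append(pi_cumulative[k - 1] + pi_list[k])
    return pi_cumulative

N = 8
beta = 1.0 / N ** (1.0 / 3.0) / 0.1
Z = canonic_recursion(N, beta)
for M in range(1, N + 1):
    pi_cum = make_pi_list(Z, M, beta)
    assert abs(pi_cum[-1] - 1.0) < 1e-9                       # the tower is normalized
    assert all(b >= a for a, b in zip(pi_cum, pi_cum[1:]))    # and non-decreasing
```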
Physical properties of the 1-dimensional classical and bosonic systems* Consider 2 non-interacting **distinguishable particles** in a 1-dimensional harmonic trap:
import random, math, pylab

# There are only two possible cases: for k=1, we sample a single position
# (a cycle of length one); for k=2, we sample two positions (a cycle of length two).
def levy_harmonic_path(k):
    x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))]   # direct-sample the first position
    if k == 2:
        Ups1 = 2.0 / math.tanh(beta)
        Ups2 = 2.0 * x[0] / math.sinh(beta)
        x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
    return x[:]

def pi_x(x, beta):   # exact single-particle position distribution
    sigma = 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0))
    return math.exp(-x ** 2 / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi) / sigma

beta = 2.0
nsteps = 1000000
low = levy_harmonic_path(2)   # positions at tau = 0 (initial sample has the identity permutation)
high = low[:]                 # positions at tau = beta
data = []
for step in xrange(nsteps):
    k = random.choice([0, 1])
    low[k] = levy_harmonic_path(1)[0]
    high[k] = low[k]
    data.append(high[k])
list_x = [0.1 * a for a in range(-30, 31)]
y = [pi_x(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distribution')
pylab.hist(data, normed=True, bins=80, label='QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$x$', fontsize=14)
pylab.ylabel('$\\pi(x)$', fontsize=14)
pylab.title('2 non-interacting distinguishable 1-d particles', fontsize=14)
pylab.xlim(-3, 3)
pylab.savefig('plot_A1_beta%s.png' % beta)
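As a consistency check on the analytic curve used above: $\pi(x)$ is a Gaussian with variance $\sigma^2 = 1/(2\tanh(\beta/2))$, so it should be normalized and have exactly that second moment. A standalone Python 3 sketch verifying both by numerical integration:

```python
import math
import numpy as np

def pi_x(x, beta):
    # exact position distribution of a quantum particle in a 1d harmonic trap:
    # a Gaussian with sigma^2 = 1 / (2 tanh(beta/2))
    sigma = 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0))
    return math.exp(-x ** 2 / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi) / sigma

beta = 2.0
xs = np.linspace(-10.0, 10.0, 20001)
ys = np.array([pi_x(x, beta) for x in xs])
norm = np.trapz(ys, xs)                   # should be 1
second_moment = np.trapz(xs ** 2 * ys, xs)   # should be sigma^2
assert abs(norm - 1.0) < 1e-8
assert abs(second_moment - 1.0 / (2.0 * math.tanh(beta / 2.0))) < 1e-6
```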
* Consider two non-interacting **indistinguishable bosonic** quantum particles in a one-dimensional harmonic trap:
import math, random, pylab, numpy as np

def z(beta):
    return 1.0 / (1.0 - math.exp(- beta))

def pi_two_bosons(x, beta):   # exact two-boson position distribution
    pi_x_1 = math.sqrt(math.tanh(beta / 2.0)) / math.sqrt(math.pi) * math.exp(-x ** 2 * math.tanh(beta / 2.0))
    pi_x_2 = math.sqrt(math.tanh(beta)) / math.sqrt(math.pi) * math.exp(-x ** 2 * math.tanh(beta))
    weight_1 = z(beta) ** 2 / (z(beta) ** 2 + z(2.0 * beta))
    weight_2 = z(2.0 * beta) / (z(beta) ** 2 + z(2.0 * beta))
    pi_x = pi_x_1 * weight_1 + pi_x_2 * weight_2
    return pi_x

def levy_harmonic_path(k):
    x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))]
    if k == 2:
        Ups1 = 2.0 / math.tanh(beta)
        Ups2 = 2.0 * x[0] / math.sinh(beta)
        x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
    return x[:]

def rho_harm_1d(x, xp, beta):
    Upsilon_1 = (x + xp) ** 2 / 4.0 * math.tanh(beta / 2.0)
    Upsilon_2 = (x - xp) ** 2 / 4.0 / math.tanh(beta / 2.0)
    return math.exp(- Upsilon_1 - Upsilon_2)

beta = 2.0
list_beta = np.linspace(0.1, 5.0)
nsteps = 10000
low = levy_harmonic_path(2)
high = low[:]
fract_one_cycle_dat, fract_two_cycles_dat = [], []
for beta in list_beta:
    one_cycle_dat = 0.0   # fraction of single-cycle permutations at this temperature
    data = []
    for step in xrange(nsteps):
        # move 1 (direct-sample the positions)
        if low[0] == high[0]:   # the permutation consists of two cycles of length 1
            k = random.choice([0, 1])
            low[k] = levy_harmonic_path(1)[0]
            high[k] = low[k]    # assures the cycle
        else:                   # the permutation is a single cycle of length 2
            low[0], low[1] = levy_harmonic_path(2)
            high[1] = low[0]    # assures the cycle
            high[0] = low[1]
            one_cycle_dat += 1.0 / float(nsteps)   # fraction of the single-cycle cases
        data += low[:]          # position histogram data
        # move 2 (Metropolis for sampling the permutations)
        weight_old = (rho_harm_1d(low[0], high[0], beta) *
                      rho_harm_1d(low[1], high[1], beta))
        weight_new = (rho_harm_1d(low[0], high[1], beta) *
                      rho_harm_1d(low[1], high[0], beta))
        if random.uniform(0.0, 1.0) < weight_new / weight_old:
            high[0], high[1] = high[1], high[0]
    fract_one_cycle_dat.append(one_cycle_dat)
    fract_two_cycles_dat.append(1.0 - one_cycle_dat)   # fraction of the two-cycles cases
# exact permutation distributions for all temperatures
fract_two_cycles = [z(beta) ** 2 / (z(beta) ** 2 + z(2.0 * beta)) for beta in list_beta]
fract_one_cycle = [z(2.0 * beta) / (z(beta) ** 2 + z(2.0 * beta)) for beta in list_beta]
# graphics output
list_x = [0.1 * a for a in range(-30, 31)]
y = [pi_two_bosons(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distribution')
pylab.hist(data, normed=True, bins=80, label='QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$x$', fontsize=14)
pylab.ylabel('$\\pi(x)$', fontsize=14)
pylab.title('2 non-interacting bosonic 1-d particles', fontsize=14)
pylab.xlim(-3, 3)
pylab.savefig('plot_A2_beta%s.png' % beta)
pylab.show()
pylab.clf()
fig = pylab.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 2, 1)
ax.plot(list_beta, fract_one_cycle_dat, linewidth=4, label='QMC')
ax.plot(list_beta, fract_one_cycle, linewidth=2, label='exact')
ax.legend()
ax.set_xlabel('$\\beta$', fontsize=14)
ax.set_ylabel('$\\pi_2(\\beta)$', fontsize=14)
ax.set_title('Fraction of cycles of length 2', fontsize=14)
ax = fig.add_subplot(1, 2, 2)
ax.plot(list_beta, fract_two_cycles_dat, linewidth=4, label='QMC')
ax.plot(list_beta, fract_two_cycles, linewidth=2, label='exact')
ax.legend()
ax.set_xlabel('$\\beta$', fontsize=14)
ax.set_ylabel('$\\pi_1(\\beta)$', fontsize=14)
ax.set_title('Fraction of cycles of length 1', fontsize=14)
pylab.savefig('plot_A2.png')
pylab.show()
pylab.clf()
We can use dictionaries instead of lists. The implementation is in the following program. Here we also calculate the correlation between the two particles, i.e. sample of the absolute distance $r$ between the two bosons. The comparison between the resulting distribution and the distribution for the distinguishable case corresponds to boson bunching (high weight for small distances between the bosons).
import math, random, pylab

def prob_r_distinguishable(r, beta):   # exact correlation function for two distinguishable particles
    sigma = math.sqrt(2.0) / math.sqrt(2.0 * math.tanh(beta / 2.0))
    prob = (math.sqrt(2.0 / math.pi) / sigma) * math.exp(- r ** 2 / 2.0 / sigma ** 2)
    return prob

def levy_harmonic_path(k):
    x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))]
    if k == 2:
        Ups1 = 2.0 / math.tanh(beta)
        Ups2 = 2.0 * x[0] / math.sinh(beta)
        x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
    return x[:]

def rho_harm_1d(x, xp, beta):
    Upsilon_1 = (x + xp) ** 2 / 4.0 * math.tanh(beta / 2.0)
    Upsilon_2 = (x - xp) ** 2 / 4.0 / math.tanh(beta / 2.0)
    return math.exp(- Upsilon_1 - Upsilon_2)

beta = 0.1
nsteps = 1000000
low_1, low_2 = levy_harmonic_path(2)
x = {low_1: low_1, low_2: low_2}
data_corr = []
for step in xrange(nsteps):
    # move 1
    a = random.choice(x.keys())
    if a == x[a]:
        dummy = x.pop(a)
        a_new = levy_harmonic_path(1)[0]
        x[a_new] = a_new
    else:
        a_new, b_new = levy_harmonic_path(2)
        x = {a_new: b_new, b_new: a_new}
    r = abs(x.keys()[1] - x.keys()[0])
    data_corr.append(r)
    # move 2
    (low1, high1), (low2, high2) = x.items()
    weight_old = rho_harm_1d(low1, high1, beta) * rho_harm_1d(low2, high2, beta)
    weight_new = rho_harm_1d(low1, high2, beta) * rho_harm_1d(low2, high1, beta)
    if random.uniform(0.0, 1.0) < weight_new / weight_old:
        x = {low1: high2, low2: high1}
# graphics output
list_x = [0.1 * a for a in range(0, 100)]
y = [prob_r_distinguishable(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distinguishable distribution')
pylab.hist(data_corr, normed=True, bins=120, label='Indistinguishable QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$r$', fontsize=14)
pylab.ylabel('$\\pi_{corr}(r)$', fontsize=14)
pylab.title('Correlation function of non-interacting 1-d bosons', fontsize=14)
pylab.xlim(0, 10)
pylab.savefig('plot_A3_beta%s.png' % beta)
pylab.show()
pylab.clf()
3-dimensional bosons Isotropic trap
import random, math, numpy, sys, os
import matplotlib.pyplot as plt

def harmonic_ground_state(x):
    return math.exp(-x ** 2) / math.sqrt(math.pi)

def levy_harmonic_path_3d(k):
    x0 = tuple([random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))
                for d in range(3)])
    x = [x0]
    for j in range(1, k):
        Upsilon_1 = 1.0 / math.tanh(beta) + 1.0 / \
                    math.tanh((k - j) * beta)
        Upsilon_2 = [x[j - 1][d] / math.sinh(beta) + x[0][d] /
                     math.sinh((k - j) * beta) for d in range(3)]
        x_mean = [Upsilon_2[d] / Upsilon_1 for d in range(3)]
        sigma = 1.0 / math.sqrt(Upsilon_1)
        dummy = [random.gauss(x_mean[d], sigma) for d in range(3)]
        x.append(tuple(dummy))
    return x

def rho_harm_3d(x, xp):
    Upsilon_1 = sum((x[d] + xp[d]) ** 2 / 4.0 * math.tanh(beta / 2.0)
                    for d in range(3))
    Upsilon_2 = sum((x[d] - xp[d]) ** 2 / 4.0 / math.tanh(beta / 2.0)
                    for d in range(3))
    return math.exp(- Upsilon_1 - Upsilon_2)

N = 512
list_T = numpy.linspace(0.8, 0.1, 5)
cycle_min = 10
nsteps = 50000
data_x, data_y, data_x_l, data_y_l = [], [], [], []
for T_star in list_T:
    # beta must be recomputed for each temperature in the loop
    beta = 1.0 / (T_star * N ** (1.0 / 3.0))
    # Initial condition
    filename = 'data_boson_configuration_N%i_T%.1f.txt' % (N, T_star)
    positions = {}
    if os.path.isfile(filename):
        f = open(filename, 'r')
        for line in f:
            a = line.split()
            positions[tuple([float(a[0]), float(a[1]), float(a[2])])] = \
                tuple([float(a[3]), float(a[4]), float(a[5])])
        f.close()
        if len(positions) != N:
            sys.exit('ERROR in the input file.')
        print 'starting from file', filename
    else:
        for k in range(N):
            # single-bead path: each boson starts on its own cycle
            a = levy_harmonic_path_3d(1)
            positions[a[0]] = a[0]
        print 'Starting from a new configuration'
    # Monte Carlo loop
    for step in range(nsteps):
        # move 1: resample one permutation cycle
        boson_a = random.choice(positions.keys())
        perm_cycle = []
        while True:
            perm_cycle.append(boson_a)
            boson_b = positions.pop(boson_a)
            if boson_b == perm_cycle[0]:
                break
            else:
                boson_a = boson_b
        k = len(perm_cycle)
        data_x.append(boson_a[0])
        data_y.append(boson_a[1])
        if k > cycle_min:
            data_x_l.append(boson_a[0])
            data_y_l.append(boson_a[1])
        perm_cycle = levy_harmonic_path_3d(k)
        positions[perm_cycle[-1]] = perm_cycle[0]
        for j in range(len(perm_cycle) - 1):
            positions[perm_cycle[j]] = perm_cycle[j + 1]
        # move 2: exchange
        a_1 = random.choice(positions.keys())
        b_1 = positions.pop(a_1)
        a_2 = random.choice(positions.keys())
        b_2 = positions.pop(a_2)
        weight_new = rho_harm_3d(a_1, b_2) * rho_harm_3d(a_2, b_1)
        weight_old = rho_harm_3d(a_1, b_1) * rho_harm_3d(a_2, b_2)
        if random.uniform(0.0, 1.0) < weight_new / weight_old:
            positions[a_1] = b_2
            positions[a_2] = b_1
        else:
            positions[a_1] = b_1
            positions[a_2] = b_2
    f = open(filename, 'w')
    for a in positions:
        b = positions[a]
        f.write(str(a[0]) + ' ' + str(a[1]) + ' ' + str(a[2]) + ' ' +
                str(b[0]) + ' ' + str(b[1]) + ' ' + str(b[2]) + '\n')
    f.close()
    # Analyze cycles, do 3d plot
    import pylab, mpl_toolkits.mplot3d
    fig = pylab.figure()
    ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
    ax.set_aspect('equal')
    n_colors = 10
    list_colors = pylab.cm.rainbow(numpy.linspace(0, 1, n_colors))[::-1]
    dict_colors = {}
    i_color = 0
    positions_copy = positions.copy()
    while positions_copy:
        x, y, z = [], [], []
        starting_boson = positions_copy.keys()[0]
        boson_old = starting_boson
        while True:
            x.append(boson_old[0])
            y.append(boson_old[1])
            z.append(boson_old[2])
            boson_new = positions_copy.pop(boson_old)
            if boson_new == starting_boson:
                break
            else:
                boson_old = boson_new
        len_cycle = len(x)
        if len_cycle > 2:
            x.append(x[0])
            y.append(y[0])
            z.append(z[0])
        if len_cycle in dict_colors:
            color = dict_colors[len_cycle]
            ax.plot(x, y, z, '+-', c=color, lw=0.75)
        else:
            color = list_colors[i_color]
            i_color = (i_color + 1) % n_colors
            dict_colors[len_cycle] = color
            ax.plot(x, y, z, '+-', c=color, label='k=%i' % len_cycle, lw=0.75)
    pylab.title(str(N) + ' bosons at T* = ' + str(T_star))
    pylab.legend()
    ax.set_xlabel('$x$', fontsize=16)
    ax.set_ylabel('$y$', fontsize=16)
    ax.set_zlabel('$z$', fontsize=16)
    xmax = 6.0
    ax.set_xlim3d([-xmax, xmax])
    ax.set_ylim3d([-xmax, xmax])
    ax.set_zlim3d([-xmax, xmax])
    pylab.savefig('plot_boson_configuration_N%i_T%.1f.png' % (N, T_star))
    pylab.show()
    pylab.clf()
    # Plot the histograms
    list_x = [0.1 * a for a in range(-50, 51)]
    y = [harmonic_ground_state(a) for a in list_x]
    pylab.plot(list_x, y, linewidth=2.0, label='Ground state')
    pylab.hist(data_x, normed=True, bins=120, alpha=0.5, label='All bosons')
    pylab.hist(data_x_l, normed=True, bins=120, alpha=0.5,
               label='Bosons in longer cycle')
    pylab.xlim(-3.0, 3.0)
    pylab.xlabel('$x$', fontsize=14)
    pylab.ylabel('$\pi(x)$', fontsize=14)
    pylab.title('3-d non-interacting bosons $x$ distribution $N= %i$, $T= %.1f$'
                % (N, T_star))
    pylab.legend()
    pylab.savefig('position_distribution_N%i_T%.1f.png' % (N, T_star))
    pylab.show()
    pylab.clf()
    plt.hist2d(data_x_l, data_y_l, bins=40, normed=True)
    plt.xlabel('$x$')
    plt.ylabel('$y$')
    plt.title('The distribution of the $x$ and $y$ positions')
    plt.colorbar()
    plt.xlim(-3.0, 3.0)
    plt.ylim(-3.0, 3.0)
    plt.show()
starting from file data_boson_configuration_N512_T0.8.txt
MIT
8. Bose-Einstein condensation.ipynb
cphysics/simulation
Anisotropic trap

We can imitate the experiments that realize quasi-1-d bosons in *cigar-shaped* anisotropic harmonic traps, and quasi-2-d bosons in *pancake-shaped* anisotropic harmonic traps.
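The per-axis width of a single-bead (k = 1) Lévy path is sigma_d = 1/sqrt(2 ω_d tanh(β ω_d / 2)), as in the sampling code in this section. A quick sketch (the value of β here is just an illustrative choice, not taken from the simulation) shows how the two ω vectors used below produce a cigar-shaped and a pancake-shaped cloud:

```python
import math

def thermal_width(omega_d, beta):
    # width of the diagonal harmonic density matrix along one axis
    return 1.0 / math.sqrt(2.0 * omega_d * math.tanh(0.5 * beta * omega_d))

beta = 0.1  # illustrative inverse temperature
cigar = [thermal_width(w, beta) for w in (4.0, 4.0, 1.0)]
pancake = [thermal_width(w, beta) for w in (1.0, 5.0, 1.0)]
print(cigar)    # narrow in x and y, wide along z -> cigar along z
print(pancake)  # narrow in y only -> pancake in the x-z plane
```

Stiffer axes (larger ω_d) confine the cloud more tightly, so ω = (4, 4, 1) elongates the cloud along z while ω = (1, 5, 1) flattens it in the x-z plane.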
%pylab inline
import random, math, numpy, os, sys

def levy_harmonic_path_3d_anisotropic(k, omega):
    sigma = [1.0 / math.sqrt(2.0 * omega[d] *
             math.tanh(0.5 * k * beta * omega[d])) for d in xrange(3)]
    xk = tuple([random.gauss(0.0, sigma[d]) for d in xrange(3)])
    x = [xk]
    for j in range(1, k):
        Upsilon_1 = [1.0 / math.tanh(beta * omega[d]) +
                     1.0 / math.tanh((k - j) * beta * omega[d]) for d in range(3)]
        Upsilon_2 = [x[j - 1][d] / math.sinh(beta * omega[d]) + \
                     xk[d] / math.sinh((k - j) * beta * omega[d]) for d in range(3)]
        x_mean = [Upsilon_2[d] / Upsilon_1[d] for d in range(3)]
        sigma = [1.0 / math.sqrt(Upsilon_1[d] * omega[d]) for d in range(3)]
        dummy = [random.gauss(x_mean[d], sigma[d]) for d in range(3)]
        x.append(tuple(dummy))
    return x

def rho_harm_3d_anisotropic(x, xp, beta, omega):
    Upsilon_1 = sum(omega[d] * (x[d] + xp[d]) ** 2 / 4.0 *
                    math.tanh(beta * omega[d] / 2.0) for d in range(3))
    Upsilon_2 = sum(omega[d] * (x[d] - xp[d]) ** 2 / 4.0 /
                    math.tanh(beta * omega[d] / 2.0) for d in range(3))
    return math.exp(- Upsilon_1 - Upsilon_2)

omegas = numpy.array([[4.0, 4.0, 1.0], [1.0, 5.0, 1.0]])
for i in range(len(omegas[:, 1])):
    N = 512
    nsteps = 100000
    omega_harm = 1.0
    omega = omegas[i, :]
    for d in range(3):
        omega_harm *= omega[d] ** (1.0 / 3.0)
    T_star = 0.5
    T = T_star * omega_harm * N ** (1.0 / 3.0)
    beta = 1.0 / T
    print 'omega: ', omega
    # Initial condition
    if i == 0:
        filename = 'data_boson_configuration_anisotropic_N%i_T%.1f_cigar.txt' % (N, T_star)
    elif i == 1:
        filename = 'data_boson_configuration_anisotropic_N%i_T%.1f_pancake.txt' % (N, T_star)
    positions = {}
    if os.path.isfile(filename):
        f = open(filename, 'r')
        for line in f:
            a = line.split()
            positions[tuple([float(a[0]), float(a[1]), float(a[2])])] = \
                tuple([float(a[3]), float(a[4]), float(a[5])])
        f.close()
        if len(positions) != N:
            sys.exit('ERROR in the input file.')
        print 'starting from file', filename
    else:
        for k in range(N):
            a = levy_harmonic_path_3d_anisotropic(1, omega)
            positions[a[0]] = a[0]
        print 'Starting from a new configuration'
    for step in range(nsteps):
        # move 1: resample one permutation cycle
        boson_a = random.choice(positions.keys())
        perm_cycle = []
        while True:
            perm_cycle.append(boson_a)
            boson_b = positions.pop(boson_a)
            if boson_b == perm_cycle[0]:
                break
            else:
                boson_a = boson_b
        k = len(perm_cycle)
        perm_cycle = levy_harmonic_path_3d_anisotropic(k, omega)
        positions[perm_cycle[-1]] = perm_cycle[0]
        for j in range(len(perm_cycle) - 1):
            positions[perm_cycle[j]] = perm_cycle[j + 1]
        # move 2: exchange
        a_1 = random.choice(positions.keys())
        b_1 = positions.pop(a_1)
        a_2 = random.choice(positions.keys())
        b_2 = positions.pop(a_2)
        weight_new = (rho_harm_3d_anisotropic(a_1, b_2, beta, omega) *
                      rho_harm_3d_anisotropic(a_2, b_1, beta, omega))
        weight_old = (rho_harm_3d_anisotropic(a_1, b_1, beta, omega) *
                      rho_harm_3d_anisotropic(a_2, b_2, beta, omega))
        if random.uniform(0.0, 1.0) < weight_new / weight_old:
            positions[a_1], positions[a_2] = b_2, b_1
        else:
            positions[a_1], positions[a_2] = b_1, b_2
    f = open(filename, 'w')
    for a in positions:
        b = positions[a]
        f.write(str(a[0]) + ' ' + str(a[1]) + ' ' + str(a[2]) + ' ' +
                str(b[0]) + ' ' + str(b[1]) + ' ' + str(b[2]) + '\n')
    f.close()
    import pylab, mpl_toolkits.mplot3d
    fig = pylab.figure()
    ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
    ax.set_aspect('equal')
    n_colors = 10
    list_colors = pylab.cm.rainbow(numpy.linspace(0, 1, n_colors))[::-1]
    dict_colors = {}
    i_color = 0
    positions_copy = positions.copy()
    while positions_copy:
        x, y, z = [], [], []
        starting_boson = positions_copy.keys()[0]
        boson_old = starting_boson
        while True:
            x.append(boson_old[0])
            y.append(boson_old[1])
            z.append(boson_old[2])
            boson_new = positions_copy.pop(boson_old)
            if boson_new == starting_boson:
                break
            else:
                boson_old = boson_new
        len_cycle = len(x)
        if len_cycle > 2:
            x.append(x[0])
            y.append(y[0])
            z.append(z[0])
        if len_cycle in dict_colors:
            color = dict_colors[len_cycle]
            ax.plot(x, y, z, '+-', c=color, lw=0.75)
        else:
            color = list_colors[i_color]
            i_color = (i_color + 1) % n_colors
            dict_colors[len_cycle] = color
            ax.plot(x, y, z, '+-', c=color, label='k=%i' % len_cycle, lw=0.75)
    pylab.legend()
    ax.set_xlabel('$x$', fontsize=16)
    ax.set_ylabel('$y$', fontsize=16)
    ax.set_zlabel('$z$', fontsize=16)
    xmax = 8.0
    ax.set_xlim3d([-xmax, xmax])
    ax.set_ylim3d([-xmax, xmax])
    ax.set_zlim3d([-xmax, xmax])
    if i == 0:
        pylab.title(str(N) + ' bosons at T* = ' + str(T_star) + ' cigar potential')
        pylab.savefig('position_distribution_N%i_T%.1f_cigar.png' % (N, T_star))
    elif i == 1:
        pylab.title(str(N) + ' bosons at T* = ' + str(T_star) + ' pancake potential')
        pylab.savefig('position_distribution_N%i_T%.1f_pancake.png' % (N, T_star))
    pylab.show()
Populating the interactive namespace from numpy and matplotlib
omega:  [4. 4. 1.]
starting from file data_boson_configuration_anisotropic_N512_T0.5_cigar.txt
MIT
8. Bose-Einstein condensation.ipynb
cphysics/simulation
Working with 3D city models in Python

**Balázs Dukai** [*@BalazsDukai*](https://twitter.com/balazsdukai), **FOSS4G 2019**

[3D geoinformation research group, TU Delft, Netherlands](https://3d.bk.tudelft.nl/)

![](figures/logos.png)

Repo of this talk: [https://github.com/balazsdukai/foss4g2019](https://github.com/balazsdukai/foss4g2019)

3D + city + model?

![](figures/google_earth.png)

Probably the best-known 3D city model is what we see in Google Earth. It is a very nice model to look at, and it is improving continuously. However, certain applications require more information than what is stored in such a mesh model. They need to know what an object in the model represents in the real world.

Semantic models

![](figures/semantic_model.png)

That is why we have semantic models, where for each object in the model we store a label with its meaning. Once we have labels on the objects and on their parts, data preparation becomes simpler. An important property for analytical applications, such as wind flow simulations.

Useful for urban analysis

![](figures/cfd.gif)

García-Sánchez, C., van Beeck, J., Gorlé, C., Predictive Large Eddy Simulations for Urban Flows: Challenges and Opportunities, Building and Environment, 139, 146-156, 2018.

But we can do much more with 3D city models. We can use them to better estimate the energy consumption in buildings, simulate noise in cities, or analyse views and shadows. In the Netherlands sunshine is a precious commodity, so we like to get as much of it as we can.

And many more...

![3d city model applications](figures/3d_cm_applications.png)

There are many open 3D city models available. They come in different formats and quality. However, at our group we are still waiting for the "year of the 3D city model" to come. We don't really see mainstream use apart from visualisation. Which is nice, but I believe they can provide much more value than a pretty thing to look at.
...mostly just production of the models

Many models are available, but who **uses** them? **For more than visualisation?**

![open 3d city models](figures/open_cms.png)

In truth, 3D city models are a bit difficult to work with

Our built environment is complex, and the objects are complex too

![](figures/assembling_solid.png)

Software is lagging behind

+ not much software supports 3D city models
+ if they do, it is mostly a proprietary data model and format
+ large, *"enterprise"*-type applications (think Esri, FME, Bentley ...)
+ few tools accessible for the individual developer / hobbyist
+ GML doesn't help ( *[GML madness](http://erouault.blogspot.com/2014/04/gml-madness.html)* by Even Rouault )

That is why we are developing CityJSON, a data format for 3D city models. Essentially, it aims to increase the value of 3D city models by making them simpler to work with, and to lower the barrier to entry for a wider audience than cadastral organisations.

![cityjson logo](figures/cityjson_webpage.png)

Key concepts of CityJSON

+ *simple*, as in easy to implement
+ designed with programmers in mind
+ fully developed in the open
+ flattened hierarchy of objects
+ implementation first

![GitHub Issues](figures/github_issues.png)

CityJSON implements the data model of CityGML. CityGML is an international standard for 3D city models, and it is coupled with its GML-based encoding. We don't really like GML, because it's verbose, and the files are deeply nested and large (often several GB). And there are many different ways to do one thing. Also, I'm not a web developer, but I would be surprised if anyone prefers GML over JSON for sending stuff around the web.

JSON-based encoding of the CityGML data model

![](figures/citygml_encoding.png)

> I just got sent a CityGML file.
&mdash; James Fee (@jamesmfee), June 29, 2016

+ files are deeply nested, and large
+ many "points of entry"
+ many different ways to do one thing (GML doesn't help, *[GML madness](http://erouault.blogspot.com/2014/04/gml-madness.html)* by Even Rouault )

The CityGML data model

![](figures/citygml_uml.gif)

Compression ~6x over CityGML

![](figures/zurich_size.png)

Compression

| file | CityGML size (original) | CityGML size (w/o spaces) | textures | CityJSON | compression |
| ---- | ----------------------- | ------------------------- | -------- | -------- | ----------- |
| [CityGML demo "GeoRes"](https://www.citygml.org/samplefiles/) | 4.3MB | 4.1MB | yes | 524KB | 8.0 |
| [CityGML v2 demo "Railway"](https://www.citygml.org/samplefiles/) | 45MB | 34MB | yes | 4.3MB | 8.1 |
| [Den Haag "tile 01"](https://data.overheid.nl/data/dataset/ngr-3d-model-den-haag) | 23MB | 18MB | no, material | 2.9MB | 6.2 |
| [Montréal VM05](http://donnees.ville.montreal.qc.ca/dataset/maquette-numerique-batiments-citygml-lod2-avec-textures/resource/36047113-aa19-4462-854a-cdcd6281a5af) | 56MB | 42MB | yes | 5.4MB | 7.8 |
| [New York LoD2 (DA13)](https://www1.nyc.gov/site/doitt/initiatives/3d-building.page) | 590MB | 574MB | no | 105MB | 5.5 |
| [Rotterdam Delfshaven](http://rotterdamopendata.nl/dataset/rotterdam-3d-bestanden/resource/edacea54-76ce-41c7-a0cc-2ebe5750ac18) | 16MB | 15MB | yes | 2.6MB | 5.8 |
| [Vienna (the demo file)](https://www.data.gv.at/katalog/dataset/86d88cae-ad97-4476-bae5-73488a12776d) | 37MB | 36MB | no | 5.3MB | 6.8 |
| [Zürich LoD2](https://www.data.gv.at/katalog/dataset/86d88cae-ad97-4476-bae5-73488a12776d) | 3.03GB | 2.07GB | no | 292MB | 7.1 |

If you are interested in a more detailed comparison between CityGML and CityJSON, you can read our article; it's open access.

![cityjson paper](figures/cityjson_paper.png)

And yes, we are guilty as charged.
![standards](figures/standards.png)

[https://xkcd.com/927/](https://xkcd.com/927/)

Let's have a look-see, shall we?

![](figures/looksee.gif)

Now let's take a peek under the hood: what's going on in a CityJSON file?

An empty CityJSON file

![](figures/cj01.svg)

In a city model we represent the real-world objects, such as buildings, bridges and trees, as different types of CityObjects. Each CityObject has its

+ unique ID,
+ attributes,
+ geometry,
+ and it can have children objects or it can be part of a parent object.

Note however that CityObjects are not nested. Each of them is stored at the root of the document, and the hierarchy is represented by linking to object IDs.

A CityObject

![](figures/cj02.svg)

Each CityObject has a geometry representation. This geometry is composed of *boundaries* and *semantics*.

Geometry

+ the **boundaries** definition uses vertex indices (inspired by Wavefront OBJ)
+ we have a vertex list at the root of the document
+ vertices are not repeated (unlike Simple Features)
+ **semantics** are linked to the boundary surfaces

![](figures/cj04.svg)

This `MultiSurface` has 5 surfaces

```json
[[0, 3, 2, 1]], [[4, 5, 6, 7]], [[0, 1, 5, 4]], [[0, 2, 3, 8]], [[10, 12, 23, 48]]
```

each surface has only an exterior ring (the first array)

```json
[ [0, 3, 2, 1] ]
```

The semantic surfaces in the `semantics` json-object are linked to the boundary surfaces. Each integer in the `values` array of `semantics` is the 0-based index of a semantic surface in the `surfaces` array, with one entry per surface of the boundary.
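To make the link concrete, here is a minimal sketch using plain dicts; the toy geometry below is invented for illustration, but the `surfaces`/`values` indexing follows the CityJSON specification:

```python
geom = {
    "type": "MultiSurface",
    "boundaries": [[[0, 3, 2, 1]], [[4, 5, 6, 7]], [[0, 1, 5, 4]]],
    "semantics": {
        "surfaces": [{"type": "GroundSurface"}, {"type": "RoofSurface"},
                     {"type": "WallSurface"}],
        # values[i] is the index of the semantic surface of boundary surface i
        "values": [0, 1, 2],
    },
}
labels = [geom["semantics"]["surfaces"][v]["type"]
          for v in geom["semantics"]["values"]]
print(labels)  # -> ['GroundSurface', 'RoofSurface', 'WallSurface']
```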
import json
import os

path = os.path.join('data', 'rotterdam_subset.json')
with open(path) as fin:
    cm = json.loads(fin.read())
print(f"There are {len(cm['CityObjects'])} CityObjects")

# list all IDs
for id in cm['CityObjects']:
    print(id, "\t")
There are 16 CityObjects
{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}
{71B60053-BC28-404D-BAB9-8A642AAC0CF4}
{6271F75F-E8D8-4EE4-AC46-9DB02771A031}
{DE77E78F-B110-43D2-A55C-8B61911192DE}
{19935DFC-F7B3-4D6E-92DD-C48EE1D1519A}
{953BC999-2F92-4B38-95CF-218F7E05AFA9}
{8D716FDE-18DD-4FB5-AB06-9D207377240E}
{C6AAF95B-8C09-4130-AB4D-6777A2A18A2E}
{72390BDE-903C-4C8C-8A3F-2DF5647CD9B4}
{8244B286-63E2-436E-9D4E-169B8ACFE9D0}
{87316D28-7574-4763-B9CE-BF6A2DF8092C}
{CD98680D-A8DD-4106-A18E-15EE2A908D75}
{64A9018E-4F56-47CD-941F-43F6F0C4285B}
{459F183A-D0C2-4F8A-8B5F-C498EFDE366D}
{237D41CC-991E-4308-8986-42ABFB4F7431}
{23D8CA22-0C82-4453-A11E-B3F2B3116DB4}
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
+ Working with a CityJSON file is straightforward. One can open it with the standard library and get going.
+ But you need to know the schema well.
+ And you need to write everything from scratch.

That is why we are developing **cjio**.

**cjio** is how *we eat what we cook*. It aims to help to actually work with and analyse 3D city models, and extract more value from them, instead of letting them gather dust in some governmental repository.

![cjio](figures/cjio_docs.png)

`cjio` has a (quite) stable CLI

```bash
$ cjio city_model.json reproject 2056 export --format glb /out/model.glb
```

and an experimental API

```python
from cjio import cityjson
cm = cityjson.load('city_model.json')
cm.get_cityobjects(type='building')
```

**`pip install cjio`**

This notebook is based on the develop branch.

**`pip install git+https://github.com/tudelft3d/cjio@develop`**

`cjio`'s CLI
! cjio --help
! cjio data/rotterdam_subset.json info
! cjio data/rotterdam_subset.json validate
! cjio data/rotterdam_subset.json \
    subset --exclude --id "{CD98680D-A8DD-4106-A18E-15EE2A908D75}" \
    merge data/rotterdam_one.json \
    reproject 2056 \
    save data/test_rotterdam.json
Parsing data/rotterdam_subset.json
Subset of CityJSON
Merging files
Reproject to EPSG:2056
[####################################] 100%
Saving CityJSON to a file /home/balazs/Reports/talk_cjio_foss4g_2019/data/test_rotterdam.json
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
+ The CLI was first, no plans for an API
+ **Works with the whole city model only**
+ Functions for the CLI work with the JSON directly, passing it along
+ Simple and effective architecture

`cjio`'s API

+ Allow *read* --> *explore* --> *modify* --> *write* iteration
+ Work with CityObjects and their parts
+ Functions for common operations
+ Inspired by the *tidyverse* from the R ecosystem
import os
from copy import deepcopy

from cjio import cityjson
from shapely.geometry import Polygon
import matplotlib.pyplot as plt
plt.close('all')
from sklearn.preprocessing import FunctionTransformer
from sklearn import cluster
import numpy as np
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
In the following we work with a subset of the 3D city model of Rotterdam

![](figures/rotterdam_subset.png)

Load a CityJSON

The `load()` method loads a CityJSON file into a CityJSON object.
path = os.path.join('data', 'rotterdam_subset.json')
cm = cityjson.load(path)
print(type(cm))
<class 'cjio.cityjson.CityJSON'>
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Using the CLI commands in the API

You can use any of the CLI commands on a CityJSON object. *However,* not all CLI commands are mapped 1-to-1 to `CityJSON` methods, and we haven't harmonized the CLI and the API yet.
cm.validate()
-- Validating the syntax of the file (using the schemas 1.0.0)
-- Validating the internal consistency of the file (see docs for list)
  --Vertex indices coherent
  --Specific for CityGroups
  --Semantic arrays coherent with geometry
  --Root properties
  --Empty geometries
  --Duplicate vertices
  --Orphan vertices
  --CityGML attributes
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Explore the city model

Print the basic information about the city model. Note that `print()` returns the same information as the `info` command in the CLI.
print(cm)
{
  "cityjson_version": "1.0",
  "epsg": 7415,
  "bbox": [
    90454.18900000001,
    435614.88,
    0.0,
    91002.41900000001,
    436048.217,
    18.29
  ],
  "transform/compressed": true,
  "cityobjects_total": 16,
  "cityobjects_present": [
    "Building"
  ],
  "materials": false,
  "textures": true
}
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Getting objects from the model

Get CityObjects by their *type*, or by a list of types. Also by their IDs. Note that `get_cityobjects()` without arguments returns the same as `cm.cityobjects`.
buildings = cm.get_cityobjects(type='building')

# both Building and BuildingPart objects
buildings_parts = cm.get_cityobjects(type=['building', 'buildingpart'])

r_ids = ['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}',
         '{6271F75F-E8D8-4EE4-AC46-9DB02771A031}']
buildings_ids = cm.get_cityobjects(id=r_ids)
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Properties and geometry of objects
b01 = buildings_ids['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}']
print(b01)
b01.attributes
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
CityObjects can have *children* and *parents*
b01.children is None and b01.parents is None
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
CityObject geometry is a list of `Geometry` objects. That is because a CityObject can have multiple geometry representations at different levels of detail, e.g. one geometry in LoD1 and a second geometry in LoD2.
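For instance, a small helper can pick the most detailed representation. The `Geom` class below is only a stand-in for cjio's `Geometry` (which exposes a numeric `lod` attribute in the same way); `highest_lod` is a hypothetical helper, not part of the cjio API:

```python
class Geom:
    """Stand-in for cjio's Geometry, which carries a numeric `lod`."""
    def __init__(self, lod):
        self.lod = lod

def highest_lod(geometries):
    # choose the representation with the largest LoD value
    return max(geometries, key=lambda g: g.lod)

geoms = [Geom(1.0), Geom(2.0)]
print(highest_lod(geoms).lod)  # -> 2.0
```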
b01.geometry
geom = b01.geometry[0]
print("{}, lod {}".format(geom.type, geom.lod))
MultiSurface, lod 2
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Geometry boundaries and Semantic Surfaces

In contrast to a CityJSON file, the geometry boundaries are dereferenced when working with the API. This means that the vertex coordinates are included in the boundary definition, not only the vertex indices.

`cjio` doesn't provide specific geometry classes (yet), e.g. a MultiSurface or Solid class. If you are working with the geometry boundaries, you need to do the geometric operations yourself, or cast the boundary to a geometry class of some other library, for example `shapely` if 2D is enough.

Vertex coordinates are kept 'as is' on loading the geometry. CityJSON files are often compressed, with coordinates shifted and transformed into integers, so you'll probably want to transform them back. Otherwise geometry operations won't make sense.
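The decompression itself is an affine map per the CityJSON specification: real coordinate = integer vertex * scale + translate, applied per axis. A minimal sketch, with made-up `scale` and `translate` values:

```python
# illustrative values only; a real file carries its own "transform" object
transform = {"scale": [0.001, 0.001, 0.001],
             "translate": [90454.18, 435614.88, 0.0]}

def decompress(vertex, transform):
    # integer vertex -> real-world coordinate, axis by axis
    return [v * s + t for v, s, t in
            zip(vertex, transform["scale"], transform["translate"])]

print(decompress([1000, 2000, 3000], transform))
```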
transformation_object = cm.transform
geom_transformed = geom.transform(transformation_object)
geom_transformed.boundaries[0][0]
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
But it might be easier to transform (decompress) the whole model on load.
cm_transformed = cityjson.load(path, transform=True)
print(cm_transformed)
{
  "cityjson_version": "1.0",
  "epsg": 7415,
  "bbox": [
    90454.18900000001,
    435614.88,
    0.0,
    91002.41900000001,
    436048.217,
    18.29
  ],
  "transform/compressed": false,
  "cityobjects_total": 16,
  "cityobjects_present": [
    "Building"
  ],
  "materials": false,
  "textures": true
}
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Semantic Surfaces are stored in a similar fashion as in a CityJSON file, in the `surfaces` attribute of a Geometry object.
geom.surfaces
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
`surfaces` does not store geometry boundaries, just references (`surface_idx`). Use the `get_surface_boundaries()` method to obtain the boundary-parts connected to the semantic surface.
roofs = geom.get_surfaces(type='roofsurface')
roofs

roof_boundaries = []
for r in roofs.values():
    roof_boundaries.append(geom.get_surface_boundaries(r))
roof_boundaries
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Assigning attributes to Semantic Surfaces

1. extract the surfaces,
2. make the changes on the surface,
3. overwrite the CityObjects with the changes.
cm_copy = deepcopy(cm)
new_cos = {}
for co_id, co in cm.cityobjects.items():
    new_geoms = []
    for geom in co.geometry:
        # Only LoD >= 2 models have semantic surfaces
        if geom.lod >= 2.0:
            # Extract the surfaces
            roofsurfaces = geom.get_surfaces('roofsurface')
            for i, rsrf in roofsurfaces.items():
                # Change the attributes
                if 'attributes' in rsrf.keys():
                    rsrf['attributes']['cladding'] = 'tiles'
                else:
                    rsrf['attributes'] = {}
                    rsrf['attributes']['cladding'] = 'tiles'
                geom.surfaces[i] = rsrf
            new_geoms.append(geom)
        else:
            # Use the unchanged geometry
            new_geoms.append(geom)
    co.geometry = new_geoms
    new_cos[co_id] = co
cm_copy.cityobjects = new_cos
print(cm_copy.cityobjects['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}'])
{
  "id": "{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}",
  "type": "Building",
  "attributes": {
    "TerrainHeight": 3.03,
    "bron_tex": "UltraCAM-X 10cm juni 2008",
    "voll_tex": "complete",
    "bron_geo": "Lidar 15-30 punten - nov. 2008",
    "status": "1"
  },
  "children": null,
  "parents": null,
  "geometry_type": [
    "MultiSurface"
  ],
  "geometry_lod": [
    2
  ],
  "semantic_surfaces": [
    "WallSurface",
    "RoofSurface",
    "GroundSurface"
  ]
}
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Create new Semantic Surfaces

The process is similar to the previous one. However, in this example we create new SemanticSurfaces that hold the values which we compute from the geometry. The input city model has a single semantic "WallSurface", without attributes, for all the walls of a building. The snippet below illustrates how to separate the surfaces and assign the semantics to them.
new_cos = {}
for co_id, co in cm_copy.cityobjects.items():
    new_geoms = []
    for geom in co.geometry:
        if geom.lod >= 2.0:
            max_id = max(geom.surfaces.keys())
            old_ids = []
            for w_i, wsrf in geom.get_surfaces('wallsurface').items():
                old_ids.append(w_i)
                del geom.surfaces[w_i]
                boundaries = geom.get_surface_boundaries(wsrf)
                for j, boundary_geometry in enumerate(boundaries):
                    # The original geometry has the same Semantic for all walls,
                    # but we want to divide the wall surfaces by their orientation,
                    # thus we need to have the correct surface index
                    surface_index = wsrf['surface_idx'][j]
                    new_srf = {
                        'type': wsrf['type'],
                        'surface_idx': surface_index
                    }
                    for multisurface in boundary_geometry:
                        # Do any operation here
                        x, y, z = multisurface[0]
                        if j % 2 > 0:
                            orientation = 'north'
                        else:
                            orientation = 'south'
                        # Add the new attribute to the surface
                        if 'attributes' in wsrf.keys():
                            wsrf['attributes']['orientation'] = orientation
                        else:
                            wsrf['attributes'] = {}
                            wsrf['attributes']['orientation'] = orientation
                    new_srf['attributes'] = wsrf['attributes']
                    # if w_i in geom.surfaces.keys():
                    #     del geom.surfaces[w_i]
                    max_id = max_id + 1
                    geom.surfaces[max_id] = new_srf
            new_geoms.append(geom)
        else:
            # If LoD1, just add the geometry unchanged
            new_geoms.append(geom)
    co.geometry = new_geoms
    new_cos[co_id] = co
cm_copy.cityobjects = new_cos
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
Analysing CityModels

![](figures/zurich.png)

In the following I show how to compute some attributes from CityObject geometry and use these attributes as input for machine learning. For this we use the LoD2 model of Zürich.

Download the Zürich data set from https://3d.bk.tudelft.nl/opendata/cityjson/1.0/Zurich_Building_LoD2_V10.json
path = os.path.join('data', 'zurich.json')
zurich = cityjson.load(path, transform=True)
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
A simple geometry function

Here is a simple geometry function that computes the area of the ground surface (footprint) of the buildings in the model. It also shows how to cast surfaces, in this case the ground surface, to Shapely Polygons.
def compute_footprint_area(co):
    """Compute the area of the footprint"""
    footprint_area = 0
    for geom in co.geometry:
        # only LoD2 (or higher) objects have semantic surfaces
        if geom.lod >= 2.0:
            footprints = geom.get_surfaces(type='groundsurface')
            # there can be many surfaces with label 'groundsurface'
            for i, f in footprints.items():
                for multisurface in geom.get_surface_boundaries(f):
                    for surface in multisurface:
                        # cast to Shapely polygon
                        shapely_poly = Polygon(surface)
                        footprint_area += shapely_poly.area
    return footprint_area
_____no_output_____
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019