```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk
          fsType: ext4
```
[Download file](mongo-controller.yaml)
Looking at this file from the bottom up:

First, it creates a volume called "mongo-persistent-storage."

In the example above, a "gcePersistentDisk" backs the storage. This only applies if your Kubernetes cluster is running on Google Cloud Platform.

If you don't already have a [Google Persistent Disk](https://cloud.google.com/compute/docs/disks) created in the same zone as your cluster, create a new disk in the same Google Compute Engine / Container Engine zone as your cluster with this command:
```sh
gcloud compute disks create --size=200GB --zone=$ZONE mongo-disk
```

If you are using AWS, replace the "volumes" section with this (untested):
```yaml
volumes:
- name: mongo-persistent-storage
  awsElasticBlockStore:
    volumeID: aws://{region}/{volume ID}
    fsType: ext4
```
If you don't have an EBS volume in the same region as your cluster, create a new EBS volume in the same region with this command (untested):

```sh
ec2-create-volume --size 200 --region $REGION --availability-zone $ZONE
```

This command will return a volume ID to use.
For other storage options (iSCSI, NFS, OpenStack), please follow the Kubernetes volumes documentation.
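As one illustration, an NFS-backed version of the same "volumes" section might look like this. This is a hypothetical sketch (untested); the server address and export path are placeholders you would replace with your own:

```yaml
volumes:
- name: mongo-persistent-storage
  nfs:
    # Placeholder NFS server and export path; substitute your own
    server: nfs.example.com
    path: /exports/mongo
```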
Now that the volume is created and usable by Kubernetes, the next step is to create the Pod.

Looking at the container section: it uses the official MongoDB container, names itself "mongo", opens port 27017, and mounts the disk at "/data/db" (where the mongo container expects the data to be).

Now looking at the rest of the file, it creates a Replication Controller with one replica, called "mongo-controller". It is important to use a Replication Controller and not just a Pod, because a Replication Controller will restart the Pod if it crashes.

Create this controller with this command:
```sh
kubectl create -f examples/nodesjs-mongodb/mongo-controller.yaml
```
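You can verify that the controller and its Pod came up with commands like these (a sketch; the exact output depends on your cluster, so these cannot be run outside one):

```sh
# List the Replication Controller and the Pods it manages
kubectl get rc mongo-controller
kubectl get pods -l name=mongo
```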
At this point, MongoDB is up and running.

Note: There is no password protection or auth running on the database by default. Please keep this in mind!
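If you do want authentication, the official "mongo" image can create a root user from environment variables on first run. A hedged sketch of what the container spec might look like (the username and password values are placeholders, and this may require a newer image tag than the one this example was written against):

```yaml
containers:
- image: mongo
  name: mongo
  env:
  # Placeholders; the official mongo image creates this root user on first run
  - name: MONGO_INITDB_ROOT_USERNAME
    value: admin
  - name: MONGO_INITDB_ROOT_PASSWORD
    value: change-me
```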
### Creating the Node.js Service

The next step is to create the Node.js service. This service will be the endpoint for the website, and it will load balance requests across the Node.js instances.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    name: web
```
[Download file](web-service.yaml)

This service is called "web," and it uses a [LoadBalancer](https://kubernetes.io/docs/user-guide/services.md#type-loadbalancer) to distribute traffic on port 80 to port 3000 on Pods with the "web" label. Port 80 is the standard HTTP port, and port 3000 is the default port used by many Node.js applications.
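The service can be created and inspected the same way as the controller. A sketch (the external IP can take a minute or two to be assigned, and the file path assumes the same example directory as above):

```sh
kubectl create -f examples/nodesjs-mongodb/web-service.yaml
# The EXTERNAL-IP column will show the load balancer address once provisioned
kubectl get services web
```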