Docker for Data Science — A Step by Step Guide | by Dean Pleban | Towards Data Science
A lot has already been said about why Docker can improve your life as a data scientist. I was working on an (un-)cool depth estimation project using Fast.ai with a few friends when I stumbled upon this tweet by @jeremyphoward.
It so happens that we were using Docker to create our data science workspace for the project, so I thought it would make sense to address Jeremy’s questions and share this knowledge with the community.
I’ll very briefly review the core concepts and advantages of Docker, and then show a step-by-step example for setting up an entire data science workspace using Docker.
If you already know what Docker is and why it’s awesome, skip to the step-by-step tutorial.
Docker is a tool for creating and deploying isolated environments (read: virtual machines) for running applications with their dependencies.
A few terms you should be familiar with (including a baking analogy for ease of understanding):
Docker Container — A single instance of the application, that is live and running. In our analogy, this is a cookie.
Docker Image — A blueprint for creating containers. Images are immutable and all containers created from the same image are exactly alike. In our analogy, this is the cookie-cutter mould.
Dockerfile — A text file containing a list of commands to call when creating a Docker Image. In our analogy, this is the instructions to create the cookie-cutter mould.
Broadly, there are two use cases for Docker in ML:
Run Only: A run-only container means you edit your code on a local IDE and run it with the container so that your code runs inside the container. Here is one good example.
End-to-End Platform: An end-to-end platform container means you have an IDE or Jupyter Notebook / Lab and your entire working environment running in the container, and you run your code inside it as well (with the exception of the working file system, which can be mounted).
We will focus on the second use case.
Using docker containers means you don’t have to deal with “works on my machine” problems.
Generally, the main advantage Docker provides is standardization. This means you can define the parameters of your container once, and run it wherever Docker is installed. This in turn provides two major advantages:
Reproducibility: Everyone has the same OS, the same versions of tools, etc. This means you don’t need to deal with “works on my machine” problems. If it works on your machine, it works on everyone’s machine.
Portability: This means that moving from local development to a super-computing cluster is easy. Also, if you’re working on open source data science projects as we do at DAGsHub, you can provide collaborators with an easy way to bypass setup hassle.
Another huge advantage — learning to use Docker will make you a better engineer, or turn you into a data scientist with superpowers. Many systems rely on Docker, and it will help you turn your ML projects into applications and deploy models into production.
pytorch/pytorch — a simple container for Use Case 1 that includes Pytorch
jupyter/scipy-notebook — A container for Use Case 2 that includes Jupyter as the UI, and many python data science modules.
DAGsHub/ml-workspace-minimal — the container this step-by-step guide uses. This container is an updated version of the ml-tooling/ml-workspace repository. The original has not been maintained for the last 7 months, so I created an up-to-date version. It combines the following tools:
- 💫 Jupyter, JupyterLab
- 👾 VSCode web-based IDE.
- 🗃 Pytorch, Tensorflow, Sklearn, Pandas, and many other popular data science libraries & tools.
- 🖥 Full Linux desktop GUI accessible via a web browser.
- 🎮 Easy terminal access via a web browser.
- 🔀 Seamless Git integration optimized for notebooks.
- 📈 Integrated hardware & training monitoring via Tensorboard & Netdata.
- 🚪 Access from anywhere via Web, SSH, or VNC under a single port.
- 🎛 Usable as a remote kernel (Jupyter) or remote machine (VSCode) via SSH.
- 🐳 Easy to deploy on Mac, Linux, and Windows via Docker.
Sounds wonderful, right?! Now let’s see how to set it up.
Installing Docker is easy and free. Just follow this guide for your operating system.
Building Docker images will not be covered in this tutorial. Our focus will be on how to run a Docker container once we already have the image we want.
We will use a prebuilt image from DAGsHub/ml-workspace-minimal. It is created from this repository on GitHub. If you want to build or modify this image or any other, I recommend the article Jeremy Howard refers to in his original tweet.
Just run the following command:
docker run -d \
  -v "/${PWD}:/workspace" \
  -p 8080:8080 \
  --name "ml-workspace" \
  --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
  --shm-size 2g \
  --restart always \
  dagshub/ml-workspace:latest
docker run is the command that takes a Docker Image (cookie cutter) and creates a container from it. In our analogy, this is the step where you make the cookie.
This long command might look scary, but we can think of all these flags as toppings for our cookie (chocolate chips and macadamia nuts 😋). Here is an explanation of the various flags and why you need them:
Mounting your working file system: -v "/${PWD}:/workspace"
This might be the most important flag of all. It allows you to retain your work (files) after the container shuts down, and to access them from outside the container.
It does this by mapping your current working folder (where you execute the docker run command), denoted as /${PWD}, to a /workspace folder inside the container's virtual file system. If you'd like to change that, you can change this argument appropriately.
Port forwarding: -p 8080:8080
This argument exposes the 8080 port. In essence, it means that after you run this on a computer, your container will be accessible via http://{computer-ip}:8080. If you're running this on your local system, that address will be http://localhost:8080. For more complex images, you might need more than one port forwarded for additional reasons such as API endpoints. In our case, the port is the UI endpoint, and will lead you to the home screen of ML-Workspace:
Naming our container: --name "ml-workspace"
This sets a unique identifier for our container for future reference. As that implies, the name must be unique on your system, so if you create multiple containers from the same image, you'll need to define different names for them. --name is also useful for adding meaning to our container. If you don't define a name, a meaningless one will be generated for you automatically.
Defining env variables: --env AUTHENTICATE_VIA_JUPYTER="mytoken"
The --env flag defines environment variables for your container. This can vary wildly between containers, and so it's hard to give a generic use case for it.
In our case, we use this to define a password for the workspace. When someone opens the link above for the first time, Jupyter will require them to input the password defined here. This might be useful if you’re working on a shared computer.
Defining shared memory: --shm-size 2g
This flag is used to define the shared memory of your container (the more the better). Remember that this uses the same RAM as your regular system, so if you set it too high it might slow your computer down. A good size would be somewhere between 2g and 8g for most use cases.
Defining the restart policy: --restart always
The --restart flag represents the container's restart policy. According to the Docker docs:
A restart policy controls whether the Docker daemon restarts a container after exit.
We use the always option, which means Docker will try to keep the container running even if the system restarts. This is great for keeping your project context intact.
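If you prefer launching containers from Python, the same flags map one-to-one onto keyword arguments of the Docker SDK for Python (the `docker` package). The sketch below only builds the argument mapping; the actual `run()` call is left commented out because it needs a running Docker daemon, and the image/name/token values simply mirror the command above.

```python
# The docker run flags above, expressed as Docker SDK keyword arguments.
# Building the mapping runs anywhere; the run() call itself needs a
# Docker daemon, so it is left commented out.
import os

run_kwargs = {
    "image": "dagshub/ml-workspace:latest",
    "detach": True,                                                   # -d
    "volumes": {os.getcwd(): {"bind": "/workspace", "mode": "rw"}},   # -v
    "ports": {"8080/tcp": 8080},                                      # -p 8080:8080
    "name": "ml-workspace",                                           # --name
    "environment": {"AUTHENTICATE_VIA_JUPYTER": "mytoken"},           # --env
    "shm_size": "2g",                                                 # --shm-size
    "restart_policy": {"Name": "always"},                             # --restart
}

# import docker
# client = docker.from_env()
# container = client.containers.run(**run_kwargs)
```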
Congratulations! You now have an entire ML workspace running in Docker, including all the goodies you might need.
If you want to dive one level deeper, I recommend going to the docker run command reference to see all available flags.
Let’s go over a few things I recommend you set up to have an ideal workspace.
We’ve set up a standardized, isolated machine to run our ML. If you’re setting up your project for the first time, you’re probably rushing to your environment manager (conda/pip) to install some awesome packages.
Let me stop you there.
Why go through all this effort to create an isolated, reproducible environment, and then go install a bunch of different packages that no one will know about? You should create a Conda or virtual environment to manage your packages. If you decided to use the DAGsHub/ml-workspace-minimal container, here is what you should do:
In the ML workspace home, click the Open Tool dropdown and choose Terminal. Then type the following commands:
# Input your <env-name> and the <python-version> you want
conda create -y --name <env-name> python=<python-version>
# Activate your environment
source activate <env-name>
# If you have a `requirements.txt` file, you should install those requirements
pip install -U pip setuptools wheel
pip install -r requirements.txt
After you finish installing your packages, and you want to save your project (and hopefully commit it to Git), you should save your package list by running:
# This will override your existing `requirements.txt`.
# If you want to append, use `>>` instead of `>`
pip list --format=freeze > requirements.txt
Note: Wondering why you should use this command for listing your requirements (instead of pip freeze > requirements.txt)? Read this GitHub issue.
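The overwrite-versus-append distinction in the comments above works the same as Python's file modes: `>` behaves like mode "w" and `>>` like mode "a". A quick illustration (the package names and versions are just placeholders):

```python
# "w" truncates like the shell's ">", "a" appends like ">>".
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "requirements.txt")

with open(path, "w") as f:   # like >
    f.write("numpy==1.24.0\n")
with open(path, "w") as f:   # > again: the previous content is gone
    f.write("pandas==2.0.0\n")
with open(path, "a") as f:   # like >>
    f.write("scikit-learn==1.3.0\n")

with open(path) as f:
    content = f.read()       # only the last two writes survive
```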
You now have an isolated, reproducible, portable, and awesome ML workspace set up with Docker. I hope you found this article useful, and if you have any questions or feedback, please reach out via our DAGsHub Discord Channel.
C++ Array Library - front() Function
The C++ function std::array::front() returns a reference to the first element of the array container. If the array size is zero, the behavior of this method is undefined. Unlike the begin() method, this method returns the first element itself, not an iterator.
Following is the declaration for the std::array::front() function from the <array> header.
reference front();
const_reference front() const;
None
Returns the first element of the array. If the array object is const-qualified,
this method returns a const reference; otherwise it returns a reference.
This member function never throws an exception. Calling this method on an empty
array container causes undefined behavior.
Constant, i.e. O(1).
The following example shows the usage of std::array::front() function.
#include <iostream>
#include <array>
using namespace std;
int main(void) {
array<int, 5> arr = {10, 20, 30, 40, 50};
/* print first element */
cout << "First element of array = " << arr.front()
<< endl;
/* modify value */
arr.front() = 1;
/* print modified value */
cout << "After modification first element of array = " << arr.front()
<< endl;
return 0;
}
Let us compile and run the above program, this will produce the following result −
First element of array = 10
After modification first element of array = 1
Best of arXiv—Readings for November 2021 | by Sergi Castella i Sapé | Towards Data Science
After a slow summer break, the ML world has been back at full speed for the past month: conferences getting back to in-person format, new parameter count records, Deepmind being the Robin Hood of Reinforcement Learning or a GPT-3 like model (T-0) now published and open-sourced.
Nonetheless, as we approach the end of 2021, for the first time ever on AI arXiv, publication growth seems to be slowing down: after several years of consistent exponential growth (~30–40% yearly), it looks like 2021’s number of publications will only top 2020’s by a slim margin (around 10% more). Will we see a strong surge for NeurIPS and ICLR? Or has AI research mellowed out?
Let’s begin with some of the hot news from the past few weeks:
The Empirical Methods in Natural Language Processing conference (EMNLP) is happening 7–11 November in a hybrid format: simultaneously online and in Punta Cana, Dominican Republic. The official open proceedings will be published shortly in the ACL anthology.
Deepmind acquired MuJoCo and open-sourced it. It’s a big deal because MuJoCo is one of the most widely used physics simulation packages for robotics and RL, and it used to be pricey. While big universities bought licenses for their students and staff, the cost made the entry barrier higher for the playfully curious.
Microsoft’s Megatron-Turing 530B-parameter model. But wait... It’s still only a hand-wavy blog post! They claim it’s the largest monolithic transformer to date; what the heck is monolithic, you might think? Well, that’s a way of saying all parameters are used at every step, unlike Mixture of Experts (MoE) models, like Wu Dao’s 1.75 trillion or the Switch Transformer’s trillion, where only a smaller subset is activated during each inference/training step. While the sheer size seems pretty incredible, we’ll have to wait until they share a more in-depth account of their work. Speaking of parameter counts, will we ever stop caring about them?
State of AI Report for 2021 was recently released by AI investors Nathan Benaich and Ian Hogarth. It provides a useful yearly executive summary on AI from a bird’s-eye perspective: research, industry, talent, politics, and predictions. Definitely worth a read!
If you want to try out big attention-based architectures for computer vision, it’s your lucky day, because Scenic [4] was recently released: a codebase (with lots of boilerplate code and examples) for running JAX models for computer vision, including several popular ones like the original Vision Transformer [6], ViViT [7], and many more.
If your thing is playing with generative models for images, check out VQGAN-CLIP, a repo for running the popular generative model that turns a natural language sentence into an image.
Finally, we suggest you check out Dagster, an “orchestration platform for development, production, and observation of data assets”.
And finally, here’s the selection of impactful recent papers.
By OpenAI et al.
❓Why → Very long-document summarization (e.g. book scale) is a hard task for machines largely because annotating data is terribly time consuming: to annotate 1 instance or example, a person needs to read a book and come up with a summary of it, which takes several hours.
💡 Key insights → Long-range summarization can be (somewhat) successfully broken down into hierarchical summarization tasks that are way cheaper to annotate: split a book into chunks, then summarize each chunk. Concatenate those summaries and summarize them. Apply this process recursively until a desired full-book summary length is reached.
To give a sense of the scale of the data involved: 40 books used, 100K words on average, mostly fiction, and each summarization subtask compresses to a ratio of approximately 5–10 to 1.
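The recursive procedure can be sketched in a few lines. Here `summarize()` is a stand-in stub that just keeps the first sentence of each chunk; in the paper it is a large model trained with human feedback, so treat this purely as an illustration of the control flow:

```python
# Toy hierarchical summarization: chunk, summarize each chunk, concatenate,
# and recurse until the text is short enough. summarize() is a stub.

def summarize(text: str) -> str:
    # Hypothetical stand-in for the learned summarizer: first sentence only.
    return text.split(". ")[0].rstrip(".") + "."

def chunks(words, size):
    for i in range(0, len(words), size):
        yield " ".join(words[i:i + size])

def recursive_summary(text: str, chunk_words: int = 200, target_words: int = 50) -> str:
    while len(text.split()) > target_words:
        summaries = [summarize(c) for c in chunks(text.split(), chunk_words)]
        shorter = " ".join(summaries)
        if len(shorter.split()) >= len(text.split()):
            break  # the stub failed to compress; stop instead of looping forever
        text = shorter
    return text
```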
The results of this process are still far from human quality, only 5% of the summaries reach a comparable quality. Interestingly, model size seems to play an important role, as summaries from their biggest model clearly outperform those from a smaller model that followed the same training procedure.
In conclusion, this is yet again a really impressive big, complex human in the loop effort for training big models. It’s still far from generating that “wow this is spookily good” feel, but it’s a start. I’m thinking next up, how can this be translated into a few shot setting where only very few or very sparse annotations from humans are needed?
By Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach. et al.
❓Why → Outrageously large models research has been mostly limited to companies with big budgets. This is the first paper from the Hugging Face BigScience Workshop that proposes a collaborative effort to make large scale ML viable for smaller institutions such as universities. In all fairness, this is not the first large GPT-3 like model to be open sourced (e.g. check out GPT-J) but this is bound to be influential.
💡Key insights → We’re talking about a 11 billion parameter model, completely open-sourced and accessible via 🤗Hugging Face.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
You can check all details on the forum for the project, GitHub repo which includes detailed descriptions for the training of each variant of the model.
The model is a T5-style encoder-decoder Transformer (unlike GPT-3’s decoder-only architecture) which is trained on autoregressive language modeling to predict the next token. However, the training set is now curated more carefully: besides large web crawls of general language use, the authors propose to include labeled NLP tasks expressed with natural language prompts. For instance, take a sentence classification task for movie reviews with annotations such as
The film had a superb plot, enhanced by the excellent work from the main actor. | Positive
Converting with a template to:
The film had a superb plot, enhanced by the excellent work from the main actor. It was <great/amazing/fantastic...>.
To avoid over-optimizing for a narrow set of templates, these are sourced from multiple people (36) to maximize variety, ending up with dozens of templates for many NLP tasks to alternate between.
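To make the templating step concrete, here is a toy sketch in the spirit of the paper; the template texts, verbalizers and the `to_prompt` function below are illustrative inventions, not the actual prompts collected by the authors:

```python
import random

# Hypothetical templates: each maps a labeled movie review to a
# natural-language (prompt, target) pair.
TEMPLATES = [
    ("{review} Did the reviewer enjoy the film?",
     {"Positive": "yes", "Negative": "no"}),
    ("Review: {review}\nThe sentiment of this review is",
     {"Positive": "positive", "Negative": "negative"}),
]

def to_prompt(review, label, rng):
    # Alternating between many templates avoids over-fitting to one phrasing.
    template, verbalizer = rng.choice(TEMPLATES)
    return template.format(review=review), verbalizer[label]

rng = random.Random(0)
prompt, target = to_prompt(
    "The film had a superb plot, enhanced by the excellent work from the main actor.",
    "Positive",
    rng,
)
```

Sourcing such templates from dozens of contributors, as the paper does, simply grows the template pool.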
The result is that despite being 16x smaller than GPT-3, T0 outperforms it in most tasks, even when the training set for those tasks was not seen during training.
Here’s a summary of the key results. The different variants of T0 reflect what datasets were included during training: T0 excludes all datasets that GPT-3 used for evaluation, T0+ adds the datasets used in evaluation (only the training split, blinding to test set is still guaranteed) and T0++ adds on top of T0+ the datasets in SuperGLUE [2].
If you read our last month’s blog, you might’ve noticed that this approach is very similar to FLAN [1] by Google, published just a few weeks ago. The authors address this work thoroughly and T0 still has a lot going for it: T0 and +/++ variants have comparable or better performance while being 10x smaller (137B vs. 11B params!!!). Key differences between the two works are:
T0 uses an encoder-decoder that was pretrained with MLM vs. FLAN’s decoder-only architecture (MLM has been shown to be a far more efficient pretraining approach, although it’s not good for autoregressive generation, hence the encoder-decoder strategy that reuses MLM-pretrained representations)
More diverse prompts
Holding out multiple tasks at once vs. a single task at a time.
By Xiao Liu, Kaixuan Ji, Yicheng Fu et al.
❓Why → It hasn’t been a year since continuous p-tuning/prompt-tuning/prefix-tuning was proposed [3], and it has already become a viable alternative to finetuning in many tasks and a blossoming corner of ML research. This is its latest revision showing strength in tasks where p-tuning was struggling before.
💡 Key insights → If anyone still had doubts about prompt tuning, this paper should clear them up (e.g. not working well for small frozen models, or badly for some specific tasks such as hard sequence tagging). For those late to the party, p-tuning (also known as prefix-tuning, or soft/continuous prompt-tuning) is a technique for finetuning a pretrained model for a particular task without changing the pretrained model’s parameters. Instead, it consists of learning a prompt via gradient descent: a few continuous embeddings that form a fixed prefix of any input. This has been shown to perform very well with Transformers trained on autoregressive language modeling and is more parameter efficient (i.e. only a very small number of parameters needs to be learned for a specific task compared to full finetuning).
The step further the authors take in this work is to add “depth” to prompts, i.e. adding prompts to several layers of a Transformer. While this increases the trainable parameter count, it improves performance while keeping the ratio of trainable prompt parameters to total model parameters in the range of 0.1–3%. The prompts at different layers are independent of each other: they’re trained separately at each layer instead of being produced by the previous transformer layer’s forward pass.
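A toy, pure-Python sketch of the “deep prompt” idea (all sizes are made up, the frozen layer is a stand-in, and no actual gradient descent happens here):

```python
import random

random.seed(0)
seq_len, d_model, n_layers, n_prompt = 8, 16, 4, 4  # toy dimensions

def vec():
    # A toy embedding vector.
    return [random.random() for _ in range(d_model)]

tokens = [vec() for _ in range(seq_len)]  # frozen input embeddings

# One independent set of trainable prompt vectors per layer
# (shallow p-tuning would only have prompts at the input).
deep_prompts = [[vec() for _ in range(n_prompt)] for _ in range(n_layers)]

def frozen_layer(h):
    # Stand-in for a frozen transformer layer.
    return h

h = deep_prompts[0] + tokens  # layer-0 input: [prompts; tokens]
for layer_idx in range(n_layers):
    h = frozen_layer(h)
    if layer_idx + 1 < n_layers:
        # Overwrite the prefix positions with the next layer's own prompts
        # instead of letting them flow from the previous layer's output.
        h[:n_prompt] = deep_prompts[layer_idx + 1]

# The only parameters that would get trained:
trainable = n_layers * n_prompt * d_model
```

With realistic model sizes, `trainable` stays a tiny fraction of the frozen parameter count, which is the whole appeal of the method.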
Here’s a summary of the main results, expect to see p-tuning applied to other tasks in the near future!
By Samira Abnar, Mostafa Dehghani, Behnam Neyshabur and Hanie Sedghi.
❓Why → Scale has been a persistent topic of discussion within ML circles. We have been including papers on this topic for many months now, because it is definitely one of the important questions the field has to grapple with: where will adding parameters and data stop being... useful? Keep reading.
💡Key insights → Sort of pretty much “As we increase the upstream accuracy, the performance of downstream tasks saturates”.
Okay, so the gist of this paper is simple: they study how pre-training performance on an upstream (US) task (e.g. large-scale ImageNet labels) transfers to downstream (DS) performance (e.g. whale detection). They then run this experiment for a lot — and by a lot we mean a lot — of architectures and sizes: “4800 experiments on Vision Transformers, MLP-Mixers and ResNets with number of parameters ranging from ten million to ten billion, trained on the largest scale of available image data” 🤑💸
So the interesting plots compare upstream (US) performance, i.e. performance on the pre-training task, against downstream (DS) performance on the evaluation task. Pretty much across the board, it saturates eventually. Still, there are super interesting differences across architectures for computer vision!
The authors claim that their observations overall seem robust to choices such as the size of the upstream data or number of training shots, and architecture choices. They also explore the influence of hyper-parameter choices: are some hyper-parameters very good for US but don’t translate well to DS? Yes! They dive deep into this phenomenon in section 4, and find that for instance, weight decay is a particularly salient hyperparameter that influences US and DS performance differently.
In a context where nobody really trains models from scratch but chooses pre-trained models to bootstrap their application, this research is key. There’s much more to the paper than what can be summarized in a few paragraphs, it’s definitely worth a read if you want to dive deeper!
By Yuval Kirstain, Patrick Lewis, Sebastian Riedel and Omer Levy.
❓Why → To annotate or to grow? This can be a common dilemma for ML practitioners deciding how to allocate resources: bigger pre-trained models or annotating more data. It depends!
💡Key insights → The main takeaway is that in the context of NLP tasks, scaling parameters consistently yields performance improvements; the contribution of additional annotations, however, highly depends on the task. For instance, in Open Question Answering datasets, adding annotations doesn’t significantly improve performance, whereas in sentence classification or extractive question answering, it does. Here’s the best summary figure for the findings of the paper: one would probably expect the heatmaps to have a gradient along the diagonal, with both size and annotations yielding performance improvements, but that’s not what happens.
And that’s pretty much it! To be fair it’s not that super comprehensive and we’ll have to see how well these can be replicated on other modalities and so on but still, the question being addressed is undoubtedly relevant.
By Junyi Ao, Rui Wang, Long Zhou et al.
❓Why → NLP is often used almost as a synonym for text processing, but there’s so much more to natural language than text! Spoken language uses many more dimensions of expression than just its characters. Here’s an approach to model all that by leveraging the existing techniques that have been so successful in NLP for the past few years.
💡Key insights → Jointly learn text and speech representations by feeding a model both audio and text, training it in a self-supervised setting with a task analogous to bidirectional Masked Language Modeling applied to sound. But applying MLM to audio is not as straightforward as with text: it involves pre-processing the audio into a suitable representation called a log-Mel filterbank and applying quantized targets in that representation space, where a classification task can be performed. Importantly, audio and text representations are combined and fed to the model jointly, allowing for modeling across modalities.
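As background on that front-end, this is the standard Hz-to-mel mapping underlying log-Mel filterbank features (one common variant of the formula; the paper's exact front-end parameters are not reproduced here, and the filter sizes below are arbitrary):

```python
import math

def hz_to_mel(f):
    # A widely used mel-scale formula (HTK-style).
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(n_mels=8, f_min=0.0, f_max=8000.0):
    # Centers of n_mels triangular filters, equally spaced on the mel scale,
    # so low frequencies get finer resolution than high ones.
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_mels + 1)
    return [mel_to_hz(lo + i * step) for i in range(1, n_mels + 1)]

centers = mel_filter_centers()
```

The log of the energies collected by these filters over short frames gives the log-Mel filterbank representation that MLM-style objectives can then be defined on.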
The results are state-of-the-art for some tasks like voice conversion (VC) and Automatic Speech Recognition (ASR), and the model performs competitively when applied to Text To Speech (TTS) and Speaker Identification (SID).
By Darius Rückert, Linus Franke and Marc Stamminger.
❓Why → Using Neural Networks to improve rendering at a reduced computational cost — in comparison to traditional techniques — is extremely exciting, especially at a time when the VR and AR sectors are slowly but steadily taking off (hello Meta). After all, Deep Learning might play a key role in rendering the metaverse...
💡 Key insights → Rendering a view of a scene (e.g. in a videogame or simulation) is an impressively complex process: 3D objects can be defined in several ways, lighting, occlusion, textures, transparencies, reflections interact in complicated ways, rasterising stuff into a pixel grid, etc. Brute forcing these tasks is out of the question for low latency applications; instead, one must be smart about not computing things that don’t need to be computed, like opaque objects that are occluding other objects.
It turns out that most of the processes involved in rendering can be performed by differentiable modules, which means that one can use gradient descent to optimize them given an appropriate loss function. The main modules involved in rendering novel views of a scene are the rasterizer, the renderer and the tonemapper, as you can see in the figure below.
We can’t go into too much detail because, in all honesty, the topic is a bit over our heads. Still, the video demos they provide are quite impressive and we can’t wait for this kind of technique to be widely adopted by mainstream rendering technology.
On the ethics side of AI, this past month we’ve also seen a couple of papers we’d like to highlight
Delphi: Towards Machine Ethics and Norms is a brave attempt at teaching a machine the intricacies of right and wrong. While the complexity of the task has eluded philosophical consensus for millennia, this work is a tangible step towards introducing ethical judgements into algorithms.
Systematic Inequalities in Language Technology Performance across the World’s Languages introduces a framework for estimating the “global utility” of language technologies and how well they cover the diversity of languages around the world.
On the topic of information retrieval, Adversarial Retriever-Ranker for dense text retrieval is an exciting new approach that models the interaction between a retriever and a ranker in the two-stage retrieval setting: the retriever tries to fool the ranker with documents that “seem relevant” but aren’t, while the ranker tries to surface the documents labeled most relevant.
Our monthly selection ends here; if you want to keep up to date with the latest research, follow us on Twitter @zetavector. See you soon in the next one!
References:
[1] Finetuned Language Models Are Zero-Shot Learners. By Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu et al. 2021
[2] SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. By Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman, 2019.
[3] Prefix-Tuning: Optimizing Continuous Prompts for Generation. By Xiang Lisa Li, Percy Liang, 2021.
[4] SCENIC: A JAX Library for Computer Vision Research and Beyond. By Mostafa Dehghani, Alexey Gritsenko, Anurag Arnab, Matthias Minderer, Yi Tay, 2021.
[6] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. By Alexey Dosovitskiy et al. 2020.
[7] ViViT: A Video Vision Transformer. By Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid, 2021.
Divide an array into K subarray with the given condition - GeeksforGeeks
06 May, 2021
Given an array arr[] and an integer K, the task is to divide the array into K parts (subarrays) such that the sum of the values of all subarrays is minimized. The value of every subarray is defined as:
Take the maximum from that subarray.
Subtract each element of the subarray from that maximum.
Take the sum of all the values after subtraction.
The task is to minimize the sum of the values after dividing the array into K parts.
Examples:
Input: arr[] = { 2, 9, 5, 4, 8, 3, 6 }, K = 2
Output: 19
Explanation: The two groups are {2} with max = 2 and {9, 5, 4, 8, 3, 6} with max = 9. Sum of differences of the first group = 2 – 2 = 0; sum of differences of the second group = (9-9) + (9-5) + (9-4) + (9-8) + (9-3) + (9-6) = 19.

Input: arr[] = { 12, 20, 30, 14, 25 }, K = 3
Output: 19
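The per-subarray value is simply len * max − sum over the subarray. A tiny helper (the name `subarray_value` is ours) reproduces the groups from the first example:

```python
def subarray_value(seg):
    # Sum of (max - element) over the subarray = len * max - sum.
    return len(seg) * max(seg) - sum(seg)

print(subarray_value([2]))                 # 0
print(subarray_value([9, 5, 4, 8, 3, 6]))  # 19
```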
Approach: The brute-force solution is to try all possible partitions and take the minimum over all of them, but this takes exponential time. The recursive solution contains many overlapping sub-problems, so the complexity can be reduced with dynamic programming.

Recursive formula:

F(i, K) = min over all l ≥ i of { max(arr[i..l]) * (l – i + 1) – sum(arr[i..l]) + F(l + 1, K – 1) }

The bottom-up approach computes the values of the sub-problems first and stores them. Here dp[i][j] is the minimum value obtainable when the array starts at index i and is divided into j partitions, so the answer to the problem is dp[0][K]: the array starting at 0 with K partitions. Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to divide an array into k parts such that
// the sum of difference of every element with the
// maximum element of that part is minimum
int divideArray(int arr[], int n, int k)
{
    // Dp to store the values
    int dp[500][500] = { 0 };
    k -= 1;

    // Fill up the dp table
    for (int i = n - 1; i >= 0; i--) {
        for (int j = 0; j <= k; j++) {

            // Initialize maximum value
            dp[i][j] = INT_MAX;

            // Max element and the sum
            int max_ = -1, sum = 0;

            // Run a loop from i to n
            for (int l = i; l < n; l++) {

                // Find the maximum number from i to l
                // and the sum from i to l
                max_ = max(max_, arr[l]);
                sum += arr[l];

                // Sum of difference of every element
                // with the maximum element
                int diff = (l - i + 1) * max_ - sum;

                // If the array can be divided further
                if (j > 0)
                    dp[i][j] = min(dp[i][j],
                                   diff + dp[l + 1][j - 1]);
                else
                    dp[i][j] = diff;
            }
        }
    }

    // Returns the minimum sum in K parts
    return dp[0][k];
}

// Driver code
int main()
{
    int arr[] = { 2, 9, 5, 4, 8, 3, 6 };
    int n = sizeof(arr) / sizeof(int);
    int k = 2;
    cout << divideArray(arr, n, k) << "\n";
    return 0;
}
// Java implementation of the above approach
class GFG {

    // Function to divide an array into k parts such that
    // the sum of difference of every element with the
    // maximum element of that part is minimum
    static int divideArray(int arr[], int n, int k)
    {
        // Dp to store the values
        int dp[][] = new int[500][500];
        int i, j;
        for (i = 0; i < 500; i++)
            for (j = 0; j < 500; j++)
                dp[i][j] = 0;

        k -= 1;

        // Fill up the dp table
        for (i = n - 1; i >= 0; i--) {
            for (j = 0; j <= k; j++) {

                // Initialize maximum value
                dp[i][j] = Integer.MAX_VALUE;

                // Max element and the sum
                int max_ = -1, sum = 0;

                // Run a loop from i to n
                for (int l = i; l < n; l++) {

                    // Find the maximum number from i to l
                    // and the sum from i to l
                    max_ = Math.max(max_, arr[l]);
                    sum += arr[l];

                    // Sum of difference of every element
                    // with the maximum element
                    int diff = (l - i + 1) * max_ - sum;

                    // If the array can be divided further
                    if (j > 0)
                        dp[i][j] = Math.min(dp[i][j],
                                diff + dp[l + 1][j - 1]);
                    else
                        dp[i][j] = diff;
                }
            }
        }

        // Returns the minimum sum in K parts
        return dp[0][k];
    }

    // Driver code
    public static void main(String[] args)
    {
        int arr[] = { 2, 9, 5, 4, 8, 3, 6 };
        int n = arr.length;
        int k = 2;
        System.out.println(divideArray(arr, n, k));
    }
}

// This code is contributed by AnkitRai01
# Python3 implementation of the above approach

# Function to divide an array into k parts such that
# the sum of difference of every element with the
# maximum element of that part is minimum
def divideArray(arr, n, k):

    # Dp to store the values
    dp = [[0 for i in range(500)] for i in range(500)]
    k -= 1

    # Fill up the dp table
    for i in range(n - 1, -1, -1):
        for j in range(0, k + 1):

            # Initialize maximum value
            dp[i][j] = 10**9

            # Max element and the sum
            max_ = -1
            summ = 0

            # Run a loop from i to n
            for l in range(i, n):

                # Find the maximum number from i to l
                # and the sum from i to l
                max_ = max(max_, arr[l])
                summ += arr[l]

                # Sum of difference of every element
                # with the maximum element
                diff = (l - i + 1) * max_ - summ

                # If the array can be divided further
                if (j > 0):
                    dp[i][j] = min(dp[i][j],
                                   diff + dp[l + 1][j - 1])
                else:
                    dp[i][j] = diff

    # Returns the minimum sum in K parts
    return dp[0][k]


# Driver code
arr = [2, 9, 5, 4, 8, 3, 6]
n = len(arr)
k = 2

print(divideArray(arr, n, k))

# This code is contributed by Mohit Kumar
// C# implementation of above approach
using System;

class GFG {

    // Function to divide an array into k parts such that
    // the sum of difference of every element with the
    // maximum element of that part is minimum
    static int divideArray(int[] arr, int n, int k)
    {
        // Dp to store the values
        int[,] dp = new int[500, 500];
        int i, j;
        for (i = 0; i < 500; i++)
            for (j = 0; j < 500; j++)
                dp[i, j] = 0;

        k -= 1;

        // Fill up the dp table
        for (i = n - 1; i >= 0; i--) {
            for (j = 0; j <= k; j++) {

                // Initialize maximum value
                dp[i, j] = int.MaxValue;

                // Max element and the sum
                int max_ = -1, sum = 0;

                // Run a loop from i to n
                for (int l = i; l < n; l++) {

                    // Find the maximum number from i to l
                    // and the sum from i to l
                    max_ = Math.Max(max_, arr[l]);
                    sum += arr[l];

                    // Sum of difference of every element
                    // with the maximum element
                    int diff = (l - i + 1) * max_ - sum;

                    // If the array can be divided further
                    if (j > 0)
                        dp[i, j] = Math.Min(dp[i, j],
                                diff + dp[l + 1, j - 1]);
                    else
                        dp[i, j] = diff;
                }
            }
        }

        // Returns the minimum sum in K parts
        return dp[0, k];
    }

    // Driver code
    public static void Main(String[] args)
    {
        int[] arr = { 2, 9, 5, 4, 8, 3, 6 };
        int n = arr.Length;
        int k = 2;
        Console.WriteLine(divideArray(arr, n, k));
    }
}

// This code is contributed by 29AjayKumar
<script>

// Javascript implementation of the above approach

// Function to divide an array into k parts such that
// the sum of difference of every element with the
// maximum element of that part is minimum
function divideArray(arr, n, k)
{
    // Dp to store the values
    var dp = Array.from(Array(500), () => Array(500).fill(0));
    k -= 1;

    // Fill up the dp table
    for (var i = n - 1; i >= 0; i--) {
        for (var j = 0; j <= k; j++) {

            // Initialize maximum value
            dp[i][j] = 1000000000;

            // Max element and the sum
            var max_ = -1, sum = 0;

            // Run a loop from i to n
            for (var l = i; l < n; l++) {

                // Find the maximum number from i to l
                // and the sum from i to l
                max_ = Math.max(max_, arr[l]);
                sum += arr[l];

                // Sum of difference of every element
                // with the maximum element
                var diff = (l - i + 1) * max_ - sum;

                // If the array can be divided further
                if (j > 0)
                    dp[i][j] = Math.min(dp[i][j],
                            diff + dp[l + 1][j - 1]);
                else
                    dp[i][j] = diff;
            }
        }
    }

    // Returns the minimum sum in K parts
    return dp[0][k];
}

// Driver code
var arr = [ 2, 9, 5, 4, 8, 3, 6 ];
var n = arr.length;
var k = 2;
document.write(divideArray(arr, n, k) + "<br>");

</script>
19
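As a cross-check of the DP result, a brute-force Python sketch (exponential, suitable only for tiny inputs; the helper names are mine) tries every contiguous partition into exactly K parts and confirms both expected answers:

```python
from itertools import combinations

def min_partition_value(arr, k):
    # try every way to place k-1 cut points between elements;
    # this enumerates all contiguous partitions into exactly k parts
    n = len(arr)

    def value(sub):
        # sum of (max - element) over one group
        return len(sub) * max(sub) - sum(sub)

    best = float('inf')
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        parts = [arr[bounds[i]:bounds[i + 1]] for i in range(k)]
        best = min(best, sum(value(p) for p in parts))
    return best

print(min_partition_value([2, 9, 5, 4, 8, 3, 6], 2))  # -> 19
print(min_partition_value([12, 20, 30, 14, 25], 3))   # -> 19
```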
Deploy a Python model (more efficiently) over Spark | by Schaun Wheeler | Towards Data Science
|
UPDATE: I wrote about another way of deploying Scikit-Learn models over Spark that avoids some of the problems of this approach.
I’m posting this so other people can hopefully avoid some of the pain I had to go through. There are times when I want to train a model in scikit-learn, but compute model outcomes for all records in a large dataset in Spark. Yes, I know I could use Spark MLlib, but I find scikit-learn to have a more robust set of offerings, I understand the code in scikit-learn better than I understand the code in MLlib, and I’m just plain-old more familiar with scikit-learn. However, calling a scikit-learn `predict` method through a PySpark UDF creates a couple of problems:
It incurs the overhead of pickling and unpickling the model object for every record of the Spark dataframe.
It fails to take advantage of scikit-learn’s optimizations, which mostly are due to vectorizing function calls over NumPy arrays.
I’ve found I can mitigate some of this overhead by grouping the Spark dataframe in order to call the Python object’s methods on multiple records at once. Here’s how you do it:
Set up three columns in your Spark data frame:
* A unique id. This can be anything. As long as it is unique, you’re good to go.
* All of your predictors. You can use a StructType or MLlib’s VectorAssembler to get all of your predictors into a single column.
* A groups column. You can call row_number() modulo’d by the number of groups you want. I find it generally works well to create enough groups that each group will have 50–100k records in it.
2. Call the Spark SQL function `create_map` to merge your unique id and predictor columns into a single column where each record is a key-value store.
3. Group by your groups column, and call the Spark SQL function `collect_list` on your key-value column. This will aggregate your data set into lists of dictionaries.
4. Broadcast your scikit-learn model.
5. Create a UDF that unpacks a list of dictionaries into a list of keys (your unique ids) and a list of lists (your predictors). You can then feed the list of lists directly into a broadcasted scikit-learn model’s `predict` method. Then zip the result of that function call with your key list and convert to a dictionary. The udf will return a MapType, with the keys and values types set appropriately depending on what format your keys take and what format you want to return from your scikit-learn function call.
6. Call explode on the results of your udf, and include two aliases — one for the keys, and one for the results. You’ll then have a new data frame, the same size as your original (pre-grouped) dataframe, with your results in one column, and keys in the other column that can be used to join the results with the original data.
Here’s an example:
"""
assumes the following already exist within the environment:
`model`: a scikit-learn class that predicts probabilities for a
         two-class (0.0, 1.0) model
`sdf`: a spark dataframe with at least two columns:
       "unique_id" and "feature_list"
"""
import pyspark.sql.functions as f
import pyspark.sql.types as t
from pyspark.sql.window import Window as w
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()

# broadcast model
model_broadcast = sc.broadcast(model)

# udf to predict on the cluster
def predict_new(feature_map):
    ids, features = zip(*[
        (k, v) for d in feature_map for k, v in d.items()
    ])
    ind = model_broadcast.value.classes_.tolist().index(1.0)
    probs = [
        float(v) for v in
        model_broadcast.value.predict_proba(features)[:, ind]
    ]
    return dict(zip(ids, probs))

predict_new_udf = f.udf(
    predict_new,
    t.MapType(t.LongType(), t.FloatType())
)

# set the number of prediction groups to create
nparts = 5000

# put everything together
outcome_sdf = (
    sdf
    .select(
        f.create_map(
            f.col('unique_id'), f.col('feature_list')
        ).alias('feature_map'),
        (
            f.row_number().over(
                w.partitionBy(f.lit(1)).orderBy(f.lit(1))
            ) % nparts
        ).alias('grouper')
    )
    .groupby(f.col('grouper'))
    .agg(
        f.collect_list(f.col('feature_map')).alias('feature_map')
    )
    .select(
        predict_new_udf(f.col('feature_map')).alias('results')
    )
    .select(
        f.explode(f.col('results'))
        .alias('unique_id', 'probability_estimate')
    )
)
I’d like to be able to show some graphs comparing performance with the above method to the performance of simply wrapping a call to the model’s predict method in a PySpark UDF, but I can’t: I figured this method out because I couldn’t get the naive method to finish. I had a dataset with over 100 million records. I had trained a Naive Bayes (using scikit-learn’s MultinomialNB) classifier to distinguish between two classes — 0 vs. 1 — based on a hashed term-document matrix (using scikit-learn’s HashingVectorizer). At first, I simply broadcast the trained model, and then wrote a UDF that took a pre-processed string as an argument. The UDF ran that string through HashingVectorizer, then fed those results into the model’s predict method. I then ran the script and monitored YARN. After two days, the process was about 10% complete. I then rewrote the process to follow the approach I’ve outlined here: the UDF took a list of strings as an argument, and that list was processed by HashingVectorizer and the results fed to the model’s predict method. The entire run completed in 30 minutes.
One last note: different scikit-learn models have vastly different footprints in memory. A Naive Bayes model only needs to keep a few values for each parameter. A random forest needs to keep every tree in the forest. Because you need to broadcast the model to each executor, you could easily find that a model trained on a lot of data requires a whole lot of memory.
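To get a rough sense of that footprint before broadcasting, one can check the pickled size of the model object, since Spark serializes a broadcast variable and ships it to every executor. A sketch using a stand-in object (a real scikit-learn model works the same way):

```python
import pickle

def broadcast_footprint_bytes(obj):
    # rough lower bound on what Spark must serialize and ship
    # to every executor when the object is broadcast
    return len(pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL))

# a stand-in for a small model (e.g. Naive Bayes keeps only a few
# arrays of parameters); a random forest would be far larger
small_model = {"class_log_prior": [0.3, 0.7],
               "feature_log_prob": [0.1] * 100}
print(broadcast_footprint_bytes(small_model))
```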
Erlang - OTP
|
OTP stands for Open Telecom Platform. It’s an application operating system and a set of libraries and procedures used for building large-scale, fault-tolerant, distributed applications. If you want to program your own applications using OTP, then the central concept that you will find very useful is the OTP behavior. A behavior encapsulates common behavioral patterns — think of it as an application framework that is parameterized by a callback module.
The power of OTP comes from the properties such as fault tolerance, scalability, dynamic-code upgrade, and so on, can be provided by the behavior itself. So the first basic concept is to create a server component that mimics the basics of an OTP environment, let’s look at the following example for the same.
-module(server).
-export([start/2, rpc/2]).
start(Name, Mod) ->
register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).
rpc(Name, Request) ->
Name ! {self(), Request},
receive
{Name, Response} -> Response
end.
loop(Name, Mod, State) ->
receive
{From, Request} ->
{Response, State1} = Mod:handle(Request, State),
From ! {Name, Response},
loop(Name, Mod, State1)
end.
The following things need to be noted about the above program −
The process is registered with the system using the register function.
The process spawns a loop function which handles the processing.
Now let’s write a client program that will utilize the server program.
-module(name_server).
-export([init/0, add/2, whereis/1, handle/2]).
-import(server, [rpc/2]).
add(Name, Place) -> rpc(name_server, {add, Name, Place}).
whereis(Name) -> rpc(name_server, {whereis, Name}).
init() -> dict:new().
handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)};
handle({whereis, Name}, Dict) -> {dict:find(Name, Dict), Dict}.
This code actually performs two tasks. It serves as a callback module that is called from the server framework code, and at the same time, it contains the interfacing routines that will be called by the client. The usual OTP convention is to combine both functions in the same module.
So here is how the above program needs to be run −
In erl, first compile the server and name_server modules, then start the server by running the following command −
server:start(name_server, name_server).
You will get the following output −
true
Then, run the following command −
name_server:add(erlang, "Tutorialspoint").
You will get the following output −
ok
Then, run the following command −
name_server:whereis(erlang).
You will get the following output −
{ok,"Tutorialspoint"}
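Hand-rolling the server/loop machinery is exactly what OTP behaviors abstract away. For comparison, here is a hedged sketch of the same name server written against the standard gen_server behavior; the module name and layout are illustrative, not part of the text above:

```erlang
%% Hedged sketch: the name server as a standard OTP gen_server.
-module(name_server_otp).
-behaviour(gen_server).
%% whereis/1 clashes with the auto-imported BIF, so disable the auto-import.
-compile({no_auto_import, [whereis/1]}).
-export([start_link/0, add/2, whereis/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
   gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Client API: gen_server:call/2 replaces the hand-written rpc/2.
add(Name, Place) -> gen_server:call(?MODULE, {add, Name, Place}).
whereis(Name)    -> gen_server:call(?MODULE, {whereis, Name}).

%% Callbacks: same state and logic as the handle/2 clauses above.
init([]) -> {ok, dict:new()}.

handle_call({add, Name, Place}, _From, Dict) ->
   {reply, ok, dict:store(Name, Place, Dict)};
handle_call({whereis, Name}, _From, Dict) ->
   {reply, dict:find(Name, Dict), Dict}.

handle_cast(_Msg, Dict) -> {noreply, Dict}.
```

With this version, fault tolerance, supervision, and dynamic code upgrade are supplied by the behavior itself, which is the point made at the start of this section.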
How to create a simple server in Node.js that display Hello World ? - GeeksforGeeks
01 Oct, 2021
A Server is a piece of computer hardware or software that provides functionality for other programs or devices, called clients. This architecture is called the client-server model. Node is an open-source, cross-platform runtime environment that allows developers to create all kinds of server-side tools and applications in JavaScript.
In the following example, we will create a simple server in Node.js that returns Hello World using an express server.
Create NodeJS Application: Initialize the NodeJS application using the following command:
npm init
Module Installation: Install the express module which is a web framework for NodeJS using the following command.
npm install express
Implementation: Create an app.js file and write down the following code in it.
app.js
// Require makes the express package
// available to our code
const express = require("express");

// Creates an express application object
const app = express();

// Listens to HTTP GET requests.
// Here it listens on the root i.e. '/'
app.get("/", (req, res) => {
    // Using the send function we send a
    // response to the client.
    // Here we are sending HTML.
    res.send("<h1> Hello World </h1>");
});

// Configures the server to listen on
// port 3000. Any number can be given
// instead of 3000; the only condition
// is that no other server should be
// running on that port.
app.listen(3000, () => {
    // Printed when the server starts
    // listening on port 3000
    console.log("Listening to port 3000");
});
Step to run the application: Run the app.js file using the following command.
node app.js
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Hello World
So this is how you can set up the server and achieve the task. If you want to return anything else then pass that argument in res.send() of the app.get() function instead of “Hello World”.
How to Install a Package in R ? - GeeksforGeeks
21 Apr, 2021
R programming language doesn’t come with all packages installed, and they need to be installed explicitly. In this article, we will discuss How to Install a Package in the R language.
Method 1: Using application options
1. Open RStudio.
2. Select tools
3. After selecting the tools you need to press install packages.
4. Here you need to give the package name you need to install.
Here we used expm, a package that provides functions to compute the exponential of a square matrix.
Alternatively, the same can be done from RGui:
1. Open RGui
2. Select packages
3. Select install packages.
4. Select required package and click ok.
The package will be installed:
package 'expm' successfully unpacked and MD5 sums checked
Method 2: Using command
In this method, simply pass the name of the package to be installed as an argument to the install.packages() function.
Syntax:
install.packages(“package name”)
Example:
R
install.packages("ggplot2")
A package can be loaded once it has been installed using library() command.
Syntax:
library(“package_name”)
Example:
R
library("ggplot2")
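Since install.packages() re-downloads the package even when it is already present, a common pattern is to install conditionally before loading. A hedged sketch (the package name and CRAN mirror below are just examples):

```r
# Install the package only when it is not already available, then load it.
if (!requireNamespace("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2", repos = "https://cloud.r-project.org")
}
library(ggplot2)
```

This keeps scripts re-runnable without repeating the installation step on every execution.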
Lucene - WildcardQuery
WildcardQuery is used to search documents using wildcards: '*' matches any character sequence (including an empty one), while '?' matches exactly one character.
Following is the declaration for org.apache.lucene.search.WildcardQuery class −
public class WildcardQuery
extends MultiTermQuery
Field −
protected Term term

Class constructor −
WildcardQuery(Term term)

Class methods −
boolean equals(Object obj)
protected FilteredTermEnum getEnum(IndexReader reader) − Construct the enumeration to be used, expanding the pattern term.
Term getTerm() − Returns the pattern term.
int hashCode()
String toString(String field) − Prints a user-readable version of this query.
This class inherits methods from the following classes −
org.apache.lucene.search.MultiTermQuery
org.apache.lucene.search.Query
java.lang.Object
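The wildcard semantics themselves can be illustrated without Lucene. The sketch below is a plain-Java approximation using java.util.regex, not Lucene's actual matching implementation; it simply shows what '*' and '?' are expected to match (patterns containing the literal sequence \E are outside the scope of this simple conversion):

```java
// Illustration only: approximate Lucene wildcard semantics with a regex.
public class WildcardDemo {
    static boolean wildcardMatch(String pattern, String text) {
        // Quote the whole pattern as a literal, then re-open the quoting
        // around each wildcard: '*' becomes '.*', '?' becomes '.'.
        String regex = java.util.regex.Pattern.quote(pattern)
                .replace("*", "\\E.*\\Q")
                .replace("?", "\\E.\\Q");
        return text.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(wildcardMatch("record1*", "record1.txt"));  // true
        System.out.println(wildcardMatch("record1*", "record10.txt")); // true
        System.out.println(wildcardMatch("record?", "record2"));       // true
        System.out.println(wildcardMatch("record?", "record10"));      // false
    }
}
```

This is why the query "record1*" in the example below matches both record1.txt and record10.txt.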
private void searchUsingWildCardQuery(String searchQuery)
throws IOException, ParseException {
searcher = new Searcher(indexDir);
long startTime = System.currentTimeMillis();
//create a term to search file name
Term term = new Term(LuceneConstants.FILE_NAME, searchQuery);
//create the term query object
Query query = new WildcardQuery(term);
//do the search
TopDocs hits = searcher.search(query);
long endTime = System.currentTimeMillis();
System.out.println(hits.totalHits +
" documents found. Time :" + (endTime - startTime) + "ms");
for(ScoreDoc scoreDoc : hits.scoreDocs) {
Document doc = searcher.getDocument(scoreDoc);
System.out.println("File: "+ doc.get(LuceneConstants.FILE_PATH));
}
searcher.close();
}
Let us create a test Lucene application to test search using WildcardQuery.
Create a project with a name LuceneFirstApplication under a package com.tutorialspoint.lucene as explained in the Lucene - First Application chapter. You can also use the project created in Lucene - First Application chapter as such for this chapter to understand the searching process.
Create LuceneConstants.java and Searcher.java as explained in the Lucene - First Application chapter. Keep the rest of the files unchanged.
Create LuceneTester.java as mentioned below.
Clean and Build the application to make sure the business logic is working as per the requirements.
This class is used to provide various constants to be used across the sample application.
package com.tutorialspoint.lucene;
public class LuceneConstants {
public static final String CONTENTS = "contents";
public static final String FILE_NAME = "filename";
public static final String FILE_PATH = "filepath";
public static final int MAX_SEARCH = 10;
}
This class is used to read the indexes made on raw data and searches data using lucene library.
package com.tutorialspoint.lucene;
import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
public class Searcher {
IndexSearcher indexSearcher;
QueryParser queryParser;
Query query;
public Searcher(String indexDirectoryPath) throws IOException {
Directory indexDirectory = FSDirectory.open(new File(indexDirectoryPath));
indexSearcher = new IndexSearcher(indexDirectory);
queryParser = new QueryParser(Version.LUCENE_36, LuceneConstants.CONTENTS,
new StandardAnalyzer(Version.LUCENE_36));
}
public TopDocs search( String searchQuery) throws IOException, ParseException {
query = queryParser.parse(searchQuery);
return indexSearcher.search(query, LuceneConstants.MAX_SEARCH);
}
public TopDocs search(Query query) throws IOException, ParseException {
return indexSearcher.search(query, LuceneConstants.MAX_SEARCH);
}
public Document getDocument(ScoreDoc scoreDoc)
throws CorruptIndexException, IOException {
return indexSearcher.doc(scoreDoc.doc);
}
public void close() throws IOException {
indexSearcher.close();
}
}
This class is used to test the searching capability of lucene library.
package com.tutorialspoint.lucene;
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
public class LuceneTester {
String indexDir = "E:\\Lucene\\Index";
String dataDir = "E:\\Lucene\\Data";
Searcher searcher;
public static void main(String[] args) {
LuceneTester tester;
try {
tester = new LuceneTester();
tester.searchUsingWildCardQuery("record1*");
} catch (IOException e) {
e.printStackTrace();
} catch (ParseException e) {
e.printStackTrace();
}
}
private void searchUsingWildCardQuery(String searchQuery)
throws IOException, ParseException {
searcher = new Searcher(indexDir);
long startTime = System.currentTimeMillis();
//create a term to search file name
Term term = new Term(LuceneConstants.FILE_NAME, searchQuery);
//create the term query object
Query query = new WildcardQuery(term);
//do the search
TopDocs hits = searcher.search(query);
long endTime = System.currentTimeMillis();
System.out.println(hits.totalHits +
" documents found. Time :" + (endTime - startTime) + "ms");
for(ScoreDoc scoreDoc : hits.scoreDocs) {
Document doc = searcher.getDocument(scoreDoc);
System.out.println("File: "+ doc.get(LuceneConstants.FILE_PATH));
}
searcher.close();
}
}
I've used 10 text files, named record1.txt to record10.txt, containing names and other details of students, and put them in the directory E:\Lucene\Data. An index directory path should be created as E:\Lucene\Index. After running the indexing program from the chapter Lucene - Indexing Process, you can see the list of index files created in that folder.
Once you are done with creating the source, the raw data, the data directory, the index directory, and the indexes, you are ready for the final step: compiling and running your program. To do this, keep the LuceneTester.java file tab active and use either the Run option available in the Eclipse IDE or press Ctrl + F11 to compile and run your LuceneTester application. If everything is fine with your application, it will print the following message in the Eclipse IDE's console −
2 documents found. Time :47ms
File: E:\Lucene\Data\record1.txt
File: E:\Lucene\Data\record10.txt
How to retrieve documents from a collection in MongoDB?
|
To retrieve documents from a collection in MongoDB, you need to use the find() method. The syntax is as follows:
db.yourCollectionName.find();
The above syntax will return all the documents from a collection in MongoDB. To understand
it, let us create a collection with some documents. The queries to create the documents
are as follows:
> db.retrieveAllStudents.insertOne({"StudentId":"STUD101","StudentName":"David","StudentAge":24});
{
"acknowledged" : true, "insertedId" : ObjectId("5c6bf5cf68174aae23f5ef4e")
}
> db.retrieveAllStudents.insertOne({"StudentId":"STUD102","StudentName":"Carol","StudentAge":22});
{
"acknowledged" : true, "insertedId" : ObjectId("5c6bf5e968174aae23f5ef4f")
}
> db.retrieveAllStudents.insertOne({"StudentId":"STUD103","StudentName":"Maxwell","StudentAge":25});
{
"acknowledged" : true, "insertedId" : ObjectId("5c6bf5f768174aae23f5ef50")
}
> db.retrieveAllStudents.insertOne({"StudentId":"STUD104","StudentName":"Bob","StudentAge":23});
{
"acknowledged" : true, "insertedId" : ObjectId("5c6bf60868174aae23f5ef51")
}
> db.retrieveAllStudents.insertOne({"StudentId":"STUD105","StudentName":"Sam","StudentAge":27});
{
"acknowledged" : true, "insertedId" : ObjectId("5c6bf61b68174aae23f5ef52")
}
Now you can use the above syntax to retrieve all the documents from the collection with
the help of the find() method. The query is as follows:
> db.retrieveAllStudents.find();
The following is the output:
{ "_id" : ObjectId("5c6bf5cf68174aae23f5ef4e"), "StudentId" : "STUD101", "StudentName" : "David", "StudentAge" : 24 }
{ "_id" : ObjectId("5c6bf5e968174aae23f5ef4f"), "StudentId" : "STUD102", "StudentName" : "Carol", "StudentAge" : 22 }
{ "_id" : ObjectId("5c6bf5f768174aae23f5ef50"), "StudentId" : "STUD103", "StudentName" : "Maxwell", "StudentAge" : 25 }
{ "_id" : ObjectId("5c6bf60868174aae23f5ef51"), "StudentId" : "STUD104", "StudentName" : "Bob", "StudentAge" : 23 }
{ "_id" : ObjectId("5c6bf61b68174aae23f5ef52"), "StudentId" : "STUD105", "StudentName" : "Sam", "StudentAge" : 27 }
For properly formatted output, use pretty() with find(). The query is as follows:
> db.retrieveAllStudents.find().pretty();
The following is the output:
{
   "_id" : ObjectId("5c6bf5cf68174aae23f5ef4e"),
   "StudentId" : "STUD101",
   "StudentName" : "David",
   "StudentAge" : 24
}
{
   "_id" : ObjectId("5c6bf5e968174aae23f5ef4f"),
   "StudentId" : "STUD102",
   "StudentName" : "Carol",
   "StudentAge" : 22
}
{
   "_id" : ObjectId("5c6bf5f768174aae23f5ef50"),
   "StudentId" : "STUD103",
   "StudentName" : "Maxwell",
   "StudentAge" : 25
}
{
   "_id" : ObjectId("5c6bf60868174aae23f5ef51"),
   "StudentId" : "STUD104",
   "StudentName" : "Bob",
   "StudentAge" : 23
}
{
   "_id" : ObjectId("5c6bf61b68174aae23f5ef52"),
   "StudentId" : "STUD105",
   "StudentName" : "Sam",
   "StudentAge" : 27
}
If you want to retrieve a single document on the basis of some condition, then you can use the
following query. Here, we are retrieving the document with StudentName as “Maxwell”:
> db.retrieveAllStudents.find({"StudentName":"Maxwell"}).pretty();
The following is the output:
{
   "_id" : ObjectId("5c6bf5f768174aae23f5ef50"),
   "StudentId" : "STUD103",
   "StudentName" : "Maxwell",
   "StudentAge" : 25
}
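The matching rule behind such a query — every key/value pair in the query document must equal the corresponding field of a stored document — can be sketched in plain Python (a toy helper for illustration, not part of MongoDB):

```python
def find(collection, query=None):
    """Return documents whose fields match every key/value pair in query."""
    query = query or {}
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

# A miniature stand-in for the retrieveAllStudents collection
students = [
    {"StudentId": "STUD101", "StudentName": "David", "StudentAge": 24},
    {"StudentId": "STUD103", "StudentName": "Maxwell", "StudentAge": 25},
]

# An empty query matches everything, like db.collection.find()
print(len(find(students)))                        # 2
print(find(students, {"StudentName": "Maxwell"})) # only Maxwell's document
```

Calling find with no query returns every document, mirroring the behaviour of an empty find() call in the mongo shell.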
|
[
{
"code": null,
"e": 1171,
"s": 1062,
"text": "To retrieve documents from a collection in MongoDB, you need to use find() method. The syntax is as follows:"
},
{
"code": null,
"e": 1201,
"s": 1171,
"text": "db.yourCollectionName.find();"
},
{
"code": null,
"e": 1399,
"s": 1201,
"text": "The above syntax will return all the documents from a collection in MongoDB. To understand\nthe above syntax, let us create a collection with documents. The query to create documents are\nas follows:"
},
{
"code": null,
"e": 2302,
"s": 1399,
"text": "> db.retrieveAllStudents.insertOne({\"StudentId\":\"STUD101\",\"StudentName\":\"David\",\"StudentAge\":24});\n{\n \"acknowledged\" : true, \"insertedId\" : ObjectId(\"5c6bf5cf68174aae23f5ef4e\")\n}\n> db.retrieveAllStudents.insertOne({\"StudentId\":\"STUD102\",\"StudentName\":\"Carol\",\"StudentAge\":22});\n{\n \"acknowledged\" : true, \"insertedId\" : ObjectId(\"5c6bf5e968174aae23f5ef4f\")\n}\n> db.retrieveAllStudents.insertOne({\"StudentId\":\"STUD103\",\"StudentName\":\"Maxwell\",\"StudentAge\":25});\n{\n \"acknowledged\" : true, \"insertedId\" : ObjectId(\"5c6bf5f768174aae23f5ef50\")\n}\n> db.retrieveAllStudents.insertOne({\"StudentId\":\"STUD104\",\"StudentName\":\"Bob\",\"StudentAge\":23});\n{\n \"acknowledged\" : true, \"insertedId\" : ObjectId(\"5c6bf60868174aae23f5ef51\")\n}\n> db.retrieveAllStudents.insertOne({\"StudentId\":\"STUD105\",\"StudentName\":\"Sam\",\"StudentAge\":27});\n{\n \"acknowledged\" : true, \"insertedId\" : ObjectId(\"5c6bf61b68174aae23f5ef52\")\n}"
},
{
"code": null,
"e": 2449,
"s": 2302,
"text": "Now you can use the above syntax in order to retrieve all the documents from a collection with\nthe help of find() method. The query is as follows:"
},
{
"code": null,
"e": 2482,
"s": 2449,
"text": "> db.retrieveAllStudents.find();"
},
{
"code": null,
"e": 2511,
"s": 2482,
"text": "The following is the output:"
},
{
"code": null,
"e": 3119,
"s": 2511,
"text": "{ \"_id\" : ObjectId(\"5c6bf5cf68174aae23f5ef4e\"), \"StudentId\" : \"STUD-101\", \"StudentName\" :\n \"David\", \"StudentAge\" : 24 }\n{ \"_id\" : ObjectId(\"5c6bf5e968174aae23f5ef4f\"), \"StudentId\" : \"STUD-102\", \"StudentName\" :\n \"Carol\", \"StudentAge\" : 22 }\n{ \"_id\" : ObjectId(\"5c6bf5f768174aae23f5ef50\"), \"StudentId\" : \"STUD-103\", \"StudentName\" :\n \"Maxwell\", \"StudentAge\" : 25 }\n{ \"_id\" : ObjectId(\"5c6bf60868174aae23f5ef51\"), \"StudentId\" : \"STUD-104\", \"StudentName\" :\n \"Bob\", \"StudentAge\" : 23 }\n{ \"_id\" : ObjectId(\"5c6bf61b68174aae23f5ef52\"), \"StudentId\" : \"STUD-105\", \"StudentName\" :\n \"Sam\", \"StudentAge\" : 27 }"
},
{
"code": null,
"e": 3201,
"s": 3119,
"text": "For a proper formatted output, use pretty() with find(). The query is as follows:"
},
{
"code": null,
"e": 3242,
"s": 3201,
"text": "> db.retriveAllStudents.find().pretty();"
},
{
"code": null,
"e": 3271,
"s": 3242,
"text": "The following is the output:"
},
{
"code": null,
"e": 3924,
"s": 3271,
"text": "{\n \"_id\" : ObjectId(\"5c6bf5cf68174aae23f5ef4e\"),\n \"StudentId\" : \"STUD-101\",\n \"StudentName\" : \"David\",\n \"StudentAge\" : 24\n}\n{\n \"_id\" : ObjectId(\"5c6bf5e968174aae23f5ef4f\"),\n \"StudentId\" : \"STUD-102\",\n \"StudentName\" : \"Carol\",\n \"StudentAge\" : 22\n}\n{\n \"_id\" : ObjectId(\"5c6bf5f768174aae23f5ef50\"),\n \"StudentId\" : \"STUD-103\",\n \"StudentName\" : \"Maxwell\",\n \"StudentAge\" : 25\n}\n{\n \"_id\" : ObjectId(\"5c6bf60868174aae23f5ef51\"),\n \"StudentId\" : \"STUD-104\",\n \"StudentName\" : \"Bob\",\n \"StudentAge\" : 23\n}\n{\n \"_id\" : ObjectId(\"5c6bf61b68174aae23f5ef52\"),\n \"StudentId\" : \"STUD-105\",\n \"StudentName\" : \"Sam\",\n \"StudentAge\" : 27\n}"
},
{
"code": null,
"e": 4104,
"s": 3924,
"text": "If you want to retrieve a single document on the basis of some condition, then you can use the\nfollowing query. Here, we are retrieving the document with StudentName as “Maxwell”:"
},
{
"code": null,
"e": 4170,
"s": 4104,
"text": "> db.retriveAllStudents.find({\"StudentName\":\"Maxwell\"}).pretty();"
},
{
"code": null,
"e": 4199,
"s": 4170,
"text": "The following is the output:"
},
{
"code": null,
"e": 4332,
"s": 4199,
"text": "{\n \"_id\" : ObjectId(\"5c6bf5f768174aae23f5ef50\"),\n \"StudentId\" : \"STUD-103\",\n \"StudentName\" : \"Maxwell\",\n \"StudentAge\" : 25\n}"
}
] |
How to make a class thread-safe in Java?
|
A thread-safe class is a class that guarantees that its internal state, as well as the values returned from its methods, remain correct when invoked concurrently from multiple threads.
HashMap is a non-synchronized collection class. If we need to perform thread-safe operations on it, we must synchronize it explicitly.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.Iterator;

public class HashMapSyncExample {
    public static void main(String args[]) {
        HashMap hmap = new HashMap();
        hmap.put(2, "Raja");
        hmap.put(44, "Archana");
        hmap.put(1, "Krishna");
        hmap.put(4, "Vineet");
        hmap.put(88, "XYZ");

        Map map = Collections.synchronizedMap(hmap);
        Set set = map.entrySet();
        synchronized (map) {
            Iterator i = set.iterator();
            // Display elements
            while (i.hasNext()) {
                Map.Entry me = (Map.Entry) i.next();
                System.out.print(me.getKey() + ": ");
                System.out.println(me.getValue());
            }
        }
    }
}
In the above example, we have a HashMap with integer keys and String values. To synchronize it, we use Collections.synchronizedMap(hashmap), which returns a thread-safe map backed by the specified HashMap.
1: Krishna
2: Raja
4: Vineet
88: XYZ
44: Archana
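The lock-around-every-access idea behind Collections.synchronizedMap is language-independent. A minimal Python sketch of the same pattern (class and method names are made up for illustration):

```python
import threading

class SynchronizedMap:
    """A dict wrapper that serializes every access with a lock,
    analogous to the map returned by Collections.synchronizedMap."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:          # only one thread mutates at a time
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

m = SynchronizedMap()
threads = [threading.Thread(target=m.put, args=(i, str(i))) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print([m.get(i) for i in range(5)])   # ['0', '1', '2', '3', '4']
```

All five concurrent writes survive because each put acquires the lock before touching the shared dict.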
|
[
{
"code": null,
"e": 1244,
"s": 1062,
"text": "A thread-safe class is a class that guarantees the internal state of the class as well as returned values from methods, are correct while invoked concurrently from multiple threads."
},
{
"code": null,
"e": 1395,
"s": 1244,
"text": "The HashMap is a non-synchronized collection class. If we need to perform thread-safe operations on it then we must need to synchronize it explicitly."
},
{
"code": null,
"e": 2171,
"s": 1395,
"text": "import java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Iterator;\n\npublic class HashMapSyncExample {\n public static void main(String args[]) {\n\n HashMap hmap = new HashMap();\n hmap.put(2, \"Raja\");\n hmap.put(44, \"Archana\");\n hmap.put(1, \"Krishna\");\n hmap.put(4, \"Vineet\");\n hmap.put(88, \"XYZ\");\n\n Map map= Collections.synchronizedMap(hmap);\n Set set = map.entrySet();\n synchronized(map){\n Iterator i = set.iterator();\n // Display elements\n while(i.hasNext()) {\n Map.Entry me = (Map.Entry)i.next();\n System.out.print(me.getKey() + \": \");\n System.out.println(me.getValue());\n }\n }\n }\n}"
},
{
"code": null,
"e": 2404,
"s": 2171,
"text": "In the above example, we have a HashMap it is having integer keys and String type values. In order to synchronize it we are using Collections.synchronizedMap(hashmap). It returns a thread-safe map backed up by the specified HashMap."
},
{
"code": null,
"e": 2453,
"s": 2404,
"text": "1: Krishna\n2: Raja\n4: Vineet\n88: XYZ\n44: Archana"
}
] |
_Noreturn function specifier in C - GeeksforGeeks
|
29 Mar, 2019
After the removal of the “noreturn” keyword, the C11 standard (the final draft) of the C programming language introduced a new “_Noreturn” function specifier, which specifies that the function does not return to its caller. If the programmer tries to return a value from a function declared with the _Noreturn specifier, the compiler reports it at compile time.
// C program to show how a _Noreturn type
// function behaves if it has a return statement.
#include <stdio.h>
#include <stdlib.h>

// With return value
_Noreturn void view()
{
    return 10;
}

int main(void)
{
    printf("Ready to begin...\n");
    view();
    printf("NOT over till now\n");
    return 0;
}
Output:
Ready to begin...
After that, abnormal termination of the program.
compiler warning: [Warning] function declared 'noreturn' has a 'return' statement
// C program to illustrate the working
// of a _Noreturn type function.
#include <stdio.h>
#include <stdlib.h>

// Nothing to return
_Noreturn void show()
{
    printf("BYE BYE");
}

int main(void)
{
    printf("Ready to begin...\n");
    show();
    printf("NOT over till now\n");
    return 0;
}
Output:
Ready to begin...
BYE BYE
Reference: http://en.cppreference.com/w/c/language/_Noreturn
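For comparison, Python expresses the same intent with the typing.NoReturn annotation for functions that never return normally (they always raise or exit); unlike C's _Noreturn it is only a hint for static type checkers, not enforced by the interpreter. A small sketch (function names are hypothetical):

```python
import sys
from typing import NoReturn

def fatal(message: str) -> NoReturn:
    """Never returns to the caller: it always exits the process."""
    sys.exit(message)

def parse_age(text: str) -> int:
    if not text.isdigit():
        fatal("not a number")   # a checker knows execution stops here
    return int(text)

print(parse_age("42"))  # 42
```

Because fatal is annotated NoReturn, a type checker treats the branch after it as unreachable, so parse_age needs no explicit return in the error path.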
This article is contributed by Bishal Kumar Dubey.
|
[
{
"code": null,
"e": 24232,
"s": 24204,
"text": "\n29 Mar, 2019"
},
{
"code": null,
"e": 24632,
"s": 24232,
"text": "After the removal of “noreturn” keyword, C11 standard (known as final draft) of C programming language introduce a new “_Noreturn” function specifier that specify that the function does not return to the function that it was called from. If the programmer try to return any value from that function which is declared as _Noreturn type, then the compiler automatically generates a compile time error."
},
{
"code": "// C program to show how _Noreturn type // function behave if it has return statement.#include <stdio.h>#include <stdlib.h> // With return value_Noreturn void view(){ return 10;}int main(void){ printf(\"Ready to begin...\\n\"); view(); printf(\"NOT over till now\\n\"); return 0;}",
"e": 24925,
"s": 24632,
"text": null
},
{
"code": null,
"e": 24933,
"s": 24925,
"text": "Output:"
},
{
"code": null,
"e": 25075,
"s": 24933,
"text": "Ready to begin...\nAfter that abnormal termination of program.\ncompiler error:[Warning] function declared 'noreturn' has a 'return' statement\n"
},
{
"code": "// C program to illustrate the working // of _Noreturn type function.#include <stdio.h>#include <stdlib.h> // Nothing to return_Noreturn void show(){ printf(\"BYE BYE\");}int main(void){ printf(\"Ready to begin...\\n\"); show(); printf(\"NOT over till now\\n\"); return 0;}",
"e": 25359,
"s": 25075,
"text": null
},
{
"code": null,
"e": 25367,
"s": 25359,
"text": "Output:"
},
{
"code": null,
"e": 25394,
"s": 25367,
"text": "Ready to begin...\nBYE BYE\n"
},
{
"code": null,
"e": 25455,
"s": 25394,
"text": "Reference: http://en.cppreference.com/w/c/language/_Noreturn"
},
{
"code": null,
"e": 25761,
"s": 25455,
"text": "This article is contributed by Bishal Kumar Dubey. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 25886,
"s": 25761,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 25905,
"s": 25886,
"text": "AbhinavMadheshiya1"
},
{
"code": null,
"e": 25915,
"s": 25905,
"text": "C-Library"
},
{
"code": null,
"e": 25926,
"s": 25915,
"text": "C Language"
},
{
"code": null,
"e": 25931,
"s": 25926,
"text": "Misc"
},
{
"code": null,
"e": 25936,
"s": 25931,
"text": "Misc"
},
{
"code": null,
"e": 25941,
"s": 25936,
"text": "Misc"
},
{
"code": null,
"e": 26039,
"s": 25941,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26077,
"s": 26039,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 26097,
"s": 26077,
"text": "Multithreading in C"
},
{
"code": null,
"e": 26123,
"s": 26097,
"text": "Exception Handling in C++"
},
{
"code": null,
"e": 26145,
"s": 26123,
"text": "'this' pointer in C++"
},
{
"code": null,
"e": 26186,
"s": 26145,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 26227,
"s": 26186,
"text": "Top 10 algorithms in Interview Questions"
},
{
"code": null,
"e": 26281,
"s": 26227,
"text": "vector::push_back() and vector::pop_back() in C++ STL"
},
{
"code": null,
"e": 26342,
"s": 26281,
"text": "Overview of Data Structures | Set 1 (Linear Data Structures)"
},
{
"code": null,
"e": 26376,
"s": 26342,
"text": "How to write Regular Expressions?"
}
] |
Bitcoin's Monetary Policy - GeeksforGeeks
|
03 Jan, 2020
Bitcoin‘s Monetary Policy consists of 2 main parts:
1. The Halving
2. Block Frequency
The Monetary Policy is completely controlled by Software. It’s all preprogrammed.
1. The Halving: The number of Bitcoins released into the system every 10 minutes is halved every 4 years. More precisely, the halving takes place every 210,000 blocks, which take approximately 4 years to generate (since, on average, one block is generated every 10 minutes).
When Bitcoin started (2009), the block reward was 50 Bitcoin every 10 minutes. In November 2012, Bitcoin’s 1st halving took place and the block reward (i.e., the reward for successfully mining one block into the blockchain) was cut in half, from 50 Bitcoin to 25 Bitcoin.
In July 2016, Bitcoin’s 2nd halving took place (the reward reduced to 12.5 Bitcoin), and the next halving, Bitcoin’s 3rd, will take place in May 2020. This is when the current block reward of 12.5 Bitcoin every 10 minutes will be cut in half to 6.25 Bitcoin.
This also means that over time mining becomes more difficult. As network difficulty increases over time and the reward rate drops, the actual cost of mining each Bitcoin increases, which in turn causes the trading price of each Bitcoin to increase as well. The limited supply also pushes Bitcoin prices up, as scarcity increases proportionally.
The total Bitcoin supply was capped by Satoshi Nakamoto (who created Bitcoin): only 21 million Bitcoins can ever be generated. Right now there are almost 18 million Bitcoin (85% of the total supply) in circulation.
The supply limit of 21 million Bitcoins will be reached by 2140.
2. Block Frequency: Block Frequency is how often blocks come in, each carrying the block reward, which is currently 12.5 Bitcoin per block for Bitcoin. The average block time, or block frequency, differs between cryptocurrencies. For example, the average block time for Bitcoin is 10 minutes; for Ethereum, it is 15 seconds; for Ripple, it is 3.5 seconds.
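The halving schedule above is easy to verify with a few lines of arithmetic: 210,000 blocks per era at a reward that halves each era forms a geometric series converging to the 21 million cap. A back-of-the-envelope sketch (it ignores the satoshi-level rounding the real client performs):

```python
BLOCKS_PER_ERA = 210_000   # blocks between consecutive halvings
reward = 50.0              # initial block reward in BTC

schedule = []
total = 0.0
for era in range(10):      # the first 10 eras are enough to see the trend
    schedule.append(reward)
    total += BLOCKS_PER_ERA * reward
    reward /= 2

print(schedule[:4])   # [50.0, 25.0, 12.5, 6.25]
print(total)          # already close to the 21,000,000 BTC cap
```

The infinite series 210,000 * 50 * (1 + 1/2 + 1/4 + ...) equals exactly 21,000,000, which is why the cap is reached only asymptotically (around 2140 in practice).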
|
[
{
"code": null,
"e": 24984,
"s": 24956,
"text": "\n03 Jan, 2020"
},
{
"code": null,
"e": 25036,
"s": 24984,
"text": "Bitcoin‘s Monetary Policy consists of 2 main parts:"
},
{
"code": null,
"e": 25071,
"s": 25036,
"text": "1. The Halving\n2. Block Frequency "
},
{
"code": null,
"e": 25153,
"s": 25071,
"text": "The Monetary Policy is completely controlled by Software. It’s all preprogrammed."
},
{
"code": null,
"e": 25430,
"s": 25153,
"text": "1. The Halving:The number of Bitcoins released into the system every 10 minutes is halved after every 4 years. Actually, the halving takes place after every 210000 Blocks which takes approximately 4 years to generate (as on an average one block is generated every 10 minutes)."
},
{
"code": null,
"e": 25692,
"s": 25430,
"text": "When bitcoin started (2009), the block reward was 50 Bitcoin every 10 minutes.In November 2012, Bitcoin’s 1st Halving took place and block reward(i.e reward for successfully mining one block into the Blockchain) reduced to half i.e., 25 Bitcoin from 50 Bitcoin."
},
{
"code": null,
"e": 25953,
"s": 25692,
"text": "In July 2016 Bitcoin’s 2nd halving took place(reward reduces to 12.5 Bitcoin) and the next halving which is Bitcoin’s 3rd will take place in May 2020. This is when the current block reward of 12.5 Bitcoin every 10 minutes will be cut into half to 6.25 Bitcoin."
},
{
"code": null,
"e": 26323,
"s": 25953,
"text": "This also means over time mining will become more difficult. As network difficulty increases over time & the reward rate drop, so the actual cost of mining each Bitcoin increases, which will then cause the trading price of each Bitcoin to increase as well. Also, the limited supply will cause Bitcoin prices to increase, as their scarcity also increases proportionally."
},
{
"code": null,
"e": 26539,
"s": 26323,
"text": "As total Bitcoin’s supply is set to be limited by Satoshi Nakamoto (who created Bitcoin). Only 21 million Bitcoins can be generated. Right now there is almost 18 million Bitcoin (85% of total supply) in circulation."
},
{
"code": null,
"e": 26604,
"s": 26539,
"text": "The supply limit of 21 million Bitcoins will be reached by 2140."
},
{
"code": null,
"e": 26958,
"s": 26604,
"text": "2. Block Frequency:Block Frequency is defined as, how often the blocks come in & break the reward which is now 12.5 Bitcoin per block for Bitcoin.The Average Block-Time or Block Frequency is different for different cryptocurrencies.For example, Average Block-Time for Bitcoin is 10 minutes. For Ethereum, it is 15 seconds. For Ripple, it is 3.5 seconds."
},
{
"code": null,
"e": 26969,
"s": 26958,
"text": "BlockChain"
},
{
"code": null,
"e": 26975,
"s": 26969,
"text": "GBlog"
},
{
"code": null,
"e": 27073,
"s": 26975,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27115,
"s": 27073,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27140,
"s": 27115,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 27172,
"s": 27140,
"text": "A Freshers Guide To Programming"
},
{
"code": null,
"e": 27216,
"s": 27172,
"text": "Top 10 Angular Libraries For Web Developers"
},
{
"code": null,
"e": 27252,
"s": 27216,
"text": "Top 5 Python Libraries For Big Data"
},
{
"code": null,
"e": 27298,
"s": 27252,
"text": "Top 10 Programming Languages to Learn in 2022"
},
{
"code": null,
"e": 27332,
"s": 27298,
"text": "ML | Underfitting and Overfitting"
},
{
"code": null,
"e": 27376,
"s": 27332,
"text": "Virtualization In Cloud Computing and Types"
},
{
"code": null,
"e": 27434,
"s": 27376,
"text": "What is web socket and how it is different from the HTTP?"
}
] |
gdbtui - Unix, Linux Command
|
GDB can do four main kinds of things (plus other things in support of
these) to help you catch bugs in the act:
You can use GDB to debug programs written in C, C++, and Modula-2.
Fortran support will be added when a GNU Fortran compiler is ready.
GDB is invoked with the shell command gdb. Once started, it reads
commands from the terminal until you tell it to exit with the GDB
command quit. You can get online help from gdb itself
by using the command help.
You can run
gdb with no arguments or options; but the most
usual way to start GDB is with one argument or two, specifying an
executable program as the argument:
gdb program
You can also start with both an executable program and a core file specified:
gdb program core
You can, instead, specify a process ID as a second argument, if you want
to debug a running process:
gdb program 1234
would attach GDB to process 1234 (unless you also have a file
named ‘1234’; GDB does check for a core file first).
Here are some of the most frequently needed GDB commands:
All the options and command line arguments you give are processed
in sequential order. The order makes a difference when the
‘-x’ option is used.
Batch mode may be useful for running GDB as a filter, for example to
download and run a program on another computer; in order to make this
more useful, the message
Program exited normally.
(which is ordinarily issued whenever a program running under GDB control
terminates) is not issued when running in batch mode.
$ gdbtui gdb_example
$ gdb gdb_example -tui
|
[
{
"code": null,
"e": 10691,
"s": 10577,
"text": "\nGDB can do four main kinds of things (plus other things in support of\nthese) to help you catch bugs in the act:\n"
},
{
"code": null,
"e": 10836,
"s": 10699,
"text": "\nYou can use GDB to debug programs written in C, C++, and Modula-2.\nFortran support will be added when a GNU Fortran compiler is ready.\n"
},
{
"code": null,
"e": 11061,
"s": 10836,
"text": "\nGDB is invoked with the shell command \ngdb . Once started, it reads\ncommands from the terminal until you tell it to exit with the GDB\ncommand \nquit . You can get online help from \ngdb itself\nby using the command \nhelp .\n"
},
{
"code": null,
"e": 11226,
"s": 11061,
"text": "\nYou can run \ngdb with no arguments or options; but the most\nusual way to start GDB is with one argument or two, specifying an\nexecutable program as the argument:\n"
},
{
"code": null,
"e": 11240,
"s": 11226,
"text": "\ngdb program\n"
},
{
"code": null,
"e": 11322,
"s": 11242,
"text": "\nYou can also start with both an executable program and a core file specified:\n"
},
{
"code": null,
"e": 11341,
"s": 11322,
"text": "\ngdb program core\n"
},
{
"code": null,
"e": 11446,
"s": 11343,
"text": "\nYou can, instead, specify a process ID as a second argument, if you want\nto debug a running process:\n"
},
{
"code": null,
"e": 11465,
"s": 11446,
"text": "\ngdb program 1234\n"
},
{
"code": null,
"e": 11588,
"s": 11467,
"text": "\nwould attach GDB to process \n1234 (unless you also have a file\nnamed ‘\n1234 ’; GDB does check for a core file first).\n"
},
{
"code": null,
"e": 11648,
"s": 11588,
"text": "\nHere are some of the most frequently needed GDB commands:\n"
},
{
"code": null,
"e": 11799,
"s": 11648,
"text": "\nAll the options and command line arguments you give are processed\nin sequential order. The order makes a difference when the\n‘\n-x ’ option is used.\n"
},
{
"code": null,
"e": 11989,
"s": 11823,
"text": "\nBatch mode may be useful for running GDB as a filter, for example to\ndownload and run a program on another computer; in order to make this\nmore useful, the message\n"
},
{
"code": null,
"e": 12016,
"s": 11989,
"text": "\nProgram exited normally.\n"
},
{
"code": null,
"e": 12147,
"s": 12018,
"text": "\n(which is ordinarily issued whenever a program running under GDB control\nterminates) is not issued when running in batch mode.\n"
},
{
"code": null,
"e": 12177,
"s": 12155,
"text": "$ gdbtui gdb_example\n"
},
{
"code": null,
"e": 12201,
"s": 12177,
"text": "$ gdb gdb_example -tui\n"
},
{
"code": null,
"e": 12236,
"s": 12201,
"text": "\n 129 Lectures \n 23 hours \n"
},
{
"code": null,
"e": 12264,
"s": 12236,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 12298,
"s": 12264,
"text": "\n 5 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 12315,
"s": 12298,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 12348,
"s": 12315,
"text": "\n 35 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 12359,
"s": 12348,
"text": " Pradeep D"
},
{
"code": null,
"e": 12394,
"s": 12359,
"text": "\n 41 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 12410,
"s": 12394,
"text": " Musab Zayadneh"
},
{
"code": null,
"e": 12443,
"s": 12410,
"text": "\n 46 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 12455,
"s": 12443,
"text": " GUHARAJANM"
},
{
"code": null,
"e": 12487,
"s": 12455,
"text": "\n 6 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 12495,
"s": 12487,
"text": " Uplatz"
},
{
"code": null,
"e": 12502,
"s": 12495,
"text": " Print"
},
{
"code": null,
"e": 12513,
"s": 12502,
"text": " Add Notes"
}
] |
Creating a Cell at specific position in Excel file using Java - GeeksforGeeks
|
14 Apr, 2022
Apache POI is an open-source Java library to create and manipulate various file formats based on Microsoft Office. Using POI, one should be able to perform create, modify, and display/read operations on these file formats. In particular, it can be used to create a cell in a given Excel file at a specific position. Apache POI is an API provided by the Apache foundation.
Create a Maven project (Maven is a build automation tool used primarily for Java projects) in Eclipse, or a Java project with the POI library installed
Add the following maven dependency in the pom.xml file
Write Java code in the java resource folder
Example
Java
// Java Program to Demonstrate Creation Of Cell
// At Specific Position in Excel File

// Importing required classes
import java.io.*;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;

// Class CreateCellAtSpecificPosition
public class GFG {

    // Main driver method
    public static void main(String[] args)
        throws FileNotFoundException, IOException
    {
        // Creating a workbook instance
        Workbook wb = new HSSFWorkbook();

        // Creating output file
        OutputStream os = new FileOutputStream("Geeks.xlsx");

        // Creating a sheet using predefined class
        // provided by Apache POI
        Sheet sheet = wb.createSheet("Company Preparation");

        // Creating a row at a specific position
        // Specific row number
        Row row = sheet.createRow(1);

        // Specific cell number
        Cell cell = row.createCell(1);

        // Putting a value at the specific position
        cell.setCellValue("Geeks");

        // Finding the row and column index of the given cell
        int rowIndex = cell.getRowIndex();
        int columnIndex = cell.getColumnIndex();

        // Writing the content to the workbook
        wb.write(os);

        // Printing the row and column index of the cell created
        System.out.println("Given cell is created at "
                           + "(" + rowIndex + ","
                           + columnIndex + ")");
    }
}
Output: On console
Given cell is created at (1,1)
Output: Inside file named ‘Geeks.xlsx’
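As an aside on “specific position”: the zero-based (rowIndex, columnIndex) pair printed above maps mechanically to Excel’s own A1-style references. A small Python sketch of that conversion (the helper name is made up for illustration):

```python
def cell_ref(row_index: int, column_index: int) -> str:
    """Convert zero-based (row, column) indices to an A1-style reference."""
    letters = ""
    n = column_index + 1            # switch to 1-based for the letter math
    while n > 0:
        n, rem = divmod(n - 1, 26)  # base-26 with digits A..Z
        letters = chr(ord('A') + rem) + letters
    return f"{letters}{row_index + 1}"

print(cell_ref(1, 1))    # the cell created above: B2
print(cell_ref(0, 0))    # A1
print(cell_ref(0, 26))   # AA1
```

So the cell created at (1, 1) by the Java program is what Excel displays as B2.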
|
[
{
"code": null,
"e": 24558,
"s": 24530,
"text": "\n14 Apr, 2022"
},
{
"code": null,
"e": 24922,
"s": 24558,
"text": "Apache POI is an open-source java library to create and manipulate various file formats based on Microsoft Office. Using POI, one should be able to perform create, modify and display/read operations on the following file formats/ it can be used to create a cell in a Given Excel file at a specific position. Apache POI is an API provided by the Apache foundation."
},
{
"code": null,
"e": 25164,
"s": 24922,
"text": "Create a maven project(Maven is a build automation tool used primarily for Java projects) in eclipse or a Java project with the POI library installedAdd the following maven dependency in the pom.xml fileWrite java code in javaresource folder"
},
{
"code": null,
"e": 25314,
"s": 25164,
"text": "Create a maven project(Maven is a build automation tool used primarily for Java projects) in eclipse or a Java project with the POI library installed"
},
{
"code": null,
"e": 25369,
"s": 25314,
"text": "Add the following maven dependency in the pom.xml file"
},
{
"code": null,
"e": 25408,
"s": 25369,
"text": "Write java code in javaresource folder"
},
{
"code": null,
"e": 25416,
"s": 25408,
"text": "Example"
},
{
"code": null,
"e": 25421,
"s": 25416,
"text": "Java"
},
{
"code": "// Java Program to Demonstrate Creation Of Cell// At Specific Position in Excel File // Importing required classesimport java.io.*;import org.apache.poi.hssf.usermodel.HSSFWorkbook;import org.apache.poi.ss.usermodel.Cell;import org.apache.poi.ss.usermodel.Row;import org.apache.poi.ss.usermodel.Sheet;import org.apache.poi.ss.usermodel.Workbook; // Class// CreateCellAtSpecificPositionpublic class GFG { // Main driver method public static void main(String[] args) throws FileNotFoundException, IOException { // Creating a workbook instances Workbook wb = new HSSFWorkbook(); // Creating output file OutputStream os = new FileOutputStream(\"Geeks.xlsx\"); // Creating a sheet using predefined class // provided by Apache POI Sheet sheet = wb.createSheet(\"Company Preparation\"); // Creating a row at specific position // using predefined class provided by Apache POI // Specific row number Row row = sheet.createRow(1); // Specific cell number Cell cell = row.createCell(1); // putting value at specific position cell.setCellValue(\"Geeks\"); // Finding index value of row and column of give // cell int rowIndex = cell.getRowIndex(); int columnIndex = cell.getColumnIndex(); // Writing the content to Workbook wb.write(os); // Printing the row and column index of cell created System.out.println(\"Given cell is created at \" + \"(\" + rowIndex + \",\" + columnIndex + \")\"); }}",
"e": 27043,
"s": 25421,
"text": null
},
{
"code": null,
"e": 27063,
"s": 27043,
"text": "Output: On console "
},
{
"code": null,
"e": 27094,
"s": 27063,
"text": "Given cell is created at (1,1)"
},
{
"code": null,
"e": 27135,
"s": 27094,
"text": "Output: Inside file named ‘Geeks.xlsx’ "
}
] |
How to show all the tables present in the database and server in MySQL using Python?
|
We may sometimes require to get the list of all the tables present in our database. This can be done by using the SHOW TABLES command.
The SHOW TABLES command is used to display the table names in a database as well as the server.
To show the tables present in a database −
SHOW TABLES
The above statement when executed using the cursor object returns the names of the tables present in our database.
To show the tables present in a server
SELECT table_name FROM information_schema.tables
import MySQL connector
establish connection with the connector using connect()
create the cursor object using cursor() method
create a query using the appropriate mysql statements
execute the SQL query using execute() method
close the connection
Example
import mysql.connector
db=mysql.connector.connect(host="your host", user="your username", password="your_password",database="database_name")
cursor=db.cursor()
cursor.execute("SHOW TABLES")
for table_name in cursor:
print(table_name)
Example
import mysql.connector
db=mysql.connector.connect(host="your host", user="your username", password="your_password",database="database_name")
cursor=db.cursor()
cursor.execute("SELECT table_name FROM information_schema.tables")
for table_name in cursor:
print(table_name)
The above code examples output the list of tables present in your database or the server.
Employees
Students
MyTable
|
[
{
"code": null,
"e": 1197,
"s": 1062,
"text": "We may sometimes require to get the list of all the tables present in our database. This can be done by using the SHOW TABLES command."
},
{
"code": null,
"e": 1293,
"s": 1197,
"text": "The SHOW TABLES command is used to display the table names in a database as well as the server."
},
{
"code": null,
"e": 1336,
"s": 1293,
"text": "To show the tables present in a database −"
},
{
"code": null,
"e": 1348,
"s": 1336,
"text": "SHOW TABLES"
},
{
"code": null,
"e": 1463,
"s": 1348,
"text": "The above statement when executed using the cursor object returns the names of the tables present in our database."
},
{
"code": null,
"e": 1502,
"s": 1463,
"text": "To show the tables present in a server"
},
{
"code": null,
"e": 1551,
"s": 1502,
"text": "SELECT table_name FROM information_schema.tables"
},
{
"code": null,
"e": 1574,
"s": 1551,
"text": "import MySQL connector"
},
{
"code": null,
"e": 1597,
"s": 1574,
"text": "import MySQL connector"
},
{
"code": null,
"e": 1653,
"s": 1597,
"text": "establish connection with the connector using connect()"
},
{
"code": null,
"e": 1709,
"s": 1653,
"text": "establish connection with the connector using connect()"
},
{
"code": null,
"e": 1756,
"s": 1709,
"text": "create the cursor object using cursor() method"
},
{
"code": null,
"e": 1803,
"s": 1756,
"text": "create the cursor object using cursor() method"
},
{
"code": null,
"e": 1857,
"s": 1803,
"text": "create a query using the appropriate mysql statements"
},
{
"code": null,
"e": 1911,
"s": 1857,
"text": "create a query using the appropriate mysql statements"
},
{
"code": null,
"e": 1956,
"s": 1911,
"text": "execute the SQL query using execute() method"
},
{
"code": null,
"e": 2001,
"s": 1956,
"text": "execute the SQL query using execute() method"
},
{
"code": null,
"e": 2022,
"s": 2001,
"text": "close the connection"
},
{
"code": null,
"e": 2043,
"s": 2022,
"text": "close the connection"
},
{
"code": null,
"e": 2051,
"s": 2043,
"text": "Example"
},
{
"code": null,
"e": 2292,
"s": 2051,
"text": "import mysql.connector\n\ndb=mysql.connector.connect(host=\"your host\", user=\"your username\", password=\"your_password\",database=\"database_name\")\n\ncursor=db.cursor()\n\ncursor.execute(\"SHOW TABLES\")\n\nfor table_name in cursor:\n print(table_name)"
},
{
"code": null,
"e": 2300,
"s": 2292,
"text": "Example"
},
{
"code": null,
"e": 2578,
"s": 2300,
"text": "import mysql.connector\n\ndb=mysql.connector.connect(host=\"your host\", user=\"your username\", password=\"your_password\",database=\"database_name\")\n\ncursor=db.cursor()\n\ncursor.execute(\"SELECT table_name FROM information_schema.tables\")\n\nfor table_name in cursor:\n print(table_name)"
},
{
"code": null,
"e": 2661,
"s": 2578,
"text": "The above codes output the list of tables present in your database or the server ."
},
{
"code": null,
"e": 2688,
"s": 2661,
"text": "Employees\nStudents\nMyTable"
}
] |
Position of robot after given movements in C++
|
In this problem, we are given a robot that can move in all four directions, one step at a time. The directions are up(‘U’), down(‘D’), left(‘L’), right(‘R’). And we are given a string that contains the initials of the directions of the moves. Our task is to print the final position of the robot, given that the initial position of the robot is (0,0).
Let’s take an example to understand the problem
Input − input: ‘LDRRUL’
Output − (0, 0)
Explanation −
L (left) : (0,0) -> (-1,0)
D (down) : (-1,0) -> (-1, -1)
R (right) : (-1, -1) -> (0, -1)
R (right) : (0, -1) -> (1, -1)
U(up) : (1, -1) -> (1, 0)
L(left) : (1, 0) -> (0, 0)
To solve this problem, we will count the total moves in the x-axis and the y-axis direction. For the x-coordinate, increase the count for a right move and decrease the count for a left move. For the y-coordinate, increase the count for an up move and decrease the count for a down move.
Program to show the implementation of our solution
#include <iostream>
#include <string>
using namespace std;
void robotMoved(string move) {
   int xAxis = 0, yAxis = 0;
   int l = move.size();
   for (int i = 0; i < l; i++) {
      if (move[i]=='U')
         yAxis++;
      else if (move[i]=='D')
         yAxis--;
      else if (move[i]=='L')
         xAxis--;
      else if (move[i]=='R')
         xAxis++;
   }
   cout<<"Final Position of the robot is : ("<<xAxis<<", "<<yAxis<<")"<<endl;
}
int main() {
   string move="URLLDDRRUDUDDRU";
   robotMoved(move);
   return 0;
}
Final Position of the robot is : (2, -1)
|
[
{
"code": null,
"e": 1397,
"s": 1062,
"text": "In this problem, we are given a robot that moves in all four directions but only one move. The directions are up(‘U’), down(‘D’), left(‘L’), right(‘R’). And we are given a string that contains initials of directions of the number. Our task is to print the final position of the robot, given the initial position of the robot is (0,0)."
},
{
"code": null,
"e": 1445,
"s": 1397,
"text": "Let’s take an example to understand the problem"
},
{
"code": null,
"e": 1469,
"s": 1445,
"text": "Input − input: ‘LDRRUL’"
},
{
"code": null,
"e": 1485,
"s": 1469,
"text": "Output − (0, 0)"
},
{
"code": null,
"e": 1499,
"s": 1485,
"text": "Explanation −"
},
{
"code": null,
"e": 1672,
"s": 1499,
"text": "L (left) : (0,0) -> (-1,0)\nD (down) : (-1,0) -> (-1, -1)\nR (right) : (-1, -1) -> (0, -1)\nR (right) : (0, -1) -> (1, -1)\nU(up) : (1, -1) -> (1, 0)\nL(left) : (1, 0) -> (0, 0)"
},
{
"code": null,
"e": 1938,
"s": 1672,
"text": "To solve this problem, we will count the total moves in the x-axis and the y-axis direction. For x-coordinate, increase the count for Right move and decrease count for a left move. For y-coordinate, increase the count for the up move and down count for a left move."
},
{
"code": null,
"e": 1989,
"s": 1938,
"text": "Program to show the implementation of our solution"
},
{
"code": null,
"e": 2000,
"s": 1989,
"text": " Live Demo"
},
{
"code": null,
"e": 2519,
"s": 2000,
"text": "#include <iostream>\n#include <string.h>\nusing namespace std;\nvoid robotMoved(string move) {\n int xAxis, yAxis;\n int l=move.size();\n for (int i = 0; i < l; i++) {\n if (move[i]=='U')\n yAxis++;\n else if (move[i]=='D')\n yAxis--;\n else if (move[i]=='L')\n xAxis--;\n else if (move[i]=='R')\n xAxis++;\n }\n cout<<\"Final Position of the robot is : (\"<<xAxis<<\", \"<<yAxis<<\")\"<<endl;\n}\nint main() {\n string move=\"URLLDDRRUDUDDRU\";\n robotMoved(move);\n return 0;\n}"
},
{
"code": null,
"e": 2572,
"s": 2519,
"text": "Final Position of the robot is : (32744, -274873553)"
}
] |
Python 3 - String title() Method
|
The title() method returns a copy of the string in which first characters of all the words are capitalized.
Following is the syntax for title() method −
str.title();
NA
This method returns a copy of the string in which first characters of all the words are capitalized.
The following example shows the usage of title() method.
#!/usr/bin/python3
str = "this is string example....wow!!!"
print (str.title())
When we run above program, it produces the following result −
This Is String Example....Wow!!!
|
[
{
"code": null,
"e": 2448,
"s": 2340,
"text": "The title() method returns a copy of the string in which first characters of all the words are capitalized."
},
{
"code": null,
"e": 2493,
"s": 2448,
"text": "Following is the syntax for title() method −"
},
{
"code": null,
"e": 2507,
"s": 2493,
"text": "str.title();\n"
},
{
"code": null,
"e": 2510,
"s": 2507,
"text": "NA"
},
{
"code": null,
"e": 2611,
"s": 2510,
"text": "This method returns a copy of the string in which first characters of all the words are capitalized."
},
{
"code": null,
"e": 2668,
"s": 2611,
"text": "The following example shows the usage of title() method."
},
{
"code": null,
"e": 2749,
"s": 2668,
"text": "#!/usr/bin/python3\n\nstr = \"this is string example....wow!!!\"\nprint (str.title())"
},
{
"code": null,
"e": 2811,
"s": 2749,
"text": "When we run above program, it produces the following result −"
},
{
"code": null,
"e": 2845,
"s": 2811,
"text": "This Is String Example....Wow!!!\n"
}
] |
How to set time zone in a JSP?
|
The <fmt:setTimeZone> tag is used to copy a time zone object into the specified scoped variable.
The <fmt:setTimeZone> tag has the following attributes −
value − The time zone to apply (required)
var − Name of the scoped variable that stores the new time zone (optional)
scope − Scope of the variable that stores the new time zone (optional; defaults to page)
<%@ taglib uri = "http://java.sun.com/jsp/jstl/core" prefix = "c" %>
<%@ taglib uri = "http://java.sun.com/jsp/jstl/fmt" prefix = "fmt" %>
<html>
<head>
<title>JSTL fmt:setTimeZone Tag</title>
</head>
<body>
<c:set var = "now" value = "<%=new java.util.Date()%>" />
<p>Date in Current Zone: <fmt:formatDate value = "${now}" type = "both" timeStyle = "long" dateStyle = "long" /></p>
<p>Change Time Zone to GMT-8</p>
<fmt:setTimeZone value = "GMT-8" />
<p>Date in Changed Zone: <fmt:formatDate value = "${now}" type = "both" timeStyle = "long" dateStyle = "long" /></p>
</body>
</html>
The above code will generate the following result −
Date in Current Zone: 23 September 2010 15:21:37 GST
Change Time Zone to GMT-8
Date in Changed Zone: 23 September 2010 03:21:37 GMT-08:00
|
[
{
"code": null,
"e": 1159,
"s": 1062,
"text": "The <fmt:setTimeZone> tag is used to copy a time zone object into the specified scoped variable."
},
{
"code": null,
"e": 1216,
"s": 1159,
"text": "The <fmt:setTimeZone> tag has the following attributes −"
},
{
"code": null,
"e": 1849,
"s": 1216,
"text": "<%@ taglib uri = \"http://java.sun.com/jsp/jstl/core\" prefix = \"c\" %>\n<%@ taglib uri = \"http://java.sun.com/jsp/jstl/fmt\" prefix = \"fmt\" %>\n<html>\n <head>\n <title>JSTL fmt:setTimeZone Tag</title>\n </head>\n <body>\n <c:set var = \"now\" value = \"<%=new java.util.Date()%>\" />\n <p>Date in Current Zone: <fmt:formatDate value = \"${now}\" type = \"both\" timeStyle = \"long\" dateStyle = \"long\" /></p>\n <p>Change Time Zone to GMT-8</p>\n <fmt:setTimeZone value = \"GMT-8\" />\n <p>Date in Changed Zone: <fmt:formatDate value = \"${now}\" type = \"both\" timeStyle = \"long\" dateStyle = \"long\" /></p>\n </body>\n</html>"
},
{
"code": null,
"e": 1901,
"s": 1849,
"text": "The above code will generate the following result −"
},
{
"code": null,
"e": 2039,
"s": 1901,
"text": "Date in Current Zone: 23 September 2010 15:21:37 GST\nChange Time Zone to GMT-8\nDate in Changed Zone: 23 September 2010 03:21:37 GMT-08:00"
}
] |
How to validate a date pattern in JavaScript?
|
To validate a date pattern in JavaScript, try to run the following code. Here, we will check for correct as well as incorrect dates to validate
<!DOCTYPE html>
<html>
<body>
<script>
function validDate(date) {
var split = date.split('/');
var date = new Date(split[2] + '/' + split[0] + '/' + split[1]);
return (date && (date.getMonth() + 1) == split[0] && date.getDate() == Number(split[1]) && date.getFullYear() == Number(split[2]));
}
document.write("Valid date: 11/11/2017 = "+validDate('11/11/2017'));
document.write("<br>Valid date: 18/18/2017 = "+validDate('18/18/2017'));
document.write("<br>Valid date: 05/09/2017 = "+validDate('05/09/2017'));
</script>
</body>
</html>
Valid date: 11/11/2017 = true
Valid date: 18/18/2017 = false
Valid date: 05/09/2017 = true
|
[
{
"code": null,
"e": 1206,
"s": 1062,
"text": "To validate a date pattern in JavaScript, try to run the following code. Here, we will check for correct as well as incorrect dates to validate"
},
{
"code": null,
"e": 1216,
"s": 1206,
"text": "Live Demo"
},
{
"code": null,
"e": 1885,
"s": 1216,
"text": "<!DOCTYPE html>\n<html>\n <body>\n \n <script>\n function validDate(date) {\n var split = date.split('/');\n var date = new Date(split[2] + '/' + split[0] + '/' + split[1]);\n return (date && (date.getMonth() + 1) == split[0] && date.getDate() == Number(split[1]) && date.getFullYear() == Number(split[2]));\n }\n document.write(\"Valid date: 11/11/2017 = \"+validDate('11/11/2017'));\n document.write(\"<br>Valid date: 18/18/2017 = \"+validDate('18/18/2017'));\n document.write(\"<br>Valid date: 05/09/2017 = \"+validDate('05/09/2017'));\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 1977,
"s": 1885,
"text": "Valid date: 11/11/2017 = true\nValid date: 18/18/2017 = false\nValid date: 05/09/2017 = true\n"
}
] |
Generating Singlish Text Messages with a LSTM Network | by Jason Yip | Towards Data Science
|
After publishing my first post on Medium, I realized how enjoyable it was to go through the whole process of conceptualizing an idea to sharing my findings and learning experience. I realized most data-related projects and solutions arise from these 2 things — 1) A problem we are trying to solve 2) An opportunity from the data that we have. Since my previous post was about solving the problem of manual work in Social Media Contests, I figured that I should look to finding data first this time.
Google recently released Google Dataset Search, a Google Scholar for datasets, and this came in handy in helping me get started. I came across a really interesting dataset from my school, The National University of Singapore SMS Corpus. It is a corpus of more than 50,000 SMS messages in Singapore English (Singlish) and it was part of a research work from the Department of Computer Science. The messages largely originated from volunteers of the study who are Singaporeans attending the University.
I thought it was such an amazing opportunity to study the language of those texts especially because I am a student myself in NUS and I speak and text in Singlish all the time. For the uninitiated, Singlish seems to be a broken version of English or a very crude slang even. It is actually very universal in Singapore despite the seeming lack of coherence and semantics. In fact, it can often be used to establish a connection and trust with Singaporeans immediately.
Singlish can also be incredibly hard to get when a single word can change the entire meaning of the message, that is also one of the reasons why it is a very efficient language.
Singlish in text messages is on another level. Besides the lack of complete sentences, the texting and Internet language just shortens it even more.
While I am no linguistics expert, I thought it could be useful to understand this by training a neural network on the corpus to generate similar text messages. I want to understand and demonstrate the reasons for choosing the final representation of our model for our text generation.
An artificial neural network models our brain and represents the neurons with nodes. The network has an input layer which takes in information, hidden layers which process the info (manipulation, computation, feature extraction) and a final output layer which generates a desired output based on the information, which is usually used to make a prediction.
The predicted output and the actual output can be very different and this is measured with a cost function which we want to minimize.
The neural net is just like a baby, he wants to learn how to speak properly (model output Yhat)
by doing a certain set of actions such as uttering, ... , shouting random things(X_1 to X_n),
some more frequently than the other (W_1 to W_n).
He attempts to say what is right by iteratively trying out different scenarios of actions in his mind (update weights).
The final set of actions he chose at that point in time is as close as possible to what his parents tell him is right (minimize cost function C).
That is why it is a form of Supervised Learning. This updating/understanding process in his mind is also known as Backpropagation.
The feedforward neural net is the first and simplest type of artificial neural network devised and it only allows signals to travel from input to output. It has no element of time. When it comes to text generation, we are trying to predict the next word given a sequence of words and we need to have a model that represents memory.
This recurring connection helps the RNN to learn the effect of previous inputs X_t-1 (a vector) along with the current input X_t (a vector) while predicting the output at time Yhat_t. This gives RNN a sense of time. It allows the baby to learn from past scenarios when he got scolded and avoids making the same mistakes.
Each output (h) from a state (blue) is the activation function of the output from the previous state (h_t-1) and the current input vector (X_t). These outputs h_1 to h_t of the first layer will then be fed as an input into the next layer as it goes deep into the RNN.
This will allow us to predict the next word given the context of a sentence.
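As a toy illustration of this recurrence (the scalar weights below are chosen arbitrarily, not values from the article), one vanilla-RNN state update and a short unrolled sequence might look like:

```python
import math

def rnn_step(h_prev, x_t, W_x=0.5, W_h=0.8, b=0.1):
    """One vanilla RNN update: h_t = tanh(W_x * x_t + W_h * h_prev + b)."""
    return math.tanh(W_x * x_t + W_h * h_prev + b)

# Unroll over a short input sequence; each h_t carries context forward
h = 0.0
for x in [1.0, -0.5, 0.3]:
    h = rnn_step(h, x)
```

The final hidden state h depends on every input seen so far, which is exactly the memory a feedforward net lacks.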
However, when the required context of the sentence gets very large, it might not be so important to remember words such as “so”. Unfortunately, as that gap grows, traditional RNNs become unable to learn to connect the information because they cannot ignore or forget the unnecessary parts.
The LSTM model is able to represent the actions of taking in information (input gate), give out a predicted value (output gate) and leaving out unimportant information (forget gate). The LSTM is very popular in sequence modelling tasks such as text generation. Similar to the previous RNN diagram, a LSTM network will have LSTM cells in place of the nodes.
The LSTM structure and vanilla RNN structure is very similar on the outside but the main difference is what is within a single cell. This will help us to model the different states in time where we are able to input, output and forget information.
Every gate has a sigmoid function that will return an output of 0–1 which represents the proportion of information that passes through the gate. Each gate will have the weight functions W and U as well as the bias term b.
σ(W*Xt + U*ht-1 + b) ∈ [0, 1]
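As a scalar sketch (with made-up weights), the gate computation and its 0-to-1 output look like:

```python
import math

def gate(x_t, h_prev, W=2.0, U=1.0, b=0.5):
    """Sigmoid gate: the output in (0, 1) is the fraction of
    information allowed through the gate."""
    z = W * x_t + U * h_prev + b
    return 1.0 / (1.0 + math.exp(-z))

open_gate = gate(x_t=3.0, h_prev=1.0)      # large positive activation -> ~1, pass through
closed_gate = gate(x_t=-3.0, h_prev=-1.0)  # large negative activation -> ~0, forget
```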
With some understanding of LSTMs, we can finally explore the corpus and build our model.
I’ll be using Python 3.6 and Keras for this task. First we will parse the data and tokenize each text.
Number of Users: 343
Number of Texts: 55835
Sequence Length: 5
After taking only the first 1000 messages, the modal length of a text is 5 and we will use that as our sequence length. Basically we will use the past 4 words in a sentence to predict the next word.
['just', 'now', 'i', 'heard', 'thunder', 'but', 'e', 'sky', 'still', 'looks', 'ok', 'hee', 'if', 'really', 'rain', 'den', 'i', 'no', 'need', 'to', 'run', 'liao', 'i', 'also', 'lazy', 'but', 'no', 'choice', 'have', 'to', 'force', 'myself', 'to', 'run']
The above is an example of a tokenized text message. The drawback of my model is that I excluded punctuation but it could also be modeled otherwise.
Vocab Size: 1490
Total words: 10419
Vocab / Total words ratio: 0.143
The tokenizer.word_index returns a dictionary mapping each word to an index, while tokenizer.index_word returns the reverse. Each word is encoded with an index, and this index is actually the position to fire up in the respective one-hot vector array.
This helps the neural net understand the vocabulary which are now represented by one-hot vectors. However, this results in a very large and sparse matrix which takes up 5x6 cells’ worth of space currently but grows by a lot when the size of the vocabulary increases.
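As a toy illustration with a hypothetical 6-slot vocabulary (index 0 reserved, as Keras does for padding), the index-to-one-hot mapping works like this:

```python
# Hypothetical vocabulary; in the real model this comes from tokenizer.word_index
word_index = {"i": 1, "will": 2, "be": 3, "late": 4, "liao": 5}
index_word = {i: w for w, i in word_index.items()}

def one_hot(idx, vocab_size):
    """Fire up position idx in an otherwise all-zero vector."""
    vec = [0] * vocab_size
    vec[idx] = 1
    return vec

vec = one_hot(word_index["late"], vocab_size=6)  # [0, 0, 0, 0, 1, 0]
```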
We can use a Word Embedding layer to map the representation into a specified number of dimensions. A recommended size in practice (I found online) is vocab_size**0.25 but in the following example, I will use an embedding size of 3.
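Checking that rule of thumb against this corpus’s vocabulary size:

```python
vocab_size = 1490                      # from the corpus stats above
embed_dim = round(vocab_size ** 0.25)  # fourth root of the vocabulary size
# Each 1490-dimensional one-hot vector collapses to roughly 6 dense values.
```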
We can use a simple architecture with a Sequential model —
Embedding Layer
Bidirectional LSTM Layer (to learn the previous and future context of a sentence, won’t go into the details here)
Dropout Layer (prevent overfitting)
Dense layer to map output size back to the vocab_size
Activation using Softmax to find the most likely category(word) in the vocabulary to use
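Assuming Keras with a TensorFlow backend, that stack can be sketched as follows; the 64 LSTM units and 0.2 dropout rate are placeholder choices, not values taken from the article:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense

vocab_size = 1490  # from the corpus stats above
embed_dim = 6      # ~vocab_size ** 0.25

model = Sequential([
    Embedding(vocab_size, embed_dim),         # dense word vectors
    Bidirectional(LSTM(64)),                  # reads the sequence in both directions
    Dropout(0.2),                             # prevent overfitting
    Dense(vocab_size, activation="softmax"),  # probability over the vocabulary
])
```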
With our model set up and texts encoded, the next step is to prepare the sequence data to be trained.
This many-words-to-many-words context helps us to train the model by telling them which sequence of words (our predictors X) leads to the final word (our label Y).
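The windowing itself needs no deep learning library; here is a minimal sketch over a hypothetical index-encoded text, with a sequence length of 5 (4 predictor words plus 1 label):

```python
def make_sequences(tokens, seq_len):
    """Slide a window over the encoded text: the first seq_len - 1
    words become the predictors X, the last word is the label y."""
    X, y = [], []
    for i in range(len(tokens) - seq_len + 1):
        window = tokens[i:i + seq_len]
        X.append(window[:-1])
        y.append(window[-1])
    return X, y

tokens = [4, 12, 7, 7, 31, 2, 9]  # hypothetical encoded message
X, y = make_sequences(tokens, seq_len=5)
# X = [[4, 12, 7, 7], [12, 7, 7, 31], [7, 7, 31, 2]], y = [31, 2, 9]
```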
Finally we will compile and fit our model using —
Adam Optimizer (popular for Deep Learning and easy to configure)
Sparse Categorical Cross Entropy (Cost/Loss function for multi-class classification where target outputs are integer indices instead of one-hot encoded)
ModelCheckpoint to save optimal weights each time accuracy improves
EarlyStopping to stop training when validation accuracy does not increase for 4 times consecutively.
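A sketch of those two callbacks, assuming Keras with a TensorFlow backend (the compile/fit calls are shown commented out because they need the model and data objects from earlier):

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Save weights every time training accuracy improves
checkpoint = ModelCheckpoint("best_weights.hdf5",
                             monitor="sparse_categorical_accuracy",
                             save_best_only=True, verbose=1)

# Stop once validation accuracy fails to improve 4 times in a row
early_stop = EarlyStopping(monitor="val_sparse_categorical_accuracy",
                           patience=4)

# model.compile(optimizer="adam",
#               loss="sparse_categorical_crossentropy",
#               metrics=["sparse_categorical_accuracy"])
# model.fit(X, y, validation_split=0.1, epochs=100,
#           callbacks=[checkpoint, early_stop])
```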
We will also save all our objects in a Pickle file so that we can reload them when generating our texts.
The training process will look something like this
Epoch 39/100
loss: 4.2459 - sparse_categorical_accuracy: 0.2050 - val_loss: 6.4890 - val_sparse_categorical_accuracy: 0.0924
Epoch 00039: sparse_categorical_accuracy improved from 0.20413 to 0.20503, saving model to best_weights.hdf5
Epoch 40/100
loss: 4.2390 - sparse_categorical_accuracy: 0.2051 - val_loss: 6.4887 - val_sparse_categorical_accuracy: 0.0935
Epoch 00040: sparse_categorical_accuracy improved from 0.20503 to 0.20513, saving model to best_weights.hdf5
Finally, we can create a generate_text function that takes in a seed sentence “i will be” for example, pad it to the correct sequence_length and use it to predict the next word iteratively.
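A skeleton of such a generate_text loop; the predict_next argument is a stand-in for the trained model’s prediction plus the tokenizer’s index-to-word lookup, and the canned replies below are obviously hypothetical:

```python
def generate_text(seed_words, next_words, predict_next, seq_len=4):
    """Repeatedly predict the next word and feed it back as context."""
    words = list(seed_words)
    for _ in range(next_words):
        context = words[-seq_len:]  # the last seq_len words form the input window
        words.append(predict_next(context))
    return " ".join(words)

# Canned stand-in for model.predict + index_word lookup
canned = iter(["going", "to", "school", "liao"])
result = generate_text(["i", "will", "be"], 4, lambda ctx: next(canned))
# result == "i will be going to school liao"
```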
Well it sounds like Singlish in text messages indeed! The model has managed to learn the grammar even with incomplete spelling and even though the text is forced to have a length of 5 words, it is rather comprehensible.
For the interest of time and money, I did not fully train the network on the entire corpus. In fact I only used it on 1000 texts just to test the entire flow. The validation accuracy was extremely low and the model was certainly over-fitting. I also did not test using an optimal network structure nor tune any parameters. I also used https://www.floydhub.com but it only had 2 hours free for their GPU. I am currently waiting to get my AWS student account verified so that I could train the model on the entire corpus. However, my final exams are coming and I could not wait any longer.
It was nonetheless a very interesting problem and a good learning experience. Looking forward to learning, exploring and sharing more when I am back!
Link to project’s repo
Discuss further with me on LinkedIn or via jasonyip184@gmail.com!
|
[
{
"code": null,
"e": 671,
"s": 172,
"text": "After publishing my first post on Medium, I realized how enjoyable it was to go through the whole process of conceptualizing an idea to sharing my findings and learning experience. I realized most data-related projects and solutions arise from these 2 things — 1) A problem we are trying to solve 2) An opportunity from the data that we have. Since my previous post was about solving the problem of manual work in Social Media Contests, I figured that I should look to finding data first this time."
},
{
"code": null,
"e": 1172,
"s": 671,
"text": "Google recently released Google Dataset Search, a Google Scholar for datasets, and this came in handy in helping me get started. I came across a really interesting dataset from my school, The National University of Singapore SMS Corpus. It is a corpus of more than 50,000 SMS messages in Singapore English (Singlish) and it was part of a research work from the Department of Computer Science. The messages largely originated from volunteers of the study who are Singaporeans attending the University."
},
{
"code": null,
"e": 1640,
"s": 1172,
"text": "I thought it was such an amazing opportunity to study the language of those texts especially because I am a student myself in NUS and I speak and text in Singlish all the time. For the uninitiated, Singlish seems to be a broken version of English or a very crude slang even. It is actually very universal in Singapore despite the seeming lack of coherence and semantics. In fact, it can often be used to establish a connection and trust with Singaporeans immediately."
},
{
"code": null,
"e": 1818,
"s": 1640,
"text": "Singlish can also be incredibly hard to get when a single word can change the entire meaning of the message, that is also one of the reasons why it is a very efficient language."
},
{
"code": null,
"e": 1967,
"s": 1818,
"text": "Singlish in text messages is on another level. Besides the lack of complete sentences, the texting and Internet language just shortens it even more."
},
{
"code": null,
"e": 2252,
"s": 1967,
"text": "While I am no linguistics expert, I thought it could be useful to understand this by training a neural network on the corpus to generate similar text messages. I want to understand and demonstrate the reasons for choosing the final representation of our model for our text generation."
},
{
"code": null,
"e": 2610,
"s": 2252,
"text": "An artificial neural network models our brain and represents the neurons with nodes. The network has an input layer which takes in information, hidden layers which processes the info (manipulation, computation, feature extraction) and a final output layer which generates a desired output based on the information which is usually used to make a prediction."
},
{
"code": null,
"e": 2744,
"s": 2610,
"text": "The predicted output and the actual output can be very different and this is measured with a cost function which we want to minimize."
},
{
"code": null,
"e": 3246,
"s": 2744,
"text": "The neural net is just like a baby, he wants to learn how to speak properly (model output Yhat)by doing a certain set of actions such as uttering, ... , shouting random things(X_1 to X_n),some more frequently than the other (W_1 to W_n).He attempts to say what is right by iteratively trying out different scenarios of actions in his mind (update weights).The final set of actions he chose at that point in time is as close as possible to what his parents tell him is right (minimize cost function C)."
},
{
"code": null,
"e": 3342,
"s": 3246,
"text": "The neural net is just like a baby, he wants to learn how to speak properly (model output Yhat)"
},
{
"code": null,
"e": 3436,
"s": 3342,
"text": "by doing a certain set of actions such as uttering, ... , shouting random things(X_1 to X_n),"
},
{
"code": null,
"e": 3486,
"s": 3436,
"text": "some more frequently than the other (W_1 to W_n)."
},
{
"code": null,
"e": 3606,
"s": 3486,
"text": "He attempts to say what is right by iteratively trying out different scenarios of actions in his mind (update weights)."
},
{
"code": null,
"e": 3752,
"s": 3606,
"text": "The final set of actions he chose at that point in time is as close as possible to what his parents tell him is right (minimize cost function C)."
},
{
"code": null,
"e": 3883,
"s": 3752,
"text": "That is why it is a form of Supervised Learning. This updating/understanding process in his mind is also known as Backpropagation."
},
{
"code": null,
"e": 4215,
"s": 3883,
"text": "The feedforward neural net is the first and simplest type of artificial neural network devised and it only allows signals to travel from input to output. It has no element of time. When it comes to text generation, we are trying to predict the next word given a sequence of words and we need to have a model that represents memory."
},
{
"code": null,
"e": 4536,
"s": 4215,
"text": "This recurring connection helps the RNN to learn the effect of previous inputs X_t-1 (a vector) along with the current input X_t (a vector) while predicting the output at time Yhat_t. This gives RNN a sense of time. It allows the baby to learn from past scenarios when he got scolded and avoids making the same mistakes."
},
{
"code": null,
"e": 4804,
"s": 4536,
"text": "Each output (h) from a state (blue) is the activation function of the output from the previous state (h_t-1) and the current input vector (X_t). These outputs h_1 to h_t of the first layer will then be fed as an input into the next layer as it goes deep into the RNN."
},
{
"code": null,
"e": 4881,
"s": 4804,
"text": "This will allow us to predict the next word given the context of a sentence."
},
{
"code": null,
"e": 5185,
"s": 4881,
"text": "However, when the required context of the sentence gets very large, it might not be so important to remember words such as “so”. Unfortunately, as that gap grows, traditional RNNs become unable to learn to connect the information because it is unable to ignore or forget all the unnecessary information."
},
{
"code": null,
"e": 5542,
"s": 5185,
"text": "The LSTM model is able to represent the actions of taking in information (input gate), give out a predicted value (output gate) and leaving out unimportant information (forget gate). The LSTM is very popular in sequence modelling tasks such as text generation. Similar to the previous RNN diagram, a LSTM network will have LSTM cells in place of the nodes."
},
{
"code": null,
"e": 5790,
"s": 5542,
"text": "The LSTM structure and vanilla RNN structure is very similar on the outside but the main difference is what is within a single cell. This will help us to model the different states in time where we are able to input, output and forget information."
},
{
"code": null,
"e": 6012,
"s": 5790,
"text": "Every gate has a sigmoid function that will return an output of 0–1 which represents the proportion of information that passes through the gate. Each gate will have the weight functions W and U as well as the bias term b."
},
{
"code": null,
"e": 6041,
"s": 6012,
"text": "σ(W*Xt + U*ht-1 + b) = [0,1]"
},
{
"code": null,
"e": 6130,
"s": 6041,
"text": "With some understanding of LSTMs, we can finally explore the corpus and build our model."
},
{
"code": null,
"e": 6233,
"s": 6130,
"text": "I’ll be using Python 3.6 and Keras for this task. First we will parse the data and tokenize each text."
},
{
"code": null,
"e": 6295,
"s": 6233,
"text": "Number of Users: 343Number of Texts: 55835Sequence Length: 5"
},
{
"code": null,
"e": 6494,
"s": 6295,
"text": "After taking only the first 1000 messages, the modal length of a text is 5 and we will use that as our sequence length. Basically we will use the past 4 words in a sentence to predict the next word."
},
{
"code": null,
"e": 6746,
"s": 6494,
"text": "['just', 'now', 'i', 'heard', 'thunder', 'but', 'e', 'sky', 'still', 'looks', 'ok', 'hee', 'if', 'really', 'rain', 'den', 'i', 'no', 'need', 'to', 'run', 'liao', 'i', 'also', 'lazy', 'but', 'no', 'choice', 'have', 'to', 'force', 'myself', 'to', 'run']"
},
{
"code": null,
"e": 6895,
"s": 6746,
"text": "The above is an example of a tokenized text message. The drawback of my model is that I excluded punctuation but it could also be modeled otherwise."
},
{
"code": null,
"e": 6961,
"s": 6895,
"text": "Vocab Size: 1490Total words 10419Vocab / Total words ratio: 0.143"
},
{
"code": null,
"e": 7212,
"s": 6961,
"text": "The tokenizer.word_index returns a dictionary mapping the word to and index, while tokenizer.index_word returns the reverse. Each word is encoded with an index and this index is actually the position to fire up in the respective one-hot vector array."
},
{
"code": null,
"e": 7479,
"s": 7212,
"text": "This helps the neural net understand the vocabulary which are now represented by one-hot vectors. However, this results in a very large and sparse matrix which takes up 5x6 cells’ worth of space currently but grows by a lot when the size of the vocabulary increases."
},
{
"code": null,
"e": 7711,
"s": 7479,
"text": "We can use a Word Embedding layer to map the representation into a specified number of dimensions. A recommended size in practice (I found online) is vocab_size**0.25 but in the following example, I will use an embedding size of 3."
},
{
"code": null,
"e": 7770,
"s": 7711,
"text": "We can use a simple architecture with a Sequential model —"
},
{
"code": null,
"e": 8075,
"s": 7770,
"text": "Embedding LayerBidirectional LSTM Layer (to learn the previous and future context of a sentence, won’t go into the details here)Dropout Layer (prevent overfitting)Dense layer to map output size back to the vocab_sizeActivation using Softmax to find the most likely category(word) in the vocabulary to use"
},
{
"code": null,
"e": 8091,
"s": 8075,
"text": "Embedding Layer"
},
{
"code": null,
"e": 8205,
"s": 8091,
"text": "Bidirectional LSTM Layer (to learn the previous and future context of a sentence, won’t go into the details here)"
},
{
"code": null,
"e": 8241,
"s": 8205,
"text": "Dropout Layer (prevent overfitting)"
},
{
"code": null,
"e": 8295,
"s": 8241,
"text": "Dense layer to map output size back to the vocab_size"
},
{
"code": null,
"e": 8384,
"s": 8295,
"text": "Activation using Softmax to find the most likely category(word) in the vocabulary to use"
},
{
"code": null,
"e": 8486,
"s": 8384,
"text": "With our model set up and texts encoded, the next step is to prepare the sequence data to be trained."
},
{
"code": null,
"e": 8650,
"s": 8486,
"text": "This many-words-to-many-words context helps us to train the model by telling them which sequence of words (our predictors X) leads to the final word (our label Y)."
},
{
"code": null,
"e": 8700,
"s": 8650,
"text": "Finally we will compile and fit our model using —"
},
{
"code": null,
"e": 9084,
"s": 8700,
"text": "Adam Optimizer (popular for Deep Learning and easy to configure)Sparse Categorical Cross Entropy (Cost/Loss function for multi-class classification where target outputs are integer indices instead of one-hot encoded)ModelCheckpoint to save optimal weights each time accuracy improvesEarlyStopping to stop training when validation accuracy does not increase for 4 times consecutively."
},
{
"code": null,
"e": 9149,
"s": 9084,
"text": "Adam Optimizer (popular for Deep Learning and easy to configure)"
},
{
"code": null,
"e": 9302,
"s": 9149,
"text": "Sparse Categorical Cross Entropy (Cost/Loss function for multi-class classification where target outputs are integer indices instead of one-hot encoded)"
},
{
"code": null,
"e": 9370,
"s": 9302,
"text": "ModelCheckpoint to save optimal weights each time accuracy improves"
},
{
"code": null,
"e": 9471,
"s": 9370,
"text": "EarlyStopping to stop training when validation accuracy does not increase for 4 times consecutively."
},
{
"code": null,
"e": 9576,
"s": 9471,
"text": "We will also save all our objects in a Pickle file so that we can reload them when generating our texts."
},
{
"code": null,
"e": 9627,
"s": 9576,
"text": "The training process will look something like this"
},
{
"code": null,
"e": 10090,
"s": 9627,
"text": "Epoch 39/100loss: 4.2459 - sparse_categorical_accuracy: 0.2050 - val_loss: 6.4890 - val_sparse_categorical_accuracy: 0.0924Epoch 00039: sparse_categorical_accuracy improved from 0.20413 to 0.20503, saving model to best_weights.hdf5Epoch 40/100loss: 4.2390 - sparse_categorical_accuracy: 0.2051 - val_loss: 6.4887 - val_sparse_categorical_accuracy: 0.0935Epoch 00040: sparse_categorical_accuracy improved from 0.20503 to 0.20513, saving model to best_weights.hdf5"
},
{
"code": null,
"e": 10280,
"s": 10090,
"text": "Finally, we can create a generate_text function that takes in a seed sentence “i will be” for example, pad it to the correct sequence_length and use it to predict the next word iteratively."
},
{
"code": null,
"e": 10500,
"s": 10280,
"text": "Well it sounds like Singlish in text messages indeed! The model has managed to learn the grammar even with incomplete spelling and even though the text is forced to have a length of 5 words, it is rather comprehensible."
},
{
"code": null,
"e": 11088,
"s": 10500,
"text": "For the interest of time and money, I did not fully train the network on the entire corpus. In fact I only used it on 1000 texts just to test the entire flow. The validation accuracy was extremely low and the model was certainly over-fitting. I also did not test using an optimal network structure nor tune any parameters. I also used https://www.floydhub.com but it only had 2 hours free for their GPU. I am currently waiting to get my AWS student account verified so that I could train the model on the entire corpus. However, my final exams are coming and I could not wait any longer."
},
{
"code": null,
"e": 11238,
"s": 11088,
"text": "It was nonetheless a very interesting problem and a good learning experience. Looking forward to learning, exploring and sharing more when I am back!"
},
{
"code": null,
"e": 11261,
"s": 11238,
"text": "Link to project’s repo"
}
] |
Design Patterns - Proxy Pattern
|
In the proxy pattern, a class represents the functionality of another class. This type of design pattern comes under structural patterns.
In the proxy pattern, we create an object that holds a reference to an original object and exposes its functionality to the outside world.
We are going to create an Image interface and concrete classes implementing the Image interface. ProxyImage is a proxy class that reduces the memory footprint of loading a RealImage object.
ProxyPatternDemo, our demo class, will use ProxyImage to get an Image object to load and display as it needs.
Create an interface.
Image.java
public interface Image {
void display();
}
Create concrete classes implementing the same interface.
RealImage.java
public class RealImage implements Image {
private String fileName;
public RealImage(String fileName){
this.fileName = fileName;
loadFromDisk(fileName);
}
@Override
public void display() {
System.out.println("Displaying " + fileName);
}
private void loadFromDisk(String fileName){
System.out.println("Loading " + fileName);
}
}
ProxyImage.java
public class ProxyImage implements Image{
private RealImage realImage;
private String fileName;
public ProxyImage(String fileName){
this.fileName = fileName;
}
@Override
public void display() {
if(realImage == null){
realImage = new RealImage(fileName);
}
realImage.display();
}
}
Use the ProxyImage to get an object of the RealImage class when required.
ProxyPatternDemo.java
public class ProxyPatternDemo {
public static void main(String[] args) {
Image image = new ProxyImage("test_10mb.jpg");
//image will be loaded from disk
image.display();
System.out.println("");
//image will not be loaded from disk
image.display();
}
}
Verify the output.
Loading test_10mb.jpg
Displaying test_10mb.jpg
Displaying test_10mb.jpg
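The same lazy-initialization idea can be sketched in Python as well. The sketch below mirrors the Java classes above (the class and attribute names are chosen to match them, not taken from any library): the proxy defers the expensive disk load until display() is first called, and never repeats it.

```python
class RealImage:
    """The heavyweight object: loading happens at construction time."""
    def __init__(self, file_name):
        self.file_name = file_name
        print(f"Loading {file_name}")          # expensive disk load happens here

    def display(self):
        print(f"Displaying {self.file_name}")


class ProxyImage:
    """Stands in for RealImage and creates it lazily, only once."""
    def __init__(self, file_name):
        self.file_name = file_name
        self._real_image = None                # nothing loaded yet

    def display(self):
        if self._real_image is None:           # first call: load from disk
            self._real_image = RealImage(self.file_name)
        self._real_image.display()             # subsequent calls reuse the instance


image = ProxyImage("test_10mb.jpg")
image.display()   # loads from disk, then displays
image.display()   # displays only; no second load
```

As in the Java demo, "Loading" is printed once while "Displaying" is printed on every call, because the proxy caches the RealImage instance after the first display().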
Using CAPM to Evaluate the Performance of Listed Investment Companies with R | by David Harris | Towards Data Science
|
The return on an investment should compensate an investor for the time value of the capital they have invested, as well as the risk that some, or all, of their investment may be lost. The Capital Asset Pricing Model (CAPM) is widely used in finance as a means of determining the level of compensation an investor should expect to receive from an investment given the level of risk associated with holding that particular asset rather than holding a “risk-free” asset, such as sovereign government bonds.
Idiosyncratic (Unsystematic) risk — Refers to risk that is unique to that particular asset. Idiosyncratic risk can be reduced through diversification and maintaining a well-constructed portfolio (Investopedia 2019a).
Systematic risk — Refers to market-wide risk, which cannot be reduced through portfolio diversification. Although an investor can build a portfolio that limits their exposure to systematic risk, the CAPM theory suggests this will come with a trade-off of the returns they can expect to receive (Investopedia 2019b).
Since it is possible to diversify investments in such a way as to eliminate idiosyncratic risk, the CAPM assumes returns on investments do not compensate an investor for holding this type of risk. Therefore it is essential that an investor diversifies their investments to avoid carrying risk that they will not be compensated for*.
[* how this diversification can be done is beyond the scope of this article, however, in a future article I will discuss Modern Portfolio Theory which enables the development of a portfolio of assets designed to eliminate idiosyncratic risk through asset diversification and obtain an appropriate level of exposure to systematic risk based on personal risk tolerance preferences.]
The CAPM methodology describes the relationship between the expected return of an asset and its exposure to systematic risk. To calculate the expected return of an asset given its risk, by way of CAPM, the following equation is used:
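The equation referred to here appeared as an image in the original article and is missing from this text. The standard CAPM relation, and the regression form it is manipulated into (symbols as conventionally defined, not reproduced from the original image), can be written as:

```latex
E(R_a) = R_f + \beta_a \big( E(R_m) - R_f \big)

R_{a,t} - R_{f,t} = \alpha_a + \beta_a \, (R_{m,t} - R_{f,t}) + \varepsilon_{a,t}
```

where E(R_a) is the expected return on asset a, R_f the risk-free rate, E(R_m) the expected market return, and β_a the asset's exposure to systematic risk. In the second (regression) form, the excess return of the asset is regressed on the market risk premium; under the CAPM the intercept α_a is expected to be zero.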
By manipulating this equation, we can utilise financial asset data and regression analysis to evaluate firstly whether the CAPM theory holds up in practice and secondly to evaluate the performance of particular financial assets.
This methodology follows from Jensen (1968) in which the performance of US mutual funds was systematically examined to determine whether any were able to “outperform the market” (Brooks 2008, pp. 67–81).
Listed Investment Companies (LICs) — A LIC is similar to a Managed Mutual Fund, however, investors are able to buy and sell shares in a LIC just like ordinary shares on a stock exchange. As such, historical pricing data is readily available online and can be easily sourced to conduct our analysis. Generally, the aim of holding LIC securities is to gain access to the skills and expertise of the LIC managers who utilise active investment strategies to outperform a defined benchmark. For holding LIC securities, investors are charged a management fee in the range of 1.0–1.5% per annum. It is also not uncommon for LICs to charge performance fees when returns are above the defined benchmark, typically 10–20% of the return above that of the benchmark (Firstlinks 2016).
Passive Exchange Traded Funds (ETFs) — Unlike the actively managed LICs, the aim of a passively managed ETF is to replicate the performance of defined benchmark, such as a market index (ASXETFS.com 2019). The fees an investor can expect to pay for holding a passive ETF are generally just a fraction of those charged by actively managed LICs. For example, our analysis uses the Vanguard Australian Shares Index ETF (VAS) which seeks to track the return of the S&P/ASX 300 Index before taking into account fees, which as of May 2020 are 0.1% per annum (Vanguard 2020).
Why would an investor prefer an LIC over a passive ETF? — Using the CAPM terminology introduced above, the main reason for an investor choosing a LIC investment over a passive ETF is the belief that the active investment strategies employed by the LIC managers will yield “alpha” returns (alpha > 0) whilst minimising risk exposure.
The following sections will utilise the CAPM methodology of Jensen (1968) to examine the historical performance of Australian LICs by obtaining estimates for the alpha and beta parameters of each security. Further, we will use our findings to evaluate the potential for building portfolios to replicate the returns of the LIC assets using a combination of a passive index tracking exchange traded fund (ETF), namely the Vanguard Australian Shares Index ETF (VAS), and Australian Commonwealth Government bonds.
The first order of business for our analysis is to obtain the required data. We have decided to use monthly returns data for the past 5 years as this is a common timeframe and frequency for calculating CAPM parameters*, however, the analysis and code can be easily adapted to accommodate alternate timeframes and frequency. For our analysis we have used returns data from 2015–06–01 through 2020–03–01.
[* this is currently the timeframe and frequency used for the beta calculation on Yahoo (Au)]
LIC List — A list of the 118 LICs currently trading on the ASX was obtained from https://www.asxlics.com, imported to R as a data frame and cleaned.
# IMPORT AND TIDY LIC LIST #################################
LICs <- read.csv("https://www.asxlics.com/uploads/csv/20200401-lics.csv", header = TRUE)
n <- nrow(LICs)
LICs <- LICs[c(2:n), 1:3]
lic.colnames <- c("Code", "Company", "Market Cap")
names(LICs) <- lic.colnames
ticker <- as.character(LICs[,1])
row.names(LICs) <- ticker
Risk-free rate data — Sovereign government bonds are widely used in finance as “risk-free” assets (Investopedia 2020b). For our analysis, we will be using the yield data on Australian Commonwealth Government bonds with 5 years to maturity*. This data can be sourced from the Reserve Bank of Australia’s webpage**.
[* the 5-year timeframe was chosen simply as it paired nicely with the 5 year timeframe selected for data collection.]
[** prior to importing this data into R, we quickly formatted the dates to YYYY-MM-DD in Excel.]
# IMPORT AND TIDY RISK-FREE RATE DATA ######################
Rf <- import("f2.1-data.csv") ## need to have manually formatted dates to YYYY-MM-DD in Excel
n <- nrow(Rf)
Rf <- Rf[c(12:n), c(1, 4)]
Rf <- Rf[!apply(Rf == "", 1, all),]
Rf$V1 <- as.Date(Rf$V1)
Rf$V4 <- as.numeric(Rf$V4)
Rf$V4 <- ((1+(Rf$V4/100))^(1/12)-1)
Rf <- xts(Rf$V4, order.by = Rf$V1)
names(Rf) <- c("Rf")
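The ((1+(Rf$V4/100))^(1/12)-1) step converts an annual percentage yield into an effective monthly rate. The same arithmetic sketched in Python (the function name and the 5% figure are illustrative, not from the article's data):

```python
def annual_pct_to_monthly(yield_pct):
    """Convert an annual percentage yield to an effective monthly rate,
    mirroring the ((1 + r/100)^(1/12) - 1) step in the R code above."""
    return (1 + yield_pct / 100) ** (1 / 12) - 1

monthly = annual_pct_to_monthly(5.0)   # a 5% p.a. bond yield (illustrative)
print(round(monthly, 6))
```

Compounding the monthly rate over 12 months recovers the annual yield exactly, which is why the geometric conversion is used rather than simply dividing by 12.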
Benchmark Index Data — We have elected to use pricing data for the passively managed Vanguard Australian Shares Index ETF (VAS) as a proxy for the market returns. The fund seeks to track the return of the S&P/ASX 300 Index, therefore we would expect to obtain comparable results if we were to explicitly use direct price data of that market index or, to a lesser extent, another index such as the All Ordinaries or S&P/ASX 200. The reason for using the Vanguard ETF data is that it will enable us to readily produce replication portfolios following our initial CAPM modelling. Historical price data for VAS was obtained from Yahoo Finance. We use “Adj. Close” price data as this data series is adjusted for dividend distributions and enables our analysis to consider the dividend cashflows an investor would have been entitled to, as well as the capital gains return.
# IMPORT AND TIDY BENCHMARK DATA ###########################
Rb <- read.csv("https://query1.finance.yahoo.com/v7/finance/download/VAS.AX?period1=1430265600&period2=1588118400&interval=1mo&events=history")
n <- nrow(Rb)
Rb <- Rb[c(1:n-1), c(1,6)]
Rb$Date <- as.Date(Rb[, 1])
Rb <- xts(Rb$`Adj.Close`, order.by = Rb$Date)
names(Rb) <- c("Rb")
Rb$Rb <- Return.calculate(Rb$Rb, method = "log")
LIC Data — The list of LICs was then used to obtain historical pricing data for each from Yahoo Finance. Again “Adj. Close” price data is used. Using R, the data was compiled into a single xts object along with the risk-free data and benchmark data, then trimmed.
# IMPORT AND TIDY LIC DATA #################################
url_f <- "https://query1.finance.yahoo.com/v7/finance/download/"
url_e <- ".AX?period1=1430265600&period2=1588118400&interval=1mo&events=history"
n <- nrow(LICs)
data <- merge(Rf, Rb)
k <- nrow(data)
for(i in 1:n){
  url_temp_ch <- as.character(LICs[i,1])
  url_temp <- paste(url_f, url_temp_ch, url_e, sep = "")
  Ra_temp <- data.frame(rep(NA, k))
  try(Ra_temp <- read.csv(url_temp, na.strings = c("null")), silent = T)
  n_temp <- nrow(Ra_temp)
  try(Ra_temp <- Ra_temp[c(1:n_temp-1), c(1,6)], silent = T)
  if(is.na(Ra_temp[1, 1]) != TRUE){
    Ra_temp$Date <- as.Date(Ra_temp[, 1])
    Ra_temp <- xts(Ra_temp$`Adj.Close`, order.by = Ra_temp$Date)
    header <- as.character(LICs[i,1])
    names(Ra_temp) <- header
    Ra_temp[, 1] <- Return.calculate(Ra_temp[, 1], method = "log")
    data <- merge(data, Ra_temp)
    rm(Ra_temp)
  } else if(is.na(Ra_temp[1, 1]) == TRUE){
    data_temp <- data
    data_temp$Rf <- rep(data_temp[1, 2], k)
    data_temp <- data_temp$Rf
    header <- as.character(LICs[i,1])
    names(data_temp) <- header
    data <- merge(data, data_temp)
    rm(data_temp)
  }
}
n <- nrow(data)
data <- data[complete.cases(data[1:n, c(1, 2)]),]
LIC.list <- names(data)
names(data) <- LIC.list
n <- ncol(data)
LIC.list <- LIC.list[3:n]
Generating the CAPM Variables — As indicated above, the CAPM regression analysis requires us to calculate the “Excess Returns” for each LIC and the “Market Risk Premium”. These are calculated and added to a new data frame called “capm.data” in case we wish to export and save the data.
# GENERATE CAPM VARIABLES ##################################
n <- ncol(data)
capm.data <- as.xts(merge(data$Rf, data$Rb-data$Rf))
names(capm.data) <- c("Rf", "mrp")
for(i in 3:n){
  Ra.Er_temp <- as.xts(data[, i]-data$Rf)
  header <- as.character(names(data))
  header <- paste(header, ".Er", sep = "")
  names(Ra.Er_temp) <- header[i]
  capm.data <- merge(capm.data, Ra.Er_temp)
}
n <- ncol(capm.data)
LICs$Code <- LIC.list
Having compiled our data, we are now ready to calculate the CAPM parameters alpha and beta. To do so, we use a series of linear regressions with the “Excess Returns” of our LICs as our dependent variables and the “Market Risk Premium” as our explanatory variable. It is important to note that not all of our LICs have data for the entire 5 year period, and as such, some of the parameters will be less precisely estimated given the fewer data points available*. The below code calculates the parameters alpha and beta, as well as using a t-test function to produce p-values so we can have some idea of how accurately the parameters have been estimated.
[* again, one may prefer to adjust the timeframe and frequency to alleviate this issue.]
# LOAD t-TEST FUNCTION #####################################
ttest <- function(reg, coefnum, val){
  co <- coef(summary(reg))
  tstat <- (co[coefnum,1]-val)/co[coefnum,2]
  2 * pt(abs(tstat), reg$df.residual, lower.tail = FALSE)
}

# CALCULATE CAPM PARAMETERS ################################
n <- ncol(capm.data)
capm.para <- data.frame()
for(i in 3:n){
  try(capm <- lm(capm.data[, i] ~ capm.data$mrp), silent = T)
  para.temp <- data.frame(rep(0, 4))
  try(para.temp <- capm$coefficients, silent = T)
  para.temp <- as.data.frame(para.temp)
  para.temp <- as.data.frame(transpose(para.temp))
  try(para.temp[1, 3] <- ttest(capm, 1, 0), silent = T)
  try(para.temp[1, 4] <- ttest(capm, 2, 1), silent = T)
  names(para.temp) <- c("alpha", "beta", "alpha(0) ~ Pr(>|t|)", "beta(1) ~ Pr(>|t|)")
  row.names(para.temp) <- as.character(LICs[i-2,1])
  capm.para[i-2, 1:4] <- para.temp[1, 1:4]
  try(rm(capm), silent = T)
  rm(para.temp)
}
row.names(LICs) <- LICs$Code
LICs <- merge(LICs, capm.para, by.x = 1, by.y = 0, all.x = TRUE, all.y = TRUE)
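The same alpha/beta estimation can be sketched outside R with an ordinary least-squares fit. The Python example below runs the regression on synthetic excess returns (all numbers and variable names are made up for illustration, not drawn from the article's dataset): the intercept estimates alpha and the slope estimates beta.

```python
import numpy as np

# Simulate 60 months of a market risk premium and a hypothetical asset
# with true beta = 1.2, true alpha = 0, plus idiosyncratic noise.
rng = np.random.default_rng(0)
mrp = rng.normal(0.005, 0.03, 60)                    # market risk premium per month
excess = 0.0 + 1.2 * mrp + rng.normal(0, 0.01, 60)   # asset excess returns

# OLS: regress asset excess returns on the market risk premium.
X = np.column_stack([np.ones_like(mrp), mrp])        # intercept column + mrp
(alpha, beta), *_ = np.linalg.lstsq(X, excess, rcond=None)
print(f"alpha={alpha:.4f}, beta={beta:.2f}")
```

With enough observations the estimates land close to the true parameters; in the article's setting, a statistically significant positive alpha would be the evidence of out-performance being tested for.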
For beta, we have simply tested the hypothesis that our estimated value is indifferent from 1, that is that the LIC security tends to move in line with the market as a whole*.
[* this test and subsequent p-value is not overly relevant to our particular line of analysis, however, the t-test can be modified to test whether is indifferent from any particular value so could be useful if we chose to drill our analysis down a bit deeper into the individual LICs and how they have historically reacted to broader market movements.]
For alpha, our hypothesis is that our estimated value is indifferent from zero, indicating that we have not found the LIC managers to be outperforming the market, or evidence to suggest the CAPM does not hold based on the data we have collected. Therefore we simply need to check whether we have obtained a positive and statistically significant alpha value for any of the LICs.
# CHECK FOR POSITIVE AND SIGNIFICANT ALPHA RETURNS #########
LICs$alpha.rtns <- ifelse(LICs$`alpha(0) ~ Pr(>|t|)` <= 0.05 & LICs$alpha > 0.0, "TRUE", "FALSE")
Our results show that over this period no LIC achieved a statistically significant (at the 95% confidence level) and positive alpha return. In fact, only one LIC had an estimated alpha value statistically different from zero, and in that case the alpha value was -0.0051, indicating that this LIC had under-performed the market relative to its systematic risk exposure. If we widen our “significance” cut-off to the 90% confidence level, we find only one additional LIC with a significant alpha value; again it is estimated to be negative, indicating under-performance. We can, therefore, conclude that our analysis was unable to find evidence to suggest that actively managed LICs yielded returns for investors that were significantly different from the returns of the broader market, relative to the risk exposure.
Since we have now seen that the LICs have not outperformed the market by delivering statistically significantly higher returns, we are able to build portfolios combining the risk-free asset and the market-tracking ETF that replicate the return of each LIC with a lower risk exposure. From our findings regarding alpha returns above, we know that the expected return of each replication portfolio and that of the corresponding LIC security will not be statistically different from one another*. Therefore, if we can show our replication portfolios exhibit less variation in returns when compared to the relative LIC, we can be confident that these portfolios deliver comparable returns to an investor for less risk exposure.
[* it is also worth reminding ourselves at this point that we have not accounted for any management or performance fees that may be charged by the LICs.]
We first need to calculate the standard deviation of the LIC excess returns using the data series produced earlier of each LIC.
# CALCULATE SD(x) FOR LICs #################################
k <- ncol(data)-2
sd.temp <- as.numeric(vector())
er.list <- names(capm.data)
n <- nrow(er.list)
er.list <- er.list[3:(k+2)]
for(i in 1:k){
  sd.temp[i] <- STDEV(capm.data[, er.list[i]])
}
sd.temp <- as.data.frame(as.numeric(sd.temp))
row.names(sd.temp) <- LIC.list
names(sd.temp) <- c("SD(ER_at)")
LICs <- merge(LICs, sd.temp, by.x = 1, by.y = 0, all.x = TRUE, all.y = TRUE)
Next, we need to calculate the excess returns for our replication portfolios by first calculating the historical return for each period, then subtracting the risk-free rate. From this, the standard deviations of the replication portfolio excess returns can be calculated.
# CALCULATE SD(x) FOR REP. PORT. ###########################
k <- nrow(data)
j <- nrow(LICs)
sd.temp <- as.numeric(vector())
for(i in 1:j){
  beta.temp <- as.data.frame(rep(LICs[i, 5], k))
  rep.port.temp <- beta.temp
  Rf.temp <- as.numeric(data$Rf)
  rep.port.temp <- add_column(rep.port.temp, Rf.temp, .after = 100)
  rtn.temp <- as.data.frame(data[, 2])
  rep.port.temp <- add_column(rep.port.temp, rtn.temp, .after = 100)
  names(rep.port.temp) <- c("Beta", "Rf", "Rtn")
  port.temp <- (1-rep.port.temp$Beta)*rep.port.temp$Rf + rep.port.temp$Beta*rep.port.temp$Rtn
  rep.port.temp <- add_column(rep.port.temp, port.temp, .after = 100)
  names(rep.port.temp) <- c("Beta", "Rf", "Rtn", "Port. Rtn")
  rep.port.temp$`Port. Rtn` <- as.numeric(unlist(rep.port.temp$`Port. Rtn`))
  rep.port.temp$Rtn <- as.numeric(unlist(rep.port.temp$Rtn))
  rep.port.temp$Exc.Port.Rtn <- as.numeric(unlist(rep.port.temp$`Port. Rtn`-rep.port.temp$Rf))
  sd.temp[i] <- STDEV(rep.port.temp[, 5])
}
LICs$"SD(ER_pt)" <- sd.temp
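The replication logic above reduces to r_p = (1 − β)·R_f + β·R_m each period, so the portfolio's excess return is exactly β·(R_m − R_f) and its standard deviation is β times that of the market risk premium. A small Python sketch on synthetic data (all figures and names are hypothetical, not the article's) shows why this sits below the standard deviation of an asset carrying the same beta plus idiosyncratic noise:

```python
import numpy as np

rng = np.random.default_rng(1)
rf = np.full(60, 0.002)                          # flat monthly risk-free rate
rm = rng.normal(0.006, 0.03, 60)                 # monthly market returns
beta = 0.8

# A LIC-like asset: same systematic exposure plus idiosyncratic noise.
asset = rf + beta * (rm - rf) + rng.normal(0, 0.02, 60)

# Replication portfolio with the same beta: (1 - beta) in bonds, beta in the ETF.
port = (1 - beta) * rf + beta * rm

sd_asset = np.std(asset - rf, ddof=1)            # SD of asset excess returns
sd_port = np.std(port - rf, ddof=1)              # SD of replication excess returns
print(sd_port < sd_asset)
```

Because the idiosyncratic noise is independent of the market, it only ever adds variance, so the replication portfolio matches the systematic return at lower total risk, which is the comparison the SD(ER_pt) <= SD(ER_at) check performs.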
Finally, we check whether the standard deviation of the returns from our replication portfolios is lower than that of the LIC securities, indicating less risk exposure.
# COMPARE SD(x) PERFORMANCE ################################
LICs$'Lower Rep. Port. Risk?' <- ifelse(LICs$`SD(ER_pt)` <= LICs$`SD(ER_at)`, "TRUE", "FALSE")
The results show that for each LIC in our analysis we have been able to produce a replication portfolio, combining a passive index-tracking ETF and risk-free assets, that produces a comparable rate of return while exposing an investor to less risk.
We have shown through this analysis that over our sample period (2015–06–01 through 2020–03–01) we fail to reject our hypothesis that LIC managers were unable to consistently deliver returns above those offered by holding a replication portfolio of a passive index-tracking ETF and risk-free assets. Further, investors holding LIC securities are charged a higher management fee, and in some cases a performance fee, to hold these securities in the belief that the expertise of their LIC manager will deliver the highest possible return for the lowest possible risk exposure. Yet we have seen that by holding a replication portfolio, investors are also able to reduce their risk exposure.
It is important to note that, as with all financial assets, past performance is not always a reliable indicator of future performance. The analysis above has used 5 years of monthly price data; using different timeframes and frequencies will likely yield different results. I have posted this article not with the intention of providing financial advice, but to share the methodology and code for others to replicate with their own preferences.
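As a sketch of how the sample window could be changed, note that the Yahoo Finance URLs used above take Unix timestamps in their period1/period2 query parameters, so an alternate period only requires recomputing those two values. The dates and variable names below are illustrative, not part of the original analysis.

# ADJUST SAMPLE WINDOW (ILLUSTRATIVE) ######################
start.date <- as.Date("2010-06-01")  # hypothetical new start of sample
end.date   <- as.Date("2020-03-01")  # hypothetical new end of sample

# Convert to the Unix timestamps Yahoo Finance expects
period1 <- as.numeric(as.POSIXct(start.date, tz = "UTC"))
period2 <- as.numeric(as.POSIXct(end.date, tz = "UTC"))

# Rebuild the URL suffix used when downloading each series
url_e <- paste0(".AX?period1=", period1, "&period2=", period2,
                "&interval=1mo&events=history")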
Thanks for reading all the way to the end of the article! I’d love to hear any comments about the above. Feel free to leave a message, or reach out to me through LinkedIn.
# Packages #############################
pacman::p_load(pacman, expss, jtools, NCmisc, PerformanceAnalytics, purrr,
               quantmod, rio, stats, tibble, tidyquant, utils, xts, zoo)
Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details.
ASXETFS.com (2019) — https://www.asxetfs.com [accessed 08/05/2020]
ASXLICS.com (2019) — https://www.asxlics.com [accessed 08/05/2020]
Brooks, C 2008, Introductory Econometrics for Finance, Second Edition, Cambridge University Press
Firstlinks (2016) — https://www.firstlinks.com.au/understanding-lic-fee-structures [accessed 08/05/2020]
Investopedia (2019a) — https://www.investopedia.com/terms/u/unsystematicrisk.asp [accessed 08/05/2020]
Investopedia (2019b) — https://www.investopedia.com/terms/s/systematicrisk.asp [accessed 08/05/2020]
Investopedia (2020a) — https://www.investopedia.com/terms/c/capm.asp [accessed 08/05/2020]
Investopedia (2020b) — https://www.investopedia.com/terms/r/riskfreeasset.asp [accessed 08/05/2020]
Reserve Bank of Australia (2020) — https://www.rba.gov.au/statistics/tables/#interest-rates [accessed 08/05/2020]
Vanguard (2020) — https://www.vanguardinvestments.com.au/retail/ret/investments/product.html#/fundDetail/etf/portId=8205/?overview [accessed 08/05/2020]
Yahoo Finance (2020) — https://au.finance.yahoo.com [accessed 08/05/2020]
|
[
{
"code": null,
"e": 676,
"s": 172,
"text": "The return on an investment should compensate an investor for the time value of the capital they have invested, as well as the risk that some, or all, of their investment may be lost. The Capital Asset Pricing Model (CAPM) is widely used in finance as a means of determining the level of compensation an investor should expect to receive from an investment given the level of risk associated with holding that particular asset rather than holding a “risk-free” asset, such as sovereign government bonds."
},
{
"code": null,
"e": 893,
"s": 676,
"text": "Idiosyncratic (Unsystematic) risk — Refers to risk that is unique to that particular asset. Idiosyncratic risk can be reduced through diversification and maintaining a well-constructed portfolio (Investopedia 2019a)."
},
{
"code": null,
"e": 1209,
"s": 893,
"text": "Systematic risk — Refers to market-wide risk, which cannot be reduced through portfolio diversification. Although an investor can build a portfolio that limits their exposure to systematic risk, the CAPM theory suggests this will come with a trade-off of the returns they can expect to receive (Investopedia 2019b)."
},
{
"code": null,
"e": 1542,
"s": 1209,
"text": "Since it is possible to diversify investments in such a way as to eliminate idiosyncratic risk, the CAPM assumes returns on investments do not compensate an investor for holding this type of risk. Therefore it is essential that an investor diversifies their investments to avoid carrying risk that they will not be compensated for*."
},
{
"code": null,
"e": 1923,
"s": 1542,
"text": "[* how this diversification can be done is beyond the scope of this article, however, in a future article I will discuss Modern Portfolio Theory which enables the development of a portfolio of assets designed to eliminate idiosyncratic risk through asset diversification and obtain an appropriate level of exposure to systematic risk based on personal risk tolerance preferences.]"
},
{
"code": null,
"e": 2157,
"s": 1923,
"text": "The CAPM methodology describes the relationship between the expected return of an asset and its exposure to systematic risk. To calculate the expected return of an asset given its risk, by way of CAPM, the following equation is used:"
},
{
"code": null,
"e": 2386,
"s": 2157,
"text": "By manipulating this equation, we can utilise financial asset data and regression analysis to evaluate firstly whether the CAPM theory holds up in practice and secondly to evaluate the performance of particular financial assets."
},
{
"code": null,
"e": 2590,
"s": 2386,
"text": "This methodology follows from Jensen (1968) in which the performance of US mutual funds was systematically examined to determine whether any were able to “outperform the market” (Brooks 2008, pp. 67–81)."
},
{
"code": null,
"e": 3363,
"s": 2590,
"text": "Listed Investment Companies (LICs) — A LIC is similar to a Managed Mutual Fund, however, investors are able to buy and sell shares in a LIC just like ordinary shares on a stock exchange. As such, historical pricing data is readily available online and can be easily sourced to conduct our analysis. Generally, the aim of holding LIC securities is to gain access to the skills and expertise of the LIC managers who utilise active investment strategies to outperform a defined benchmark. For holding LIC securities, investors are charged a management fee in the range of 1.0–1.5% per annum. It is also not uncommon for LICs to charge performance fees when returns are above the defined benchmark, typically 10–20% of the return above that of the benchmark (Firstlinks 2016)."
},
{
"code": null,
"e": 3931,
"s": 3363,
"text": "Passive Exchange Traded Funds (ETFs) — Unlike the actively managed LICs, the aim of a passively managed ETF is to replicate the performance of defined benchmark, such as a market index (ASXETFS.com 2019). The fees an investor can expect to pay for holding a passive ETF are generally just a fraction of those charged by actively managed LICs. For example, our analysis uses the Vanguard Australian Shares Index ETF (VAS) which seeks to track the return of the S&P/ASX 300 Index before taking into account fees, which as of May 2020 are 0.1% per annum (Vanguard 2020)."
},
{
"code": null,
"e": 4264,
"s": 3931,
"text": "Why would an investor prefer an LIC over a passive ETF? — Using the CAPM terminology introduced above, the main reason for an investor choosing a LIC investment over a passive ETF is the belief that the active investment strategies employed by the LIC managers will yield “alpha” returns (alpha > 0) whilst minimising risk exposure."
},
{
"code": null,
"e": 4774,
"s": 4264,
"text": "The following sections will utilise the CAPM methodology of Jensen (1968) to examine the historical performance of Australian LICs by obtaining estimates for the alpha and beta parameters of each security. Further, we will use our findings to evaluate the potential for building portfolios to replicate the returns of the LIC assets using a combination of a passive index tracking exchange traded fund (ETF), namely the Vanguard Australian Shares Index ETF (VAS), and Australian Commonwealth Government bonds."
},
{
"code": null,
"e": 5177,
"s": 4774,
"text": "The first order of business for our analysis is to obtain the required data. We have decided to use monthly returns data for the past 5 years as this is a common timeframe and frequency for calculating CAPM parameters*, however, the analysis and code can be easily adapted to accommodate alternate timeframes and frequency. For our analysis we have used returns data from 2015–06–01 through 2020–03–01."
},
{
"code": null,
"e": 5271,
"s": 5177,
"text": "[* this is currently the timeframe and frequency used for the beta calculation on Yahoo (Au)]"
},
{
"code": null,
"e": 5420,
"s": 5271,
"text": "LIC List — A list of the 118 LICs currently trading on the ASX was obtained from https://www.asxlics.com, imported to R as a data frame and cleaned."
},
{
"code": null,
"e": 5743,
"s": 5420,
"text": "# IMPORT AND TIDY LIC LIST #################################LICs <- read.csv(\"https://www.asxlics.com/uploads/csv/20200401-lics.csv\", header = TRUE)n <- nrow(LICs)LICs <- LICs[c(2:n), 1:3]lic.colnames <- c(\"Code\", \"Company\", \"Market Cap\")names(LICs) <- lic.colnamesticker <- as.character(LICs[,1])row.names(LICs) <- ticker"
},
{
"code": null,
"e": 6057,
"s": 5743,
"text": "Risk-free rate data — Sovereign government bonds are widely used in finance as “risk-free” assets (Investopedia 2020b). For our analysis, we will be using the yield data on Australian Commonwealth Government bonds with 5 years to maturity*. This data can be sourced from the Reserve Bank of Australia’s webpage**."
},
{
"code": null,
"e": 6176,
"s": 6057,
"text": "[* the 5-year timeframe was chosen simply as it paired nicely with the 5 year timeframe selected for data collection.]"
},
{
"code": null,
"e": 6273,
"s": 6176,
"text": "[** prior to importing this data into R, we quickly formatted the dates to YYYY-MM-DD in Excel.]"
},
{
"code": null,
"e": 6639,
"s": 6273,
"text": "# IMPORT AND TIDY RISK-FREE RATE DATA ######################Rf <- import(\"f2.1-data.csv\") ## need to have manually formatted dates to YYYY-MM-DD in Exceln <- nrow(Rf)Rf <- Rf[c(12:n), c(1, 4)]Rf <- Rf[!apply(Rf == \"\", 1, all),]Rf$V1 <- as.Date(Rf$V1)Rf$V4 <- as.numeric(Rf$V4)Rf$V4 <- ((1+(Rf$V4/100))^(1/12)-1)Rf <- xts(Rf$V4, order.by = Rf$V1)names(Rf) <- c(\"Rf\")"
},
{
"code": null,
"e": 7507,
"s": 6639,
"text": "Benchmark Index Data — We have elected to use pricing data for the passively managed Vanguard Australian Shares Index ETF (VAS) as a proxy for the market returns. The fund seeks to track the return of the S&P/ASX 300 Index, therefore we would expect to obtain comparable results if we were to explicitly use direct price data of that market index or, to a lesser extent, another index such as the All Ordinaries or S&P/ASX 200. The reason for using the Vanguard ETF data is that it will enable us to readily produce replication portfolios following our initial CAPM modelling. Historical price data for VAS was obtained from Yahoo Finance. We use “Adj. Close” price data as this data series is adjusted for dividend distributions and enables our analysis to consider the dividend cashflows an investor would have been entitled to, as well as the capital gains return."
},
{
"code": null,
"e": 7890,
"s": 7507,
"text": "# IMPORT AND TIDY BENCHMARK DATA ###########################Rb <- read.csv(\"https://query1.finance.yahoo.com/v7/finance/download/VAS.AX?period1=1430265600&period2=1588118400&interval=1mo&events=history\")n <- nrow(Rb)Rb <- Rb[c(1:n-1), c(1,6)]Rb$Date <- as.Date(Rb[, 1])Rb <- xts(Rb$`Adj.Close`, order.by = Rb$Date)names(Rb) <- c(\"Rb\")Rb$Rb <- Return.calculate(Rb$Rb, method = \"log\")"
},
{
"code": null,
"e": 8154,
"s": 7890,
"text": "LIC Data — The list of LICs was then used to obtain historical pricing data for each from Yahoo Finance. Again “Adj. Close” price data is used. Using R, the data was compiled into a single xts object along with the risk-free data and benchmark data, then trimmed."
},
{
"code": null,
"e": 9440,
"s": 8154,
"text": "# IMPORT AND TIDY LIC DATA #################################url_f <- \"https://query1.finance.yahoo.com/v7/finance/download/\"url_e <- \".AX?period1=1430265600&period2=1588118400&interval=1mo&events=history\"n <- nrow(LICs)data <- merge(Rf, Rb)k <- nrow(data)for(i in 1:n){ url_temp_ch <- as.character(LICs[i,1]) url_temp <- paste(url_f, url_temp_ch, url_e, sep = \"\") Ra_temp <- data.frame(rep(NA, k)) try(Ra_temp <- read.csv(url_temp, na.strings = c(\"null\")), silent = T) n_temp <- nrow(Ra_temp) try(Ra_temp <- Ra_temp[c(1:n_temp-1), c(1,6)], silent = T) if(is.na(Ra_temp[1, 1]) != TRUE){ Ra_temp$Date <- as.Date(Ra_temp[, 1]) Ra_temp <- xts(Ra_temp$`Adj.Close`, order.by = Ra_temp$Date) header <- as.character(LICs[i,1]) names(Ra_temp) <- header Ra_temp[, 1] <- Return.calculate(Ra_temp[, 1], method = \"log\") data <- merge(data, Ra_temp) rm(Ra_temp) } else if(is.na(Ra_temp[1, 1]) == TRUE){ data_temp <- data data_temp$Rf <- rep(data_temp[1, 2], k) data_temp <- data_temp$Rf header <- as.character(LICs[i,1]) names(data_temp) <- header data <- merge(data, data_temp) rm(data_temp) }}n <- nrow(data)data <- data[complete.cases(data[1:n, c(1, 2)]),]LIC.list <- names(data)names(data) <- LIC.listn <- ncol(data)LIC.list <- LIC.list[3:n]"
},
{
"code": null,
"e": 9726,
"s": 9440,
"text": "Generating the CAPM Variables — As indicated above, the CAPM regression analysis requires us to calculate the “Excess Returns” for each LIC and the “Market Risk Premium”. These are calculated and added to a new data frame called “capm.data” in case we wish to export and save the data."
},
{
"code": null,
"e": 10139,
"s": 9726,
"text": "# GENERATE CAPM VARIABLES ##################################n <- ncol(data)capm.data <- as.xts(merge(data$Rf, data$Rb-data$Rf))names(capm.data) <- c(\"Rf\", \"mrp\")for(i in 3:n){ Ra.Er_temp <- as.xts(data[, i]-data$Rf) header <- as.character(names(data)) header <- paste(header, \".Er\", sep = \"\") names(Ra.Er_temp) <- header[i] capm.data <- merge(capm.data, Ra.Er_temp)}n <- ncol(capm.data)LICs$Code <- LIC.list"
},
{
"code": null,
"e": 10792,
"s": 10139,
"text": "Having compiled our data, we are now ready to calculate the CAPM parameters alpha and beta. To do so, we use a series of linear regressions with the “Excess Returns” of our LICs as our dependent variables and the “Market Risk Premium” as our explanatory variable. It is important to note that not all of our LICs have data for the entire 5 year period, and as such, some of the parameters will be less precisely estimated given the fewer data points available*. The below code calculates the parameters alpha and beta, as well as using a t-test function to produce p-values so we can have some idea of how accurately the parameters have been estimated."
},
{
"code": null,
"e": 10881,
"s": 10792,
"text": "[* again, one may prefer to adjust the timeframe and frequency to alleviate this issue.]"
},
{
"code": null,
"e": 11904,
"s": 10881,
"text": "# LOAD t-TEST FUNCTION #####################################ttest <- function(reg, coefnum, val){ co <- coef(summary(reg)) tstat <- (co[coefnum,1]-val)/co[coefnum,2] 2 * pt(abs(tstat), reg$df.residual, lower.tail = FALSE)}# CALCULATE CAPM PARAMETERS ################################n <- ncol(capm.data)capm.para <- data.frame()for(i in 3:n){ try( capm <- lm(capm.data[, i] ~ capm.data$mrp) , silent = T) para.temp <- data.frame(rep(0, 4)) try(para.temp <- capm$coefficients, silent = T) para.temp <- as.data.frame(para.temp) para.temp <- as.data.frame(transpose(para.temp)) try(para.temp[1, 3] <- ttest(capm, 1, 0), silent = T) try(para.temp[1, 4] <- ttest(capm, 2, 1), silent = T) names(para.temp) <- c(\"alpha\", \"beta\", \"alpha(0) ~ Pr(>|t|)\", \"beta(1) ~ Pr(>|t|)\") row.names(para.temp) <- as.character(LICs[i-2,1]) capm.para[i-2, 1:4] <- para.temp[1, 1:4] try(rm(capm), silent = T) rm(para.temp)}row.names(LICs) <- LICs$CodeLICs <- merge(LICs, capm.para, by.x = 1, by.y = 0, all.x = TRUE, all.y = TRUE)"
},
{
"code": null,
"e": 12080,
"s": 11904,
"text": "For beta, we have simply tested the hypothesis that our estimated value is indifferent from 1, that is that the LIC security tends to move in line with the market as a whole*."
},
{
"code": null,
"e": 12433,
"s": 12080,
"text": "[* this test and subsequent p-value is not overly relevant to our particular line of analysis, however, the t-test can be modified to test whether is indifferent from any particular value so could be useful if we chose to drill our analysis down a bit deeper into the individual LICs and how they have historically reacted to broader market movements.]"
},
{
"code": null,
"e": 12812,
"s": 12433,
"text": "For alpha, our hypothesis is that our estimated value is indifferent from zero, indicating that we have not found the LIC managers to be outperforming the market, or evidence to suggest the CAPM does not hold based on the data we have collected. Therefore we simply need to check whether we have obtained a positive and statistically significant alpha value for any of the LICs."
},
{
"code": null,
"e": 12969,
"s": 12812,
"text": "# CHECK FOR POSITIVE AND SIGNIFICANT ALPHA RETURNS #########LICs$alpha.rtns <- ifelse(LICs$`alpha(0) ~ Pr(>|t|)`<= 0.05 & LICs$alpha > 0.0, \"TRUE\", \"FALSE\")"
},
{
"code": null,
"e": 13806,
"s": 12969,
"text": "Our results show that over this period no LIC achieved a statistically significant (at the 95% confidence level), and positive alpha return. In fact, there was only the one LIC to have an estimated alpha value statistically different from zero, and in that case, the alpha value was -0.0051, indicating that this LIC had under-performed the market relative to the systematic risk exposure. If we widen our “significance” cut-off to the 90% confidence level, we only find one additional LICs with a significant alpha value, again it is estimated to be negative indicating under-performance. We can, therefore, conclude that our analysis was unable to find evidence to suggest that actively managed LICs yielded returns for investors that were significantly different from the returns of the broader market, relative to the risk exposure."
},
{
"code": null,
"e": 14513,
"s": 13806,
"text": "Since we have now seen that the LICs have not outperformed the market by delivering statistically significant higher returns, we are able to make portfolios combining the risk-free asset and the market tracking ETF that replicate the return of the LIC with a lower risk exposure. From our findings regarding alpha returns above, we know that the expected return of our replication portfolio and those of the LIC securities will not be statistically different from one-another*. Therefore, if we can show our replication portfolios exhibit less variation in returns when compared to the relative LIC, we can be confident that these portfolios deliver comparable returns to an investor for less risk expose."
},
{
"code": null,
"e": 14667,
"s": 14513,
"text": "[* it is also worth reminding ourselves at this point that we have not accounted for any management or performance fees that may be charged by the LICs.]"
},
{
"code": null,
"e": 14795,
"s": 14667,
"text": "We first need to calculate the standard deviation of the LIC excess returns using the data series produced earlier of each LIC."
},
{
"code": null,
"e": 15220,
"s": 14795,
"text": "# CALCULATE SD(x) FOR LICs #################################k <- ncol(data)-2sd.temp <- as.numeric(vector())er.list <- names(capm.data)n <- nrow(er.list)er.list <- er.list[3:(k+2)]for(i in 1:k){ sd.temp[i] <- STDEV(capm.data[, er.list[i]])}sd.temp <- as.data.frame(as.numeric(sd.temp))row.names(sd.temp) <- LIC.listnames(sd.temp) <- c(\"SD(ER_at)\")LICs <- merge(LICs, sd.temp, by.x = 1, by.y = 0, all.x = TRUE, all.y = TRUE)"
},
{
"code": null,
"e": 15492,
"s": 15220,
"text": "Next, we need to calculate the excess returns for our replication portfolios by first calculating the historical return for each period, then subtracting the risk-free rate. From this, the standard deviations of the replication portfolio excess returns can be calculated."
},
{
"code": null,
"e": 16477,
"s": 15492,
"text": "# CALCULATE SD(x) FOR REP. PORT. ###########################k <- nrow(data)j <- nrow(LICs)sd.temp <- as.numeric(vector())for(i in 1:j){ beta.temp <- as.data.frame(rep(LICs[i, 5], k)) rep.port.temp <- beta.temp Rf.temp <- as.numeric(data$Rf) rep.port.temp <- add_column(rep.port.temp, Rf.temp, .after = 100) rtn.temp <- as.data.frame(data[, 2]) rep.port.temp <- add_column(rep.port.temp, rtn.temp, .after = 100) names(rep.port.temp) <- c(\"Beta\", \"Rf\", \"Rtn\") port.temp <- (1-rep.port.temp$Beta)*rep.port.temp$Rf+rep.port.temp$Beta*rep.port.temp$Rtn rep.port.temp <- add_column(rep.port.temp, port.temp, .after = 100) names(rep.port.temp) <- c(\"Beta\", \"Rf\", \"Rtn\", \"Port. Rtn\") rep.port.temp$`Port. Rtn` <- as.numeric(unlist(rep.port.temp$`Port. Rtn`)) rep.port.temp$Rtn <- as.numeric(unlist(rep.port.temp$Rtn)) rep.port.temp$Exc.Port.Rtn <- as.numeric(unlist(rep.port.temp$`Port. Rtn`-rep.port.temp$Rf)) sd.temp[i] <- STDEV(rep.port.temp[, 5])}LICs$\"SD(ER_pt)\" <- sd.temp"
},
{
"code": null,
"e": 16646,
"s": 16477,
"text": "Finally, we check whether the standard deviation of the returns from our replication portfolios is lower than that of the LIC securities, indicating less risk exposure."
},
{
"code": null,
"e": 16801,
"s": 16646,
"text": "# COMPARE SD(x) PERFORMANCE ################################LICs$'Lower Rep. Port. Risk?' <- ifelse(LICs$`SD(ER_pt)` <= LICs$`SD(ER_at)`, \"TRUE\", \"FALSE\")"
},
{
"code": null,
"e": 17053,
"s": 16801,
"text": "The results show that for each LIC in our analysis we have been able to produce a replication portfolio, combining a passive index tracking ETF and risk-free assets that will produce a comparable rate of return while exposing an investor to less risk."
},
{
"code": null,
"e": 17751,
"s": 17053,
"text": "We have shown through this analysis that over our sample period (2015–06–01 through 2020–03–01) we fail to reject our hypothesis that LIC managers were unable consistently deliver returns above those offered by holding a replication portfolio of a passive index tracking ETF and risk-free assets. Further, investors holding LIC securities are being charged a higher management fee, and in some cases, a performance fee, to hold these securities with the belief that the expertise of their LIC manager will deliver the highest possible return for the lowest possible risk exposure. Yet we have seen that by holding a replication portfolio, investors will also be able to reduce their risk exposure."
},
{
"code": null,
"e": 18194,
"s": 17751,
"text": "It is important to note that, as with all financial assets, past performance is not always a reliable indicator of future performance. The analysis above has used 5 years of monthly price data, using different timeframes and periods will likely yield different results. I have posted this article not with the intention of providing financial advice, but to share the methodology and coding for others to replicate with their own preferences."
},
{
"code": null,
"e": 18366,
"s": 18194,
"text": "Thanks for reading all the way to the end of the article! I’d love to hear any comments about the above. Feel free to leave a message, or reach out to me through LinkedIn."
},
{
"code": null,
"e": 18539,
"s": 18366,
"text": "# Packages #############################pacman::p_load(pacman, expss, jtools, NCmisc, PerformanceAnalytics, purrr, quantmod, rio, stats, tibble, tidyquant, utils, xts, zoo)"
},
{
"code": null,
"e": 18839,
"s": 18539,
"text": "Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details."
},
{
"code": null,
"e": 18906,
"s": 18839,
"text": "ASXETFS.com (2019) — https://www.asxetfs.com [accessed 08/05/2020]"
},
{
"code": null,
"e": 18973,
"s": 18906,
"text": "ASXLICS.com (2019) — https://www.asxlics.com [accessed 08/05/2020]"
},
{
"code": null,
"e": 19071,
"s": 18973,
"text": "Brooks, C 2008, Introductory Econometrics for Finance, Second Edition, Cambridge University Press"
},
{
"code": null,
"e": 19176,
"s": 19071,
"text": "Firstlinks (2016) — https://www.firstlinks.com.au/understanding-lic-fee-structures [accessed 08/05/2020]"
},
{
"code": null,
"e": 19279,
"s": 19176,
"text": "Investopedia (2019a) — https://www.investopedia.com/terms/u/unsystematicrisk.asp [accessed 08/05/2020]"
},
{
"code": null,
"e": 19380,
"s": 19279,
"text": "Investopedia (2019b) — https://www.investopedia.com/terms/s/systematicrisk.asp [accessed 08/05/2020]"
},
{
"code": null,
"e": 19471,
"s": 19380,
"text": "Investopedia (2020a) — https://www.investopedia.com/terms/c/capm.asp [accessed 08/05/2020]"
},
{
"code": null,
"e": 19571,
"s": 19471,
"text": "Investopedia (2020b) — https://www.investopedia.com/terms/r/riskfreeasset.asp [accessed 08/05/2020]"
},
{
"code": null,
"e": 19685,
"s": 19571,
"text": "Reserve Bank of Australia (2020) — https://www.rba.gov.au/statistics/tables/#interest-rates [accessed 08/05/2020]"
},
{
"code": null,
"e": 19838,
"s": 19685,
"text": "Vanguard (2020) — https://www.vanguardinvestments.com.au/retail/ret/investments/product.html#/fundDetail/etf/portId=8205/?overview [accessed 08/05/2020]"
}
] |
Python 3 - List append() Method
|
The append() method appends the passed object obj to the end of the existing list.
Following is the syntax for the append() method −
list.append(obj)
obj − This is the object to be appended to the list.
This method does not return any value but updates the existing list in place.
The following example shows the usage of append() method.
#!/usr/bin/python3
list1 = ['C++', 'Java', 'Python']
list1.append('C#')
print ("updated list : ", list1)
When we run the above program, it produces the following result −
updated list : ['C++', 'Java', 'Python', 'C#']
|
[
{
"code": null,
"e": 2405,
"s": 2340,
"text": "The append() method appends a passed obj into the existing list."
},
{
"code": null,
"e": 2451,
"s": 2405,
"text": "Following is the syntax for append() method −"
},
{
"code": null,
"e": 2469,
"s": 2451,
"text": "list.append(obj)\n"
},
{
"code": null,
"e": 2522,
"s": 2469,
"text": "obj − This is the object to be appended in the list."
},
{
"code": null,
"e": 2587,
"s": 2522,
"text": "This method does not return any value but updates existing list."
},
{
"code": null,
"e": 2645,
"s": 2587,
"text": "The following example shows the usage of append() method."
},
{
"code": null,
"e": 2751,
"s": 2645,
"text": "#!/usr/bin/python3\n\nlist1 = ['C++', 'Java', 'Python']\nlist1.append('C#')\nprint (\"updated list : \", list1)"
},
{
"code": null,
"e": 2813,
"s": 2751,
"text": "When we run above program, it produces the following result −"
},
{
"code": null,
"e": 2862,
"s": 2813,
"text": "updated list : ['C++', 'Java', 'Python', 'C#']\n"
},
{
"code": null,
"e": 2899,
"s": 2862,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 2915,
"s": 2899,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 2948,
"s": 2915,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 2967,
"s": 2948,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3002,
"s": 2967,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 3024,
"s": 3002,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3058,
"s": 3024,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3086,
"s": 3058,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3121,
"s": 3086,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3135,
"s": 3121,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3168,
"s": 3135,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3185,
"s": 3168,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 3192,
"s": 3185,
"text": " Print"
},
{
"code": null,
"e": 3203,
"s": 3192,
"text": " Add Notes"
}
] |
Changing the Position of List Markers in CSS
|
The CSS list-style-position property is used to set the marker position of list items. The default value for this property is outside, which sets the marker outside the list item.
The syntax of CSS list-style-position property is as follows −
Selector {
list-style-position: /*value*/
}
The following examples illustrate the CSS list-style-position property −
<!DOCTYPE html>
<html>
<head>
<style>
li {
   width: 50%;
   margin: 5px;
   font-size: 120%;
   box-shadow: 0 0 3px 1px black;
   background: url("https://www.tutorialspoint.com/dbms/images/dbms.jpg") no-repeat 32px 8px;
   list-style-position: inside;
   padding: 0 0 10px 20px;
}
ol ol li {
   list-style: lower-roman;
   list-style-position: outside;
}
</style>
</head>
<body>
<ol>
   <li>Black</li>
   <li>
      Blue
      <ol>
         <li>Green</li>
         <li>Red</li>
      </ol>
   </li>
   <li>Yellow</li>
   <li>Red</li>
</ol>
</body>
</html>
This gives the following output −
<!DOCTYPE html>
<html>
<head>
<style>
ul {
   width: 200px;
   box-shadow: inset 0 0 6px green;
   list-style-position: outside;
}
ul + ul {
   list-style-type: circle;
   list-style-position: inside;
}
</style>
</head>
<body>
<ul>
   <li>demo</li>
   <li>demo</li>
   <li>demo</li>
</ul>
<ul>
   <li>demo</li>
   <li>demo</li>
   <li>demo</li>
</ul>
</body>
</html>
This gives the following output −
|
[
{
"code": null,
"e": 1237,
"s": 1062,
"text": "The CSS list-style-position property is used to set marker position of list items. The default value for this property is outside which sets the marker outside the list item."
},
{
"code": null,
"e": 1300,
"s": 1237,
"text": "The syntax of CSS list-style-position property is as follows −"
},
{
"code": null,
"e": 1347,
"s": 1300,
"text": "Selector {\n list-style-position: /*value*/\n}"
},
{
"code": null,
"e": 1407,
"s": 1347,
"text": "The following examples illustrate CSS list-style-property −"
},
{
"code": null,
"e": 1418,
"s": 1407,
"text": " Live Demo"
},
{
"code": null,
"e": 1925,
"s": 1418,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<style>\nli {\n width: 50%;\n margin: 5px;\n font-size: 120%;\n box-shadow: 0 0 3px 1px black;\n background: url(\"https://www.tutorialspoint.com/dbms/images/dbms.jpg\") no-repeat 32px 8px;\n list-style-position: inside;\n padding: 0 0 10px 20px;\n}\nol ol li {\n list-style: lower-roman;\n list-style-position: outside;\n}\n</style>\n</head>\n<body>\n<ol>\n<li>Black</li>\n<li>\nBlue\n<ol>\n<li>Green</li>\n<li>Red</li>\n</ol>\n</li>\n<li>Yellow</li>\n<li>Red</li>\n</ol>\n</body>\n</html>"
},
{
"code": null,
"e": 1959,
"s": 1925,
"text": "This gives the following output −"
},
{
"code": null,
"e": 1970,
"s": 1959,
"text": " Live Demo"
},
{
"code": null,
"e": 2319,
"s": 1970,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<style>\nul {\n width: 200px;\n box-shadow: inset 0 0 6px green;\n list-style-position: outside;\n}\nul + ul {\n list-style-type: circle;\n list-style-position: inside;\n}\n</style>\n</head>\n<body>\n<ul>\n<li>demo</li>\n<li>demo</li>\n<li>demo</li>\n</ul>\n<ul>\n<li>demo</li>\n<li>demo</li>\n<li>demo</li>\n</ul>\n</body>\n</html>"
},
{
"code": null,
"e": 2353,
"s": 2319,
"text": "This gives the following output −"
}
] |
How to change the Y-axis values in a bar plot using ggplot2 in R?
|
A bar plot is frequently used to analyze the number of times a level of a factor variable occurs in a data set, and the Y-axis values are crucial to the bar plot. Sometimes these values are not in the form we want; therefore, we want to replace them with new ones. This can be done with the help of the breaks argument of the scale_y_continuous function in ggplot2.
Consider the below data frame −
> set.seed(1)
> x<-rpois(50,5)
> df<-data.frame(x)
Loading ggplot2 package −
> library(ggplot2)
Creating the plot without specifying the Y-axis values −
> ggplot(df,aes(x))+
+ geom_bar()
Plotting with new Y-axis values −
> ggplot(df,aes(x))+
+ geom_bar()+
+ scale_y_continuous(breaks=c(0,2,4,6,8,10))
|
[
{
"code": null,
"e": 1419,
"s": 1062,
"text": "Bar plot is frequently used to analyze the number of times a level of factor variable occurs in a data set and the Y-axis values are crucial to the bar plot. Sometimes these values are not in the form we want, therefore, we want to replace them with the new ones. This can be done with the help of breaks argument of scale_y_continuous function in ggplot2."
},
{
"code": null,
"e": 1451,
"s": 1419,
"text": "Consider the below data frame −"
},
{
"code": null,
"e": 1502,
"s": 1451,
"text": "> set.seed(1)\n> x<-rpois(50,5)\n> df<-data.frame(x)"
},
{
"code": null,
"e": 1528,
"s": 1502,
"text": "Loading ggplot2 package −"
},
{
"code": null,
"e": 1547,
"s": 1528,
"text": "> library(ggplot2)"
},
{
"code": null,
"e": 1604,
"s": 1547,
"text": "Creating the plot without specifying the Y-axis values −"
},
{
"code": null,
"e": 1638,
"s": 1604,
"text": "> ggplot(df,aes(x))+\n+ geom_bar()"
},
{
"code": null,
"e": 1672,
"s": 1638,
"text": "Plotting with new Y-axis values −"
},
{
"code": null,
"e": 1752,
"s": 1672,
"text": "> ggplot(df,aes(x))+\n+ geom_bar()+\n+ scale_y_continuous(breaks=c(0,2,4,6,8,10))"
}
] |
How to populate an array one value at a time by taking input from user in Java?
|
To read data from the user, create a Scanner object. Read the size of the array to be created from the user using the nextInt() method. Create an array of the specified size. In a loop, read the values from the user and store them in the array created above.
import java.util.Arrays;
import java.util.Scanner;
public class PopulatingAnArray {
public static void main(String args[]) {
System.out.println("Enter the required size of the array :: ");
Scanner s = new Scanner(System.in);
int size = s.nextInt();
int myArray[] = new int [size];
System.out.println("Enter the elements of the array one by one ");
for(int i=0; i<size; i++) {
myArray[i] = s.nextInt();
}
System.out.println("Contents of the array are: "+Arrays.toString(myArray));
}
}
Enter the required size of the array ::
5
Enter the elements of the array one by one
78
96
45
23
45
Contents of the array are: [78, 96, 45, 23, 45]
|
[
{
"code": null,
"e": 1309,
"s": 1062,
"text": "To read data from user create a scanner class. Read the size of the array to be created from the user using nextInt() method. Create an array with the specified size. In the loop read the values from the user and store in the array created above."
},
{
"code": null,
"e": 1857,
"s": 1309,
"text": "import java.util.Arrays;\nimport java.util.Scanner;\n\npublic class PopulatingAnArray {\n public static void main(String args[]) {\n System.out.println(\"Enter the required size of the array :: \");\n Scanner s = new Scanner(System.in);\n int size = s.nextInt();\n int myArray[] = new int [size];\n System.out.println(\"Enter the elements of the array one by one \");\n for(int i=0; i<size; i++) {\n myArray[i] = s.nextInt();\n }\n System.out.println(\"Contents of the array are: \"+Arrays.toString(myArray));\n }\n}"
},
{
"code": null,
"e": 2005,
"s": 1857,
"text": "Enter the required size of the array ::\n5\nEnter the elements of the array one by one\n78\n96\n45\n23\n45\nContents of the array are: [78, 96, 45, 23, 45]"
}
] |
A Guide to Streamlit — Frontend for Data Science Made Simpler | by Yash Prakash | Towards Data Science
|
Multiple times in our data science job or even in a cool side project that we dedicate our time and energy to, we face the dilemma of how to showcase our work properly. Should we build a new webpage ourselves, make sure our model works as an API and finally, put it all together as a package and expect someone else to run and interact with it the same way we did?
It’s hard to do the frontend work.
Thus, for those of us backend people who never bothered to toil and learn the nuances of HTML, CSS, and JavaScript, Streamlit is here to save us.
In brief, this is the library that allows us to build frontend for our machine learning and data science apps by writing all the code in Python. Beautiful UIs can easily be designed through numerous components from the library. This is the full documentation from their website.
This means you can have — buttons, pretty text displays, scrollable boxes, drop-down lists, file upload functionalities — all inside of your python project with minimal effort.
Let’s get started with the Rock Paper Scissors project that we’ve been working on from this article:
towardsdatascience.com
If you haven’t read that article yet, this is the work we did there in brief — The Rock Paper scissors dataset is the collection of images from three classes of images, you guessed it — Rock, Paper and Scissor hand gestures. We perform image classification on the dataset using some quick transfer learning with the help of the Fastai deep learning library.
We imported the dataset, trained the model and saved it in this article.
Now comes the part where we design an interactive interface around it.
Just in case you were wondering about the whole codebase: it is all here. Feel free to follow along as you read this tutorial. :)
github.com
We will be adding the following two methods of predicting an image to our app. Either we can:
choose from a list of test images to predict, or
upload a new test image to predict
This sounds pretty simple right? Yes, that it is! But with this, you will gain the knowledge of using a lot of different streamlit components in your further projects as well.
We should now go ahead and make sure we install the library.
Do this from the terminal while inside your virtual environment:
pipenv shell # activate your virtual env
pipenv install streamlit # install streamlit
Now go ahead and open up your project directory in your favourite code editor.
For this app, we will be building a full containerised (dockerized) shareable application in the future, so make sure you build this project structure first:
The entirety of the app will be living inside the app directory.
Our project needs to sit in the module directory.
Our trained model (pickled) will be housed in models directory.
And finally, we will have our start.py, the main file to run the app outside the module but inside the app directory.
The other two files are just pipenv files made when using the virtual environment.
Now that we have gotten that out of the way, let’s write some code!
The whole app consists of just some helper functions and one main function to run them all.
Let’s see what’s inside load_model.py
Define the trained model path and import fastai:
from fastai.vision.all import *

SAVED_MODEL_PATH = './models/trained_model.pkl'
Finally, define the classifier function:
def _get_model():
    model = load_learner(SAVED_MODEL_PATH)
    return model

def perform_prediction(image_path):
    model = _get_model()
    pred = model.predict(PILImage.create(image_path))
    return pred[0]
We also define another function to open and display an image via the PIL library:
def get_opened_image(image):
    return Image.open(image)
Now, we can move on to the start.py.
Let’s connect our helper function in the load_model.py module with a function in this file:
def classify(image_file):
    pred = perform_prediction(image_file)
    return pred
Let’s now get on with the streamlit app.
Import the library:
import streamlit as st
First, we define the title of the app like so:
st.sidebar.title("RPS Image Classifier")
Next, we make sure we have a main function. Let’s create a sidebar — to house all the interactive components in the app.
Now, we define the functionality to upload an image.
image_file = st.sidebar.file_uploader('Upload an image', type = 'png')
Finally, we make a button that helps us load our image into the app and display it:
if image_file and st.sidebar.button('Load'):
    image = get_opened_image(image_file)
    with st.beta_expander('Selected Image', expanded = True):
        st.image(image, use_column_width = True)
Now, we include the option of selecting an existing image file as well:
image_file_chosen = st.sidebar.selectbox('Select an existing image:', get_list_of_images())
Here, get_list_of_images is a new function we define in the load_model.py:
def get_list_of_images():
    file_list = os.listdir(PATH_TO_TEST_IMAGES)
    return [str(filename) for filename in file_list if str(filename).endswith('.png')]
Pheww! This should now be enough for our UI.
This is how your app should look right now:
These are the components we’ve built:
st.title — the component to write the title. Other text components for various use cases are: st.text, st.write, st.markdown. We’ll be using all of that further in here too.
st.sidebar — to build the sidebar you see in the above image.
st.sidebar.file_uploader — to make a widget of file uploader available in the sidebar. You can also use it simply without the sidebar, like this: st.file_uploader. Then, it will appear in the main area which looks blank in the above image.
st.image — The widget to simply show the image loaded into the app.
st.selectbox — to make a drop-down list of items (images here) to select instead of uploading a file yourself.
This is the part we’ve been waiting for, isn’t it? Well, thanks to fastai, this step is very simple too.
def perform_prediction(image_path):
    model = _get_model()
    pred = model.predict(PILImage.create(image_path))
    return pred[0]

def _get_model():
    model = load_learner(SAVED_MODEL_PATH)
    return model
These two functions help us predict on any given image. We load the learner from fastai, and then we return the result. In this case, it is the first item in the pred variable. Hence, pred[0].
The only thing we need to do now is to link this function inside our start.py.
prediction = classify(os.path.join(PATH_TO_TEST_IMAGES, image_file))
And that’s it!
Let’s display the results as well!
st.subheader('Prediction')
st.markdown(f'The predicted label is: **{prediction}**')
The app in action now looks like so:
If you’ve followed along with me thus far, take a moment to congratulate yourself on learning the fundamentals of a cool, useful library! The ease of building a frontend for a data science project had never been so great and you should definitely try to apply more of this in your future projects. I surely will!
In the next article, I will be containerising this app with the help of Docker. So do stay tuned for that!
Thank you for reading!
Here is the master codebase of all my Data Science articles:
github.com
I also write about Data Science and Software Engineering on Twitter.
|
[
{
"code": null,
"e": 537,
"s": 172,
"text": "Multiple times in our data science job or even in a cool side project that we dedicate our time and energy to, we face the dilemma of how to showcase our work properly. Should we build a new webpage ourselves, make sure our model works as an API and finally, put it all together as a package and expect someone else to run and interact with it the same way we did?"
},
{
"code": null,
"e": 572,
"s": 537,
"text": "It’s hard to do the frontend work."
},
{
"code": null,
"e": 717,
"s": 572,
"text": "Thus, for those of us backend people who never bothered to toil and learn the nuances of HTML, CSS and Javascript, Streamlit is here to save us."
},
{
"code": null,
"e": 996,
"s": 717,
"text": "In brief, this is the library that allows us to build frontend for our machine learning and data science apps by writing all the code in Python. Beautiful UIs can easily be designed through numerous components from the library. This is the full documentation from their website."
},
{
"code": null,
"e": 1173,
"s": 996,
"text": "This means you can have — buttons, pretty text displays, scrollable boxes, drop-down lists, file upload functionalities — all inside of your python project with minimal effort."
},
{
"code": null,
"e": 1274,
"s": 1173,
"text": "Let’s get started with the Rock Paper Scissors project that we’ve been working on from this article:"
},
{
"code": null,
"e": 1297,
"s": 1274,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1655,
"s": 1297,
"text": "If you haven’t read that article yet, this is the work we did there in brief — The Rock Paper scissors dataset is the collection of images from three classes of images, you guessed it — Rock, Paper and Scissor hand gestures. We perform image classification on the dataset using some quick transfer learning with the help of the Fastai deep learning library."
},
{
"code": null,
"e": 1728,
"s": 1655,
"text": "We imported the dataset, trained the model and saved it in this article."
},
{
"code": null,
"e": 1799,
"s": 1728,
"text": "Now comes the part where we design an interactive interface around it."
},
{
"code": null,
"e": 1929,
"s": 1799,
"text": "Just in case you were wondering about the whole codebase: it is all here. Feel free to follow along as you read this tutorial. :)"
},
{
"code": null,
"e": 1940,
"s": 1929,
"text": "github.com"
},
{
"code": null,
"e": 2034,
"s": 1940,
"text": "We will be adding the following two methods of predicting an image to our app. Either we can:"
},
{
"code": null,
"e": 2083,
"s": 2034,
"text": "choose from a list of test images to predict, or"
},
{
"code": null,
"e": 2118,
"s": 2083,
"text": "upload a new test image to predict"
},
{
"code": null,
"e": 2294,
"s": 2118,
"text": "This sounds pretty simple right? Yes, that it is! But with this, you will gain the knowledge of using a lot of different streamlit components in your further projects as well."
},
{
"code": null,
"e": 2355,
"s": 2294,
"text": "We should now go ahead and make sure we install the library."
},
{
"code": null,
"e": 2420,
"s": 2355,
"text": "Do this from the terminal while inside your virtual environment:"
},
{
"code": null,
"e": 2503,
"s": 2420,
"text": "pipenv shell #activate your virtual envpipenv install streamlit #install streamlit"
},
{
"code": null,
"e": 2582,
"s": 2503,
"text": "Now go ahead and open up your project directory in your favourite code editor."
},
{
"code": null,
"e": 2740,
"s": 2582,
"text": "For this app, we will be building a full containerised (dockerized) shareable application in the future, so make sure you build this project structure first:"
},
{
"code": null,
"e": 2805,
"s": 2740,
"text": "The entirety of the app will be living inside the app directory."
},
{
"code": null,
"e": 2855,
"s": 2805,
"text": "Our project needs to sit in the module directory."
},
{
"code": null,
"e": 2919,
"s": 2855,
"text": "Our trained model (pickled) will be housed in models directory."
},
{
"code": null,
"e": 3037,
"s": 2919,
"text": "And finally, we will have our start.py, the main file to run the app outside the module but inside the app directory."
},
{
"code": null,
"e": 3120,
"s": 3037,
"text": "The other two files are just pipenv files made when using the virtual environment."
},
{
"code": null,
"e": 3188,
"s": 3120,
"text": "Now that we have gotten that out of the way, let’s write some code!"
},
{
"code": null,
"e": 3280,
"s": 3188,
"text": "The whole app consists of just some helper functions and one main function to run them all."
},
{
"code": null,
"e": 3318,
"s": 3280,
"text": "Let’s see what’s inside load_model.py"
},
{
"code": null,
"e": 3367,
"s": 3318,
"text": "Define the trained model path and import fastai:"
},
{
"code": null,
"e": 3446,
"s": 3367,
"text": "from fastai.vision.all import *SAVED_MODEL_PATH = './models/trained_model.pkl'"
},
{
"code": null,
"e": 3487,
"s": 3446,
"text": "Finally, define the classifier function:"
},
{
"code": null,
"e": 3693,
"s": 3487,
"text": "def _get_model(): model = load_learner(SAVED_MODEL_PATH) return modeldef perform_prediction(image_path): model = _get_model() pred = model.predict(PILImage.create(image_path)) return pred[0]"
},
{
"code": null,
"e": 3774,
"s": 3693,
"text": "We also define another function to open and display an image via the PIL libary:"
},
{
"code": null,
"e": 3831,
"s": 3774,
"text": "def get_opened_image(image): return Image.open(image)"
},
{
"code": null,
"e": 3868,
"s": 3831,
"text": "Now, we can move on to the start.py."
},
{
"code": null,
"e": 3960,
"s": 3868,
"text": "Let’s connect our helper function in the load_model.py module with a function in this file:"
},
{
"code": null,
"e": 4042,
"s": 3960,
"text": "def classify(image_file): pred = perform_prediction(image_file) return pred"
},
{
"code": null,
"e": 4083,
"s": 4042,
"text": "Let’s now get on with the streamlit app."
},
{
"code": null,
"e": 4102,
"s": 4083,
"text": "Import the libary:"
},
{
"code": null,
"e": 4125,
"s": 4102,
"text": "import streamlit as st"
},
{
"code": null,
"e": 4172,
"s": 4125,
"text": "First, we define the title of the app like so:"
},
{
"code": null,
"e": 4213,
"s": 4172,
"text": "st.sidebar.title(\"RPS Image Classifier\")"
},
{
"code": null,
"e": 4334,
"s": 4213,
"text": "Next, we make sure we have a main function. Let’s create a sidebar — to house all the interactive components in the app."
},
{
"code": null,
"e": 4387,
"s": 4334,
"text": "Now, we define the functionality to upload an image."
},
{
"code": null,
"e": 4458,
"s": 4387,
"text": "image_file = st.sidebar.file_uploader('Upload an image', type = 'png')"
},
{
"code": null,
"e": 4542,
"s": 4458,
"text": "Finally, we make a button that helps us load our image into the app and display it:"
},
{
"code": null,
"e": 4748,
"s": 4542,
"text": "if image_file and st.sidebar.button('Load'): image = get_opened_image(image_file) with st.beta_expander('Selected Image', expanded = True): st.image(image, use_column_width = True)"
},
{
"code": null,
"e": 4820,
"s": 4748,
"text": "Now, we include the option of selecting an existing image file as well:"
},
{
"code": null,
"e": 4912,
"s": 4820,
"text": "image_file_chosen = st.sidebar.selectbox('Select an existing image:', get_list_of_images())"
},
{
"code": null,
"e": 4987,
"s": 4912,
"text": "Here, get_list_of_images is a new function we define in the load_model.py:"
},
{
"code": null,
"e": 5138,
"s": 4987,
"text": "def get_list_of_images():file_list = os.listdir(PATH_TO_TEST_IMAGES)return [str(filename) for filename in file_list if str(filename).endswith('.png')]"
},
{
"code": null,
"e": 5183,
"s": 5138,
"text": "Pheww! This should now be enough for our UI."
},
{
"code": null,
"e": 5227,
"s": 5183,
"text": "This is how your app should look right now:"
},
{
"code": null,
"e": 5265,
"s": 5227,
"text": "These are the components we’ve built:"
},
{
"code": null,
"e": 5439,
"s": 5265,
"text": "st.title — the component to write the title. Other text components for various use cases are: st.text, st.write, st.markdown. We’ll be using all of that further in here too."
},
{
"code": null,
"e": 5501,
"s": 5439,
"text": "st.sidebar — to build the sidebar you see in the above image."
},
{
"code": null,
"e": 5741,
"s": 5501,
"text": "st.sidebar.file_uploader — to make a widget of file uploader available in the sidebar. You can also use it simply without the sidebar, like this: st.file_uploader. Then, it will appear in the main area which looks blank in the above image."
},
{
"code": null,
"e": 5809,
"s": 5741,
"text": "st.image — The widget to simply show the image loaded into the app."
},
{
"code": null,
"e": 5920,
"s": 5809,
"text": "st.selectbox — to make a drop-down list of items (images here) to select instead of uploading a file yourself."
},
{
"code": null,
"e": 6025,
"s": 5920,
"text": "This is the part we’ve been waiting for, isn’t it? Well, thanks to fastai, this step is very simple too."
},
{
"code": null,
"e": 6231,
"s": 6025,
"text": "def perform_prediction(image_path): model = _get_model() pred = model.predict(PILImage.create(image_path)) return pred[0]def _get_model(): model = load_learner(SAVED_MODEL_PATH) return model"
},
{
"code": null,
"e": 6436,
"s": 6231,
"text": "These two functions help us predict on any given image. We load the learner module from fastai, and then we return back the result. In this case, it is the first item in the pred variable. Hence, pred[0]."
},
{
"code": null,
"e": 6515,
"s": 6436,
"text": "The only thing we need to do now is to link this function inside our start.py."
},
{
"code": null,
"e": 6584,
"s": 6515,
"text": "prediction = classify(os.path.join(PATH_TO_TEST_IMAGES, image_file))"
},
{
"code": null,
"e": 6599,
"s": 6584,
"text": "And that’s it!"
},
{
"code": null,
"e": 6634,
"s": 6599,
"text": "Let’s display the results as well!"
},
{
"code": null,
"e": 6717,
"s": 6634,
"text": "st.subheader('Prediction')st.markdown(f'The predicted label is: **{prediction}**')"
},
{
"code": null,
"e": 6754,
"s": 6717,
"text": "The app is action now looks like so:"
},
{
"code": null,
"e": 7067,
"s": 6754,
"text": "If you’ve followed along with me thus far, take a moment to congratulate yourself on learning the fundamentals of a cool, useful library! The ease of building a frontend for a data science project had never been so great and you should definitely try to apply more of this in your future projects. I surely will!"
},
{
"code": null,
"e": 7174,
"s": 7067,
"text": "In the next article, I will be containerising this app with the help of Docker. So do stay tuned for that!"
},
{
"code": null,
"e": 7197,
"s": 7174,
"text": "Thank you for reading!"
},
{
"code": null,
"e": 7258,
"s": 7197,
"text": "Here is the master codebase of all my Data Science articles:"
},
{
"code": null,
"e": 7269,
"s": 7258,
"text": "github.com"
}
] |
How to Auto-Detect the Date/Datetime Columns and Set Their Datatype When Reading a CSV File in Pandas | Towards Data Science
|
Say I have a CSV data file that I want to read into a Pandas dataframe, and some of its columns are dates or datetimes, but I don’t want to bother identifying/specifying the names of these columns in advance. Instead I would like to automatically obtain the datatypes shown in the df.info() output pictured above, where the appropriate columns have been automatically given a datetime datatype (green outline boxes). Here’s how to accomplish that:
from dt_auto import read_csv
df=read_csv('myfile.csv')
Note that I did not invoke pd.read_csv (the Pandas version of read_csv) above directly. My dt_auto.read_csv function (see its code down below) has invoked pd.read_csv() itself and then automatically detected and converted the datatype of the two detected datetime columns. (The contents of this df will be shown down below.)
If I had used the regular Pandas pd.read_csv(), I would have obtained merely generic object datatypes by default as below (red outline boxes):
from pandas import read_csv
df=read_csv('myfile.csv')
df.info()
Note that the only difference from the original code is in the import statement, where I changed “from dt_auto” to “from pandas”. This is sufficient so long as you use only “=read_csv()” throughout, not qualifying it as “=pd.read_csv()” or “=dt_auto.read_csv()”.
Here is the contents of my dt_auto.py (“datetime automatic”):
import pandas as pd

def dt_inplace(df):
    """Automatically detect and convert (in place!) each
    dataframe column of datatype 'object' to a datetime just
    when ALL of its non-NaN values can be successfully parsed
    by pd.to_datetime().  Also returns a ref. to df for
    convenient use in an expression.
    """
    from pandas.errors import ParserError
    for c in df.columns[df.dtypes=='object']: #don't cnvt num
        try:
            df[c]=pd.to_datetime(df[c])
        except (ParserError,ValueError): #Can't cnvrt some
            pass # ...so leave whole column as-is unconverted
    return df

def read_csv(*args, **kwargs):
    """Drop-in replacement for Pandas pd.read_csv. It invokes
    pd.read_csv() (passing its arguments) and then auto-
    matically detects and converts each column whose datatype
    is 'object' to a datetime just when ALL of the column's
    non-NaN values can be successfully parsed by
    pd.to_datetime(), and returns the resulting dataframe.
    """
    return dt_inplace(pd.read_csv(*args, **kwargs))
But isn’t this risky? What if one of the columns wasn’t entirely a datetime column? Of course you could have some obscure strings that just happen to look like dates but aren’t, but there is not much risk that this code will blindly convert or lose non-datetime strings, for two reasons:
1. This code will not convert any values in a column unless every non-NaN value in this column can successfully be parsed by pd.to_datetime and converted to a datetime. In other words, we will not let it ever convert a string to a pd.NaT (the “failure” result) because it can’t understand it as a datetime.
2. It will not attempt to convert columns that have already been interpreted as being any type other than object, i.e. any specific type like int64 or float64, even though pd.to_datetime would have happily (but likely undesirably) converted a number like 2000 to the date 2000-01-01.
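As a loose illustration of these guarantees — a toy sketch with made-up column names, not code from the article — the same try/except pattern converts a cleanly parseable column while leaving a mixed string column and an int64 column untouched:

```python
import pandas as pd

# Hypothetical toy frame: one cleanly parseable column, one mixed
# column, and one integer column (names are invented for illustration).
df = pd.DataFrame({
    "when": ["2021-01-05", "2021-02-06"],   # every value parses
    "note": ["2021-01-05", "not a date"],   # one value fails to parse
    "n": [2000, 2001],                      # int64, never considered
})

# Same pattern as dt_inplace: only 'object' columns, all-or-nothing.
for c in df.columns[df.dtypes == "object"]:
    try:
        df[c] = pd.to_datetime(df[c])
    except ValueError:  # ParserError is a subclass of ValueError
        pass            # leave the whole column unconverted

print(df.dtypes.astype(str).to_dict())
```

Only "when" ends up as datetime64[ns]; "note" stays object because one value failed, and "n" is never examined because it is not an object column.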
In my experience so far, the dt_auto.read_csv function does not take long to run on a typical dataframe. Even if there are a lot of non-datetime object (string) columns, it almost always very quickly encounters a value near the top of each such column that it can’t parse as a datetime and gives up and moves on to the next column without attempting to parse the rest of the column’s values.
Here’s what the resulting dataframe looks like from dt_auto.read_csv(), although you can’t necessarily tell by looking at it that the two appropriate columns are indeed datetime datatypes. As it happens, the CSV file had a varying number of decimal places (three, none, and nine) for the seconds in Update_Timestamp, but the datetime datatype itself shows nine such digits regardless. Birthdate in the csv file in fact had only dates (no times) but was stored as a full datetime, with zeros for the hours, minutes, and seconds (including zero as the decimal part), but all of the time components in the column being zero causes Pandas to display only the date (year-month-day) for this column.
Of course pd.to_datetime, and thus dt_auto.read_csv, cannot handle all possible date and datetime formats by default, but it will handle many common unambiguous (generally year month day) formats such as those written by the dataframe.to_csv method and many other tools, including many ISO datetime formats (which generally have a “T” separating the date from the time rather than a space). I haven’t experimented with datetimes that include timezone info because I don’t usually see data like that, but do please let me know in a response comment whether these could be handled better by further changes to the code.
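For instance, a quick check of the "T"-separator point (sample timestamps are my own, not from the article): both the ISO form and the space-separated form parse to the same moment.

```python
import pandas as pd

# The ISO 8601 "T" separator and a plain space parse identically.
a = pd.to_datetime("2021-03-07T14:30:00")
b = pd.to_datetime("2021-03-07 14:30:00")
print(a == b)  # → True
```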
What do you think? Did you find this little article useful? And should Pandas itself add (e.g. to the pd.read_csv function itself?) the capability to optionally do this for us so you wouldn’t need to copy/import my dt_auto.py code above? I’d be happy to see your comments and questions as responses here.
Some rights reserved
|
[
{
"code": null,
"e": 620,
"s": 172,
"text": "Say I have a CSV data file that I want to read into a Pandas dataframe, and some of its columns are dates or datetimes, but I don’t want to bother identifying/specifying the names of these columns in advance. Instead I would like to automatically obtain the datatypes shown in the df.info() output pictured above, where the appropriate columns have been automatically given a datetime datatype (green outline boxes). Here’s how to accomplish that:"
},
{
"code": null,
"e": 674,
"s": 620,
"text": "from dt_auto import read_csvdf=read_csv('myfile.csv')"
},
{
"code": null,
"e": 999,
"s": 674,
"text": "Note that I did not invoke pd.read_csv (the Pandas version of read_csv) above directly. My dt_auto.read_csv function (see its code down below) has invoked pd.read_csv() itself and then automatically detected and converted the datatype of the two detected datetime columns. (The contents of this df will be shown down below.)"
},
{
"code": null,
"e": 1142,
"s": 999,
"text": "If I had used the regular Pandas pd.read_csv(), I would have obtained merely generic object datatypes by default as below (red outline boxes):"
},
{
"code": null,
"e": 1204,
"s": 1142,
"text": "from pandas import read_csvdf=read_csv('myfile.csv')df.info()"
},
{
"code": null,
"e": 1470,
"s": 1204,
"text": "Note that the only difference from the original code is in the import statement, where I changed “from dt_auto” to “from pandas”. This is sufficient so long as you use only “=read_csv()” throughout, not qualifying it as as “=pd.read_csv()” or “=dt_auto.read_csv()”."
},
{
"code": null,
"e": 1532,
"s": 1470,
"text": "Here is the contents of my dt_auto.py (“datetime automatic”):"
},
{
"code": null,
"e": 2562,
"s": 1532,
"text": "import pandas as pddef dt_inplace(df): \"\"\"Automatically detect and convert (in place!) each dataframe column of datatype 'object' to a datetime just when ALL of its non-NaN values can be successfully parsed by pd.to_datetime(). Also returns a ref. to df for convenient use in an expression. \"\"\" from pandas.errors import ParserError for c in df.columns[df.dtypes=='object']: #don't cnvt num try: df[c]=pd.to_datetime(df[c]) except (ParserError,ValueError): #Can't cnvrt some pass # ...so leave whole column as-is unconverted return dfdef read_csv(*args, **kwargs): \"\"\"Drop-in replacement for Pandas pd.read_csv. It invokes pd.read_csv() (passing its arguments) and then auto- matically detects and converts each column whose datatype is 'object' to a datetime just when ALL of the column's non-NaN values can be successfully parsed by pd.to_datetime(), and returns the resulting dataframe. \"\"\" return dt_inplace(pd.read_csv(*args, **kwargs))"
},
{
"code": null,
"e": 2850,
"s": 2562,
"text": "But isn’t this risky? What if one of the columns wasn’t entirely a datetime column? Of course you could have some obscure strings that just happen to look like dates but aren’t, but there is not much risk that this code will blindly convert or lose non-datetime strings, for two reasons:"
},
{
"code": null,
"e": 3434,
"s": 2850,
"text": "This code will not convert any values in a column unless every non-NaN value in this column can successfully be parsed by pd.to_datetime and converted to a datetime. In other words, we will not let it ever convert a string to a pd.NaT (the “failure” result) because it can’t understand it as a datetime.It will not attempt to convert columns that have already been interpreted as being any type other than object, i.e. any specific type like int64 or float64, even though pd.to_datetime would have happily (but likely undesirably) converted a number like 2000 to the date 2000-01-01."
},
{
"code": null,
"e": 3738,
"s": 3434,
"text": "This code will not convert any values in a column unless every non-NaN value in this column can successfully be parsed by pd.to_datetime and converted to a datetime. In other words, we will not let it ever convert a string to a pd.NaT (the “failure” result) because it can’t understand it as a datetime."
},
{
"code": null,
"e": 4019,
"s": 3738,
"text": "It will not attempt to convert columns that have already been interpreted as being any type other than object, i.e. any specific type like int64 or float64, even though pd.to_datetime would have happily (but likely undesirably) converted a number like 2000 to the date 2000-01-01."
},
{
"code": null,
"e": 4411,
"s": 4019,
"text": "In my experience so far, the dt_auto.read_csv function does not take long to run on a typical dataframe. Even if there are a lot of non-datetime object (string) columns, it almost always very quickly encounters a value near the top of each such column that it can’t parse as a datetime and gives up and moves on to the next column without attempting to parse the rest of the column’s values."
},
{
"code": null,
"e": 5105,
"s": 4411,
"text": "Here’s what the resulting dataframe looks like from dt_auto.read_csv(), although you can’t necessarily tell by looking at it that the two appropriate columns are indeed datetime datatypes. As it happens, the CSV file had a varying number of decimal places (three, none, and nine) for the seconds in Update_Timestamp, but the datetime datatype itself shows nine such digits regardless. Birthdate in the csv file in fact had only dates (no times) but was stored as a full datetime, with zeros for the hours, minutes, and seconds (including zero as the decimal part), but all of the time components in the column being zero causes Pandas to display only the date (year-month-day) for this column."
},
{
"code": null,
"e": 5723,
"s": 5105,
"text": "Of course pd.to_datetime, and thus dt_auto.read_csv, cannot handle all possible date and datetime formats by default, but it will handle many common unambiguous (generally year month day) formats such as those written by the dataframe.to_csv method and many other tools, including many ISO datetime formats (which generally have a “T” separating the date from the time rather than a space). I haven’t experimented with datetimes that include timezone info because I don’t usually see data like that, but do please let me know in a response comment whether these could be handled better by further changes to the code."
},
{
"code": null,
"e": 6028,
"s": 5723,
"text": "What do you think? Did you find this little article useful? And should Pandas itself add (e.g. to the pd.read_csv function itself?) the capability to optionally do this for us so you wouldn’t need to copy/import my dt_auto.py code above? I’d be happy to see your comments and questions as responses here."
}
] |
Autohiding Scrollbars using Python-tkinter - GeeksforGeeks
|
26 Mar, 2020
Before moving on to the topic, let's see what Python Tkinter is. Python has several options for creating GUIs, and tkinter is one of them. It is the standard GUI library for Python, and it makes creating GUI applications fast and simple. It also provides an effective object-oriented interface to the Tk GUI toolkit.
Note: For more information, refer to Python GUI – tkinter
Tkinter provides a number of controls, such as labels, text boxes, list boxes, buttons, and scrollbars, that are used in GUI applications. These controls are known as widgets.
Geometry management methods are used to organize widgets across the parent widget's area, and they can be accessed by all tkinter widgets. There are three geometry management methods, namely pack(), grid(), and place(), each with a different role. Now, let's discuss the topic of auto-hiding scrollbars using Python-tkinter. In this topic, we will see how auto-hiding scrollbars are created using tkinter in Python. First, let's see what an auto-hiding scrollbar means:
A scrollbar that hides itself when it is not required, i.e. one that is not visible when it is not needed, is known as an auto-hiding scrollbar. In Python, auto-hiding scrollbars can be used with Listbox and Text widgets, and can be implemented with tkinter using the geometry management methods. The example below illustrates the use of auto-hiding scrollbars with Python-tkinter.
Example 1:
# Python program to illustrate the usage of
# autohiding scrollbars using tkinter

# Importing tkinter
from tkinter import *

# Creating class AutoScrollbar
class AutoScrollbar(Scrollbar):

    # Defining set method with all
    # its parameters
    def set(self, low, high):
        if float(low) <= 0.0 and float(high) >= 1.0:
            # Using grid_remove
            self.tk.call("grid", "remove", self)
        else:
            self.grid()
        Scrollbar.set(self, low, high)

    # Defining pack method
    def pack(self, **kw):
        # If pack is used it throws an error
        raise TclError("pack cannot be used with this widget")

    # Defining place method
    def place(self, **kw):
        # If place is used it throws an error
        raise TclError("place cannot be used with this widget")

# Creating tkinter window
root = Tk()

# Defining vertical scrollbar
verscrollbar = AutoScrollbar(root)

# Calling grid method with all its
# parameters w.r.t vertical scrollbar
verscrollbar.grid(row=0, column=1, sticky=N+S)

# Defining horizontal scrollbar
horiscrollbar = AutoScrollbar(root, orient=HORIZONTAL)

# Calling grid method with all its
# parameters w.r.t horizontal scrollbar
horiscrollbar.grid(row=1, column=0, sticky=E+W)

# Creating scrolled canvas
canvas = Canvas(root,
                yscrollcommand=verscrollbar.set,
                xscrollcommand=horiscrollbar.set)
canvas.grid(row=0, column=0, sticky=N+S+E+W)

verscrollbar.config(command=canvas.yview)
horiscrollbar.config(command=canvas.xview)

# Making the canvas expandable
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)

# Creating canvas contents
frame = Frame(canvas)
frame.rowconfigure(1, weight=1)
frame.columnconfigure(1, weight=1)

# Defining number of rows and columns
rows = 20
for i in range(1, rows):
    for j in range(1, 9):
        button = Button(frame, padx=8, pady=8,
                        text="[%d,%d]" % (i, j))
        button.grid(row=i, column=j, sticky='news')

# Creating canvas window
canvas.create_window(0, 0, anchor=NW, window=frame)

# Calling update_idletasks method
frame.update_idletasks()

# Configuring canvas
canvas.config(scrollregion=canvas.bbox("all"))

# Calling mainloop method
root.mainloop()
Output:
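The heart of the AutoScrollbar class above is the visibility decision inside set(): Tk calls set(low, high) with the fraction of the content that is currently visible, and when the whole range [0.0, 1.0] is visible the scrollbar removes itself via grid_remove. Stripped of the GUI, that decision can be sketched and checked in isolation (the helper name below is my own, not part of tkinter):

```python
def scrollbar_needed(low, high):
    # Tk passes the visible slice of the content as two fractions;
    # if the slice already covers [0.0, 1.0], nothing is scrolled out
    # of view and the scrollbar can hide itself (grid_remove).
    return not (float(low) <= 0.0 and float(high) >= 1.0)

print(scrollbar_needed("0.0", "1.0"))   # whole content visible -> False
print(scrollbar_needed("0.2", "0.7"))   # partial view -> True
```

Tk passes the fractions as strings, which is why the example converts with float() exactly as the class does.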
|
[
{
"code": null,
"e": 24317,
"s": 24289,
"text": "\n26 Mar, 2020"
},
{
"code": null,
"e": 24702,
"s": 24317,
"text": "Before moving on to the topic lets see what is Python Tkinter. So, we all know that Python has different options for creating GUI(s) and tkinter is one of them. It is the standard GUI library for Python. And it makes the creation of GUI applications very quick still simple when python is merged with it. It also gives a very effective object-oriented interface to the Tk GUI toolkit."
},
{
"code": null,
"e": 24760,
"s": 24702,
"text": "Note: For more information, refer to Python GUI – tkinter"
},
{
"code": null,
"e": 24942,
"s": 24760,
"text": "Moreover, Tkinter enables number of controls like labels, text boxes, list boxes, buttons, scrollbars etc, which are used in a GUI applications. These controls are known as widgets."
},
{
"code": null,
"e": 25434,
"s": 24942,
"text": "These methods are used to organize widgets across parents widget area. Moreover, these methods can be accessed by all the tkinter widgets. There are three geometry management methods namely pack(), grid(), and place(). All these methods have different roles.Now, lets discuss about the topic Autohiding Scrollbars using Python-tkinter.In this topic, we will see how auto-hiding scrollbars are created using tkinter in Python. So, firstly lets see the meaning of auto hiding scrollbars below:"
},
{
"code": null,
"e": 25849,
"s": 25434,
"text": "When a scrollbar hides itself if its not required i.e, it is not visible when its not needed then that type of scrollbar is known as Auto-hiding Scrollbar. In Python Autohiding scrollbars can be used with Listbox and Text widgets. It can be implemented using python tkinter with the help of some geometry management methods.Below examples illustrate the use of Autohiding Scrollbars using Python-tkinter:Example 1:"
},
{
"code": "# Python program to illustrate the usage of # autohiding scrollbars using tkinter # Importing tkinterfrom tkinter import * # Creating class AutoScrollbarclass AutoScrollbar(Scrollbar): # Defining set method with all # its parameter def set(self, low, high): if float(low) <= 0.0 and float(high) >= 1.0: # Using grid_remove self.tk.call(\"grid\", \"remove\", self) else: self.grid() Scrollbar.set(self, low, high) # Defining pack method def pack(self, **kw): # If pack is used it throws an error raise (TclError,\"pack cannot be used with \\ this widget\") # Defining place method def place(self, **kw): # If place is used it throws an error raise (TclError, \"place cannot be used with \\ this widget\") # creating tkinter window root = Tk() # Defining vertical scrollbarverscrollbar = AutoScrollbar(root) # Calling grid method with all its# parameter w.r.t vertical scrollbarverscrollbar.grid(row=0, column=1, sticky=N+S) # Defining horizontal scrollbarhoriscrollbar = AutoScrollbar(root, orient=HORIZONTAL) # Calling grid method with all its # parameter w.r.t horizontal scrollbarhoriscrollbar.grid(row=1, column=0, sticky=E+W) # Creating scrolled canvascanvas = Canvas(root, yscrollcommand=verscrollbar.set, xscrollcommand=horiscrollbar.set) canvas.grid(row=0, column=0, sticky=N+S+E+W) verscrollbar.config(command=canvas.yview)horiscrollbar.config(command=canvas.xview) # Making the canvas expandableroot.grid_rowconfigure(0, weight=1)root.grid_columnconfigure(0, weight=1) # creating canvas contentsframe = Frame(canvas)frame.rowconfigure(1, weight=1)frame.columnconfigure(1, weight=1) # Defining number of rows and columnsrows = 20for i in range(1,rows): for j in range(1,9): button = Button(frame, padx=8, pady=8, text=\"[%d,%d]\" % (i,j)) button.grid(row=i, column=j, sticky='news') # Creating canvas windowcanvas.create_window(0, 0, anchor=NW, window=frame) # Calling update_idletasks methodframe.update_idletasks() # Configuring 
canvascanvas.config(scrollregion=canvas.bbox(\"all\")) # Calling mainloop methodroot.mainloop()",
"e": 28227,
"s": 25849,
"text": null
},
{
"code": null,
"e": 28235,
"s": 28227,
"text": "Output:"
},
{
"code": null,
"e": 28250,
"s": 28235,
"text": "Python-tkinter"
},
{
"code": null,
"e": 28257,
"s": 28250,
"text": "Python"
},
{
"code": null,
"e": 28355,
"s": 28257,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28387,
"s": 28355,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28443,
"s": 28387,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 28485,
"s": 28443,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 28527,
"s": 28485,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 28549,
"s": 28527,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 28588,
"s": 28549,
"text": "Python | Get unique values from a list"
},
{
"code": null,
"e": 28619,
"s": 28588,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 28674,
"s": 28619,
"text": "Selecting rows in pandas DataFrame based on conditions"
},
{
"code": null,
"e": 28703,
"s": 28674,
"text": "Create a directory in Python"
}
] |
Can the abstract methods of an interface throw an exception in java?
|
Yes, the abstract methods of an interface can throw an exception.
In the following example, the interface (MyInterface) contains an abstract method named display, which throws an IOException.
import java.io.IOException;
abstract interface MyInterface {
public abstract void display()throws IOException ;
}
You need to follow the rules given below while implementing such a method −
If the abstract method in the interface throws a certain exception, the implemented method can throw the same exception, as shown below −
import java.io.IOException;
abstract interface MyInterface {
public abstract void display()throws IOException ;
}
public class InterfaceExample implements MyInterface{
public void display()throws IOException {
System.out.println("This is the subclass implementation of the display method");
}
public static void main (String args[]){
try {
new InterfaceExample().display();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
This is the subclass implementation of the display method
If the abstract method in the interface throws a certain exception, the implemented method can choose not to throw any exception, as shown below −
import java.io.IOException;
abstract interface MyInterface {
public abstract void display()throws IOException ;
}
public class InterfaceExample implements MyInterface{
public void display() {
System.out.println("This is the subclass implementation of the display method");
}
public static void main (String args[]){
try {
new InterfaceExample().display();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
This is the subclass implementation of the display method
If the abstract method in the interface throws a certain exception, the implemented method can throw its subtype −
import java.io.IOException;
abstract interface MyInterface {
public abstract void display()throws Exception ;
}
public class InterfaceExample implements MyInterface{
public void display()throws IOException {
System.out.println("This is the subclass implementation of the display method");
}
public static void main (String args[]){
try {
new InterfaceExample().display();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
This is the subclass implementation of the display method
If the abstract method in the interface throws a certain exception, the implemented method should not throw its supertype −
import java.io.IOException;
abstract interface MyInterface {
public abstract void display()throws IOException ;
}
public class InterfaceExample implements MyInterface{
public void display()throws Exception {
System.out.println("This is the subclass implementation of the display method");
}
public static void main (String args[]){
try {
new InterfaceExample().display();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
InterfaceExample.java:8: error: display() in InterfaceExample cannot implement display() in MyInterface
public void display()throws Exception {
^
overridden method does not throw Exception
1 error
|
[
{
"code": null,
"e": 1128,
"s": 1062,
"text": "Yes, the abstract methods of an interface can throw an exception."
},
{
"code": null,
"e": 1257,
"s": 1128,
"text": "In the following example the interface (MyInterface) contains an abstract method with name display, which throws an IOException."
},
{
"code": null,
"e": 1374,
"s": 1257,
"text": "import java.io.IOException;\nabstract interface MyInterface {\n public abstract void display()throws IOException ;\n}"
},
{
"code": null,
"e": 1448,
"s": 1374,
"text": "You need to follow the rules given below while implementing such method −"
},
{
"code": null,
"e": 1583,
"s": 1448,
"text": "If the abstract method in the interface throws certain exception. The implemented method can throw the same exception as shown below −"
},
{
"code": null,
"e": 1718,
"s": 1583,
"text": "If the abstract method in the interface throws certain exception. The implemented method can throw the same exception as shown below −"
},
{
"code": null,
"e": 1729,
"s": 1718,
"text": " Live Demo"
},
{
"code": null,
"e": 2217,
"s": 1729,
"text": "import java.io.IOException;\nabstract interface MyInterface {\n public abstract void display()throws IOException ;\n}\npublic class InterfaceExample implements MyInterface{\n public void display()throws IOException {\n System.out.println(\"This is the subclass implementation of the display method\");\n }\n public static void main (String args[]){\n try {\n new InterfaceExample().display();\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n}"
},
{
"code": null,
"e": 2275,
"s": 2217,
"text": "This is the subclass implementation of the display method"
},
{
"code": null,
"e": 2419,
"s": 2275,
"text": "If the abstract method in the interface throws certain exception. The implemented method can choose not to throw any exception as shown below −"
},
{
"code": null,
"e": 2563,
"s": 2419,
"text": "If the abstract method in the interface throws certain exception. The implemented method can choose not to throw any exception as shown below −"
},
{
"code": null,
"e": 2574,
"s": 2563,
"text": " Live Demo"
},
{
"code": null,
"e": 3044,
"s": 2574,
"text": "import java.io.IOException;\nabstract interface MyInterface {\n public abstract void display()throws IOException ;\n}\npublic class InterfaceExample implements MyInterface{\n public void display() {\n System.out.println(\"This is the subclass implementation of the display method\");\n }\n public static void main (String args[]){\n try {\n new InterfaceExample().display();\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n}"
},
{
"code": null,
"e": 3102,
"s": 3044,
"text": "This is the subclass implementation of the display method"
},
{
"code": null,
"e": 3215,
"s": 3102,
"text": "If the abstract method in the interface throws certain exception. The implemented method can throw its subtype −"
},
{
"code": null,
"e": 3328,
"s": 3215,
"text": "If the abstract method in the interface throws certain exception. The implemented method can throw its subtype −"
},
{
"code": null,
"e": 3339,
"s": 3328,
"text": " Live Demo"
},
{
"code": null,
"e": 3825,
"s": 3339,
"text": "import java.io.IOException;\nabstract interface MyInterface {\n public abstract void display()throws Exception ;\n}\npublic class InterfaceExample implements MyInterface{\n public void display()throws IOException {\n System.out.println(\"This is the subclass implementation of the display method\");\n }\n public static void main (String args[]){\n try {\n new InterfaceExample().display();\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n}"
},
{
"code": null,
"e": 3883,
"s": 3825,
"text": "This is the subclass implementation of the display method"
},
{
"code": null,
"e": 4004,
"s": 3883,
"text": "If the abstract method in the interface throws certain exception. The implemented method should not throw its super type"
},
{
"code": null,
"e": 4125,
"s": 4004,
"text": "If the abstract method in the interface throws certain exception. The implemented method should not throw its super type"
},
{
"code": null,
"e": 4136,
"s": 4125,
"text": " Live Demo"
},
{
"code": null,
"e": 4622,
"s": 4136,
"text": "import java.io.IOException;\nabstract interface MyInterface {\n public abstract void display()throws IOException ;\n}\npublic class InterfaceExample implements MyInterface{\n public void display()throws Exception {\n System.out.println(\"This is the subclass implementation of the display method\");\n }\n public static void main (String args[]){\n try {\n new InterfaceExample().display();\n }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n}"
},
{
"code": null,
"e": 4840,
"s": 4622,
"text": "InterfaceExample.java:8: error: display() in InterfaceExample cannot implement display() in MyInterface\n public void display()throws Exception {\n ^\n overridden method does not throw Exception\n1 error"
}
] |
C program to store inventory system using structures
|
A structure is a collection of variables of different datatypes, grouped together under a single name.
The features of structure in the C programming language are as follows −
It is possible to copy the contents of all the structure elements of different datatypes to another structure variable of its type by using an assignment operator.
For handling the complex datatypes, it is better to create structure within another structure, which is called nested structures.
It is possible to pass an entire structure, individual elements of structure and an address of structure to a function.
It is possible to create structure pointers.
Following is the C program to store an inventory system by using the structures −
#include<stdio.h>
#include<conio.h>
void main(){
struct date{
int day;
int month;
int year;
};
struct details{
char name[20];
int price;
int code;
int qty;
struct date mfg;
};
struct details item[50];
int n,i;
printf("Enter number of items:");
scanf("%d",&n);
fflush(stdin);
for(i=0;i<n;i++){
fflush(stdin);
printf("Item name:");
scanf("%s",item[i].name);
fflush(stdin);
printf("Item code:");
scanf("%d",&item[i].code);
fflush(stdin);
printf("Quantity:");
scanf("%d",&item[i].qty);
fflush(stdin);
printf("price:");
scanf("%d",&item[i].price);
fflush(stdin);
printf("Manufacturing date(dd-mm-yyyy):");
scanf("%d-%d-%d",&item[i].mfg.day,&item[i].mfg.month,&item[i].mfg.year);
}
printf(" ***** INVENTORY *****\n");
printf("------------------------------------------------------------------\n");
printf("S.N.| NAME | CODE | QUANTITY | PRICE |MFG.DATE\n");
printf("------------------------------------------------------------------\n");
for(i=0;i<n;i++)
printf("%d %-15s %-d %-5d %-5d%d/%d/%d\n",i+1,item[i].name,item[i].code,item[i].qty,item[i].price,item[i].mfg.day,item[i].mfg.month,item[i].mfg.year);
printf("------------------------------------------------------------------\n");
getch();
}
When the above program is executed, it produces the following result −
Enter number of items:5
Item name:pen
Item code:12
Quantity:50
price:25
Manufacturing date(dd-mm-yyyy):12-02-2020
Item name:pencil
Item code:15
Quantity:100
price:30
Manufacturing date(dd-mm-yyyy):11-03-2020
Item name:book
Item code:34
Quantity:30
price:60
Manufacturing date(dd-mm-yyyy):15-04-2020
Item name:bag
Item code:39
Quantity:20
price:70
Manufacturing date(dd-mm-yyyy):12-03-2021
Item name:sharpner
Item code:33
Quantity:20
price:40
Manufacturing date(dd-mm-yyyy):12-04-2021
***** INVENTORY *****
------------------------------------------------------------------
S.N.| NAME | CODE | QUANTITY | PRICE |MFG.DATE
------------------------------------------------------------------
1 pen 12 50 25 12/2/2020
2 pencil 15 100 30 11/3/2020
3 book 34 30 60 15/4/2020
4 bag 39 20 70 12/3/2021
5 sharpner 33 20 40 12/4/2021
|
[
{
"code": null,
"e": 1159,
"s": 1062,
"text": "Structure is a collection of different datatype variables, grouped together under a single name."
},
{
"code": null,
"e": 1232,
"s": 1159,
"text": "The features of structure in the C programming language are as follows −"
},
{
"code": null,
"e": 1396,
"s": 1232,
"text": "It is possible to copy the contents of all the structure elements of different datatypes to another structure variable of its type by using an assignment operator."
},
{
"code": null,
"e": 1560,
"s": 1396,
"text": "It is possible to copy the contents of all the structure elements of different datatypes to another structure variable of its type by using an assignment operator."
},
{
"code": null,
"e": 1690,
"s": 1560,
"text": "For handling the complex datatypes, it is better to create structure within another structure, which is called nested structures."
},
{
"code": null,
"e": 1820,
"s": 1690,
"text": "For handling the complex datatypes, it is better to create structure within another structure, which is called nested structures."
},
{
"code": null,
"e": 1940,
"s": 1820,
"text": "It is possible to pass an entire structure, individual elements of structure and an address of structure to a function."
},
{
"code": null,
"e": 2060,
"s": 1940,
"text": "It is possible to pass an entire structure, individual elements of structure and an address of structure to a function."
},
{
"code": null,
"e": 2105,
"s": 2060,
"text": "It is possible to create structure pointers."
},
{
"code": null,
"e": 2150,
"s": 2105,
"text": "It is possible to create structure pointers."
},
{
"code": null,
"e": 2232,
"s": 2150,
"text": "Following is the C program to store an inventory system by using the structures −"
},
{
"code": null,
"e": 3621,
"s": 2232,
"text": "#include<stdio.h>\n#include<conio.h>\nvoid main(){\n struct date{\n int day;\n int month;\n int year;\n };\n struct details{\n char name[20];\n int price;\n int code;\n int qty;\n struct date mfg;\n };\n struct details item[50];\n int n,i;\n printf(\"Enter number of items:\");\n scanf(\"%d\",&n);\n fflush(stdin);\n for(i=0;i<n;i++){\n fflush(stdin);\n printf(\"Item name:\");\n scanf(\"%s\",item[i].name);\n fflush(stdin);\n printf(\"Item code:\");\n scanf(\"%d\",&item[i].code);\n fflush(stdin);\n printf(\"Quantity:\");\n scanf(\"%d\",&item[i].qty);\n fflush(stdin);\n printf(\"price:\");\n scanf(\"%d\",&item[i].price);\n fflush(stdin);\n printf(\"Manufacturing date(dd-mm-yyyy):\");\n scanf(\"%d-%d-%d\",&item[i].mfg.day,&item[i].mfg.month,&item[i].mfg.year);\n }\n printf(\" ***** INVENTORY *****\\n\");\n printf(\"------------------------------------------------------------------\\n\");\n printf(\"S.N.| NAME | CODE | QUANTITY | PRICE |MFG.DATE\\n\");\n printf(\"------------------------------------------------------------------\\n\");\n for(i=0;i<n;i++)\n printf(\"%d %-15s %-d %-5d %-5d%d/%d/%d\\n\",i+1,item[i].name,item[i].code,item[i].qty,item[i].price,item[i].mfg.day,item[i].mfg.month,item[i].mfg.year);\n printf(\"------------------------------------------------------------------\\n\");\n getch();\n}"
},
{
"code": null,
"e": 3692,
"s": 3621,
"text": "When the above program is executed, it produces the following result −"
},
{
"code": null,
"e": 4624,
"s": 3692,
"text": "Enter number of items:5\nItem name:pen\nItem code:12\nQuantity:50\nprice:25\nManufacturing date(dd-mm-yyyy):12-02-2020\nItem name:pencil\nItem code:15\nQuantity:100\nprice:30\nManufacturing date(dd-mm-yyyy):11-03-2020\nItem name:book\nItem code:34\nQuantity:30\nprice:60\nManufacturing date(dd-mm-yyyy):15-04-2020\nItem name:bag\nItem code:39\nQuantity:20\nprice:70\nManufacturing date(dd-mm-yyyy):12-03-2021\nItem name:sharpner\nItem code:33\nQuantity:20\nprice:40\nManufacturing date(dd-mm-yyyy):12-04-2021\n***** INVENTORY *****\n------------------------------------------------------------------\nS.N.| NAME | CODE | QUANTITY | PRICE |MFG.DATE\n------------------------------------------------------------------\n1 pen 12 50 25 12/2/2020\n2 pencil 15 100 30 11/3/2020\n3 book 34 30 60 15/4/2020\n4 bag 39 20 70 12/3/2021\n5 sharpner 33 20 40 12/4/2021"
}
] |
How to catch a divide by zero error in C++?
|
The following is an example to catch a divide by zero error.
#include <iostream>
using namespace std;
int display(int x, int y) {
if( y == 0 ) {
throw "Division by zero condition!";
}
return (x/y);
}
int main () {
int a = 50;
int b = 0;
int c = 0;
try {
c = display(a, b);
cout << c << endl;
} catch (const char* msg) {
cerr << msg << endl;
}
return 0;
}
Division by zero condition!
In the above program, a function display() is defined with arguments x and y. It returns x divided by y, and throws an error when y is zero.
int display(int x, int y) {
if( y == 0 ) {
throw "Division by zero condition!";
}
return (x/y);
}
In the main() function, using a try-catch block, the error is caught by the catch block, which prints the message.
try {
c = display(a, b);
cout << c << endl;
} catch (const char* msg) {
cerr << msg << endl;
}
|
[
{
"code": null,
"e": 1123,
"s": 1062,
"text": "The following is an example to catch a divide by zero error."
},
{
"code": null,
"e": 1134,
"s": 1123,
"text": " Live Demo"
},
{
"code": null,
"e": 1482,
"s": 1134,
"text": "#include <iostream>\nusing namespace std;\nint display(int x, int y) {\n if( y == 0 ) {\n throw \"Division by zero condition!\";\n }\n return (x/y);\n}\nint main () {\n int a = 50;\n int b = 0;\n int c = 0;\n try {\n c = display(a, b);\n cout << c << endl;\n } catch (const char* msg) {\n cerr << msg << endl;\n }\n return 0;\n}"
},
{
"code": null,
"e": 1510,
"s": 1482,
"text": "Division by zero condition!"
},
{
"code": null,
"e": 1641,
"s": 1510,
"text": "In the above program, a function display() is defined with arguments x and y. It is returning x divide by y and throwing an error."
},
{
"code": null,
"e": 1754,
"s": 1641,
"text": "int display(int x, int y) {\n if( y == 0 ) {\n throw \"Division by zero condition!\";\n }\n return (x/y);\n}"
},
{
"code": null,
"e": 1859,
"s": 1754,
"text": "In the main() function, using try catch block, the error is caught by catch block and print the message."
},
{
"code": null,
"e": 1963,
"s": 1859,
"text": "try {\n c = display(a, b);\n cout << c << endl;\n} catch (const char* msg) {\n cerr << msg << endl;\n}"
}
] |
Develop and sell a Python API — from start to end tutorial | by Daniel Deutsch | Towards Data Science
|
I recently read a blog post about setting up your own API and selling it.
I was quite inspired and wanted to test whether it works. In just 5 days I was able to create an API from start to end. So I thought I would share the issues I came across, elaborate on the concepts that the article introduced, and provide a quick checklist for building something yourself. All of this by developing another API.
About this article
Disclaimer
Stack used
1. Create project formalities
2. Create a solution for a problem
3. Deploy to AWS
4. Set up Rapidapi
End result
Inspiration
About
This article can be considered as a tutorial and comprehension of other articles (listed in my “Inspiration” section).
It paints a picture for developing a Python API from start to finish and provides help in more difficult areas like the setup with AWS and Rapidapi.
I thought it would be useful for other people trying to do the same. I had some issues along the way, so I thought I would share my approach. It is also a great way to build side projects and maybe even make some money.
As the Table of content shows, it consists of 4 major parts, namely:
Setting up the environment
Creating a problem solution with Python
Setting up AWS
Setting up Rapidapi
You will find all my code open sourced on Github:
https://github.com/Createdd/pandas_transform_format
You will find the end result here on Rapidapi:
https://rapidapi.com/Createdd/api/excel-to-other-formats
If you found this article helpful let me know and/or buy the functionality on Rapidapi to show support.
I am not associated with any of the services I use in this article.
I do not consider myself an expert. If you have the feeling that I am missing important steps or neglected something, consider pointing it out in the comment section or get in touch with me. Also, always make sure to monitor your AWS costs to not pay for things you do not know about.
I am always happy for constructive input and how to improve.
We will use
Github (Code hosting),
Anaconda (Dependency and environment management),
Jupyter Notebook (code development and documentation),
Python (programming language),
AWS (deployment),
Rapidapi (market to sell)
It’s always the same but necessary. I do it along with these steps:
Create a local folder mkdir NAME
Create a new repository on Github with NAME
Create conda environment conda create --name NAME python=3.7
Activate conda environment conda activate PATH_TO_ENVIRONMENT
Create git repo git init
Connect to Github repo. Add Readme file, commit it and
git remote add origin URL_TO_GIT_REPO
git push -u origin master
Now we have:
local folder
github repository
anaconda virtual environment
git version control
Then we need to create a solution to some problem. For the sake of demonstration, I will show how to convert an excel csv file into other formats. The basic functionality will be coded and tested in a Jupyter Notebook first.
Install packages
Develop a solution to a problem
Download data
Create functionality
Build server to execute a function with REST
Install jupyter notebook and jupytext:
pip install notebook jupytext
set a hook in .git/hooks/pre-commit for tracking the notebook changes in git properly:
touch .git/hooks/pre-commit
code .git/hooks/pre-commit
Copy this into the file:
#!/bin/sh
# For every ipynb file in the git index, add a Python representation
jupytext --from ipynb --to py:light --pre-commit
Afterwards, make the hook executable (on Mac):
chmod +x .git/hooks/pre-commit
pip install pandas requests
Add a .gitignore file and add the data folder (data/) to not upload the data to the hosting.
Download an example dataset (titanic dataset) and save it into a data folder:
import os

import requests


def download(url: str, dest_folder: str):
    if not os.path.exists(dest_folder):
        os.makedirs(dest_folder)

    filename = url.split('/')[-1].replace(" ", "_")
    file_path = os.path.join(dest_folder, filename)

    r = requests.get(url, stream=True)
    if r.ok:
        print("saving to", os.path.abspath(file_path))
        with open(file_path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024 * 8):
                if chunk:
                    f.write(chunk)
                    f.flush()
                    os.fsync(f.fileno())
    else:
        print("Download failed: status code {}\n{}".format(r.status_code, r.text))


url_to_titanic_data = 'https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv'
download(url_to_titanic_data, './data')
Transform format
df = pd.read_csv('./data/titanic.csv')
df.to_json(r'./data/titanic.json')
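The same DataFrame can be written to several formats, which is the core of this API. A minimal sketch (the tiny inline frame and the format list are my own choices for illustration, not the article's full feature set):

```python
import pandas as pd

# A tiny frame standing in for the Titanic data
df = pd.DataFrame({"Name": ["Allen", "Braund"], "Survived": [1, 0]})

as_json = df.to_json()             # JSON string
as_csv = df.to_csv(index=False)    # CSV string
as_html = df.to_html(index=False)  # HTML table string

print(as_csv)
```

Each `to_*` method returns a string when no path is given, which is convenient for returning the converted data directly from an HTTP response.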
After developing the functionality in jupyter notebook we want to actually provide the functionality in a python app.
There are ways to use parts of the jupyter notebook, but for the sake of simplicity we create it again now.
Add an app.py file.
We want the user to upload an excel file and return the file converted into JSON for example.
Browsing through the internet we can see that there are already packages that work with flask and excel formats. So let's use them.
pip install Flask
Start Flask server with
env FLASK_APP=app.py FLASK_ENV=development flask run
Tip: Test your backend functionality with Postman. It is easy to set up and allows you to test endpoints quickly. Uploading an excel file is done in the “form-data” tab:
Here you can see the uploaded titanic csv file and the returned column names of the dataset.
Now we simply write the function to transform the excel into json, like:
import json

import pandas as pd
from flask import Flask, request

app = Flask(__name__)


@app.route('/get_json', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        provided_data = request.files.get('file')
        if provided_data is None:
            return 'Please enter valid excel format ', 400
        data = provided_data
        df = pd.read_csv(data)
        transformed = df.to_json()
        result = {
            'result': transformed,
        }
        return json.dumps(result)


if __name__ == '__main__':
    app.run()
(Check out my repository for the full code.)
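If you prefer to stay in Python, Flask's built-in test client can exercise the endpoint without Postman or a running server. A sketch, with the route redefined inline so the example is self-contained (it mirrors the article's /get_json endpoint, but is not the exact repository code):

```python
import io
import json

import pandas as pd
from flask import Flask, request

app = Flask(__name__)


@app.route('/get_json', methods=['POST'])
def upload_file():
    provided_data = request.files.get('file')
    if provided_data is None:
        return 'Please enter valid excel format ', 400
    df = pd.read_csv(provided_data)
    return json.dumps({'result': df.to_json()})


# Exercise the endpoint in-process with a fake CSV upload
client = app.test_client()
csv_bytes = io.BytesIO(b"Name,Survived\nAllen,1\n")
response = client.post(
    '/get_json',
    data={'file': (csv_bytes, 'titanic.csv')},
    content_type='multipart/form-data',
)
print(response.status_code)  # 200 on success
```

The `(stream, filename)` tuple in `data` is how Flask's test client encodes a multipart file upload, so this mimics what Postman sends in its “form-data” tab.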
Now we have the functionality to transform csv files into json for example.
After developing it locally, we want to get it into the cloud.
Set up zappa
Set up AWS
AWS credentials
Set up credentials with users and roles in IAM
Add credentials in your project
AWS API Gateway
After we created the app locally we need to start setting up the hosting on a real server. We will use zappa.
Zappa makes it super easy to build and deploy server-less, event-driven Python applications (including, but not limited to, WSGI web apps) on AWS Lambda + API Gateway. Think of it as “serverless” web hosting for your Python apps. That means infinite scaling, zero downtime, zero maintenance — and at a fraction of the cost of your current deployments!
pip install zappa
As we are using a conda environment we need to specify it:
which python
will give you /Users/XXX/opt/anaconda3/envs/XXXX/bin/python (for Mac)
Remove the trailing bin/python and export the path:
export VIRTUAL_ENV=/Users/XXXX/opt/anaconda3/envs/XXXXX/
Now we can do
zappa init
to set up the config.
Just click through everything and you will have a zappa_settings.json like
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "pandas-transform-format",
        "runtime": "python3.7",
        "s3_bucket": "zappa-pandas-transform-format"
    }
}
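If you later edit zappa_settings.json by hand, a quick sanity check that it is still valid JSON and carries the keys used above can save a failed deploy. A sketch with the settings inlined as a string (in a real check you would `json.load` the file; the expected-key list is taken from the settings shown here, not from Zappa's full schema):

```python
import json

settings_text = '''
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "pandas-transform-format",
        "runtime": "python3.7",
        "s3_bucket": "zappa-pandas-transform-format"
    }
}
'''

settings = json.loads(settings_text)  # raises ValueError on broken JSON
expected = {"app_function", "aws_region", "profile_name",
            "project_name", "runtime", "s3_bucket"}
missing = expected - settings["dev"].keys()
print("missing keys:", missing)  # empty set if all present
```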
Note that we are not yet ready to deploy. First, we need to get some AWS credentials.
First, you need to get an AWS access key ID and secret access key.
You might think it is as easy as:
To get the credentials you need to
Go to: http://aws.amazon.com/
Sign Up & create a new account (they’ll give you the option for 1 year trial or similar)
Go to your AWS account overview
Account menu; sub-menu: Security Credentials
But no. There is more to permissions in AWS!
I found this article from Peter Kazarinoff to be very helpful. He explains the next section in great detail. My following bullet point approach is a quick summary and I often quote his steps. Please check out his article for more details if you are stuck somewhere.
I break it down as simple as possible:
Within the AWS Console, type IAM into the search box. IAM is the AWS user and permissions dashboard.
Create a group
Give your group a name (for example zappa_group)
Create our own specific inline policy for your group
In the Permissions tab, under the Inline Policies section, choose the link to create a new Inline Policy
In the Set Permissions screen, click the Custom Policy radio button and click the “Select” button on the right.
Create a Custom Policy written in json format
Read through and copy a policy discussed here: https://github.com/Miserlou/Zappa/issues/244
Scroll down to “My Custom policy” to see a snippet of my policy.
After pasting and modifying the json with your AWS Account Number, click the “Validate Policy” button to ensure you copied valid json. Then click the “Apply Policy” button to attach the inline policy to the group.
Create a user and add the user to the group
Back at the IAM Dashboard, create a new user with the “Users” left-hand menu option and the “Add User” button.
In the Add user screen, give your new user a name and select the Access Type for Programmatic access. Then click the “Next: Permissions” button.
In the Set permissions screen, select the group you created earlier in the Add user to group section and click “Next: Tags”.
Tags are optional. Add tags if you want, then click “Next: Review”.
Review the user details and click “Create user”
Copy the user’s keys
Don’t close the AWS IAM window yet. In the next step, you will copy and paste these keys into a file. At this point, it’s not a bad idea to copy and save these keys into a text file in a secure location. Make sure you don’t save keys under version control.
My Custom policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:GetRole",
                "iam:CreateRole",
                "iam:PassRole",
                "iam:PutRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::XXXXXXXXXXXXXXXX:role/*-ZappaLambdaExecutionRole"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:CreateFunction",
                "lambda:ListVersionsByFunction",
                "logs:DescribeLogStreams",
                "events:PutRule",
                "lambda:GetFunctionConfiguration",
                "cloudformation:DescribeStackResource",
                "apigateway:DELETE",
                "apigateway:UpdateRestApiPolicy",
                "events:ListRuleNamesByTarget",
                "apigateway:PATCH",
                "events:ListRules",
                "cloudformation:UpdateStack",
                "lambda:DeleteFunction",
                "events:RemoveTargets",
                "logs:FilterLogEvents",
                "apigateway:GET",
                "lambda:GetAlias",
                "events:ListTargetsByRule",
                "cloudformation:ListStackResources",
                "events:DescribeRule",
                "logs:DeleteLogGroup",
                "apigateway:PUT",
                "lambda:InvokeFunction",
                "lambda:GetFunction",
                "lambda:UpdateFunctionConfiguration",
                "cloudformation:DescribeStacks",
                "lambda:UpdateFunctionCode",
                "lambda:DeleteFunctionConcurrency",
                "events:DeleteRule",
                "events:PutTargets",
                "lambda:AddPermission",
                "cloudformation:CreateStack",
                "cloudformation:DeleteStack",
                "apigateway:POST",
                "lambda:RemovePermission",
                "lambda:GetPolicy"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucketMultipartUploads",
                "s3:CreateBucket",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::zappa-*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::zappa-*/*"
        }
    ]
}
NOTE: Replace XXXXXXXXXXX in the inline policy by your AWS Account Number.
Your AWS Account Number can be found by clicking “Support → “Support Center. Your Account Number is listed in the Support Center on the upper left-hand side. The json above is what worked for me. But, I expect this set of security permissions may be too open. To increase security, you could slowly pare down the permissions and see if Zappa still deploys. The settings above are the ones that finally worked for me. You can dig through this discussion on GitHub if you want to learn more about specific AWS permissions needed to run Zappa: https://github.com/Miserlou/Zappa/issues/244.
Create an .aws folder with a credentials file in your home directory:
mkdir ~/.aws
code ~/.aws/credentials
and paste your credentials from AWS
[dev]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_KEY
Same with the config
code ~/.aws/config

[default]
region = YOUR_REGION (eg. eu-central-1)
Note that code is for opening a folder with vscode, my editor of choice.
Save the AWS access key id and secret access key assigned to the user you created in the file ~/.aws/credentials. Note the .aws/ directory needs to be in your home directory and the credentials file has no file extension.
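Both files use INI format, so Python's configparser can confirm they parse the way tools that read ~/.aws will see them. A sketch with an inline string standing in for the real file (never put real keys in code or under version control):

```python
import configparser

# Placeholder content mirroring ~/.aws/credentials
credentials_text = """
[dev]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_KEY
"""

parser = configparser.ConfigParser()
parser.read_string(credentials_text)  # raises on malformed INI

print(parser.sections())  # ['dev']
print(parser["dev"]["aws_access_key_id"])
```

Note the profile name in brackets ([dev] here) must match the "profile_name" in zappa_settings.json, or Zappa will not find the credentials.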
Now you can deploy your API with
zappa deploy dev
There shouldn’t be any errors anymore. However, if there are still some, you can debug with:
zappa status
zappa tail
The most common errors are permission related (then check your permission policy) or about python libraries that are incompatible. Either way, zappa will provide good enough error messages for debugging.
If you update your code don’t forget to update the deployment as well with
zappa update dev
To set up the API on a market we need to first restrict its usage with an API-key and then set it up on the market platform.
I found this article from Nagesh Bansal to be helpful. He explains the next section in great detail. My following bullet point approach is a quick summary and I often quote his steps. Please check out his article for more details if you are stuck somewhere.
Again, I break it down:
go to your AWS Console and go to API gateway
click on your API
we want to create an x-api-key to restrict undesired access to the API and also have a metered usage
create a Usage plan for the API, with the desired throttle and quota limits
create an associated API stage
add an API key
in the API key overview section, click “show” at the API key and copy it
then associate the API with the key and discard all requests that come without the key
go back to the API overview. under resources, click the “/ any” go to the “method request”. then in settings, set “API key required” to true
do the same for the “/{proxy+} Methods”
it looks like this
Now you have restricted access to your API.
Create API on Rapidapi
Test your own API
Create private plan for testing
Test endpoint with rapidapi
Create code to consume API
Go to “My APIs” and “Add new API”
Add the name, description, and category. Note that you cannot change your API name afterward anymore
In settings, add the URL of your AWS API (it was displayed when you deployed with zappa)
In the section “Access Control” under “Transformations”, add the API key you added in AWS
5. In the security tab you can check everything
6. Then go to “endpoints” to add the routes from your Python app by clicking “create REST endpoint”
7. Add an image for your API
8. Set a pricing plan. Rapidapi published its own article on pricing options and strategies. As they conclude, it is up to your preferences and product how to price it.
9. I created a freemium pricing plan. The reason is that I want to give people the chance to test it at no cost, but charge for regular use. I also want to offer a plan for supporting my work. For example:
10. Create some docs and a tutorial. This is pretty self-explaining. It is encouraged to do so as it is easier for people to use your API if it is documented properly.
11. The last step is to make your API publicly available. But before you do that it is useful to test it for yourself.
Having set up everything, you of course should test it with the provided snippets. This step is not trivial and I had to contact the support to understand it. Now I am simplifying it here.
Create a private plan for yourself, by setting no limits.
Then go to the “Users” section of your API, then to “Users on free plans”, select yourself and “invite” yourself to the private plan.
Now you are subscribed to your own private plan and can test the functionality with the provided snippets.
Upload an example excel file and click on “test endpoint”. Then you will get a 200 ok response.
To consume the API now you can simply copy the snippet that Rapidapi provides. For example with Python and the requests library:
import requests

url = "https://excel-to-other-formats.p.rapidapi.com/upload"

payload = ""
headers = {
    'x-rapidapi-host': "excel-to-other-formats.p.rapidapi.com",
    'x-rapidapi-key': "YOUR_KEY",
    'content-type': "multipart/form-data"
    }

response = requests.request("POST", url, data=payload, headers=headers)

print(response.text)
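One caveat about the generated snippet: it sets the multipart content type by hand but sends no file. With requests, passing files= builds the multipart body and the boundary header for you. A sketch that prepares such a request without actually sending it (URL and key are placeholders from the snippet above):

```python
import requests

# Build, but do not send, a multipart file-upload request
req = requests.Request(
    "POST",
    "https://excel-to-other-formats.p.rapidapi.com/upload",
    headers={
        "x-rapidapi-host": "excel-to-other-formats.p.rapidapi.com",
        "x-rapidapi-key": "YOUR_KEY",
    },
    files={"file": ("titanic.csv", b"Name,Survived\nAllen,1\n")},
)
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

To actually send it, use `requests.Session().send(prepared)` or simply `requests.post(url, headers=headers, files=files)`.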
The article “API as a product. How to sell your work when all you know is a back-end” by Artem provided a great idea, namely to
Make an API that solves a problem
Deploy it with a serverless architecture
Distribute through an API Marketplace
For setting everything up, I found the articles from Nagesh Bansal very helpful:
https://medium.com/@bansalnagesh/how-to-sell-your-apis-b4b5c9a273f8
https://medium.com/@bansalnagesh/launch-your-api-on-aws-with-0-upfront-cost-using-zappa-in-10-minutes-eb6d00623842
Also this article from Peter Kazarinoff: https://pythonforundergradengineers.com/deploy-serverless-web-app-aws-lambda-zappa.html
I encourage you to have a look at those articles as well.
You can also read my article directly on Github (for better code formatting)
Daniel is an entrepreneur, software developer, and lawyer. He has worked at various IT companies, tax advisory, management consulting, and at the Austrian court.
His knowledge and interests currently revolve around programming machine learning applications and all its related aspects. To the core, he considers himself a problem solver of complex environments, which is reflected in his various projects.
Don’t hesitate to get in touch if you have ideas, projects, or problems.
},
{
"code": null,
"e": 11692,
"s": 11624,
"text": "Tags are optional. Add tags if you want, then click “Next: Review”."
},
{
"code": null,
"e": 11740,
"s": 11692,
"text": "Review the user details and click “Create user”"
},
{
"code": null,
"e": 11761,
"s": 11740,
"text": "Copy the user’s keys"
},
{
"code": null,
"e": 12018,
"s": 11761,
"text": "Don’t close the AWS IAM window yet. In the next step, you will copy and paste these keys into a file. At this point, it’s not a bad idea to copy and save these keys into a text file in a secure location. Make sure you don’t save keys under version control."
},
{
"code": null,
"e": 12036,
"s": 12018,
"text": "My Custom policy:"
},
{
"code": null,
"e": 14083,
"s": 12036,
"text": "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"iam:AttachRolePolicy\", \"iam:GetRole\", \"iam:CreateRole\", \"iam:PassRole\", \"iam:PutRolePolicy\" ], \"Resource\": [ \"arn:aws:iam::XXXXXXXXXXXXXXXX:role/*-ZappaLambdaExecutionRole\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"lambda:CreateFunction\", \"lambda:ListVersionsByFunction\", \"logs:DescribeLogStreams\", \"events:PutRule\", \"lambda:GetFunctionConfiguration\", \"cloudformation:DescribeStackResource\", \"apigateway:DELETE\", \"apigateway:UpdateRestApiPolicy\", \"events:ListRuleNamesByTarget\", \"apigateway:PATCH\", \"events:ListRules\", \"cloudformation:UpdateStack\", \"lambda:DeleteFunction\", \"events:RemoveTargets\", \"logs:FilterLogEvents\", \"apigateway:GET\", \"lambda:GetAlias\", \"events:ListTargetsByRule\", \"cloudformation:ListStackResources\", \"events:DescribeRule\", \"logs:DeleteLogGroup\", \"apigateway:PUT\", \"lambda:InvokeFunction\", \"lambda:GetFunction\", \"lambda:UpdateFunctionConfiguration\", \"cloudformation:DescribeStacks\", \"lambda:UpdateFunctionCode\", \"lambda:DeleteFunctionConcurrency\", \"events:DeleteRule\", \"events:PutTargets\", \"lambda:AddPermission\", \"cloudformation:CreateStack\", \"cloudformation:DeleteStack\", \"apigateway:POST\", \"lambda:RemovePermission\", \"lambda:GetPolicy\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucketMultipartUploads\", \"s3:CreateBucket\", \"s3:ListBucket\" ], \"Resource\": \"arn:aws:s3:::zappa-*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:PutObject\", \"s3:GetObject\", \"s3:AbortMultipartUpload\", \"s3:DeleteObject\", \"s3:ListMultipartUploadParts\" ], \"Resource\": \"arn:aws:s3:::zappa-*/*\" } ]}"
},
{
"code": null,
"e": 14158,
"s": 14083,
"text": "NOTE: Replace XXXXXXXXXXX in the inline policy by your AWS Account Number."
},
{
"code": null,
"e": 14745,
"s": 14158,
"text": "Your AWS Account Number can be found by clicking “Support → “Support Center. Your Account Number is listed in the Support Center on the upper left-hand side. The json above is what worked for me. But, I expect this set of security permissions may be too open. To increase security, you could slowly pare down the permissions and see if Zappa still deploys. The settings above are the ones that finally worked for me. You can dig through this discussion on GitHub if you want to learn more about specific AWS permissions needed to run Zappa: https://github.com/Miserlou/Zappa/issues/244."
},
{
"code": null,
"e": 14796,
"s": 14745,
"text": "Create a .aws/credentials folder in your root with"
},
{
"code": null,
"e": 14832,
"s": 14796,
"text": "mkdir ~/.awscode ~/.aws/credentials"
},
{
"code": null,
"e": 14868,
"s": 14832,
"text": "and paste your credentials from AWS"
},
{
"code": null,
"e": 14934,
"s": 14868,
"text": "[dev]aws_access_key_id = YOUR_KEYaws_secret_access_key = YOUR_KEY"
},
{
"code": null,
"e": 14955,
"s": 14934,
"text": "Same with the config"
},
{
"code": null,
"e": 15022,
"s": 14955,
"text": "code ~/.aws/config[default]region = YOUR_REGION (eg. eu-central-1)"
},
{
"code": null,
"e": 15095,
"s": 15022,
"text": "Note that code is for opening a folder with vscode, my editor of choice."
},
{
"code": null,
"e": 15317,
"s": 15095,
"text": "Save the AWS access key id and secret access key assigned to the user you created in the file ~/.aws/credentials. Note the .aws/ directory needs to be in your home directory and the credentials file has no file extension."
},
{
"code": null,
"e": 15353,
"s": 15317,
"text": "Now you can do deploy your API with"
},
{
"code": null,
"e": 15370,
"s": 15353,
"text": "zappa deploy dev"
},
{
"code": null,
"e": 15463,
"s": 15370,
"text": "There shouldn’t be any errors anymore. However, if there are still some, you can debug with:"
},
{
"code": null,
"e": 15486,
"s": 15463,
"text": "zappa statuszappa tail"
},
{
"code": null,
"e": 15690,
"s": 15486,
"text": "The most common errors are permission related (then check your permission policy) or about python libraries that are incompatible. Either way, zappa will provide good enough error messages for debugging."
},
{
"code": null,
"e": 15765,
"s": 15690,
"text": "If you update your code don’t forget to update the deployment as well with"
},
{
"code": null,
"e": 15782,
"s": 15765,
"text": "zappa update dev"
},
{
"code": null,
"e": 15907,
"s": 15782,
"text": "To set up the API on a market we need to first restrict its usage with an API-key and then set it up on the market platform."
},
{
"code": null,
"e": 16165,
"s": 15907,
"text": "I found this article from Nagesh Bansal to be helpful. He explains the next section in great detail. My following bullet point approach is a quick summary and I often quote his steps. Please check out his article for more details if you are stuck somewhere."
},
{
"code": null,
"e": 16189,
"s": 16165,
"text": "Again, I break it down:"
},
{
"code": null,
"e": 16807,
"s": 16189,
"text": "go to your AWS Console and go to API gatewayclick on your APIwe want to create an x-api-key to restrict undesired access to the API and also have a metered usagecreate a Usage plan for the API, with the desired throttle and quota limitscreate an associated API stageadd an API keyin the API key overview section, click “show” at the API key and copy itthen associate the API with the key and discard all requests that come without the keygo back to the API overview. under resources, click the “/ any” go to the “method request”. then in settings, set “API key required” to truedo the same for the “/{proxy+} Methods”"
},
{
"code": null,
"e": 16852,
"s": 16807,
"text": "go to your AWS Console and go to API gateway"
},
{
"code": null,
"e": 16870,
"s": 16852,
"text": "click on your API"
},
{
"code": null,
"e": 16971,
"s": 16870,
"text": "we want to create an x-api-key to restrict undesired access to the API and also have a metered usage"
},
{
"code": null,
"e": 17047,
"s": 16971,
"text": "create a Usage plan for the API, with the desired throttle and quota limits"
},
{
"code": null,
"e": 17078,
"s": 17047,
"text": "create an associated API stage"
},
{
"code": null,
"e": 17093,
"s": 17078,
"text": "add an API key"
},
{
"code": null,
"e": 17166,
"s": 17093,
"text": "in the API key overview section, click “show” at the API key and copy it"
},
{
"code": null,
"e": 17253,
"s": 17166,
"text": "then associate the API with the key and discard all requests that come without the key"
},
{
"code": null,
"e": 17394,
"s": 17253,
"text": "go back to the API overview. under resources, click the “/ any” go to the “method request”. then in settings, set “API key required” to true"
},
{
"code": null,
"e": 17434,
"s": 17394,
"text": "do the same for the “/{proxy+} Methods”"
},
{
"code": null,
"e": 17453,
"s": 17434,
"text": "it looks like this"
},
{
"code": null,
"e": 17497,
"s": 17453,
"text": "Now you have restricted access to your API."
},
{
"code": null,
"e": 17520,
"s": 17497,
"text": "Create API on Rapidapi"
},
{
"code": null,
"e": 17538,
"s": 17520,
"text": "Test your own API"
},
{
"code": null,
"e": 17570,
"s": 17538,
"text": "Create private plan for testing"
},
{
"code": null,
"e": 17598,
"s": 17570,
"text": "Test endpoint with rapidapi"
},
{
"code": null,
"e": 17625,
"s": 17598,
"text": "Create code to consume API"
},
{
"code": null,
"e": 17936,
"s": 17625,
"text": "Go to “My APIs” and “Add new API”Add the name, description, and category. Note that you cannot change your API name afterward anymoreIn settings, add the URL of your AWS API (it was displayed when you deployed with zappa)In the section “Access Control” under “Transformations”, add the API key you added in AWS"
},
{
"code": null,
"e": 17970,
"s": 17936,
"text": "Go to “My APIs” and “Add new API”"
},
{
"code": null,
"e": 18071,
"s": 17970,
"text": "Add the name, description, and category. Note that you cannot change your API name afterward anymore"
},
{
"code": null,
"e": 18160,
"s": 18071,
"text": "In settings, add the URL of your AWS API (it was displayed when you deployed with zappa)"
},
{
"code": null,
"e": 18250,
"s": 18160,
"text": "In the section “Access Control” under “Transformations”, add the API key you added in AWS"
},
{
"code": null,
"e": 18298,
"s": 18250,
"text": "5. In the security tab you can check everything"
},
{
"code": null,
"e": 18397,
"s": 18298,
"text": "6. Then go to “endpoints” to add the routes from you Python app by clicking “create REST endpoint”"
},
{
"code": null,
"e": 18426,
"s": 18397,
"text": "7. Add an image for your API"
},
{
"code": null,
"e": 18597,
"s": 18426,
"text": "8. Set a pricing plan. Rapidapi published an own article on pricing options and strategies. As they conclude, it is up to your preferences and product on how to price it."
},
{
"code": null,
"e": 18821,
"s": 18597,
"text": "9. I created a freemium pricing plan. The reason for that is that I want to give the chance to test it without cost, but add a price for using it regularly. Also, I want to create a plan for supporting my work. For example:"
},
{
"code": null,
"e": 18989,
"s": 18821,
"text": "10. Create some docs and a tutorial. This is pretty self-explaining. It is encouraged to do so as it is easier for people to use your API if it is documented properly."
},
{
"code": null,
"e": 19108,
"s": 18989,
"text": "11. The last step is to make your API publicly available. But before you do that it is useful to test it for yourself."
},
{
"code": null,
"e": 19297,
"s": 19108,
"text": "Having set up everything, you of course should test it with the provided snippets. This step is not trivial and I had to contact the support to understand it. Now I am simplifying it here."
},
{
"code": null,
"e": 19355,
"s": 19297,
"text": "Create a private plan for yourself, by setting no limits."
},
{
"code": null,
"e": 19483,
"s": 19355,
"text": "The go to the “Users” section of your API, then to “Users on free plans”, select yourself and “invite” you to the private plan."
},
{
"code": null,
"e": 19590,
"s": 19483,
"text": "Now you are subscribed to your own private plan and can test the functionality with the provided snippets."
},
{
"code": null,
"e": 19686,
"s": 19590,
"text": "Upload an example excel file and click on “test endpoint”. Then you will get a 200 ok response."
},
{
"code": null,
"e": 19815,
"s": 19686,
"text": "To consume the API now you can simply copy the snippet that Rapidapi provides. For example with Python and the requests library:"
},
{
"code": null,
"e": 20147,
"s": 19815,
"text": "import requestsurl = \"https://excel-to-other-formats.p.rapidapi.com/upload\"payload = \"\"headers = { 'x-rapidapi-host': \"excel-to-other-formats.p.rapidapi.com\", 'x-rapidapi-key': \"YOUR_KEY\", 'content-type': \"multipart/form-data\" }response = requests.request(\"POST\", url, data=payload, headers=headers)print(response.text)"
},
{
"code": null,
"e": 20275,
"s": 20147,
"text": "The article “API as a product. How to sell your work when all you know is a back-end” by Artem provided a great idea, namely to"
},
{
"code": null,
"e": 20309,
"s": 20275,
"text": "Make an API that solves a problem"
},
{
"code": null,
"e": 20350,
"s": 20309,
"text": "Deploy it with a serverless architecture"
},
{
"code": null,
"e": 20388,
"s": 20350,
"text": "Distribute through an API Marketplace"
},
{
"code": null,
"e": 20469,
"s": 20388,
"text": "For the setting everything I found the articles from Nagesh Bansal very helpful:"
},
{
"code": null,
"e": 20537,
"s": 20469,
"text": "https://medium.com/@bansalnagesh/how-to-sell-your-apis-b4b5c9a273f8"
},
{
"code": null,
"e": 20652,
"s": 20537,
"text": "https://medium.com/@bansalnagesh/launch-your-api-on-aws-with-0-upfront-cost-using-zappa-in-10-minutes-eb6d00623842"
},
{
"code": null,
"e": 20781,
"s": 20652,
"text": "Also this article from Peter Kazarinoff: https://pythonforundergradengineers.com/deploy-serverless-web-app-aws-lambda-zappa.html"
},
{
"code": null,
"e": 20839,
"s": 20781,
"text": "I encourage you to have a look at those articles as well."
},
{
"code": null,
"e": 20916,
"s": 20839,
"text": "You can also read my article directly on Github (for better code formatting)"
},
{
"code": null,
"e": 21078,
"s": 20916,
"text": "Daniel is an entrepreneur, software developer, and lawyer. He has worked at various IT companies, tax advisory, management consulting, and at the Austrian court."
},
{
"code": null,
"e": 21322,
"s": 21078,
"text": "His knowledge and interests currently revolve around programming machine learning applications and all its related aspects. To the core, he considers himself a problem solver of complex environments, which is reflected in his various projects."
}
] |
Angular Material 7 - Progress Bar
|
The <mat-progress-bar>, an Angular Directive, is used to show a progress bar with material styling.
In this chapter, we will showcase the configuration required to draw a determinate as well as an indeterminate progress bar using Angular Material.
Follow these steps to update the Angular application we created in the Angular 6 - Project Setup chapter −
Following is the content of the modified module descriptor app.module.ts.
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import {BrowserAnimationsModule} from '@angular/platform-browser/animations';
import {MatProgressBarModule, MatRadioModule, MatSliderModule} from '@angular/material'
import {FormsModule, ReactiveFormsModule} from '@angular/forms';
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
BrowserAnimationsModule,
MatProgressBarModule, MatRadioModule, MatSliderModule,
FormsModule,
ReactiveFormsModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
Following is the content of the modified CSS file app.component.css.
.tp-section {
display: flex;
align-content: center;
align-items: center;
height: 60px;
}
.tp-margin {
margin: 0 10px;
}
Following is the content of the modified HTML host file app.component.html.
<section class = "tp-section">
<label class = "tp-margin">Color:</label>
<mat-radio-group [(ngModel)] = "color">
<mat-radio-button class = "tp-margin" value = "primary">
Primary
</mat-radio-button>
<mat-radio-button class = "tp-margin" value = "accent">
Accent
</mat-radio-button>
<mat-radio-button class = "tp-margin" value = "warn">
Warn
</mat-radio-button>
</mat-radio-group>
</section>
<section class = "tp-section">
<label class = "tp-margin">Mode:</label>
<mat-radio-group [(ngModel)] = "mode">
<mat-radio-button class = "tp-margin" value = "determinate">
Determinate
</mat-radio-button>
<mat-radio-button class = "tp-margin" value = "indeterminate">
Indeterminate
</mat-radio-button>
<mat-radio-button class = "tp-margin" value = "buffer">
Buffer
</mat-radio-button>
<mat-radio-button class = "tp-margin" value = "query">
Query
</mat-radio-button>
</mat-radio-group>
</section>
<section class = "tp-section" *ngIf = "mode === 'determinate' || mode === 'buffer'">
<label class = "tp-margin">Progress:</label>
<mat-slider class = "tp-margin" [(ngModel)] = "value"></mat-slider>
</section>
<section class = "tp-section" *ngIf = "mode === 'buffer'">
<label class = "tp-margin">Buffer:</label>
<mat-slider class = "tp-margin" [(ngModel)] = "bufferValue"></mat-slider>
</section>
<section class = "tp-section">
<label class = "tp-margin">Mode: {{mode}}</label>
<mat-progress-bar
class = "tp-margin"
[color] = "color"
[mode] = "mode"
[value] = "value"
[bufferValue] = "bufferValue"
>
</mat-progress-bar>
</section>
Following is the content of the modified ts file app.component.ts.
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'materialApp';
color = 'primary';
mode = 'determinate';
value = 50;
bufferValue = 75;
}
Verify the result.
Here, we've created a progress bar using mat-progress-bar.
|
[
{
"code": null,
"e": 2855,
"s": 2755,
"text": "The <mat-progress-bar>, an Angular Directive, is used to show a progress bar with material styling."
},
{
"code": null,
"e": 3004,
"s": 2855,
"text": "In this chapter, we will showcase the configuration required to draw a deterministic as well as indeterministic progress bar using Angular Material."
},
{
"code": null,
"e": 3115,
"s": 3004,
"text": "Follow the following steps to update the Angular application we created in Angular 6 - Project Setup chapter −"
},
{
"code": null,
"e": 3189,
"s": 3115,
"text": "Following is the content of the modified module descriptor app.module.ts."
},
{
"code": null,
"e": 3880,
"s": 3189,
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { AppComponent } from './app.component';\nimport {BrowserAnimationsModule} from '@angular/platform-browser/animations';\nimport {MatProgressBarModule, MatRadioModule, MatSliderModule} from '@angular/material'\nimport {FormsModule, ReactiveFormsModule} from '@angular/forms';\n@NgModule({\n declarations: [\n AppComponent\n ],\n imports: [\n BrowserModule,\n BrowserAnimationsModule,\n MatProgressBarModule, MatRadioModule, MatSliderModule,\n FormsModule,\n ReactiveFormsModule\n ],\n providers: [],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }"
},
{
"code": null,
"e": 3948,
"s": 3880,
"text": "Following is the content of the modified ts file app.component.css."
},
{
"code": null,
"e": 4083,
"s": 3948,
"text": ".tp-section {\n display: flex;\n align-content: center;\n align-items: center;\n height: 60px;\n}\n.tp-margin {\n margin: 0 10px;\n}"
},
{
"code": null,
"e": 4159,
"s": 4083,
"text": "Following is the content of the modified HTML host file app.component.html."
},
{
"code": null,
"e": 5896,
"s": 4159,
"text": "<section class = \"tp-section\">\n <label class = \"tp-margin\">Color:</label>\n <mat-radio-group [(ngModel)] = \"color\">\n <mat-radio-button class = \"tp-margin\" value = \"primary\">\n Primary\n </mat-radio-button>\n <mat-radio-button class = \"tp-margin\" value = \"accent\">\n Accent\n </mat-radio-button>\n <mat-radio-button class = \"tp-margin\" value = \"warn\">\n Warn\n </mat-radio-button>\n </mat-radio-group>\n</section>\n<section class = \"tp-section\">\n <label class = \"tp-margin\">Mode:</label>\n <mat-radio-group [(ngModel)] = \"mode\">\n <mat-radio-button class = \"tp-margin\" value = \"determinate\">\n Determinate\n </mat-radio-button>\n <mat-radio-button class = \"tp-margin\" value = \"indeterminate\">\n Indeterminate\n </mat-radio-button>\n <mat-radio-button class = \"tp-margin\" value = \"buffer\">\n Buffer\n </mat-radio-button>\n <mat-radio-button class = \"tp-margin\" value = \"query\">\n Query\n </mat-radio-button>\n </mat-radio-group>\n</section>\n<section class = \"tp-section\" *ngIf = \"mode === 'determinate' || mode === 'buffer'\">\n <label class = \"tp-margin\">Progress:</label>\n <mat-slider class = \"tp-margin\" [(ngModel)] = \"value\"></mat-slider>\n</section>\n<section class = \"tp-section\" *ngIf = \"mode === 'buffer'\">\n <label class = \"tp-margin\">Buffer:</label>\n <mat-slider class = \"tp-margin\" [(ngModel)] = \"bufferValue\"></mat-slider>\n</section>\n<section class = \"tp-section\">\n <label class = \"tp-margin\">Mode: {{mode}}</label>\n <mat-progress-bar\n class = \"tp-margin\"\n [color] = \"color\"\n [mode] = \"mode\"\n [value] = \"value\"\n [bufferValue] = \"bufferValue\"\n >\n </mat-progress-bar>\n</section>"
},
{
"code": null,
"e": 5963,
"s": 5896,
"text": "Following is the content of the modified ts file app.component.ts."
},
{
"code": null,
"e": 6265,
"s": 5963,
"text": "import { Component } from '@angular/core';\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\nexport class AppComponent {\n title = 'materialApp'; \n color = 'primary';\n mode = 'determinate';\n value = 50;\n bufferValue = 75;\n}"
},
{
"code": null,
"e": 6284,
"s": 6265,
"text": "Verify the result."
},
{
"code": null,
"e": 6341,
"s": 6284,
"text": "Here, we've created progress bar using mat-progress-bar."
},
{
"code": null,
"e": 6376,
"s": 6341,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6390,
"s": 6376,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 6425,
"s": 6390,
"text": "\n 28 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 6439,
"s": 6425,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 6474,
"s": 6439,
"text": "\n 11 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 6494,
"s": 6474,
"text": " SHIVPRASAD KOIRALA"
},
{
"code": null,
"e": 6529,
"s": 6494,
"text": "\n 16 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 6546,
"s": 6529,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 6579,
"s": 6546,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6591,
"s": 6579,
"text": " Senol Atac"
},
{
"code": null,
"e": 6626,
"s": 6591,
"text": "\n 53 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 6638,
"s": 6626,
"text": " Senol Atac"
},
{
"code": null,
"e": 6645,
"s": 6638,
"text": " Print"
},
{
"code": null,
"e": 6656,
"s": 6645,
"text": " Add Notes"
}
] |
Binary representation of a given number in C++
|
A binary number is a number that consists of only two digits 0 and 1. For example, 01010111.
There are various ways to represent a given number in binary form.
This method is used to represent a number in its binary form using recursion.
Step 1 : If number > 1, follow steps 2 and 3.
Step 2 : Push the number onto a stack.
Step 3 : Call the function recursively with number/2.
Step 4 : Pop the number from the stack and print the remainder of dividing it by 2.
#include<iostream>
using namespace std;
void tobinary(unsigned number){
if (number > 1)
tobinary(number/2);
cout << number % 2;
}
int main(){
int n = 6;
cout<<"The number is "<<n<<" and its binary representation is ";
tobinary(n);
n = 12;
cout<<"\nThe number is "<<n<<" and its binary representation is ";
tobinary(n);
}
The number is 6 and its binary representation is 110
The number is 12 and its binary representation is 1100
|
[
{
"code": null,
"e": 1155,
"s": 1062,
"text": "A binary number is a number that consists of only two digits 0 and 1. For example, 01010111."
},
{
"code": null,
"e": 1222,
"s": 1155,
"text": "There are various ways to represent a given number in binary form."
},
{
"code": null,
"e": 1300,
"s": 1222,
"text": "This method is used to represent a number in its binary form using recursion."
},
{
"code": null,
"e": 1503,
"s": 1300,
"text": "Step 1 : if number > 1. Follow step 2 and 3.\nStep 2 : push the number to a stand.\nStep 3 : call function recursively with number/2\nStep 4 : pop number from stack and print remainder by dividing it by 2."
},
{
"code": null,
"e": 1865,
"s": 1514,
"text": "#include<iostream>\nusing namespace std;\nvoid tobinary(unsigned number){\n if (number > 1)\n tobinary(number/2);\n cout << number % 2;\n}\nint main(){\n int n = 6;\n cout<<\"The number is \"<<n<<\" and its binary representation is \";\n tobinary(n);\n n = 12;\n cout<<\"\\nThe number is \"<<n<<\" and its binary representation is \";\n tobinary(n);\n}"
},
{
"code": null,
"e": 1973,
"s": 1865,
"text": "The number is 6 and its binary representation is 110\nThe number is 12 and its binary representation is 1100"
}
] |
Int16.Parse(String) Method in C# with Examples - GeeksforGeeks
|
12 Jun, 2019
Int16.Parse(String) Method is used to convert the string representation of a number to its 16-bit signed integer equivalent.
Syntax:
public static short Parse (string str);
Here, str is a string that contains a number to convert. The format of str will be [optional white space][optional sign]digits[optional white space].
Return Value: It is a 16-bit signed integer equivalent to the number contained in str.
Exceptions:
ArgumentNullException: If str is null.
FormatException: If str is not in the correct format.
OverflowException: If str represents a number less than MinValue or greater than MaxValue.
Below programs illustrate the use of above-discussed method:
Example 1:
// C# program to demonstrate
// Int16.Parse(String) Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // passing different values
        // to the method to check
        checkParse("14321");
        checkParse("15,784");
        checkParse("-4589");
        checkParse(" 456");
    }

    // Defining checkParse method
    public static void checkParse(string input)
    {
        try {

            // declaring Int16 variable
            short val;

            // getting parsed value
            val = Int16.Parse(input);
            Console.WriteLine("'{0}' parsed as {1}", input, val);
        }
        catch (FormatException) {
            Console.WriteLine("Can't Parsed '{0}'", input);
        }
    }
}
'14321' parsed as 14321
Can't Parsed '15,784'
'-4589' parsed as -4589
' 456' parsed as 456
Example 2: For ArgumentNullException
// C# program to demonstrate
// Int16.Parse(String) Method
// for ArgumentNullException
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        try {

            // passing a null value as an input
            checkParse(null);
        }
        catch (ArgumentNullException e) {
            Console.Write("Exception Thrown: ");
            Console.Write("{0}", e.GetType(), e.Message);
        }
        catch (FormatException e) {
            Console.Write("Exception Thrown: ");
            Console.Write("{0}", e.GetType(), e.Message);
        }
    }

    // Defining checkParse method
    public static void checkParse(string input)
    {
        // declaring Int16 variable
        short val;

        // getting parsed value
        val = Int16.Parse(input);
        Console.WriteLine("'{0}' parsed as {1}", input, val);
    }
}
Exception Thrown: System.ArgumentNullException
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.int16.parse?view=netframework-4.7.2#System_Int16_Parse_System_String_
|
[
{
"code": null,
"e": 23911,
"s": 23883,
"text": "\n12 Jun, 2019"
},
{
"code": null,
"e": 24036,
"s": 23911,
"text": "Int16.Parse(String) Method is used to convert the string representation of a number to its 16-bit signed integer equivalent."
},
{
"code": null,
"e": 24044,
"s": 24036,
"text": "Syntax:"
},
{
"code": null,
"e": 24084,
"s": 24044,
"text": "public static short Parse (string str);"
},
{
"code": null,
"e": 24234,
"s": 24084,
"text": "Here, str is a string that contains a number to convert. The format of str will be [optional white space][optional sign]digits[optional white space]."
},
{
"code": null,
"e": 24321,
"s": 24234,
"text": "Return Value: It is a 16-bit signed integer equivalent to the number contained in str."
},
{
"code": null,
"e": 24333,
"s": 24321,
"text": "Exceptions:"
},
{
"code": null,
"e": 24372,
"s": 24333,
"text": "ArgumentNullException: If str is null."
},
{
"code": null,
"e": 24426,
"s": 24372,
"text": "FormatException: If str is not in the correct format."
},
{
"code": null,
"e": 24517,
"s": 24426,
"text": "OverflowException: If str represents a number less than MinValue or greater than MaxValue."
},
{
"code": null,
"e": 24578,
"s": 24517,
"text": "Below programs illustrate the use of above-discussed method:"
},
{
"code": null,
"e": 24589,
"s": 24578,
"text": "Example 1:"
},
{
"code": "// C# program to demonstrate// Int16.Parse(String) Methodusing System; class GFG { // Main Method public static void Main() { // passing different values // to the method to check checkParse(\"14321\"); checkParse(\"15,784\"); checkParse(\"-4589\"); checkParse(\" 456\"); } // Defining checkParse method public static void checkParse(string input) { try { // declaring Int16 variable short val; // getting parsed value val = Int16.Parse(input); Console.WriteLine(\"'{0}' parsed as {1}\", input, val); } catch (FormatException) { Console.WriteLine(\"Can't Parsed '{0}'\", input); } }}",
"e": 25344,
"s": 24589,
"text": null
},
{
"code": null,
"e": 25436,
"s": 25344,
"text": "'14321' parsed as 14321\nCan't Parsed '15,784'\n'-4589' parsed as -4589\n' 456' parsed as 456\n"
},
{
"code": null,
"e": 25473,
"s": 25436,
"text": "Example 2: For ArgumentNullException"
},
{
"code": "// C# program to demonstrate// Int16.Parse(String) Method// for ArgumentNullExceptionusing System; class GFG { // Main Method public static void Main() { try { // passing null value as a input checkParse(null); } catch (ArgumentNullException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (FormatException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } } // Defining checkparse method public static void checkParse(string input) { // declaring Int16 variable short val; // getting parsed value val = Int16.Parse(input); Console.WriteLine(\"'{0}' parsed as {1}\", input, val); }}",
"e": 26330,
"s": 25473,
"text": null
},
{
"code": null,
"e": 26378,
"s": 26330,
"text": "Exception Thrown: System.ArgumentNullException\n"
},
{
"code": null,
"e": 26389,
"s": 26378,
"text": "Reference:"
},
{
"code": null,
"e": 26510,
"s": 26389,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.int16.parse?view=netframework-4.7.2#System_Int16_Parse_System_String_"
},
{
"code": null,
"e": 26530,
"s": 26510,
"text": "CSharp-Int16-Struct"
},
{
"code": null,
"e": 26544,
"s": 26530,
"text": "CSharp-method"
},
{
"code": null,
"e": 26547,
"s": 26544,
"text": "C#"
},
{
"code": null,
"e": 26645,
"s": 26547,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26654,
"s": 26645,
"text": "Comments"
},
{
"code": null,
"e": 26667,
"s": 26654,
"text": "Old Comments"
},
{
"code": null,
"e": 26707,
"s": 26667,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 26730,
"s": 26707,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 26758,
"s": 26730,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 26780,
"s": 26758,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 26797,
"s": 26780,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 26837,
"s": 26797,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 26870,
"s": 26837,
"text": "Linked List Implementation in C#"
},
{
"code": null,
"e": 26913,
"s": 26870,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 26929,
"s": 26913,
"text": "C# | List Class"
}
] |
Minimum value of "max + min" in a subarray - GeeksforGeeks
|
13 May, 2021
Given an array of n positive elements, we need to find the lowest possible sum of the maximum and minimum elements of a subarray, given that the subarray size must be greater than or equal to 2. Examples:
Input : arr[] = {1 12 2 2}
Output : 4
Sum of 2+2 of subarray [2, 2]
Input : arr[] = {10 20 30 40 23 45}
Output : 30
Sum of 10+20 of subarray[10, 20]
A simple solution is to generate all subarrays, compute the sum of the maximum and minimum of each, and finally return the lowest sum. An efficient solution is based on the observation that the optimal subarray always has length 2. Consider the array [a1, a2, a3, a4, a5....an]. For any subarray [al, ar] whose minimum is ai, the pair consisting of ai and one of its neighbours inside that subarray has the same minimum ai and a maximum no larger than max(subarray), so its sum ai + max is no larger. Hence it is always optimal to consider only subarrays of length 2. In short, consider all subarrays of length 2, compare their sums and take the minimum; this reduces the time complexity to O(N), since we now have to run the loop only once.
C++
Java
Python 3
C#
PHP
Javascript
// CPP program to find sum of maximum and
// minimum in any subarray of an array of
// positive numbers.
#include <bits/stdc++.h>
using namespace std;

int maxSum(int arr[], int n)
{
    if (n < 2)
        return -1;
    int ans = arr[0] + arr[1];
    for (int i = 1; i + 1 < n; i++)
        ans = min(ans, (arr[i] + arr[i + 1]));
    return ans;
}

// Driver code
int main()
{
    int arr[] = {1, 12, 2, 2};
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << maxSum(arr, n);
    return 0;
}
// Java program to find sum of maximum and
// minimum in any subarray of an array of
// positive numbers.
import java.io.*;

class GFG {
    static int maxSum(int arr[], int n)
    {
        if (n < 2)
            return -1;
        int ans = arr[0] + arr[1];
        for (int i = 1; i + 1 < n; i++)
            ans = Math.min(ans, (arr[i] + arr[i + 1]));
        return ans;
    }

    // Driver code
    public static void main(String[] args)
    {
        int arr[] = {1, 12, 2, 2};
        int n = arr.length;
        System.out.println(maxSum(arr, n));
    }
}

// This code is contributed by anuj_67.
# Python 3 program to find sum of maximum
# and minimum in any subarray of an array
# of positive numbers.

def maxSum(arr, n):
    if (n < 2):
        return -1
    ans = arr[0] + arr[1]
    for i in range(1, n - 1, 1):
        ans = min(ans, (arr[i] + arr[i + 1]))
    return ans

# Driver code
if __name__ == '__main__':
    arr = [1, 12, 2, 2]
    n = len(arr)
    print(maxSum(arr, n))

# This code is contributed by
# Surendra_Gangwar
// C# program to find sum of maximum and
// minimum in any subarray of an array of
// positive numbers.
using System;

class GFG {
    static int maxSum(int []arr, int n)
    {
        if (n < 2)
            return -1;
        int ans = arr[0] + arr[1];
        for (int i = 1; i + 1 < n; i++)
            ans = Math.Min(ans, (arr[i] + arr[i + 1]));
        return ans;
    }

    // Driver code
    public static void Main()
    {
        int []arr = {1, 12, 2, 2};
        int n = arr.Length;
        Console.WriteLine(maxSum(arr, n));
    }
}

// This code is contributed by anuj_67.
<?php
// PHP program to find sum of
// maximum and minimum in any
// subarray of an array of
// positive numbers.

function maxSum($arr, $n)
{
    if ($n < 2)
        return -1;

    $ans = $arr[0] + $arr[1];
    for ($i = 1; $i + 1 < $n; $i++)
        $ans = min($ans, ($arr[$i] + $arr[$i + 1]));
    return $ans;
}

// Driver code
$arr = array(1, 12, 2, 2);
$n = count($arr);
echo maxSum($arr, $n);

// This code is contributed by anuj_67.
?>
<script>
// Javascript program to find sum of maximum and
// minimum in any subarray of an array of
// positive numbers.
function maxSum(arr, n)
{
    if (n < 2)
        return -1;

    let ans = arr[0] + arr[1];
    for (let i = 1; i + 1 < n; i++)
        ans = Math.min(ans, (arr[i] + arr[i + 1]));
    return ans;
}

// Driver code
let arr = [1, 12, 2, 2];
let n = arr.length;
document.write(maxSum(arr, n));
// This code is contributed by sravan
</script>
4
Time Complexity : O(n) Auxiliary Space : O(1)
vt_m
SURENDRA_GANGWAR
sravankumar8128
Arrays
Searching
|
[
{
"code": null,
"e": 25080,
"s": 25052,
"text": "\n13 May, 2021"
},
{
"code": null,
"e": 25269,
"s": 25080,
"text": "Given a array of n positive elements we need to find the lowest possible sum of max and min elements in a subarray given that size of subarray should be greater than equal to 2.Examples: "
},
{
"code": null,
"e": 25421,
"s": 25269,
"text": "Input : arr[] = {1 12 2 2}\nOutput : 4\nSum of 2+2 of subarray [2, 2]\n\nInput : arr[] = {10 20 30 40 23 45}\nOutput : 30 \nSum of 10+20 of subarray[10, 20]"
},
{
"code": null,
"e": 26284,
"s": 25423,
"text": "A simple solution is to generate all subarrays, compute sum of maximum and minimum and finally return lowest sum.An efficient solution is based on the fact that adding any element to a subarray would not increase sum of maximum and minimum. Consider the array [a1, a2, a3, a4, a5....an] Each ai will be minimum of some subarray [al, ar] such that i lies between [l, r] and all elements in the subarray are greater than or equal to ai. The cost of such subarray would be ai + max(subarray). Since the max of an array will never decrease on adding elements to the array, It will only increase if we add larger elements so It is always optimal to consider only those subarrays having length 2.In short consider all subarrays of length 2 and compare sum and take the minimum one which will reduce the time complexity by O(N) now we have to run the loop only once. "
},
{
"code": null,
"e": 26288,
"s": 26284,
"text": "C++"
},
{
"code": null,
"e": 26293,
"s": 26288,
"text": "Java"
},
{
"code": null,
"e": 26302,
"s": 26293,
"text": "Python 3"
},
{
"code": null,
"e": 26305,
"s": 26302,
"text": "C#"
},
{
"code": null,
"e": 26309,
"s": 26305,
"text": "PHP"
},
{
"code": null,
"e": 26320,
"s": 26309,
"text": "Javascript"
},
{
"code": "// CPP program to find sum of maximum and// minimum in any subarray of an array of// positive numbers.#include <bits/stdc++.h>using namespace std; int maxSum(int arr[], int n){ if (n < 2) return -1; int ans = arr[0] + arr[1]; for (int i = 1; i + 1 < n; i++) ans = min(ans, (arr[i] + arr[i + 1])); return ans;} // Driver codeint main(){ int arr[] = {1, 12, 2, 2}; int n = sizeof(arr) / sizeof(arr[0]); cout << maxSum(arr, n); return 0;}",
"e": 26794,
"s": 26320,
"text": null
},
{
"code": "// java program to find sum of maximum and// minimum in any subarray of an array of// positive numbers.import java.io.*; class GFG { static int maxSum(int arr[], int n) { if (n < 2) return -1; int ans = arr[0] + arr[1]; for (int i = 1; i + 1 < n; i++) ans = Math.min(ans, (arr[i] + arr[i + 1])); return ans; } // Driver code public static void main (String[] args) { int arr[] = {1, 12, 2, 2}; int n = arr.length; System.out.println( maxSum(arr, n)); }} // This code is contributed by anuj_67.",
"e": 27425,
"s": 26794,
"text": null
},
{
"code": "# Python 3 program to find sum of maximum# and minimum in any subarray of an array# of positive numbers. def maxSum(arr, n): if (n < 2): return -1 ans = arr[0] + arr[1] for i in range(1, n - 1, 1): ans = min(ans, (arr[i] + arr[i + 1])) return ans # Driver codeif __name__ == '__main__': arr = [1, 12, 2, 2] n = len(arr) print(maxSum(arr, n)) # This code is contributed by# Surendra_Gangwar",
"e": 27850,
"s": 27425,
"text": null
},
{
"code": "// C# program to find sum of maximum and// minimum in any subarray of an array of// positive numbers.using System ; class GFG { static int maxSum(int []arr, int n) { if (n < 2) return -1; int ans = arr[0] + arr[1]; for (int i = 1; i + 1 < n; i++) ans = Math.Min(ans, (arr[i] + arr[i + 1])); return ans; } // Driver code public static void Main () { int []arr = {1, 12, 2, 2}; int n = arr.Length; Console.WriteLine( maxSum(arr, n)); }} // This code is contributed by anuj_67.",
"e": 28459,
"s": 27850,
"text": null
},
{
"code": "<?php// PHP program to find sum of// maximum and minimum in any// subarray of an array of// positive numbers. function maxSum( $arr, $n){ if ($n < 2) return -1; $ans = $arr[0] + $arr[1]; for ( $i = 1; $i + 1 < $n; $i++) $ans = min($ans, ($arr[$i] + $arr[$i + 1])); return $ans;} // Driver code $arr = array(1, 12, 2, 2); $n = count($arr); echo maxSum($arr, $n); // This code is contributed by anuj_67.?>",
"e": 28922,
"s": 28459,
"text": null
},
{
"code": "<script>// java program to find sum of maximum and// minimum in any subarray of an array of// positive numbers.function maxSum(arr,n) { if (n < 2) return -1; let ans = arr[0] + arr[1]; for (let i = 1; i + 1 < n; i++) ans = Math.min(ans, (arr[i] + arr[i + 1])); return ans; } // Driver code let arr = [1, 12, 2, 2]; let n = arr.length; document.write( maxSum(arr, n));// This code is contributed by sravan</script>",
"e": 29458,
"s": 28922,
"text": null
},
{
"code": null,
"e": 29460,
"s": 29458,
"text": "4"
},
{
"code": null,
"e": 29509,
"s": 29462,
"text": "Time Complexity : O(n) Auxiliary Space : O(1) "
},
{
"code": null,
"e": 29514,
"s": 29509,
"text": "vt_m"
},
{
"code": null,
"e": 29531,
"s": 29514,
"text": "SURENDRA_GANGWAR"
},
{
"code": null,
"e": 29547,
"s": 29531,
"text": "sravankumar8128"
},
{
"code": null,
"e": 29554,
"s": 29547,
"text": "Arrays"
},
{
"code": null,
"e": 29564,
"s": 29554,
"text": "Searching"
},
{
"code": null,
"e": 29571,
"s": 29564,
"text": "Arrays"
},
{
"code": null,
"e": 29581,
"s": 29571,
"text": "Searching"
},
{
"code": null,
"e": 29679,
"s": 29581,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29711,
"s": 29679,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 29734,
"s": 29711,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 29802,
"s": 29734,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 29847,
"s": 29802,
"text": "Python | Using 2D arrays/lists the right way"
},
{
"code": null,
"e": 29868,
"s": 29847,
"text": "Linked List vs Array"
},
{
"code": null,
"e": 29882,
"s": 29868,
"text": "Binary Search"
},
{
"code": null,
"e": 29950,
"s": 29882,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 29974,
"s": 29950,
"text": "Find the Missing Number"
},
{
"code": null,
"e": 30030,
"s": 29974,
"text": "K'th Smallest/Largest Element in Unsorted Array | Set 1"
}
] |
How to move a file from one folder to another using Python?
|
The shutil module provides functions for moving files, as well as entire folders. For moving multiple files at once, you'll have to build a list of all the files you want to move and loop over them to move them.
Calling shutil.move(source, destination) will move the file at the path source to the folder at the path destination. (Both source and destination are strings.) If destination is a filename, it will be used as the new name of the moved file. This function returns a string of the path of the moved file.
import shutil, os
files = ['file1.txt', 'file2.txt', 'file3.txt']
for f in files:
shutil.move(f, 'dest_folder')
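The paragraph above also notes that when the destination is a filename, the file is renamed as it is moved, and that the new path is returned. A short sketch of that behaviour (the file and folder names here are hypothetical, chosen only for the demo):

```python
import os
import shutil

# Hypothetical demo setup: a destination folder and a file to move.
os.makedirs('dest_folder', exist_ok=True)
with open('report.txt', 'w') as f:
    f.write('data')

# The destination is a filename, so the file is moved AND renamed.
new_path = shutil.move('report.txt', os.path.join('dest_folder', 'report_old.txt'))

# shutil.move() returns the path of the moved file as a string.
print(new_path)
```

Note that the returned string makes it easy to keep track of where each file ended up when moving many files in a loop.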
|
[
{
"code": null,
"e": 1269,
"s": 1062,
"text": "The shutil module provides functions for moving files, as well as entire folders. For moving multiple files at once, you'll have to have a list of all files you want to copy and loop over them to copy them."
},
{
"code": null,
"e": 1574,
"s": 1269,
"text": "Calling shutil.move(source, destination) will move the file at the path source to the folder at the path destination. (Both source and destination are strings.) If destination is a filename, it will be used as the new name of the moved file. This function returns a string of the path of the moved file. "
},
{
"code": null,
"e": 1690,
"s": 1574,
"text": "import shutil, os\nfiles = ['file1.txt', 'file2.txt', 'file3.txt']\nfor f in files:\n shutil.move(f, 'dest_folder')"
}
] |
Element repetition in list in Python
|
There are scenarios when we need to repeat the values in a list. This duplication of values can be achieved in Python in the following ways.
It is a straightforward approach: pick each element, run it through an inner for loop to create its duplicate, and combine both loops in a list comprehension.
# Given list
listA = ['Mon', 'Tue', 9, 3, 3]
print("Given list : ",listA)
# Adding another element for each element
Newlist = [i for i in listA for n in (0, 1)]
# Result
print("New list after duplication: ",Newlist)
Running the above code gives us the following result −
Given list : ['Mon', 'Tue', 9, 3, 3]
New list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]
The itertools module deals with data manipulation in iterables. Here we apply chain.from_iterable, which flattens the [n, n] pairs produced for each element into a single list.
import itertools
# Given list
listA = ['Mon', 'Tue', 9, 3, 3]
print("Given list : ",listA)
# Adding another element for each element
Newlist = list(itertools.chain.from_iterable([n, n] for n in listA))
# Result
print("New list after duplication: ",Newlist)
Running the above code gives us the following result −
Given list : ['Mon', 'Tue', 9, 3, 3]
New list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]
The reduce function applies a particular function, passed to it as its first argument, across all the elements of the iterable passed to it as the second argument. We use it with the add function, which joins the (i, i) pair created for each element present in the list.
from functools import reduce
from operator import add
# Given list
listA = ['Mon', 'Tue', 9, 3, 3]
print("Given list : ",listA)
# Adding another element for each element
Newlist = list(reduce(add, [(i, i) for i in listA]))
# Result
print("New list after duplication: ",Newlist)
Running the above code gives us the following result −
Given list : ['Mon', 'Tue', 9, 3, 3]
New list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]
|
[
{
"code": null,
"e": 1202,
"s": 1062,
"text": "There are scenarios when we need to repeat the values in a list. This duplication of values can be achived in python in the following ways."
},
{
"code": null,
"e": 1367,
"s": 1202,
"text": "It is a straight forward approach in which pick each element, take through a inner for loop to create its duplicate and then pass both of them to an outer for loop."
},
{
"code": null,
"e": 1378,
"s": 1367,
"text": " Live Demo"
},
{
"code": null,
"e": 1597,
"s": 1378,
"text": "# Given list\nlistA = ['Mon', 'Tue', 9, 3, 3]\n\nprint(\"Given list : \",listA)\n\n# Adding another element for each element\nNewlist = [i for i in listA for n in (0, 1)]\n\n# Result\nprint(\"New list after duplication: \",Newlist)"
},
{
"code": null,
"e": 1652,
"s": 1597,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 1764,
"s": 1652,
"text": "Given list : ['Mon', 'Tue', 9, 3, 3]\nNew list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]"
},
{
"code": null,
"e": 1873,
"s": 1764,
"text": "The itertools module deals with data manipulation in iterables. Here we apply the chain.from_iterables which"
},
{
"code": null,
"e": 1884,
"s": 1873,
"text": " Live Demo"
},
{
"code": null,
"e": 2145,
"s": 1884,
"text": "import itertools\n\n# Given list\nlistA = ['Mon', 'Tue', 9, 3, 3]\n\nprint(\"Given list : \",listA)\n\n# Adding another element for each element\nNewlist = list(itertools.chain.from_iterable([n, n] for n in listA))\n\n# Result\nprint(\"New list after duplication: \",Newlist)"
},
{
"code": null,
"e": 2200,
"s": 2145,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2312,
"s": 2200,
"text": "Given list : ['Mon', 'Tue', 9, 3, 3]\nNew list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]"
},
{
"code": null,
"e": 2553,
"s": 2312,
"text": "The reduce function applies a particular function passed to it as an argument to all of the list elements passed onto it as second argument. We use this with add function which adds the duplicate element of each element present in the list."
},
{
"code": null,
"e": 2564,
"s": 2553,
"text": " Live Demo"
},
{
"code": null,
"e": 2846,
"s": 2564,
"text": "from functools import reduce\nfrom operator import add\n\n# Given list\nlistA = ['Mon', 'Tue', 9, 3, 3]\n\nprint(\"Given list : \",listA)\n\n# Adding another element for each element\nNewlist = list(reduce(add, [(i, i) for i in listA]))\n\n# Result\nprint(\"New list after duplication: \",Newlist)"
},
{
"code": null,
"e": 2901,
"s": 2846,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 3013,
"s": 2901,
"text": "Given list : ['Mon', 'Tue', 9, 3, 3]\nNew list after duplication: ['Mon', 'Mon', 'Tue', 'Tue', 9, 9, 3, 3, 3, 3]"
}
] |
How to extract the value names and counts from value_counts() in Pandas ? - GeeksforGeeks
|
11 Dec, 2020
Prerequisites:
pandas
matplotlib
In this article, we will learn how we can extract the names and counts using value_counts() from pandas. The pandas library is equipped with a number of useful functions, and 'value_counts' is one of them. This function returns the counts of unique items in a pandas data frame.
Syntax:
<object>.value_counts()
Import Required module.
Make the DataFrame
Process using value_counts()
Display data
Example 1: To print all the unique country names and the first country name in the list.
The tolist() function returns a list of the values.
Syntax: Index.tolist()
Parameters: None
Returns: list
Python3
import pandas as pd
import matplotlib.pyplot as plt

# Make example dataframe
df = pd.DataFrame([(1, 'Germany'), (2, 'France'),
                   (3, 'Indonesia'), (4, 'France'),
                   (5, 'France'), (6, 'Germany'),
                   (7, 'UK'), (8, 'India'),
                   (9, 'India'), (10, 'Germany')],
                  columns=['groupid', 'country'],
                  index=['a', 'b', 'c', 'd', 'e',
                         'f', 'g', 'h', 'i', 'j'])

# print all unique country names in the list
su1 = df['country'].value_counts().index.tolist()
print(su1)

# print 1st unique country name present in the list
su2 = df['country'].value_counts().index.tolist()[0]
print(su2)
Output:
Example 2: To print all the unique values of the column and the first value of the column.
value_counts() counts unique occurrences of values in a column.
Syntax: Series.value_counts()
Parameters: None
Returns: the count of occurrences of each of the unique values in this column.
Python3
import pandas as pd
import matplotlib.pyplot as plt

# Make example dataframe
df = pd.DataFrame([(1, 'Germany'), (2, 'France'),
                   (3, 'Indonesia'), (4, 'France'),
                   (5, 'France'), (6, 'Germany'),
                   (7, 'UK'), (8, 'India'),
                   (9, 'India'), (10, 'Germany')],
                  columns=['groupid', 'country'],
                  index=['a', 'b', 'c', 'd', 'e',
                         'f', 'g', 'h', 'i', 'j'])

# print country names and counts
su3 = df['country'].value_counts()
print(su3)

# print 1st country count in the list
su4 = df['country'].value_counts()[0]
print(su4)
Output:
Example 3: To print our data using a loop from a list.
Python3
import pandas as pd
import matplotlib.pyplot as plt

# Make example dataframe
df = pd.DataFrame([(1, 'Germany'), (2, 'France'),
                   (3, 'Indonesia'), (4, 'France'),
                   (5, 'France'), (6, 'Germany'),
                   (7, 'UK'), (8, 'India'),
                   (9, 'India'), (10, 'Germany')],
                  columns=['groupid', 'country'],
                  index=['a', 'b', 'c', 'd', 'e',
                         'f', 'g', 'h', 'i', 'j'])

# printing names and counts using a loop.
for idx, name in enumerate(df['country'].value_counts().index.tolist()):
    print('Name :', name)
    print('Counts :', df['country'].value_counts()[idx])
Output:
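The loop in Example 3 can also be written without positional lookups: value_counts() returns a Series whose index holds the unique values, so iterating it with Series.items() yields (name, count) pairs directly. This variant is a sketch, not part of the original article:

```python
import pandas as pd

# Same example data as above, rebuilt here so the snippet is self-contained.
df = pd.DataFrame([(1, 'Germany'), (2, 'France'), (3, 'Indonesia'),
                   (4, 'France'), (5, 'France'), (6, 'Germany'),
                   (7, 'UK'), (8, 'India'), (9, 'India'), (10, 'Germany')],
                  columns=['groupid', 'country'])

# Series.items() yields (index label, value) pairs: here (country, count).
for name, count in df['country'].value_counts().items():
    print('Name :', name)
    print('Counts :', count)
```

This avoids recomputing value_counts() on every iteration of the loop.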
Example 4: To print our data in the form of Bar graph.
Syntax: Series.plot(kind='')
Parameters: kind: type of graph, i.e. line, bar.
Returns: a graph.
Python3
import pandas as pd
import matplotlib.pyplot as plt

# Make example dataframe
df = pd.DataFrame([(1, 'Germany'), (2, 'France'),
                   (3, 'Indonesia'), (4, 'France'),
                   (5, 'France'), (6, 'Germany'),
                   (7, 'UK'), (8, 'India'),
                   (9, 'India'), (10, 'Germany')],
                  columns=['groupid', 'country'],
                  index=['a', 'b', 'c', 'd', 'e',
                         'f', 'g', 'h', 'i', 'j'])

# Display data in the form of a graph
df['country'].value_counts().plot(kind='bar')
plt.show()
Output:
Python Pandas-exercise
Python-pandas
Python
|
[
{
"code": null,
"e": 24518,
"s": 24490,
"text": "\n11 Dec, 2020"
},
{
"code": null,
"e": 24534,
"s": 24518,
"text": "Prerequisites: "
},
{
"code": null,
"e": 24541,
"s": 24534,
"text": "panda "
},
{
"code": null,
"e": 24552,
"s": 24541,
"text": "matplotlib"
},
{
"code": null,
"e": 24828,
"s": 24552,
"text": "In this article, we will learn how we can extract the names and values using values_count() from panda. The panda library is equipped with a number of useful functions for ‘value_counts’ is one of them. This function returns the counts of unique items in a pandas data frame."
},
{
"code": null,
"e": 24837,
"s": 24828,
"text": "Syntax: "
},
{
"code": null,
"e": 24860,
"s": 24837,
"text": "<object>.value_count()"
},
{
"code": null,
"e": 24884,
"s": 24860,
"text": "Import Required module."
},
{
"code": null,
"e": 24903,
"s": 24884,
"text": "Make the DataFrame"
},
{
"code": null,
"e": 24931,
"s": 24903,
"text": "Process using value_count()"
},
{
"code": null,
"e": 24944,
"s": 24931,
"text": "Display data"
},
{
"code": null,
"e": 25027,
"s": 24944,
"text": "Example 1: To print all the unique country and the first country name in the list."
},
{
"code": null,
"e": 25074,
"s": 25027,
"text": "tolist() function return a list of the values."
},
{
"code": null,
"e": 25129,
"s": 25074,
"text": "Syntax: Index.tolist() Parameters : NoneReturns : list"
},
{
"code": null,
"e": 25137,
"s": 25129,
"text": "Python3"
},
{
"code": "import pandas as pdimport matplotlib.pyplot as plt # Make example dataframedf = pd.DataFrame([(1, 'Germany'), (2, 'France'), (3, 'Indonesia'), (4, 'France'), (5, 'France'), (6, 'Germany'), (7, 'UK'), (8, 'India'), (9, 'India'), (10, 'Germany') ], columns=['groupid', 'country'], index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) # print all unique country name in the listsu1 = df['country'].value_counts().index.tolist()print(su1) # print 1st unique country name present in a listsu2 = df['country'].value_counts().index.tolist()[0]print(su2)",
"e": 25908,
"s": 25137,
"text": null
},
{
"code": null,
"e": 25916,
"s": 25908,
"text": "Output:"
},
{
"code": null,
"e": 26007,
"s": 25916,
"text": "Example 2: To print all the unique values of the column and the first value of the column."
},
{
"code": null,
"e": 26069,
"s": 26007,
"text": "value_count() counts Unique Occurrences of Values in a Column"
},
{
"code": null,
"e": 26192,
"s": 26069,
"text": "Syntax: Index.value_count() Parameters: NoneReturns: the count of occurrences of each of the unique values in this column."
},
{
"code": null,
"e": 26200,
"s": 26192,
"text": "Python3"
},
{
"code": "import pandas as pdimport matplotlib.pyplot as plt # Make example dataframedf = pd.DataFrame([(1, 'Germany'), (2, 'France'), (3, 'Indonesia'), (4, 'France'), (5, 'France'), (6, 'Germany'), (7, 'UK'), (8, 'India'), (9, 'India'), (10, 'Germany') ], columns=['groupid', 'country'], index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) # print country name and countssu3 = df['country'].value_counts()print(su3) # print 1st country count in a listsu4 = df['country'].value_counts()[0]print(su4)",
"e": 26915,
"s": 26200,
"text": null
},
{
"code": null,
"e": 26923,
"s": 26915,
"text": "Output:"
},
{
"code": null,
"e": 26978,
"s": 26923,
"text": "Example 3: To print our data using a loop from a list."
},
{
"code": null,
"e": 26986,
"s": 26978,
"text": "Python3"
},
{
"code": "import pandas as pdimport matplotlib.pyplot as plt # Make example dataframedf = pd.DataFrame([(1, 'Germany'), (2, 'France'), (3, 'Indonesia'), (4, 'France'), (5, 'France'), (6, 'Germany'), (7, 'UK'), (8, 'India'), (9, 'India'), (10, 'Germany') ], columns=['groupid', 'country'], index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) # printing names and count using loop.for idx, name in enumerate(df['country'].value_counts().index.tolist()): print('Name :', name) print('Counts :', df['country'].value_counts()[idx])",
"e": 27733,
"s": 26986,
"text": null
},
{
"code": null,
"e": 27741,
"s": 27733,
"text": "Output:"
},
{
"code": null,
"e": 27796,
"s": 27741,
"text": "Example 4: To print our data in the form of Bar graph."
},
{
"code": null,
"e": 27915,
"s": 27796,
"text": "Syntax: matplotlib.pyplot.plot(kind=’ ‘)Parameters: kind: type of graph, i.e. line, bar.Returns: This returns a Graph."
},
{
"code": null,
"e": 27923,
"s": 27915,
"text": "Python3"
},
{
"code": "import pandas as pdimport matplotlib.pyplot as plt # Make example dataframedf = pd.DataFrame([(1, 'Germany'), (2, 'France'), (3, 'Indonesia'), (4, 'France'), (5, 'France'), (6, 'Germany'), (7, 'UK'), (8, 'India'), (9, 'India'), (10, 'Germany') ], columns=['groupid', 'country'], index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) # Display data in a form of Graphdf['country'].value_counts().plot(kind='bar')plt.show()",
"e": 28567,
"s": 27923,
"text": null
},
{
"code": null,
"e": 28575,
"s": 28567,
"text": "Output:"
},
{
"code": null,
"e": 28598,
"s": 28575,
"text": "Python Pandas-exercise"
},
{
"code": null,
"e": 28612,
"s": 28598,
"text": "Python-pandas"
},
{
"code": null,
"e": 28619,
"s": 28612,
"text": "Python"
},
{
"code": null,
"e": 28717,
"s": 28619,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28726,
"s": 28717,
"text": "Comments"
},
{
"code": null,
"e": 28739,
"s": 28726,
"text": "Old Comments"
},
{
"code": null,
"e": 28757,
"s": 28739,
"text": "Python Dictionary"
},
{
"code": null,
"e": 28789,
"s": 28757,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28811,
"s": 28789,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 28853,
"s": 28811,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 28879,
"s": 28853,
"text": "Python String | replace()"
},
{
"code": null,
"e": 28916,
"s": 28879,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 28960,
"s": 28916,
"text": "Reading and Writing to text files in Python"
},
{
"code": null,
"e": 28989,
"s": 28960,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 29045,
"s": 28989,
"text": "How to drop one or multiple columns in Pandas Dataframe"
}
] |
Balanced Ternary Number System - GeeksforGeeks
|
26 Apr, 2021
As we already know, a binary number system is a number system that has only 2 digits, i.e. 0 and 1. Similarly, a ternary number system is a number system that has only 3 digits, i.e. 0, 1 and 2. In this article, we will learn about the Balanced Ternary Number System. A balanced ternary number system is a numeral system that comprises the digits -1, 0 and 1. Since it is inconvenient to write -1 as a digit, we'll use the letter Z for this purpose from here on.

Conversion of decimal to the balanced ternary system is done in two steps:
Convert decimal to the ternary number system.
Convert ternary to the balanced ternary system, using the below steps:
traverse the ternary number from right to left, leaving 0 and 1 as they are
when you encounter a 2, change it to Z and add +1 to the next digit in the iteration.
some digits may become +3; replace +3 with 0 and add +1 to the next digit in the iteration.
continue this process until all the digits are converted.
Example: convert 238 (decimal) to balanced ternary and vice versa.
First convert 238 to the ternary number system: 238 (base 10) = 22211 (base 3). Second, convert the ternary number to the balanced ternary number system:
Starting the iteration from the right, the two 1's are skipped as they remain the same in balanced ternary.
Now convert the first encountered 2 to Z, increasing the next digit in the iteration by +1. So we get 23Z11.
Convert the 3 to 0 with an increment of +1 in the next digit in the iteration. So we get 30Z11.
Convert the 3 to 0 with an increment of +1 in the next digit in the iteration. So we get 100Z11. (Here assume a 0 is present before the most significant digit.)
The final result is 100Z11.
Note: The system also allows representation of negative numbers, eliminating the need for a negative sign before the number. All negative numbers in a balanced ternary system start with Z, i.e.: −1 = Z, −2 = Z1, −3 = Z0, −4 = ZZ, −5 = Z11.
Below is the program to convert positive decimals into the balanced ternary system:
C++
Java
Python3
C#
Javascript
// C++ program to convert positive
// decimals into balanced ternary system

#include <bits/stdc++.h>
using namespace std;

string balancedTernary(int n)
{
    string output = "";

    while (n > 0) {
        int rem = n % 3;
        n = n / 3;
        if (rem == 2) {
            rem = -1;
            n++;
        }
        output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output;
    }
    return output;
}

// Driver code
int main()
{
    int n = 238;
    cout << "Equivalent Balanced Ternary of "
         << n << " is: " << balancedTernary(n);
    return 0;
}
// Java program to convert positive
// decimals into balanced ternary system
class GFG{

static String balancedTernary(int n)
{
    String output = "";

    while (n > 0)
    {
        int rem = n % 3;
        n = n / 3;
        if (rem == 2)
        {
            rem = -1;
            n++;
        }
        output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output;
    }
    return output;
}

// Driver code
public static void main(String[] args)
{
    int n = 238;
    System.out.print("Equivalent Balanced Ternary of " +
                     n + " is: " + balancedTernary(n));
}
}

// This code is contributed by Rajput-Ji
# Python3 program to convert positive
# decimals into balanced ternary system
def balancedTernary(n):
    output = ""
    while(n > 0):
        rem = n % 3
        n = n // 3
        if(rem == 2):
            rem = -1
            n += 1
        if(rem == 0):
            output = '0' + output
        else:
            if(rem == 1):
                output = '1' + output
            else:
                output = 'Z' + output
    return output

# Driver Code
n = 238

# Function call
print("Equivalent Balanced Ternary of", n, "is:", balancedTernary(n))

# This code is contributed by Shivam Singh
// C# program to convert positive
// decimals into balanced ternary system
using System;
using System.Collections.Generic;

class GFG {

    static String balancedTernary(int n)
    {
        String output = "";
        while (n > 0) {
            int rem = n % 3;
            n = n / 3;
            if (rem == 2) {
                rem = -1;
                n++;
            }
            output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z')
                     + output;
        }
        return output;
    }

    // Driver code
    public static void Main(String[] args)
    {
        int n = 238;
        Console.Write("Equivalent Balanced Ternary of "
                      + n + " is: " + balancedTernary(n));
    }
}

// This code is contributed by Rajput-Ji
<script>

// Javascript program to convert positive
// decimals into balanced ternary system

function balancedTernary(n)
{
    var output = "";
    while (n > 0) {
        var rem = n % 3;
        n = parseInt(n / 3);
        if (rem == 2) {
            rem = -1;
            n++;
        }
        output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z')
                 + output;
    }
    return output;
}

// Driver code
var n = 238;
document.write("Equivalent Balanced Ternary of "
               + n + " is: " + balancedTernary(n));

</script>
Equivalent Balanced Ternary of 238 is: 100Z11
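The example also mentions the conversion “vice-versa”; a hedged Python sketch of the inverse mapping (balanced ternary string back to decimal), plus the standard trick for negatives, since negating a balanced ternary number simply swaps the digits 1 and Z (function names are my own):

```python
# Sketch: inverse conversion and negation for balanced ternary.
# Digit values: 'Z' = -1, '0' = 0, '1' = 1.
DIGIT = {"Z": -1, "0": 0, "1": 1}

def from_balanced_ternary(s):
    value = 0
    for ch in s:
        value = value * 3 + DIGIT[ch]
    return value

def negate(s):
    # -x in balanced ternary: swap every 1 with Z and vice versa
    return s.translate(str.maketrans("1Z", "Z1"))

print(from_balanced_ternary("100Z11"))          # → 238
print(from_balanced_ternary(negate("100Z11")))  # → -238
```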
|
[
{
"code": null,
"e": 24692,
"s": 24664,
"text": "\n26 Apr, 2021"
},
{
"code": null,
"e": 25290,
"s": 24692,
"text": "As we already know, a Binary number system is a number system that has only 2 digits in it, i.e. 0 and 1. Similarly, we also know that a Ternary number system is a number system that has only 3 digits in it, i.e. 0, 1 and 2. In this article, we will learn about Balanced Ternary Number System.A balanced ternary number system is a numeral system that comprises of digits -1, 0 and 1. Since it is inconvenient to write -1 as a digit, we’ll use letter Z further for this purpose.Conversion of Decimal to Balanced Ternary system The conversion from Decimal to balanced ternary is done in two steps: "
},
{
"code": null,
"e": 25695,
"s": 25290,
"text": "Convert decimal to the ternary number system.Convert ternary to the balanced ternary system, using the below steps: traverse the ternary number, right to left by leaving 0 and 1 as it iswhen encounter 2, change it to Z and add +1 to the next digit in iteration.Some digits may become +3, then replace +3 with 0 and add +1 to next digit in iteration.complete this process until you convert all the digits."
},
{
"code": null,
"e": 25741,
"s": 25695,
"text": "Convert decimal to the ternary number system."
},
{
"code": null,
"e": 26101,
"s": 25741,
"text": "Convert ternary to the balanced ternary system, using the below steps: traverse the ternary number, right to left by leaving 0 and 1 as it iswhen encounter 2, change it to Z and add +1 to the next digit in iteration.Some digits may become +3, then replace +3 with 0 and add +1 to next digit in iteration.complete this process until you convert all the digits."
},
{
"code": null,
"e": 26172,
"s": 26101,
"text": "traverse the ternary number, right to left by leaving 0 and 1 as it is"
},
{
"code": null,
"e": 26248,
"s": 26172,
"text": "when encounter 2, change it to Z and add +1 to the next digit in iteration."
},
{
"code": null,
"e": 26337,
"s": 26248,
"text": "Some digits may become +3, then replace +3 with 0 and add +1 to next digit in iteration."
},
{
"code": null,
"e": 26393,
"s": 26337,
"text": "complete this process until you convert all the digits."
},
{
"code": null,
"e": 26452,
"s": 26393,
"text": "Example: convert 23810 to balanced ternary and vice-versa "
},
{
"code": null,
"e": 26574,
"s": 26452,
"text": "First convert 23810 to ternary number system. 23810 = 222113 Second convert ternary to balanced ternary number system : "
},
{
"code": null,
"e": 26676,
"s": 26574,
"text": "By starting iteration from left to right, two 1’s are skipped as it remains same in balanced ternary."
},
{
"code": null,
"e": 26779,
"s": 26676,
"text": "Now convert first encountered 2 with z increasing it’s next digit in iteration by +1. So we get 23Z11."
},
{
"code": null,
"e": 26862,
"s": 26779,
"text": "Convert 3 to 0 with increment +1 in it’s next digit in iteration. So we get 30Z11."
},
{
"code": null,
"e": 27003,
"s": 26862,
"text": "Convert 3 to 0 with increment +1 in it’s next digit in iteration. So we get 100Z11. (Here assume 0 is present before most significant digit)"
},
{
"code": null,
"e": 27031,
"s": 27003,
"text": "The final result is 100Z11."
},
{
"code": null,
"e": 27288,
"s": 27031,
"text": "Note:- The system also allows representation of negative numbers eliminating the need for a negative sign before the number. All negative numbers in a balanced ternary system start with Z. i.e.: −110 = Z3, −210 = Z13, −310 = Z03, −410 = ZZ3, −510 = Z113 . "
},
{
"code": null,
"e": 27373,
"s": 27288,
"text": "Below is the program to convert positive decimals into the balanced ternary system: "
},
{
"code": null,
"e": 27377,
"s": 27373,
"text": "C++"
},
{
"code": null,
"e": 27382,
"s": 27377,
"text": "Java"
},
{
"code": null,
"e": 27390,
"s": 27382,
"text": "Python3"
},
{
"code": null,
"e": 27393,
"s": 27390,
"text": "C#"
},
{
"code": null,
"e": 27404,
"s": 27393,
"text": "Javascript"
},
{
"code": "// C++ program to convert positive// decimals into balanced ternary system #include <bits/stdc++.h>using namespace std; string balancedTernary(int n){ string output = \"\"; while (n > 0) { int rem = n % 3; n = n / 3; if (rem == 2) { rem = -1; n++; } output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output; } return output;} // Driver codeint main(){ int n = 238; cout << \"Equivalent Balanced Ternary of \" << n << \" is: \" << balancedTernary(n); return 0;}",
"e": 28071,
"s": 27404,
"text": null
},
{
"code": "// Java program to convert positive// decimals into balanced ternary systemclass GFG{ static String balancedTernary(int n){ String output = \"\"; while (n > 0) { int rem = n % 3; n = n / 3; if (rem == 2) { rem = -1; n++; } output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output; } return output;} // Driver codepublic static void main(String[] args){ int n = 238; System.out.print(\"Equivalent Balanced Ternary of \" + n + \" is: \" + balancedTernary(n));}} // This code is contributed by Rajput-Ji",
"e": 28692,
"s": 28071,
"text": null
},
{
"code": "# Python3 program to convert positive# decimals into balanced ternary systemdef balancedTernary(n): output = \"\" while(n > 0): rem = n % 3 n = n // 3 if(rem == 2): rem = -1 n += 1 if(rem == 0): output = '0' + output else: if(rem == 1): output = '1' + output else: output = 'Z' + output return output # Driver Coden = 238 # Function callprint(\"Equivalent Balanced Ternary of\", n , \"is:\", balancedTernary(n)) # This code is contributed by Shivam Singh",
"e": 29293,
"s": 28692,
"text": null
},
{
"code": "// C# program to convert positive// decimals into balanced ternary systemusing System;using System.Collections.Generic;class GFG{ static String balancedTernary(int n){ String output = \"\"; while (n > 0) { int rem = n % 3; n = n / 3; if (rem == 2) { rem = -1; n++; } output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output; } return output;} // Driver codepublic static void Main(String[] args){ int n = 238; Console.Write(\"Equivalent Balanced Ternary of \" + n + \" is: \" + balancedTernary(n));}} // This code is contributed by Rajput-Ji",
"e": 29952,
"s": 29293,
"text": null
},
{
"code": "<script> // Javascript program to convert positive// decimals into balanced ternary system function balancedTernary(n){ var output = \"\"; while (n > 0) { var rem = n % 3; n = parseInt(n / 3); if (rem == 2) { rem = -1; n++; } output = (rem == 0 ? '0' : (rem == 1) ? '1' : 'Z') + output; } return output;} // Driver codevar n = 238;document.write( \"Equivalent Balanced Ternary of \" + n + \" is: \" + balancedTernary(n)); </script>",
"e": 30568,
"s": 29952,
"text": null
},
{
"code": null,
"e": 30617,
"s": 30571,
"text": "Equivalent Balanced Ternary of 238 is: 100Z11"
},
{
"code": null,
"e": 30635,
"s": 30621,
"text": "SHIVAMSINGH67"
},
{
"code": null,
"e": 30645,
"s": 30635,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 30652,
"s": 30645,
"text": "rrrtnx"
},
{
"code": null,
"e": 30666,
"s": 30652,
"text": "number-theory"
},
{
"code": null,
"e": 30679,
"s": 30666,
"text": "Mathematical"
},
{
"code": null,
"e": 30693,
"s": 30679,
"text": "number-theory"
},
{
"code": null,
"e": 30706,
"s": 30693,
"text": "Mathematical"
},
{
"code": null,
"e": 30804,
"s": 30706,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30836,
"s": 30804,
"text": "Algorithm to solve Rubik's Cube"
},
{
"code": null,
"e": 30880,
"s": 30836,
"text": "Program to print prime numbers from 1 to N."
},
{
"code": null,
"e": 30913,
"s": 30880,
"text": "Program to multiply two matrices"
},
{
"code": null,
"e": 30938,
"s": 30913,
"text": "Fizz Buzz Implementation"
},
{
"code": null,
"e": 30975,
"s": 30938,
"text": "Complexity Analysis of Binary Search"
},
{
"code": null,
"e": 31007,
"s": 30975,
"text": "Check if a number is Palindrome"
},
{
"code": null,
"e": 31038,
"s": 31007,
"text": "Modular multiplicative inverse"
},
{
"code": null,
"e": 31089,
"s": 31038,
"text": "Find Union and Intersection of two unsorted arrays"
},
{
"code": null,
"e": 31124,
"s": 31089,
"text": "Count ways to reach the n'th stair"
}
] |
Find k longest words in given list in Python
|
We have a scenario where we have to pick the top n longest words from a list containing many words of varying lengths. In this article we will see various approaches to achieve that.
We first sort the elements of the list in reverse order by length, so that the longest words are available at the beginning of the list. An itertools counter serves as a secondary sort key, so that among words of equal length the later one wins. Finally we take a slice of the required number of longest words.
from itertools import count
def longwords(l, x):
c = count()
return sorted(l, key=lambda i: (len(i), next(c)),
reverse=True)[:x]
listA = ['Earth','Moonshine','Aurora','Snowflakes','Sunshine']
n = 2
print(longwords(listA, n))
Running the above code gives us the following result −
['Snowflakes', 'Moonshine']
In this approach we use enumerate to pair each element of the list with its index, and then apply the sorted and zip functions to extract the words. The negated length values produce a descending sort order, and finally we slice the required number of words.
def longwords(l, x):
idx, words = zip(*sorted(enumerate(l),
key = lambda i: (-len(i[1]), -i[0]))[:x])
return list(words)
listA = ['Earth','Moonshine','Aurora','Snowflakes','Sunshine']
n = 2
print(longwords(listA, n))
Running the above code gives us the following result −
['Snowflakes', 'Moonshine']
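A third option (my addition, not in the original article) uses heapq.nlargest, which avoids fully sorting the list; the (length, index) key reproduces the tie-breaking of the approaches above, where the later of two equally long words wins:

```python
import heapq

def longwords(l, x):
    # nlargest compares the (length, index, word) tuples, so among words
    # of equal length the one with the larger index (later in the list) wins
    return [w for _, _, w in heapq.nlargest(
        x, ((len(w), i, w) for i, w in enumerate(l)))]

listA = ['Earth', 'Moonshine', 'Aurora', 'Snowflakes', 'Sunshine']
print(longwords(listA, 2))  # → ['Snowflakes', 'Moonshine']
```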
|
[
{
"code": null,
"e": 1243,
"s": 1062,
"text": "We have a scenario where we have to pick the top n longest word from a list containing many words of varying length. In this article we will see various approaches to achieve that."
},
{
"code": null,
"e": 1524,
"s": 1243,
"text": "We first sort the elements of the list in the reverse order so that the longest words are available at the beginning of the list. Then find the length of each word and add the result of the count to a variable. Finally take a slice of the required number of longest words we need."
},
{
"code": null,
"e": 1535,
"s": 1524,
"text": " Live Demo"
},
{
"code": null,
"e": 1786,
"s": 1535,
"text": "from itertools import count\n\ndef longwords(l, x):\n c = count()\n return sorted(l, key=lambda i: (len(i), next(c)),\n reverse=True)[:x]\n\nlistA = ['Earth','Moonshine','Aurora','Snowflakes','Sunshine']\nn = 2\nprint(longwords(listA, n))"
},
{
"code": null,
"e": 1841,
"s": 1786,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 1869,
"s": 1841,
"text": "['Snowflakes', 'Moonshine']"
},
{
"code": null,
"e": 2114,
"s": 1869,
"text": "In this approach we use enumerate to list out each element of the list and then apply sorted and zip function to get the count. The negative length values indicate the reverse order of sorting and finally we slice the required number of counts."
},
{
"code": null,
"e": 2125,
"s": 2114,
"text": " Live Demo"
},
{
"code": null,
"e": 2352,
"s": 2125,
"text": "def longwords(l, x):\n idx, words = zip(*sorted(enumerate(l),\n key = lambda i: (-len(i[1]), -i[0]))[:x])\n return list(words)\n\nlistA = ['Earth','Moonshine','Aurora','Snowflakes','Sunshine']\nn = 2\nprint(longwords(listA, n))"
},
{
"code": null,
"e": 2407,
"s": 2352,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2435,
"s": 2407,
"text": "['Snowflakes', 'Moonshine']"
}
] |
Getting started in AI and computer vision with Nvidia Jetson Nano. Hands-on approach to the JetBot project | by David Retana | Towards Data Science
|
With the rise of AI and IoT, Nvidia offers us a simple, cheap and powerful solution for the development of embedded artificial intelligence applications. Jetson Nano opens up opportunities to create homemade robots, intelligent systems...
Jetson Nano is a Raspberry Pi-like board, but much more powerful, that runs a custom Ubuntu image with preloaded Nvidia software. It has a quad-core ARM® Cortex®-A57 MPCore processor, an NVIDIA Maxwell™ architecture GPU with 128 NVIDIA CUDA® cores, and 4 GB of 64-bit LPDDR4 1600 MHz memory. This hardware makes the Jetson Nano suitable for both the training and inference phases of deep learning problems. It supports the most common deep learning frameworks, like TensorFlow, Caffe or PyTorch.
With this board, you can make an educational AI robot called JetBot. Building and using JetBot gives the hands-on experience needed to create entirely new AI projects, including both hardware and software phases. Building JetBot is not the goal of this article; on the official GitHub page you can find all the information and requirements: https://github.com/NVIDIA-AI-IOT/jetbot
With the Jetson Nano board you can easily run the majority of deep learning frameworks, like TensorFlow, Caffe or PyTorch, in an efficient way.
All the necessary code to run this project is available on the JetBot GitHub page mentioned above. However, you can take advantage of my modified notebook, with improvements like taking pictures dynamically while moving your JetBot remotely with an Xbox controller (easily extensible to other controllers).
Step 1: Collect data to feed your model
https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/data_collection_teleoperation.ipynb
In order to train our neural network, we need to collect images from the camera. We need two kinds of images: blocked and free. Take care to collect images from the 2 classes in a balanced way (50–50%) and try different positions of the same blocking object or clear path. A good dataset is the key to the robot’s proper functioning.
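A small helper (hypothetical, not part of the JetBot notebooks) can sanity-check that balance before training, assuming the images land in one sub-folder per class, as the data-collection notebook does:

```python
import os

def class_balance(dataset_dir, classes=("free", "blocked")):
    # Count the images in each class folder and return each class's share.
    counts = {c: len(os.listdir(os.path.join(dataset_dir, c)))
              for c in classes}
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()} if total else {}
```

If one share drifts far from 0.5, collect more images for the under-represented class before training.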
If the path is blocked, the JetBot will turn left; on the contrary, if the path is clear, the JetBot will continue straight.
Step 2: Train the network
https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/train_model.ipynb
The network architecture chosen for this job is AlexNet, but with a slight modification to meet our requirements.
This architecture was designed for the ImageNet LSVRC-2010 contest, with 1000 different classes. However, our network only needs to distinguish whether an image is free or blocked in order to move the JetBot through, so the final layer will only have 2 outputs. Moreover, PyTorch allows us to download a pretrained AlexNet network, which is a huge advantage for our problem because in the training phase we don’t need to initialize the network weights randomly: the layers carry the knowledge learned from the 1000 different classes of the ImageNet LSVRC-2010 contest. A model developed for one task is reused as the starting point for a model that accomplishes a different task. This technique is called Transfer Learning.
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True)
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)
print(model)
You can train the network on the JetBot itself or on an external system. The notebook is prepared to export your model’s weights to a file that you can import on the JetBot later.
Step 3: Run your model
https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/live_demo.ipynb
Now, it’s time to run your model and see how it works.
|
[
{
"code": null,
"e": 411,
"s": 172,
"text": "With the rise of AI and IoT, Nvidia offers us a simple, cheap and powerful solution for the development of embedded artificial intelligence applications. Jetson Nano opens up opportunities to create homemade robots, intelligent systems..."
},
{
"code": null,
"e": 882,
"s": 411,
"text": "Jetson Nano is a raspberry like board but much more powerful which runs a custom Ubuntu image with preloaded Nvidia software. It has a Quad-core ARM® Cortex®-A57 MPCore processor, NVIDIA MaxwellTM architecture GPU with 128 NVIDIA CUDA® cores and 4 GB 64-bit LPDDR4 1600MHz memory. This hardware makes the Jetson Nano suitable for training and inference phases in deep learning problems. It supports most common deep learning frameworks like TensorFlow, Caffe or PyTorch."
},
{
"code": null,
"e": 1258,
"s": 882,
"text": "With this board, you can make an educational AI robot called JetBot. Building and using JetBot gives the hands on experience needed to create entirely new AI projects, including hardware and software phases. Building JetBot is not the goal of this article. In the official GitHub page you can find all the information and requirements. https://github.com/NVIDIA-AI-IOT/jetbot"
},
{
"code": null,
"e": 1391,
"s": 1258,
"text": "With Jetson Nano board you can easily run majority of deep learning frameworks like TensorFlow, Caffe or PyTorch in a efficient way."
},
{
"code": null,
"e": 1692,
"s": 1391,
"text": "All the necessary code to run this project is available on JetBot GitHub page mentioned above. However, you can take advantage of my modified notebook with improvements like taking pictures dynamically whereas moving your JetBot remotely with a Xbox controller (easy extensible to other controllers)."
},
{
"code": null,
"e": 1733,
"s": 1692,
"text": "Step 1.- Collect data to feed your model"
},
{
"code": null,
"e": 1854,
"s": 1733,
"text": "https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/data_collection_teleoperation.ipynb"
},
{
"code": null,
"e": 2188,
"s": 1854,
"text": "In order to train our neural network, we need to collect images from the camera. We need two kind of images: blocked and free. Take care of collect images from the 2 classes in a balanced way (50–50 %) and try different positions of the same blocking object or clear path. A good dataset is the key to the robot’s proper functioning."
},
{
"code": null,
"e": 2315,
"s": 2188,
"text": "If the path is blocked, the JetBot will turn left and on the contrary if the path is clear, the JetBot will continue straight."
},
{
"code": null,
"e": 2341,
"s": 2315,
"text": "Step 2: Train the network"
},
{
"code": null,
"e": 2444,
"s": 2341,
"text": "https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/train_model.ipynb"
},
{
"code": null,
"e": 2566,
"s": 2444,
"text": "The network architecture chosen from this job is AlexNet but with a slightly modification to accomplish our requirements."
},
{
"code": null,
"e": 3256,
"s": 2566,
"text": "This architecture was designed to ImageNet LSVRC-2010 contest with 1000 different classes. However, our network only need to distinguish if a image is free or blocked to move the JetBot through so the final layer will only have 2 outputs. Moreover, PyTorch allows us to download a pretrained AlexNet network which is a huge advantage to solve our problem because in the training phase, we don’t need to start the network weights randomly. The layers have the knowledge learned with the 1000 different classes in ImageNet LSVRC-2010 contest. A model developed for a task is reused as the starting point for a model to accomplish a different task. This technique is called Transfer Learning."
},
{
"code": null,
"e": 3428,
"s": 3256,
"text": "import torchimport torchvision.models as modelsmodel = models.alexnet(pretrained=True)model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)print(model)"
},
{
"code": null,
"e": 3607,
"s": 3428,
"text": "You can train the network in the JetBot itself or in a external system. The notebook in prepared to export your model’s weights in a file that you can import in the JetBot later."
},
{
"code": null,
"e": 3630,
"s": 3607,
"text": "Step 3: Run your model"
},
{
"code": null,
"e": 3731,
"s": 3630,
"text": "https://github.com/davidRetana/jetbot/blob/davidretana/notebooks/collision_avoidance/live_demo.ipynb"
}
] |
Docker for Python-Dash & R-Shiny. Easy start with Docker &... | by Meinhard Ploner | Towards Data Science
|
You wanna deploy your data-driven app using Docker & Docker Compose? Then read on, because this article will get you up-and-running in a few minutes. For both of the most widespread Data Science / Analytics stacks: Python & R.
In my latest Medium stories, I explained how to set up a data-driven web application for the sake of showing case numbers of the Coronavirus. I created the exact same web application with the following two stacks:
Python & Dash (Link)
R & Shiny (Link)
In this article, I will show you how to deploy these apps using Docker & Compose. I will first go through Docker, for Python as well as R, and follow with docker-compose later on.
Installing Docker on the server of your choice is easy — there are plenty of instructions out there. As an example, if you have an AWS instance (EC2) running Amazon Linux, simply type:
sudo yum install -y docker          # Install Docker.
sudo service docker start           # Start the service.
sudo usermod -a -G docker ec2-user  # Add ec2-user to Docker group.
sudo chkconfig docker on            # Let docker auto-start.
Perfect. Docker is ready!
First, we have to create the requirements.txt file, including all the necessary libraries. This can be done using the command
pip freeze > requirements.txt
or through manually creating the text file and entering the names of the libraries. For the mentioned app we need the following libraries (btw, plotly is installed with dash):
dash
pandas
Then we are ready to set up the DOCKERFILE itself. For a simple Python-Dash app the following code is sufficient:
FROM python:3.8
LABEL maintainer "Meinhard Ploner <dummy@host.com>"
WORKDIR /code
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY ./ ./
EXPOSE 8050
CMD ["python", "./app.py"]
Description of the individual commands:
FROM ... : pull the Python image with tag 3.8 (=version).
LABEL ... : optional. Name and E-mail of the maintainer.
WORKDIR ... : set the working directory.
COPY requ... : copy the requirements.txt file.
RUN pip ... : install all the libraries listed as requirements.
COPY ./ ... : copy all files to the Docker image.
EXPOSE ... : set the port to listen to.
CMD ... : set the command to be executed when running the image.
Perfect. If we are not interested in the R solution, we can jump to the definition of docker-compose. Otherwise, go ahead...
In R, the packages are usually not listed in a requirements file but are directly part of the DOCKERFILE code. Similar to Python, I propose a DOCKERFILE which works well for simple apps like the one I explained in the other post.
The file is slightly more complex than for Python, but still compact:
FROM rocker/shiny:3.6.1
LABEL maintainer "Meinhard Ploner <dummy@host.com>"
WORKDIR /srv/shiny-server
RUN apt-get update \
    && apt-get install -y libsasl2-dev libssl-dev
RUN echo \
    'options(repos=list(CRAN="https://cloud.r-project.org/"))' > \
    ".Rprofile"
RUN R -e "install.packages(c('dplyr','tidyr', 'plotly'))"
ADD https://raw.githubusercontent.com/rocker-org/shiny/master/shiny-server.sh /usr/bin/
COPY ./ ./
EXPOSE 3838
RUN chmod a+w .
RUN chmod +x /usr/bin/shiny-server.sh
CMD /usr/bin/shiny-server.sh
Description of the single commands:
FROM ... : pull the R-Shiny image with tag 3.6.1 (=version).
LABEL ... : optional. Name and E-mail of the maintainer.
WORKDIR ... : set the working directory.
RUN apt-get ... : install libssl, needed for plotly. Might differ on other Linux instances or Windows.
RUN echo ... : write the CRAN repository URL into .Rprofile, which is used in the subsequent R call.
RUN R ... : call R to install various packages.
ADD ... : download the shiny-server.sh file and add it to the image.
COPY ./ ... : copy all files to the Docker image.
EXPOSE ... : set the port to listen to.
RUN chmod a+w... : make the working directory of the image writable.
RUN chmod +x... : make shiny-server.sh executable.
CMD ... : set the command to be executed when running the image.
That’s it. Let’s move on to the Compose part.
The installation of docker-compose is also straightforward. Download the appropriate binary from GitHub and fix the permissions:
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
You might wonder what the command uname is for? uname returns basic information about the operating system. On AWS EC2 with Amazon Linux, it gives “Linux” for “uname -s” and “x86_64” for “uname -m”.
The docker-compose.yml YAML file does not differ between the two stacks. But before creating it, set up an environment file called “.env” with basic app information:
VERSION=1.0.0
TARGET=LIVE
Now create the Compose file, which in our case is minimalistic:
version: "3.7"
services:
  app-name:
    build:
      context: .
    image: app-name:$VERSION
    container_name: app-name
    ports:
      - "to:from"
    environment:
      - TARGET=$TARGET
    restart: unless-stopped
Replace “app-name” by the app name of your choice. The image name will use the version number and is in our case “app-name:1.0.0”.
Further, the Python-Dash app runs on port 8050 (Dash’s default, matching the EXPOSE line in the Dockerfile above), while R-Shiny apps per default use port 3838. Therefore replace the ports (“to:from”) by:
80:8050 for Python-Dash
80:3838 for R-Shiny
Instead of port 80 you can use any other port you want to serve. Save the file as docker-compose.yml and you are done.
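As an optional extension (my addition, not in the original article), Compose can also monitor the container with a healthcheck. A sketch for the Dash variant, assuming the app listens on port 8050 inside the container (as the Dockerfile’s EXPOSE suggests) and that curl is available in the image:

```yaml
services:
  app-name:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8050/"]
      interval: 30s
      timeout: 5s
      retries: 3
```

docker-compose ps will then report the container as healthy or unhealthy.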
To build the image use:
docker-compose build
Having the image built, build and start the container by typing:
docker-compose up -d
The option “-d” ensures the app is running in the background. Without the option, you will directly see the logs, which can be useful for the first few runs to test the setup.
If you downloaded the app code from GitHub and followed the instructions, your app will be running on the server.
Otherwise, you can also test the apps, which run on my EC2 instance:
Python-Dash app on AWS/EC2: https://go.aws/2xsdb7q
R-Shiny app on AWS/EC2: https://go.aws/3aoYsIW
As you can see, it is no big deal to deploy a simple Dash or Shiny app. And even if an app gets more complex, these examples can be used as a blueprint.
I hope you enjoyed the article and it helped in getting your own app up and running!
|
[
{
"code": null,
"e": 397,
"s": 172,
"text": "You wanna deploy your data-driven app using Docker & Docker Compose? Then read on, because this article will get you up-and-running in a few minutes. For both the most wide-spread Data Science / Analytics stacks: Python & R."
},
{
"code": null,
"e": 611,
"s": 397,
"text": "In my latest Medium stories, I explained how to set up a data-driven web application for the sake of showing case numbers of the Coronavirus. I created the exact same web application with the following two stacks:"
},
{
"code": null,
"e": 632,
"s": 611,
"text": "Python & Dash (Link)"
},
{
"code": null,
"e": 649,
"s": 632,
"text": "R & Shiny (Link)"
},
{
"code": null,
"e": 830,
"s": 649,
"text": "In this article, I will show you how to deploy these apps using Docker & Compose. I will first go through Docker, for Python as well as R, and follow with docker-compose, later-on."
},
{
"code": null,
"e": 1015,
"s": 830,
"text": "Installing Docker on the server of your choice is easy — there are plenty of instructions out there. As an example, if you have an AWS instance (EC2) running Amazon Linux, simply type:"
},
{
"code": null,
"e": 1228,
"s": 1015,
"text": "sudo yum install -y docker # Install Docker.sudo service docker start # Start the service.sudo usermod -a -G docker ec2-user # Add ec2-user to Docker group.sudo chkconfig docker on # Let docker auto-start."
},
{
"code": null,
"e": 1254,
"s": 1228,
"text": "Perfect. Docker is ready!"
},
{
"code": null,
"e": 1380,
"s": 1254,
"text": "First, we have to create the requirements.txt file, including all the necessary libraries. This can be done using the command"
},
{
"code": null,
"e": 1410,
"s": 1380,
"text": "pip freeze > requirements.txt"
},
{
"code": null,
"e": 1585,
"s": 1410,
"text": "or through manually creating the text-file and entering the name of the libraries. For the mentioned app we need the following libraries (btw, plotly is installed with dash):"
},
{
"code": null,
"e": 1596,
"s": 1585,
"text": "dashpandas"
},
{
"code": null,
"e": 1710,
"s": 1596,
"text": "Then we are ready to set up the DOCKERFILE itself. For a simply Python-Dash app the following code is sufficient:"
},
{
"code": null,
"e": 1896,
"s": 1710,
"text": "FROM python:3.8LABEL maintainer \"Meinhard Ploner <dummy@host.com>\"WORKDIR /codeCOPY requirements.txt /RUN pip install -r /requirements.txtCOPY ./ ./EXPOSE 8050CMD [\"python\", \"./app.py\"]"
},
{
"code": null,
"e": 1936,
"s": 1896,
"text": "Description of the individual commands:"
},
{
"code": null,
"e": 1994,
"s": 1936,
"text": "FROM ... : pull the Python image with tag 3.8 (=version)."
},
{
"code": null,
"e": 2051,
"s": 1994,
"text": "LABEL ... : optional. Name and E-mail of the maintainer."
},
{
"code": null,
"e": 2092,
"s": 2051,
"text": "WORKDIR ... : set the working directory."
},
{
"code": null,
"e": 2139,
"s": 2092,
"text": "COPY requ... : copy the requirements.txt file."
},
{
"code": null,
"e": 2203,
"s": 2139,
"text": "RUN pip ... : install all the libraries listed as requirements."
},
{
"code": null,
"e": 2253,
"s": 2203,
"text": "COPY ./ ... : copy all files to the Docker image."
},
{
"code": null,
"e": 2293,
"s": 2253,
"text": "EXPOSE ... : set the port to listen to."
},
{
"code": null,
"e": 2358,
"s": 2293,
"text": "CMD ... : set the command to be executed when running the image."
},
{
"code": null,
"e": 2483,
"s": 2358,
"text": "Perfect. If we are not interested in the R solution, we can jump to the definition of docker-compose. Otherwise, go ahead..."
},
{
"code": null,
"e": 2710,
"s": 2483,
"text": "On R, the packages are usually not listed in a requirements file, but directly part of the DOCKERFILE code. Similar to Python, I propose a DOCKERFILE which works well for simple apps like the one I explained in the other post."
},
{
"code": null,
"e": 2777,
"s": 2710,
"text": "The file is slightly more complex as for Python but still compact:"
},
{
"code": null,
"e": 3278,
"s": 2777,
"text": "FROM rocker/shiny:3.6.1LABEL maintainer \"Meinhard Ploner <dummy@host.com>\"WORKDIR /srv/shiny-serverRUN apt-get update \\ && apt-get install -y libsasl2-dev libssl-devRUN echo \\ 'options(repos=list(CRAN=\"https://cloud.r-project.org/\"))' > \\ \".Rprofile\"RUN R -e \"install.packages(c('dplyr','tidyr', 'plotly'))\"ADD https://raw.githubusercontent.com/rocker-org/shiny/master/shiny-server.sh /usr/bin/COPY ./ ./EXPOSE 3838RUN chmod a+w .RUN chmod +x /usr/bin/shiny-server.shCMD /usr/bin/shiny-server.sh"
},
{
"code": null,
"e": 3314,
"s": 3278,
"text": "Description of the single commands:"
},
{
"code": null,
"e": 3375,
"s": 3314,
"text": "FROM ... : pull the R-Shiny image with tag 3.6.1 (=version)."
},
{
"code": null,
"e": 3432,
"s": 3375,
"text": "LABEL ... : optional. Name and E-mail of the maintainer."
},
{
"code": null,
"e": 3473,
"s": 3432,
"text": "WORKDIR ... : set the working directory."
},
{
"code": null,
"e": 3576,
"s": 3473,
"text": "RUN apt-get ... : install libssl, needed for plotly. Might differ on other Linux instances or Windows."
},
{
"code": null,
"e": 3677,
"s": 3576,
"text": "RUN echo ... : write the CRAN repository URL into .Rprofile, which is used in the subsequent R call."
},
{
"code": null,
"e": 3725,
"s": 3677,
"text": "Run R ... : call R to install various packages."
},
{
"code": null,
"e": 3794,
"s": 3725,
"text": "ADD ... : download the shiny-server.sh file and add it to the image."
},
{
"code": null,
"e": 3844,
"s": 3794,
"text": "COPY ./ ... : copy all files to the Docker image."
},
{
"code": null,
"e": 3884,
"s": 3844,
"text": "EXPOSE ... : set the port to listen to."
},
{
"code": null,
"e": 3949,
"s": 3884,
"text": "RUN chmod +w... : make the home directory of the image writable."
},
{
"code": null,
"e": 4000,
"s": 3949,
"text": "RUN chmod +x... : make shiny-server.sh executable."
},
{
"code": null,
"e": 4065,
"s": 4000,
"text": "CMD ... : set the command to be executed when running the image."
},
{
"code": null,
"e": 4111,
"s": 4065,
"text": "That’s it. Let’s move on to the Compose part."
},
{
"code": null,
"e": 4237,
"s": 4111,
"text": "The installation of docker-compose is also straightforward. Copy the appropriate binary from GitHub, and fix the permissions:"
},
{
"code": null,
"e": 4425,
"s": 4237,
"text": "sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose sudo chmod +x /usr/local/bin/docker-compose"
},
{
"code": null,
"e": 4624,
"s": 4425,
"text": "You might wonder what the command uname is for? uname returns basic information about the operating system. On AWS EC2 with Amazon Linux, it gives “Linux” for “uname -s” and “x86_64” for “uname -m”."
},
{
"code": null,
"e": 4790,
"s": 4624,
"text": "The docker-compose.yml YAML file does not differ between the two stacks. But before creating it, set up an environment file called “.env” with basic app information:"
},
{
"code": null,
"e": 4815,
"s": 4790,
"text": "VERSION=1.0.0TARGET=LIVE"
},
{
"code": null,
"e": 4879,
"s": 4815,
"text": "Now create the Compose file, which in our case is minimalistic:"
},
{
"code": null,
"e": 5088,
"s": 4879,
"text": "version: \"3.7\"services: app-name: build: context: . image: app-name:$VERSION container_name: app-name ports: - \"to:from\" environment: - TARGET=$TARGET restart: unless-stopped"
},
{
"code": null,
"e": 5219,
"s": 5088,
"text": "Replace “app-name” by the app name of your choice. The image name will use the version number and is in our case “app-name:1.0.0”."
},
{
"code": null,
"e": 5361,
"s": 5219,
"text": "Further, Python-Dash apps usually run on port 5050, while R-Shiny apps per default use port 3838. Therefore replace the ports (“to:from”) by:"
},
{
"code": null,
"e": 5385,
"s": 5361,
"text": "80:5050 for Python-Dash"
},
{
"code": null,
"e": 5405,
"s": 5385,
"text": "80:3838 for R-Shiny"
},
{
"code": null,
"e": 5524,
"s": 5405,
"text": "Instead of port 80 you can use any other port you want to serve. Save the file as docker-compose.yml and you are done."
},
{
"code": null,
"e": 5548,
"s": 5524,
"text": "To build the image use:"
},
{
"code": null,
"e": 5569,
"s": 5548,
"text": "docker-compose build"
},
{
"code": null,
"e": 5634,
"s": 5569,
"text": "Having the image built, build and start the container by typing:"
},
{
"code": null,
"e": 5655,
"s": 5634,
"text": "docker-compose up -d"
},
{
"code": null,
"e": 5834,
"s": 5655,
"text": "The option “-d” ensures the app is running in the background. Without the option, you will directly see the logs, which can be useful for the first few runs, to test the setting."
},
{
"code": null,
"e": 5948,
"s": 5834,
"text": "If you downloaded the app code from GitHub and followed the instructions, your app will be running on the server."
},
{
"code": null,
"e": 6017,
"s": 5948,
"text": "Otherwise, you can also test the apps, which run on my EC2 instance:"
},
{
"code": null,
"e": 6068,
"s": 6017,
"text": "Python-Dash app on AWS/EC2: https://go.aws/2xsdb7q"
},
{
"code": null,
"e": 6115,
"s": 6068,
"text": "R-Shiny app on AWS/EC2: https://go.aws/3aoYsIW"
},
{
"code": null,
"e": 6268,
"s": 6115,
"text": "As you can see, it is no big deal to deploy a simple Dash or Shiny app. And even if an app gets more complex, these examples can be used as a blueprint."
}
] |
Hibernate Query Language | HQL | HQL Select Tutorials Point
|
In this tutorial, we are going to learn about Hibernate Query Language (HQL). HQL is mainly used to perform bulk operations in Hibernate.
In our Hibernate tutorials so far, we have executed CRUD operations on a single object at a time. If we want to execute a set of operations at once, we can go with bulk operations.
To perform bulk operations, we can use one of the following techniques.
Hibernate Query Language (HQL)
Hibernate Criteria API
Native SQL
In this tutorial, we mainly concentrate on HQL operations.
HQL is Hibernate's own query language.
HQL is a database-independent query language; using it, we can write queries that work on any supported database.
HQL queries are easy to learn because HQL looks much like SQL.
We can easily translate SQL commands into HQL commands by replacing table columns with POJO class variable names, and the table name with the POJO class name plus a reference variable.
HQL queries are also called the object-oriented form of SQL queries.
At runtime, Hibernate converts the HQL query into an SQL query according to the database, so we do not need to write database-specific queries.
In Hibernate, we can divide the select operations into two types:
Reading Complete Entity
Reading Partial Entity
In Hibernate, selecting all columns is called reading a complete entity.
Example :
SQL : select * from emp;
HQL : from Employee e;
HQL commands that read a complete entity (or entities) begin directly with the “from” keyword.
In Hibernate, reading specific columns is called reading a partial entity (or entities).
Example :
SQL : select empno,sal from emp;
HQL : select e.employeeId, e.employeeSalary from Employee e;
In HQL, reading a partial entity begins with the “select” keyword.
In HQL, we can pass query parameters either as index (positional) parameters or as named parameters.
Example :
SQL : select * from emp where deptno = ?;
HQL : from Employee e where e.deptNumber = ?
OR
HQL : from Employee e where e.deptNumber =:deptNo;
Note: named parameters are indicated with “:”.
To perform the select operation, Hibernate gives us the Query object.
Query: a Query is an object-oriented representation of a Hibernate query. We can get a Query instance by calling session.createQuery().
We can also pass the parameters to the Hibernate Query. To do that, we can use query.setParameter(int pos, Object o)
And then call list() on the query object: query.list();
query.list() returns the query results in the form of java.util.List.
Example :
A simple HQL Query :
Query query = session.createQuery("from Employee e");
List qryResults = query.list();
HQL Query with parameters :
Query query = session.createQuery("from Employee e where e.deptNumber=?");
query.setParameter(0,300);
List qryResults = query.list();
If the HQL command reads the complete entity (or entities), the list contains complete objects. Internally, Hibernate works like this:
Hibernate gets the data from the database and stores the table records in a ResultSet object.
Each record of the ResultSet is set to a POJO class object.
It adds each POJO object to a java.util.List and finally returns the List.
Iterator it = list.iterator();
while (it.hasNext()) {
    Employee emp = (Employee) it.next();
    System.out.println("Employee Name : " + emp.getEmployeeName()
        + " , Salary : " + emp.getSalary());
}
If the HQL command reads a partial entity (or entities), the list contains object arrays (Object[]). Internally, Hibernate works like this:
Hibernate reads the partial entities from the table and stores the result in a ResultSet.
Hibernate sets the data of each record in the ResultSet into an Object array.
Each Object[] is added to the list.
Finally, it returns the list.
Iterator it = list.iterator();
while (it.hasNext()) {
    // Fixed: the original called it2.next() on an undeclared iterator.
    Object[] object = (Object[]) it.next();
    System.out.println("Department Number : " + object[0]
        + " Salary : " + object[1]);
}
If the HQL command reads a partial entity with a single column, the list contains plain objects (Integer, String, etc.). Internally, Hibernate works like this:
Hibernate reads a single column of each record and stores it in the ResultSet.
Hibernate sets each record of the ResultSet into an object of the corresponding property type.
It adds the objects to the list and finally returns the list.
Iterator it = list.iterator();
while (it.hasNext()) {
    // Fixed: the original called it3.next() on an undeclared iterator.
    String salary = (String) it.next();
    System.out.println("Salary : " + salary);
}
This is how the Hibernate select query works. Download the complete example below.
Happy learning 🙂
Hibernate Query Language Select Example
HQL update, delete Query Example
Hibernate Native SQL Query Example
Hibernate groupby criteria HQL query Example
Hibernate Named Query with Example
Hibernate Criteria API with Example
Hibernate Projection with Example
Hibernate Restrictions with Example
JDBC Select Program Example
Calling Stored Procedures in Hibernate
Hibernate Left Join Example
Hibernate Filters Example Annotation
What is Hibernate
Top 10 Advantages of Hibernate
hibernate update query example
Hibernate Right Join Example
Milana Travis
October 25, 2017 at 10:31 am - Reply
Thank you very much for your blog.
I enjoyed reading this article.
|
[
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 540,
"s": 398,
"text": "In this tutorial, we are going to learn about Hibernate Query Language (HQL). HQL is mainly used to perform the bulk operations in hibernate."
},
{
"code": null,
"e": 724,
"s": 540,
"text": "In our Hibernate Tutorials, so far, we have executed CURD operations on a single object at a time. If we want to execute a set of operations at a time, we can go with bulk operations."
},
{
"code": null,
"e": 796,
"s": 724,
"text": "To perform bulk operations, we can use one of the following techniques."
},
{
"code": null,
"e": 827,
"s": 796,
"text": "Hibernate Query Language (HQL)"
},
{
"code": null,
"e": 850,
"s": 827,
"text": "Hibernate Criteria API"
},
{
"code": null,
"e": 861,
"s": 850,
"text": "Native SQL"
},
{
"code": null,
"e": 928,
"s": 861,
"text": "In this tutorials, we are mainly concentrate about HQL operations."
},
{
"code": null,
"e": 969,
"s": 928,
"text": "HQL is a Hibernate’s own query language."
},
{
"code": null,
"e": 1098,
"s": 969,
"text": "HQL is a Database independent query language, that is., using this query language, we can develop data base independent queries."
},
{
"code": null,
"e": 1162,
"s": 1098,
"text": "HQL queries are easy to learn, because HQL looks like SQL only."
},
{
"code": null,
"e": 1346,
"s": 1162,
"text": "We can easily translate the SQL commands into HQL commands, by replacing the table columns with pojo class variable names, and table name with pojo class name with reference variable."
},
{
"code": null,
"e": 1414,
"s": 1346,
"text": "HQL queries are also called as Object Oriented form of SQL queries."
},
{
"code": null,
"e": 1562,
"s": 1414,
"text": "At runtime Hibernate will convert the HQL query into SQL query according to the database. So we no need to write a query according to the database."
},
{
"code": null,
"e": 1627,
"s": 1562,
"text": "In hibernate, we can divide the select operations into 2 types :"
},
{
"code": null,
"e": 1651,
"s": 1627,
"text": "Reading Complete Entity"
},
{
"code": null,
"e": 1674,
"s": 1651,
"text": "Reading Partial Entity"
},
{
"code": null,
"e": 1751,
"s": 1674,
"text": "In Hibernate, selecting the all columns is called reading a complete entity."
},
{
"code": null,
"e": 1761,
"s": 1751,
"text": "Example :"
},
{
"code": null,
"e": 1786,
"s": 1761,
"text": "SQL : select * from emp;"
},
{
"code": null,
"e": 1810,
"s": 1786,
"text": "HQL : from Employee e;\n"
},
{
"code": null,
"e": 1902,
"s": 1810,
"text": "HQL commands to read a complete entity or entities are directly begins with “from” keyword."
},
{
"code": null,
"e": 1989,
"s": 1902,
"text": "In Hibernate, reading a specific columns is called reading partial entity or entities."
},
{
"code": null,
"e": 1999,
"s": 1989,
"text": "Example :"
},
{
"code": null,
"e": 2032,
"s": 1999,
"text": "SQL : select empno,sal from emp;"
},
{
"code": null,
"e": 2093,
"s": 2032,
"text": "HQL : select e.employeeId, e.employeeSalary from Employee e;"
},
{
"code": null,
"e": 2159,
"s": 2093,
"text": "In HQL, reading a partial entity is begins with “select” keyword."
},
{
"code": null,
"e": 2247,
"s": 2159,
"text": "In HQL we can pass the query parameters either in index parameters or named parameters."
},
{
"code": null,
"e": 2257,
"s": 2247,
"text": "Example :"
},
{
"code": null,
"e": 2299,
"s": 2257,
"text": "SQL : select * from emp where deptno = ?;"
},
{
"code": null,
"e": 2344,
"s": 2299,
"text": "HQL : from Employee e where e.deptNumber = ?"
},
{
"code": null,
"e": 2347,
"s": 2344,
"text": "OR"
},
{
"code": null,
"e": 2398,
"s": 2347,
"text": "HQL : from Employee e where e.deptNumber =:deptNo;"
},
{
"code": null,
"e": 2501,
"s": 2398,
"text": "[box type=”info” align=”alignleft” class=”” width=”100%”]Named parameters are indicated with “:”[/box]"
},
{
"code": null,
"e": 2567,
"s": 2501,
"text": "To perform the select operation, Hibernate given us Query object."
},
{
"code": null,
"e": 2711,
"s": 2567,
"text": "Query : A Query is a Object Orientation representation of Hibernate query. We can get the Query instance, by calling the session.createQuery();"
},
{
"code": null,
"e": 2828,
"s": 2711,
"text": "We can also pass the parameters to the Hibernate Query. To do that, we can use query.setParameter(int pos, Object o)"
},
{
"code": null,
"e": 2885,
"s": 2828,
"text": "And then call the list() on query object : query.list();"
},
{
"code": null,
"e": 2955,
"s": 2885,
"text": "query.list() returns the query results in the form of java.util.List."
},
{
"code": null,
"e": 2965,
"s": 2955,
"text": "Example :"
},
{
"code": null,
"e": 2986,
"s": 2965,
"text": "A simple HQL Query :"
},
{
"code": null,
"e": 3073,
"s": 2986,
"text": "Query query = session.createQuery(\"from Employee e\");\n\nList qryResults = query.list();"
},
{
"code": null,
"e": 3101,
"s": 3073,
"text": "HQL Query with parameters :"
},
{
"code": null,
"e": 3237,
"s": 3101,
"text": "Query query = session.createQuery(\"from Employee e where e.deptNumber=?\");\n\nquery.setParameter(0,300);\n\nList qryResults = query.list();"
},
{
"code": null,
"e": 3394,
"s": 3237,
"text": "If the HQL command is to read the complete entity or entities then the list contains the complete objects. That means, Hibernate internally does like below:"
},
{
"code": null,
"e": 3492,
"s": 3394,
"text": "Hibernate gets the data from the database and it stores the records of table in ResultSet object."
},
{
"code": null,
"e": 3557,
"s": 3492,
"text": "Each record of the ResultSet will be set to a pojo class object."
},
{
"code": null,
"e": 3644,
"s": 3557,
"text": "Adds each object of the pojo class to the java.util.List and finally returns the List."
},
{
"code": null,
"e": 3835,
"s": 3644,
"text": "Iterator it =list.iterator();\nwhile (it.hasNext()) {\nEmployee emp = (Employee) it.next();\nSystem.out.println(\"Employee Name : \" + emp.getEmployeeName()\n+ \" , Salary : \" + emp.getSalary());\n}"
},
{
"code": null,
"e": 3992,
"s": 3835,
"text": "If HQL command is to read partial entity or entities, then the list contains the Objects array (Object[]). That is the hibernate does internally like below:"
},
{
"code": null,
"e": 4072,
"s": 3992,
"text": "Hibernate reads partial entities from table and stores the result in ResultSet."
},
{
"code": null,
"e": 4148,
"s": 4072,
"text": "Hibernate will set the data of each record in ResultSet to an Object array."
},
{
"code": null,
"e": 4189,
"s": 4148,
"text": "Each object[] will be added to the list."
},
{
"code": null,
"e": 4218,
"s": 4189,
"text": "Finally it returns the list."
},
{
"code": null,
"e": 4399,
"s": 4218,
"text": "Iterator it = list.iterator();\n\nwhile (it.hasNext()) {\nObject[] object = (Object[]) it2.next();\nSystem.out.println(\"Department Number : \" + object[0]\n+ \" Salary : \" + object[1]);\n}"
},
{
"code": null,
"e": 4572,
"s": 4399,
"text": "If the HQL command is to read partial entity with single column then the list contains Objects(Integer,String and etc.). That is., the Hibernate does internally like below:"
},
{
"code": null,
"e": 4644,
"s": 4572,
"text": "Hibernate reads a single column of each record and stores in ResultSet."
},
{
"code": null,
"e": 4757,
"s": 4644,
"text": "Hibernate will set each record of ResultSet into an Object of that property type (depends on the property type)"
},
{
"code": null,
"e": 4812,
"s": 4757,
"text": "Adds objects to the list and finally returns the list."
},
{
"code": null,
"e": 4947,
"s": 4812,
"text": "Iterator it = list.iterator();\nwhile (it.hasNext()) {\nString salary = (String) it3.next();\nSystem.out.println(\"Salary : \" + salary);\n}"
},
{
"code": null,
"e": 5024,
"s": 4947,
"text": "This is how the Hibernate Select Query works. Download for complete example."
},
{
"code": null,
"e": 5041,
"s": 5024,
"text": "Happy learning 🙂"
},
{
"code": null,
"e": 5117,
"s": 5041,
"text": "\n\nHibernate Query Language Select Example\n\nFile size: 13 KB\nDownloads: 840\n"
},
{
"code": null,
"e": 5614,
"s": 5117,
"text": "\nHQL update, delete Query Example\nHibernate Native SQL Query Example\nHibernate groupby criteria HQL query Example\nHibernate Named Query with Example\nHibernate Criteria API with Example\nHibernate Projection with Example\nHibernate Restrictions with Example\nJDBC Select Program Example\nCalling Stored Procedures in Hibernate\nHibernate Left Join Example\nHibernate Filters Example Annotation\nWhat is Hibernate\nTop 10 Advantages of Hibernate\nhibernate update query example\nHibernate Right Join Example\n"
},
{
"code": null,
"e": 5647,
"s": 5614,
"text": "HQL update, delete Query Example"
},
{
"code": null,
"e": 5682,
"s": 5647,
"text": "Hibernate Native SQL Query Example"
},
{
"code": null,
"e": 5727,
"s": 5682,
"text": "Hibernate groupby criteria HQL query Example"
},
{
"code": null,
"e": 5762,
"s": 5727,
"text": "Hibernate Named Query with Example"
},
{
"code": null,
"e": 5798,
"s": 5762,
"text": "Hibernate Criteria API with Example"
},
{
"code": null,
"e": 5832,
"s": 5798,
"text": "Hibernate Projection with Example"
},
{
"code": null,
"e": 5868,
"s": 5832,
"text": "Hibernate Restrictions with Example"
},
{
"code": null,
"e": 5896,
"s": 5868,
"text": "JDBC Select Program Example"
},
{
"code": null,
"e": 5935,
"s": 5896,
"text": "Calling Stored Procedures in Hibernate"
},
{
"code": null,
"e": 5963,
"s": 5935,
"text": "Hibernate Left Join Example"
},
{
"code": null,
"e": 6000,
"s": 5963,
"text": "Hibernate Filters Example Annotation"
},
{
"code": null,
"e": 6018,
"s": 6000,
"text": "What is Hibernate"
},
{
"code": null,
"e": 6049,
"s": 6018,
"text": "Top 10 Advantages of Hibernate"
},
{
"code": null,
"e": 6080,
"s": 6049,
"text": "hibernate update query example"
},
{
"code": null,
"e": 6109,
"s": 6080,
"text": "Hibernate Right Join Example"
},
{
"code": null,
"e": 6240,
"s": 6109,
"text": "\n\n\n\n\n\nMilana Travis\nOctober 25, 2017 at 10:31 am - Reply \n\nThank you very much for your blog.\nI enjoyed reading this article.\n\n\n\n\n"
},
{
"code": null,
"e": 6369,
"s": 6240,
"text": "\n\n\n\n\nMilana Travis\nOctober 25, 2017 at 10:31 am - Reply \n\nThank you very much for your blog.\nI enjoyed reading this article.\n\n\n\n"
},
{
"code": null,
"e": 6404,
"s": 6369,
"text": "Thank you very much for your blog."
},
{
"code": null,
"e": 6436,
"s": 6404,
"text": "I enjoyed reading this article."
},
{
"code": null,
"e": 6442,
"s": 6440,
"text": "Δ"
},
{
"code": null,
"e": 6468,
"s": 6442,
"text": " Hibernate – Introduction"
},
{
"code": null,
"e": 6492,
"s": 6468,
"text": " Hibernate – Advantages"
},
{
"code": null,
"e": 6524,
"s": 6492,
"text": " Hibernate – Download and Setup"
},
{
"code": null,
"e": 6554,
"s": 6524,
"text": " Hibernate – Sql Dialect list"
},
{
"code": null,
"e": 6584,
"s": 6554,
"text": " Hibernate – Helloworld – XML"
},
{
"code": null,
"e": 6622,
"s": 6584,
"text": " Hibernate – Install Tools in Eclipse"
},
{
"code": null,
"e": 6649,
"s": 6622,
"text": " Hibernate – Object States"
},
{
"code": null,
"e": 6687,
"s": 6649,
"text": " Hibernate – Helloworld – Annotations"
},
{
"code": null,
"e": 6725,
"s": 6687,
"text": " Hibernate – One to One Mapping – XML"
},
{
"code": null,
"e": 6775,
"s": 6725,
"text": " Hibernate – One to One Mapping foreign key – XML"
},
{
"code": null,
"e": 6805,
"s": 6775,
"text": " Hibernate – One To Many -XML"
},
{
"code": null,
"e": 6844,
"s": 6805,
"text": " Hibernate – One To Many – Annotations"
},
{
"code": null,
"e": 6884,
"s": 6844,
"text": " Hibernate – Many to Many Mapping – XML"
},
{
"code": null,
"e": 6915,
"s": 6884,
"text": " Hibernate – Many to One – XML"
},
{
"code": null,
"e": 6950,
"s": 6915,
"text": " Hibernate – Composite Key Mapping"
},
{
"code": null,
"e": 6975,
"s": 6950,
"text": " Hibernate – Named Query"
},
{
"code": null,
"e": 7005,
"s": 6975,
"text": " Hibernate – Native SQL Query"
},
{
"code": null,
"e": 7034,
"s": 7005,
"text": " Hibernate – load() vs get()"
},
{
"code": null,
"e": 7071,
"s": 7034,
"text": " Hibernate Criteria API with Example"
},
{
"code": null,
"e": 7097,
"s": 7071,
"text": " Hibernate – Restrictions"
},
{
"code": null,
"e": 7121,
"s": 7097,
"text": " Hibernate – Projection"
},
{
"code": null,
"e": 7155,
"s": 7121,
"text": " Hibernate – Query Language (HQL)"
},
{
"code": null,
"e": 7189,
"s": 7155,
"text": " Hibernate – Groupby Criteria HQL"
},
{
"code": null,
"e": 7219,
"s": 7189,
"text": " Hibernate – Orderby Criteria"
},
{
"code": null,
"e": 7252,
"s": 7219,
"text": " Hibernate – HQLSelect Operation"
},
{
"code": null,
"e": 7284,
"s": 7252,
"text": " Hibernate – HQL Update, Delete"
},
{
"code": null,
"e": 7310,
"s": 7284,
"text": " Hibernate – Update Query"
},
{
"code": null,
"e": 7339,
"s": 7310,
"text": " Hibernate – Update vs Merge"
},
{
"code": null,
"e": 7363,
"s": 7339,
"text": " Hibernate – Right Join"
},
{
"code": null,
"e": 7386,
"s": 7363,
"text": " Hibernate – Left Join"
},
{
"code": null,
"e": 7410,
"s": 7386,
"text": " Hibernate – Pagination"
},
{
"code": null,
"e": 7441,
"s": 7410,
"text": " Hibernate – Generator Classes"
},
{
"code": null,
"e": 7471,
"s": 7441,
"text": " Hibernate – Custom Generator"
},
{
"code": null,
"e": 7505,
"s": 7471,
"text": " Hibernate – Inheritance Mappings"
},
{
"code": null,
"e": 7534,
"s": 7505,
"text": " Hibernate – Table per Class"
},
{
"code": null,
"e": 7567,
"s": 7534,
"text": " Hibernate – Table per Sub Class"
},
{
"code": null,
"e": 7605,
"s": 7567,
"text": " Hibernate – Table per Concrete Class"
},
{
"code": null,
"e": 7647,
"s": 7605,
"text": " Hibernate – Table per Class Annotations"
},
{
"code": null,
"e": 7678,
"s": 7647,
"text": " Hibernate – Stored Procedures"
},
{
"code": null,
"e": 7711,
"s": 7678,
"text": " Hibernate – @Formula Annotation"
},
{
"code": null,
"e": 7749,
"s": 7711,
"text": " Hibernate – Singleton SessionFactory"
},
{
"code": null,
"e": 7774,
"s": 7749,
"text": " Hibernate – Interceptor"
},
{
"code": null,
"e": 7820,
"s": 7774,
"text": " hbm2ddl.auto Example in Hibernate XML Config"
}
] |
Sum of XOR of all possible subsets in C++
|
In this problem, we are given an array arr[] of n numbers. Our task is to create a program to find the sum of XOR of all possible subsets.
Here, we will find all subsets of the array. Then for each subset, we will find the XOR of elements of the subset and add them to the sum variable.
Input: arr[] = {5, 1, 4}
Output: 20
Explanation: XOR of all subsets:
{5} = 5
{1} = 1
{4} = 4
{5, 1} = 4
{5, 4} = 1
{1, 4} = 5
{5, 1, 4} = 0
Sum of XOR = 5 + 1 + 4 + 4 + 1 + 5 = 20
A simple solution to the problem is to use loops to generate all possible subsets of the array and then, for each subset, find the XOR of all its elements and add it to the sum. Return the sum at the end.
This is not an efficient approach; since there are 2^n subsets, the time complexity grows exponentially with the array size.
An efficient approach uses the properties of XOR. Here, we find the OR of all elements of the array and check its bits. If the ith bit is set, we add 2^(n-1+i) to the sum; equivalently, the answer is (OR of all elements) × 2^(n-1).
Program to illustrate the working of our solution,
#include <iostream>
#include <math.h>
using namespace std;

// Sum of XOR over all subsets = (OR of all elements) * 2^(n-1)
int subSetXORSum(int arr[], int n) {
    int bitOR = 0;
    for (int i = 0; i < n; ++i)
        bitOR |= arr[i];
    return (bitOR * pow(2, n - 1));
}

int main() {
    int arr[] = {1, 5, 4};
    int size = sizeof(arr) / sizeof(arr[0]);
    cout << "Sum of XOR of all possible subsets is " << subSetXORSum(arr, size);
}
Sum of XOR of all possible subsets is 20
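As a sanity check (not part of the original article), the closed form can be compared against a brute-force enumeration over all non-empty subsets. A small Python sketch with made-up helper names:

```python
# Brute force: sum the XOR of every non-empty subset directly.
from functools import reduce
from itertools import combinations

def subset_xor_sum_bruteforce(arr):
    total = 0
    for r in range(1, len(arr) + 1):
        for subset in combinations(arr, r):
            total += reduce(lambda a, b: a ^ b, subset)
    return total

# Closed form used by the C++ program: (OR of all elements) * 2^(n-1).
def subset_xor_sum_formula(arr):
    bit_or = 0
    for x in arr:
        bit_or |= x
    return bit_or * 2 ** (len(arr) - 1)

print(subset_xor_sum_bruteforce([5, 1, 4]))  # 20
print(subset_xor_sum_formula([5, 1, 4]))     # 20
```

Both agree because each bit that is set in the overall OR ends up set in the XOR of exactly 2^(n-1) of the subsets.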
|
[
{
"code": null,
"e": 1201,
"s": 1062,
"text": "In this problem, we are given an array aar[] of n numbers. Our task is to create a program to find the Sum of XOR of all possible subsets."
},
{
"code": null,
"e": 1349,
"s": 1201,
"text": "Here, we will find all subsets of the array. Then for each subset, we will find the XOR of elements of the subset and add them to the sum variable."
},
{
"code": null,
"e": 1529,
"s": 1349,
"text": "Input: arr[] = {5, 1, 4}\nOutput: 20\nExplanation: XOR of all subsets:\n{5} = 5\n{1} = 1\n{4} = 4\n{5, 1} = 4\n{5, 4} = 1\n{1, 4} = 5\n{5, 1, 4} = 0\nSum of XOR = 5 + 1 + 4 + 4 + 1 + 5 = 20"
},
{
"code": null,
"e": 1717,
"s": 1529,
"text": "A simple solution to the problem, is using loop and find all possible subsets of the array and then for each subset find XOR of all the elements and update the sum. Return sum at the end."
},
{
"code": null,
"e": 1818,
"s": 1717,
"text": "This is not an effective approach, for the large value, the time complexity will grow exponentially."
},
{
"code": null,
"e": 2000,
"s": 1818,
"text": "An efficient approach is using the properties of XOR. Here, we will find the OR of all elements of the array and check the bits. If the ith is set, then update sum with (2^(n-1+i))."
},
{
"code": null,
"e": 2051,
"s": 2000,
"text": "Program to illustrate the working of our solution,"
},
{
"code": null,
"e": 2062,
"s": 2051,
"text": " Live Demo"
},
{
"code": null,
"e": 2421,
"s": 2062,
"text": "#include <iostream>\n#include <math.h>\nusing namespace std;\nint subSetXORSum(int arr[], int n) {\n int bitOR = 0;\n for (int i=0; i < n; ++i)\n bitOR |= arr[i];\n return (bitOR * pow(2, n-1));\n}\nint main() {\n int arr[] = {1, 5, 4};\n int size = sizeof(arr) / sizeof(arr[0]);\n cout<<\"Sum of XOR of all possible subsets is \"<<subSetXORSum(arr, size);\n}"
},
{
"code": null,
"e": 2462,
"s": 2421,
"text": "Sum of XOR of all possible subsets is 20"
}
] |
Introduction to Graph Algorithm: Breadth-First Search Algorithm in Python | by Rashida Nasrin Sucky | Towards Data Science
|
Graph-structured data is present in many popular and widely used applications. Web crawlers, computer networks, relational databases, and social networks are some good examples. Graph search algorithms are important in many areas of computer science, and they are also useful for many coding interviews.
There are a couple of different graph search algorithms available. Breadth-first search is one of the simplest, and it serves as a prototype for many other graph algorithms. Today I will explain the breadth-first search algorithm in detail and also show a use case for it. Here are the elements of this article:
How the Breadth_first_search algorithm works with visuals
Developing the algorithm in Python
How to use this algorithm to find the shortest path of any node from the source node.
Time complexity
Let’s start!
A graph has two elements. Vertices and edges.
Given,
A graph G = (V, E),
where V is the vertices and E is the edges.
The breadth-first search algorithm systematically explores the edges level by level to discover each vertex that is reachable from the given source vertex s.
Here are the steps to a Breadth-first search process:
There is a start vertex S.
Initialize a set for level with start vertex S as level 1.
Explore which other vertex is reachable from the start. Those vertices will be considered as level 2.
In this way, vertices will be opened level by level.
Here is a visual demonstration of the steps:
Here, we have six vertices, u, v, w, x, y, z, and seven edges ux, uv, vx, vy, xy, wy, wz.
Consider the vertex u as the source or start vertex. Now see how they open level by level in the pictures below.
The source vertex u is level 1. We check where we can go from level 1. From the picture, you can see that ‘u’ has a direct path to v and x. So, they are level 2.
Now, we are in nodes x and v. Both x and v have direct access only to y. So, y is level 3. From both x and v, we can go to u also. But we ignore the already visited nodes.
y has direct access to w only. So, w is level 4. We can go to v and x as well from y. But they are already visited. So, we do not need to worry about them anymore.
At last, w can go to z, and z is level 5.
Before we can dive into the algorithm let’s make an adjacency list. That is to make a dictionary where each node will be a key and the nodes that are linked to it will be the values stored in a list.
For example, node u is linked to nodes v and x. So, it will be expressed as:
'u': ['v', 'x']
Here ‘u’ is the parent of ‘v’ and ‘x’.
We need to do the same with all the other nodes as well. The adjacency list will look like:
adj = {
    'u': ['v', 'x'],
    'x': ['u', 'v', 'y'],
    'v': ['u', 'x', 'y'],
    'y': ['w'],
    'w': ['y', 'z'],
    'z': ['w']
}
Next, We need to initialize a few variables:
‘visited’ variable to keep track of the node that we already visited,
‘level’ variable to keep track of which level we are currently in,
‘parent’ variable to store the parents of the nodes.
‘traversal_output’ to list the nodes traveled.
Finally, we will use a queue to develop this algorithm. Python has a built-in queue that we can import and use.
from queue import Queue

visited = {}
level = {}
parent = {}
traversal_output = []
queue = Queue()
In the beginning, set ‘False’ for all the nodes in the ‘visited’ dictionary, ‘None’ for all the nodes in the ‘parent’ dictionary, and -1 for all the nodes in the ‘level’ dictionary.
for node in adj_list.keys():
    visited[node] = False
    parent[node] = None
    level[node] = -1
As in the picture, assume that the source is ‘u’. To start with, set visited[s] = True, set level[s] = 0, and add ‘u’ to the Queue.
s = "u"
visited[s] = True
level[s] = 0
queue.put(s)
Here comes the loop!
At this stage, we need to visit the nodes that are linked to the source node ‘u’. We have them listed in the adjacency list above. For each of them, mark them as visited, set their level to one more than the source node’s level, set their parent as ‘u’, and finally add them to the Queue.
Then repeat the same with their child nodes. Here is the complete loop:
while not queue.empty():
    u = queue.get()
    traversal_output.append(u)
    for v in adj_list[u]:
        if not visited[v]:
            visited[v] = True
            parent[v] = u
            level[v] = level[u] + 1
            queue.put(v)

print(traversal_output)
print(visited)
print(level)
print(parent)
Output:
['u', 'v', 'x', 'y', 'w', 'z']
{'u': True, 'x': True, 'v': True, 'y': True, 'w': True, 'z': True}
{'u': 0, 'x': 1, 'v': 1, 'y': 2, 'w': 3, 'z': 4}
{'u': None, 'x': 'u', 'v': 'u', 'y': 'v', 'w': 'y', 'z': 'w'}
Traversal_output shows that we traversed through all the nodes.
For each node, visited is true in the second row.
In the third row, we have the level for all the nodes. Please check with the pictures above.
In the fourth row, we have the parents of all the nodes. ‘u’ is the source node. So, ‘u’ does not have a parent.
Combining all the code and putting them in a function:
def Breadth_first_search(adj_list):
    visited = {}
    level = {}
    parent = {}
    traversal_output = []
    queue = Queue()
    for node in adj_list.keys():
        visited[node] = False
        parent[node] = None
        level[node] = -1
    s = "u"
    visited[s] = True
    level[s] = 0
    queue.put(s)
    while not queue.empty():
        u = queue.get()
        traversal_output.append(u)
        for v in adj_list[u]:
            if not visited[v]:
                visited[v] = True
                parent[v] = u
                level[v] = level[u] + 1
                queue.put(v)
    return traversal_output, visited, level, parent
Calling the function and passing the adjacency list ‘adj’ will give you the same output.
This algorithm can be used to find the shortest path from the source to any other node. How?
Look, we know the parent of each node. From any node, if we keep going back through the parents, we will eventually reach the source node. Right?
For example, say I want to find the shortest path to ‘w’ from the source node ‘u’. Let’s see who w’s parent is: it’s ‘y’. y’s parent is ‘v’, and v’s parent is ‘u’. So, the shortest path is u, v, y, w.
Check in the picture to see if you think this is the shortest path.
We can find the parents of each node from the function we defined above.
traversed, visited, level, parent = Breadth_first_search(adj)
Here is the code to find the shortest path
v = "w"
path = []
while v is not None:
    path.append(v)
    v = parent[v]
path.reverse()
print(path)
Output:
['u', 'v', 'y', 'w']
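For convenience, the traversal and the backtracking can be combined into one helper. This is only a sketch: the function names `bfs_parents` and `shortest_path` and the explicit source/target parameters are my additions, not from the article.

```python
from queue import Queue

def bfs_parents(adj_list, s):
    # BFS from source s, recording each node's parent on first discovery
    visited = {node: False for node in adj_list}
    parent = {node: None for node in adj_list}
    visited[s] = True
    queue = Queue()
    queue.put(s)
    while not queue.empty():
        u = queue.get()
        for v in adj_list[u]:
            if not visited[v]:
                visited[v] = True
                parent[v] = u
                queue.put(v)
    return parent

def shortest_path(adj_list, s, t):
    # Walk back from t through the parent pointers until we reach s
    parent = bfs_parents(adj_list, s)
    path = []
    v = t
    while v is not None:
        path.append(v)
        v = parent[v]
    path.reverse()
    return path

adj = {'u': ['v', 'x'], 'x': ['u', 'v', 'y'], 'v': ['u', 'x', 'y'],
       'y': ['w'], 'w': ['y', 'z'], 'z': ['w']}
print(shortest_path(adj, 'u', 'w'))  # ['u', 'v', 'y', 'w']
```

Because BFS discovers vertices level by level, the parent chain recovered this way is always a shortest path in terms of edge count.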
We have only two elements here. Vertices and edges.
Notice carefully: we visit each vertex only one time. In the for loop, we ignore the already visited vertices. Consider V as the set of vertices.
We used an undirected graph here. For an undirected graph, we can visit both ways. The way we can go from ‘u’ to ‘v’, we can go from ‘v’ to ‘u’ as well. In the adjacency list ‘adj’ above, you can see that one node can come up more than once. At most, we will traverse each edge twice. Let E be the set of edges; we will traverse the edges 2E times in the worst case. So the total time in the worst case is V + 2E.
The time complexity can be expressed as O(V+E) as the coefficient is subsumed by the O.
I tried to explain how the Breadth-first search algorithm works using visuals, how to develop the algorithm in Python, how to find the shortest path using the Breadth-first search algorithm, and the time complexity of this algorithm. I hope it is clear to you now.
Feel free to follow me on Twitter and like my Facebook page.
|
[
{
"code": null,
"e": 358,
"s": 46,
"text": "Graph form data is present in many popular and widely used applications. Web crawlers, computer networks, relational databases, and social networks are some good examples. The graph search algorithms are important for any section of computer science. Also, it is important and useful for many coding interviews."
},
{
"code": null,
"e": 712,
"s": 358,
"text": "There are a couple of different graph search algorithms available. This is one of the simplest algorithms for graph search and also a type of prototype for many other graph algorithms. Today I will explain the Breadth-first search algorithm in detail and also show a use case of the Breadth-first search algorithm. Here are the elements of this article:"
},
{
"code": null,
"e": 904,
"s": 712,
"text": "How the Breadth_first_search algorithm works with visualsDeveloping the algorithm in PythonHow to use this algorithm to find the shortest path of any node from the source node.Time complexity"
},
{
"code": null,
"e": 962,
"s": 904,
"text": "How the Breadth_first_search algorithm works with visuals"
},
{
"code": null,
"e": 997,
"s": 962,
"text": "Developing the algorithm in Python"
},
{
"code": null,
"e": 1083,
"s": 997,
"text": "How to use this algorithm to find the shortest path of any node from the source node."
},
{
"code": null,
"e": 1099,
"s": 1083,
"text": "Time complexity"
},
{
"code": null,
"e": 1112,
"s": 1099,
"text": "Let’s start!"
},
{
"code": null,
"e": 1158,
"s": 1112,
"text": "A graph has two elements. Vertices and edges."
},
{
"code": null,
"e": 1165,
"s": 1158,
"text": "Given,"
},
{
"code": null,
"e": 1185,
"s": 1165,
"text": "A graph G = (V, E),"
},
{
"code": null,
"e": 1229,
"s": 1185,
"text": "where V is the vertices and E is the edges."
},
{
"code": null,
"e": 1387,
"s": 1229,
"text": "The breadth-first search algorithm systematically explores the edges level by level to discover each vertex that is reachable from the given source vertex s."
},
{
"code": null,
"e": 1441,
"s": 1387,
"text": "Here are the steps to a Breadth-first search process:"
},
{
"code": null,
"e": 1679,
"s": 1441,
"text": "There is a start vertex S.Initialize a set for level with start vertex S as level 1.Explore which other vertex is reachable from the start. Those vertices will be considered as level 2.In this way, vertices will be opened level by level."
},
{
"code": null,
"e": 1706,
"s": 1679,
"text": "There is a start vertex S."
},
{
"code": null,
"e": 1765,
"s": 1706,
"text": "Initialize a set for level with start vertex S as level 1."
},
{
"code": null,
"e": 1867,
"s": 1765,
"text": "Explore which other vertex is reachable from the start. Those vertices will be considered as level 2."
},
{
"code": null,
"e": 1920,
"s": 1867,
"text": "In this way, vertices will be opened level by level."
},
{
"code": null,
"e": 1965,
"s": 1920,
"text": "Here is a visual demonstration of the steps:"
},
{
"code": null,
"e": 2055,
"s": 1965,
"text": "Here, we have six vertices, u, v, w, x, y, z, and seven edges ux, uv, vx, vy, xy, wy, wz."
},
{
"code": null,
"e": 2168,
"s": 2055,
"text": "Consider the vertex u as the source or start vertex. Now see how they open level by level in the pictures below."
},
{
"code": null,
"e": 2328,
"s": 2168,
"text": "The source vertex is u is level 1. We check where can we go from L1. From the picture, you can see that ‘u’ has a direct path to v and x. So, they are level 2."
},
{
"code": null,
"e": 2503,
"s": 2328,
"text": "Now, we are in nodes x and v. Both x and v have direct access only to y. So, y is the level3. From both x and v, we can go to u also. But we ignore the already visited nodes."
},
{
"code": null,
"e": 2670,
"s": 2503,
"text": "y has direct access to w only. So, w is the level4. We can go to v and x as well from y. But they are already visited. So, we do not need to worry about them anymore."
},
{
"code": null,
"e": 2710,
"s": 2670,
"text": "At last, w can go to z and z is level5."
},
{
"code": null,
"e": 2910,
"s": 2710,
"text": "Before we can dive into the algorithm let’s make an adjacency list. That is to make a dictionary where each node will be a key and the nodes that are linked to it will be the values stored in a list."
},
{
"code": null,
"e": 2987,
"s": 2910,
"text": "For example, node u is linked to nodes v and x. So, it will be expressed as:"
},
{
"code": null,
"e": 3003,
"s": 2987,
"text": "'u': ['v', 'x']"
},
{
"code": null,
"e": 3042,
"s": 3003,
"text": "Here ‘u’ is the parent of ‘v’ and ‘x’."
},
{
"code": null,
"e": 3134,
"s": 3042,
"text": "We need to do the same with all the other nodes as well. The adjacency list will look like:"
},
{
"code": null,
"e": 3266,
"s": 3134,
"text": "adj = { 'u': ['v', 'x'], 'x': ['u', 'v', 'y'], 'v': ['u', 'x', 'y'], 'y': ['w'], 'w': ['y', 'z'], 'z': ['w'] }"
},
{
"code": null,
"e": 3311,
"s": 3266,
"text": "Next, We need to initialize a few variables:"
},
{
"code": null,
"e": 3381,
"s": 3311,
"text": "‘visited’ variable to keep track of the node that we already visited,"
},
{
"code": null,
"e": 3448,
"s": 3381,
"text": "‘level’ variable to keep track of which level we are currently in,"
},
{
"code": null,
"e": 3501,
"s": 3448,
"text": "‘parent’ variable to store the parents of the nodes."
},
{
"code": null,
"e": 3548,
"s": 3501,
"text": "‘traversal_output’ to list the nodes traveled."
},
{
"code": null,
"e": 3660,
"s": 3548,
"text": "Finally, we will use a queue to develop this algorithm. Python has a built-in queue that we can import and use."
},
{
"code": null,
"e": 3753,
"s": 3660,
"text": "from queue import Queuevisited = {}level = {}parent = {}traversal_output = []queue = Queue()"
},
{
"code": null,
"e": 3905,
"s": 3753,
"text": "In the beginning, set ‘False’ to all the nodes in the ‘visited’ dictionary and ‘None’ to all the nodes in the ‘parents’ dictionary and -1 in the level."
},
{
"code": null,
"e": 4014,
"s": 3905,
"text": "for node in adj_list.keys(): visited[node] = False parent[node] = None level[node] = -1"
},
{
"code": null,
"e": 4140,
"s": 4014,
"text": "As in the picture, assume that the source is ‘u’. To start with, use visited[s] = True, use level 0 and add ‘u’ in the Queue."
},
{
"code": null,
"e": 4189,
"s": 4140,
"text": "s = \"u\"visited[s] = Truelevel[s] = 0queue.put(s)"
},
{
"code": null,
"e": 4210,
"s": 4189,
"text": "Here comes the loop!"
},
{
"code": null,
"e": 4503,
"s": 4210,
"text": "At this stage, we need to visit the nodes that are linked to the source node ‘u’. We have it listed in the adjacency list above. For each of them, set them as visited, upgrade their levels as one level above the source node’s level, set their parent as ‘u’, and finally add them in the Queue."
},
{
"code": null,
"e": 4578,
"s": 4503,
"text": "Then do repeat the same with their child nodes. Here is the complete loop:"
},
{
"code": null,
"e": 4878,
"s": 4578,
"text": "while not queue.empty(): u = queue.get() traversal_output.append(u) for v in adj_list[u]: if not visited[v]: visited[v] = True parent[v] = u level[v] = level[u] + 1 queue.put(v)print(traversal_output)print(visited)print(level)print(parent)"
},
{
"code": null,
"e": 4886,
"s": 4878,
"text": "Output:"
},
{
"code": null,
"e": 5092,
"s": 4886,
"text": "['u', 'v', 'x', 'y', 'w', 'z']{'u': True, 'x': True, 'v': True, 'y': True, 'w': True, 'z': True}{'u': 0, 'x': 1, 'v': 1, 'y': 2, 'w': 3, 'z': 4}{'u': None, 'x': 'u', 'v': 'u', 'y': 'v', 'w': 'y', 'z': 'w'}"
},
{
"code": null,
"e": 5156,
"s": 5092,
"text": "Traversal_output shows that we traversed through all the nodes."
},
{
"code": null,
"e": 5206,
"s": 5156,
"text": "For each node, visited is true in the second row."
},
{
"code": null,
"e": 5299,
"s": 5206,
"text": "In the third row, we have the level for all the nodes. Please check with the pictures above."
},
{
"code": null,
"e": 5412,
"s": 5299,
"text": "In the fourth row, we have the parents of all the nodes. ‘u’ is the source node. So, ‘u’ does not have a parent."
},
{
"code": null,
"e": 5467,
"s": 5412,
"text": "Combining all the code and putting them in a function:"
},
{
"code": null,
"e": 6092,
"s": 5467,
"text": "def Breadth_first_search(adj_list): visited = {} level = {} parent = {} traversal_output = [] queue = Queue() for node in adj_list.keys(): visited[node] = False parent[node] = None level[node] = -1 s = \"u\" visited[s] = True level[s] = 0 queue.put(s) while not queue.empty(): u = queue.get() traversal_output.append(u) for v in adj_list[u]: if not visited[v]: visited[v] = True parent[v] = u level[v] = level[u] + 1 queue.put(v) return traversal_output, visited, level, parent"
},
{
"code": null,
"e": 6179,
"s": 6092,
"text": "Calling the function and pass the adjacency list ‘adj’ will gives you the same output."
},
{
"code": null,
"e": 6272,
"s": 6179,
"text": "This algorithm can be used to find the shortest path from the source to any other node. How?"
},
{
"code": null,
"e": 6420,
"s": 6272,
"text": "Look, we know the parent of each node. From any node, we keep going back through the parents, it will eventually go back to the source node. Right?"
},
{
"code": null,
"e": 6627,
"s": 6420,
"text": "For example, say, I want to find the shortest path of ‘w’ from the source node ‘u’. Let’s see, who is w’s parent. it’s ‘y’. y’s parent is ‘v’ and then v’s parent is ‘u’. So, the shortest path is u, v, y, w."
},
{
"code": null,
"e": 6695,
"s": 6627,
"text": "Check in the picture to see if you think this is the shortest path."
},
{
"code": null,
"e": 6768,
"s": 6695,
"text": "We can find the parents of each node from the function we defined above."
},
{
"code": null,
"e": 6830,
"s": 6768,
"text": "traversed, visited, level, parent = Breadth_first_search(adj)"
},
{
"code": null,
"e": 6873,
"s": 6830,
"text": "Here is the code to find the shortest path"
},
{
"code": null,
"e": 6970,
"s": 6873,
"text": "v = \"w\"path = []while v is not None: path.append(v) v = parent[v]path.reverse()print(path)"
},
{
"code": null,
"e": 6978,
"s": 6970,
"text": "Output:"
},
{
"code": null,
"e": 6999,
"s": 6978,
"text": "['u', 'v', 'y', 'w']"
},
{
"code": null,
"e": 7051,
"s": 6999,
"text": "We have only two elements here. Vertices and edges."
},
{
"code": null,
"e": 7199,
"s": 7051,
"text": "Notice, carefully. We visit each vertex only one time. In the for loop, we ignore the already visited vertices. Consider, V as the set of vertices."
},
{
"code": null,
"e": 7604,
"s": 7199,
"text": "We used an undirected graph here. For an undirected graph, we can visit both ways. The way we can go from ‘u’ to ‘v’, we can go from ‘v’ to ‘u’ as well. In the adjacency list ‘adj’ above, you can see that one node can come up more than once. At most, we will traverse one edge twice. Let E be the set of edges, it will traverse the edges 2E times in the worst case. Som the total time in worst case V+2E."
},
{
"code": null,
"e": 7692,
"s": 7604,
"text": "The time complexity can be expressed as O(V+E) as the coefficient is subsumed by the O."
},
{
"code": null,
"e": 7953,
"s": 7692,
"text": "I tried to explain, how the Breadth_first_search algorithm works using visuals, developed the algorithm in Python, How to find the shortest path using the Breadth_first_search algorithm, and the time complexity of this algorithm. I hope it is clear to you now."
}
] |
C# | How to change BackGround Color of Text in Console - GeeksforGeeks
|
28 Jan, 2019
Given the normal Console in C#, the default color of the text background is “Black”. The task is to change this color to some other color.
Approach: This can be done using the BackgroundColor property in the Console class of the System package in C#.
Program 1: Changing the Console Background Color to Blue.
// C# program to illustrate the
// BackgroundColor Property
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace GFG {
class Program {

    static void Main(string[] args)
    {
        // Display current Background color
        Console.WriteLine("Default Background Color: {0}",
                          Console.BackgroundColor);

        // Set the Background color to blue
        Console.BackgroundColor = ConsoleColor.Blue;

        // Display current Background color
        Console.WriteLine("Changed Background Color: {0}",
                          Console.BackgroundColor);
    }
}
}
Output:
Program 2: Listing the available colors to which the BackgroundColor can be changed.
// C# program to get the
// list of available colors
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace GFG {
class Program {

    static void Main(string[] args)
    {
        // Get the list of available colors
        // that can be changed
        ConsoleColor[] consoleColors =
            (ConsoleColor[])ConsoleColor.GetValues(typeof(ConsoleColor));

        // Display the list
        // of available console colors
        Console.WriteLine("List of available Console Colors:");
        foreach (var color in consoleColors)
            Console.WriteLine(color);
    }
}
}
Output:
CSharp-Console-Class
C#
|
[
{
"code": null,
"e": 24615,
"s": 24587,
"text": "\n28 Jan, 2019"
},
{
"code": null,
"e": 24754,
"s": 24615,
"text": "Given the normal Console in C#, the default color of the text background is “Black”. The task is to change this color to some other color."
},
{
"code": null,
"e": 24866,
"s": 24754,
"text": "Approach: This can be done using the BackgroundColor property in the Console class of the System package in C#."
},
{
"code": null,
"e": 24924,
"s": 24866,
"text": "Program 1: Changing the Console Background Color to Blue."
},
{
"code": "// C# program to illustrate the // BackgroundColor Propertyusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG {class Program { static void Main(string[] args) { // Display current Background color Console.WriteLine(\"Default Background Color: {0}\", Console.BackgroundColor); // Set the Background color to blue Console.BackgroundColor = ConsoleColor.Blue; // Display current Background color Console.WriteLine(\"Changed Background Color: {0}\", Console.BackgroundColor); }}}",
"e": 25590,
"s": 24924,
"text": null
},
{
"code": null,
"e": 25598,
"s": 25590,
"text": "Output:"
},
{
"code": null,
"e": 25686,
"s": 25598,
"text": "Program 2: The list of available colors in which the BackgroundColor can be changed are"
},
{
"code": "// C# program to get the// list of available colorsusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG {class Program { static void Main(string[] args) { // Get the list of available colors // that can be changed ConsoleColor[] consoleColors = (ConsoleColor[])ConsoleColor .GetValues(typeof(ConsoleColor)); // Display the list // of available console colors Console.WriteLine(\"List of available \" + \"Console Colors:\"); foreach(var color in consoleColors) Console.WriteLine(color); }}}",
"e": 26372,
"s": 25686,
"text": null
},
{
"code": null,
"e": 26380,
"s": 26372,
"text": "Output:"
},
{
"code": null,
"e": 26401,
"s": 26380,
"text": "CSharp-Console-Class"
},
{
"code": null,
"e": 26404,
"s": 26401,
"text": "C#"
},
{
"code": null,
"e": 26502,
"s": 26404,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26511,
"s": 26502,
"text": "Comments"
},
{
"code": null,
"e": 26524,
"s": 26511,
"text": "Old Comments"
},
{
"code": null,
"e": 26570,
"s": 26524,
"text": "Difference between Ref and Out keywords in C#"
},
{
"code": null,
"e": 26585,
"s": 26570,
"text": "C# | Delegates"
},
{
"code": null,
"e": 26625,
"s": 26585,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 26656,
"s": 26625,
"text": "Introduction to .NET Framework"
},
{
"code": null,
"e": 26674,
"s": 26656,
"text": "C# | Constructors"
},
{
"code": null,
"e": 26697,
"s": 26674,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 26719,
"s": 26697,
"text": "C# | Class and Object"
},
{
"code": null,
"e": 26741,
"s": 26719,
"text": "C# | Abstract Classes"
},
{
"code": null,
"e": 26781,
"s": 26741,
"text": "C# | String.IndexOf( ) Method | Set - 1"
}
] |
10 Examples to Master Python Dictionary Comprehensions | by Soner Yıldırım | Towards Data Science
|
A dictionary is an unordered collection of key-value pairs. Each entry has a key and a value. A dictionary can be considered as a list with a special index.
The keys must be unique and immutable. So we can use strings, numbers (int or float), or tuples as keys. Values can be of any type.
In this article, we will focus on dictionary comprehension which is a method to create dictionaries using iterables. The logic is the same as list comprehension but the syntax is different due to the structure of dictionaries.
In order to see the similarity between a list and dictionary comprehension, I will create both a list and dictionary comprehension in the first two examples.
words = ['data', 'science', 'machine', 'learning']

# list comprehension
[len(i) for i in words]
[4, 7, 7, 8]

# dictionary comprehension
{i:len(i) for i in words}
{'data': 4, 'science': 7, 'machine': 7, 'learning': 8}
We have an iterable which is a list named “words”. In the list comprehension, we create a list that contains the length of the words. In the dictionary comprehension, we need to specify both keys and values based on the iteration. The returned dictionary contains the words as keys and their length as values.
The basic syntax for list and dictionary comprehensions is:
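The syntax figure from the original post is not reproduced in this text, so here is a sketch of the general forms (the example expressions are my own):

```python
# List comprehension:  [expression for item in iterable]
squares = [x**2 for x in range(5)]
print(squares)  # [0, 1, 4, 9, 16]

# Dictionary comprehension:  {key_expression: value_expression for item in iterable}
square_map = {x: x**2 for x in range(5)}
print(square_map)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```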
For this example, we will repeat the task in the first example with an additional condition. Both list and dictionary comprehensions accept if/else conditional statements.
words = ['data', 'science', 'machine', 'learning']

# list comprehension
[len(i) for i in words if len(i) > 5]
[7, 7, 8]

# dictionary comprehension
{i:len(i) for i in words if len(i) > 5}
{'science': 7, 'machine': 7, 'learning': 8}
The returned variables only contain the words longer than 5 characters.
In this example, we will slightly increase the complexity of the conditional statement.
words_dict = {i:len(i) if len(i) > 5 else 'short' for i in words}
print(words_dict)
{'data': 'short', 'science': 7, 'machine': 7, 'learning': 8}
We implement an if/else conditional in the dictionary comprehension. If the length is greater than 5, the value becomes the length. Otherwise, we assign the word ‘short’ as the value.
What makes comprehensions appealing is their one-liner syntax. They look quite simple and are easier to understand than the equivalent for loops. For instance, the equivalent for loop of the comprehension above is:
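The embedded gist with the equivalent loop is not included in this text; a sketch of it, reusing the `words` list from above, would be:

```python
words = ['data', 'science', 'machine', 'learning']

# Equivalent for loop for: {i:len(i) if len(i) > 5 else 'short' for i in words}
words_dict = {}
for i in words:
    if len(i) > 5:
        words_dict[i] = len(i)
    else:
        words_dict[i] = 'short'

print(words_dict)  # {'data': 'short', 'science': 7, 'machine': 7, 'learning': 8}
```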
We can iterate over two iterables in a dictionary comprehension.
words = ['data', 'science', 'machine', 'learning']
values = [5, 3, 1, 8]

dict_a = {i:j for i, j in zip(words, values)}
print(dict_a)
{'data': 5, 'science': 3, 'machine': 1, 'learning': 8}
Key-value pairs are created by iterating over separate lists for keys and values. The zip function returns an iterable of tuples by combining the items from each list.
We can also put a condition on the values when iterating over a list of tuples.
words = ['data', 'science', 'machine', 'learning']
values = [5, 3, 1, 8]

dict_a = {i:j for i, j in zip(words, values) if j > 4}
print(dict_a)
{'data': 5, 'learning': 8}
We can also apply transformations on key-value pairs.
dict_b = {i.upper():j**2 for i, j in zip(words, values)}
print(dict_b)
{'DATA': 25, 'SCIENCE': 9, 'MACHINE': 1, 'LEARNING': 64}
Both keys and values are modified using simple Python methods.
We can access the key-value pairs in a dictionary by using the items method.
print(dict_b.items())
dict_items([('DATA', 25), ('SCIENCE', 9), ('MACHINE', 1), ('LEARNING', 64)])
We can use the items of an existing dictionary as iterable in a dictionary comprehension. It allows us to create dictionaries based on existing dictionaries and modify both keys and values.
dict_c = {i.lower():j%2 for i, j in dict_b.items()}
print(dict_c)
{'data': 1, 'science': 1, 'machine': 1, 'learning': 0}
The enumerate function of Python can be used to create an iterable of tuples based on a list. Each tuple contains the items in the list with incrementing integer values.
names = ['John', 'Jane', 'Adam', 'Eva', 'Ashley']

list(enumerate(names))
[(0, 'John'), (1, 'Jane'), (2, 'Adam'), (3, 'Eva'), (4, 'Ashley')]
We can use the enumerate function in a dictionary comprehension.
dict_names = {i:len(j) for i, j in enumerate(names)}
print(dict_names)
{0: 4, 1: 4, 2: 4, 3: 3, 4: 6}
If you just want to create a dictionary based on a list of tuples without any modification on the values, you do not need to use a comprehension. The dict function will do the job.
dict(enumerate(names))
{0: 'John', 1: 'Jane', 2: 'Adam', 3: 'Eva', 4: 'Ashley'}
This example contains a slightly more complicated conditionals than the previous ones. Consider we have the following dictionary and list.
lst = ['data', 'science', 'artificial', 'intelligence']
dct = {'data': 5, 'science': 3, 'machine': 1, 'learning': 8}
We want to create a new dictionary using the list and dictionary defined above. The keys of the new dictionary will be the elements in the list, so we will iterate over the elements in the list. If the element is also in the dictionary, the value will be the value of that key in the dictionary. Otherwise, the value will be the length of the key.
{i:dct[i] if i in dct else len(i) for i in lst}
{'artificial': 10, 'data': 5, 'intelligence': 12, 'science': 3}
The word artificial is not in the dictionary so its value is the length of the word. The word data is in the dictionary so its value is taken from the dictionary.
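As a side note (not from the article), the same if/else-on-membership pattern can be written with `dict.get`, which returns a default when the key is missing:

```python
lst = ['data', 'science', 'artificial', 'intelligence']
dct = {'data': 5, 'science': 3, 'machine': 1, 'learning': 8}

# dct.get(i, len(i)) returns dct[i] if i is a key, otherwise len(i)
result = {i: dct.get(i, len(i)) for i in lst}
print(result)
```

This avoids looking the key up twice (once in the membership test and once in the subscript).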
The keys of a dictionary must be immutable so tuples can be used as keys. Dictionary comprehensions allow for generating keys of tuples by implemented nested loops.
a = [1, 2, 3, 4]
b = [5, 6, 7]

dct = {(i,j):i*j for i in a for j in b}
print(dct)
{(1, 5): 5, (1, 6): 6, (1, 7): 7, (2, 5): 10, (2, 6): 12, (2, 7): 14, (3, 5): 15, (3, 6): 18, (3, 7): 21, (4, 5): 20, (4, 6): 24, (4, 7): 28}
Each pair of items in the lists is a key in the dictionary. The value is the product of the items in keys.
The equivalent for loop syntax:
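The embedded gist with the nested loops is not included in this text; a sketch of the equivalent syntax:

```python
a = [1, 2, 3, 4]
b = [5, 6, 7]

# Equivalent nested loops for: {(i,j): i*j for i in a for j in b}
dct = {}
for i in a:
    for j in b:
        dct[(i, j)] = i * j

print(dct)
```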
Dictionaries are very important data structures in Python and used in many cases. The examples we did in this post will cover most of what you need to know about dictionary comprehensions. They will make you feel comfortable when working with and creating new dictionaries.
Thank you for reading. Please let me know if you have any feedback.
|
[
{
"code": null,
"e": 324,
"s": 171,
"text": "A dictionary is an unordered collection of key-value pairs. Each entry has a key and value. A dictionary can be considered as a list with special index."
},
{
"code": null,
"e": 456,
"s": 324,
"text": "The keys must be unique and immutable. So we can use strings, numbers (int or float), or tuples as keys. Values can be of any type."
},
{
"code": null,
"e": 683,
"s": 456,
"text": "In this article, we will focus on dictionary comprehension which is a method to create dictionaries using iterables. The logic is the same as list comprehension but the syntax is different due to the structure of dictionaries."
},
{
"code": null,
"e": 841,
"s": 683,
"text": "In order to see the similarity between a list and dictionary comprehension, I will create both a list and dictionary comprehension in the first two examples."
},
{
"code": null,
"e": 1050,
"s": 841,
"text": "words = ['data', 'science', 'machine', 'learning']#list comprehension[len(i) for i in words][4, 7, 7, 8]#dictionary comprehension{i:len(i) for i in words}{'data': 4, 'science': 7, 'machine': 7, 'learning': 8}"
},
{
"code": null,
"e": 1360,
"s": 1050,
"text": "We have an iterable which is a list named “words”. In the list comprehension, we create a list that contains the length of the words. In the dictionary comprehension, we need to specify both keys and values based on the iteration. The returned dictionary contains the words as keys and their length as values."
},
{
"code": null,
"e": 1420,
"s": 1360,
"text": "The basic syntax for list and dictionary comprehension are:"
},
{
"code": null,
"e": 1592,
"s": 1420,
"text": "For this example, we will repeat the task in the first example with an additional condition. Both list and dictionary comprehensions accept if/else conditional statements."
},
{
"code": null,
"e": 1815,
"s": 1592,
"text": "words = ['data', 'science', 'machine', 'learning']#list comprehension[len(i) for i in words if len(i) > 5][7, 7, 8]#dictionary comprehension{i:len(i) for i in words if len(i) > 5}{'science': 7, 'machine': 7, 'learning': 8}"
},
{
"code": null,
"e": 1887,
"s": 1815,
"text": "The returned variables only contain the words longer than 5 characters."
},
{
"code": null,
"e": 1975,
"s": 1887,
"text": "In this example, we will slightly increase the complexity of the conditional statement."
},
{
"code": null,
"e": 2118,
"s": 1975,
"text": "words_dict = {i:len(i) if len(i) > 5 else 'short' for i in words}print(words_dict){'data': 'short', 'science': 7, 'machine': 7, 'learning': 8}"
},
{
"code": null,
"e": 2302,
"s": 2118,
"text": "We implement an if/else conditional in the dictionary comprehension. If the length is greater than 5, the value becomes the length. Otherwise, we assign the word ‘short’ as the value."
},
{
"code": null,
"e": 2512,
"s": 2302,
"text": "What makes comprehensions appealing is their one liner syntax. It looks quite simple and easier to understand than the equivalent for loops. For instance, the equivalent for loop of the comprehension above is:"
},
{
"code": null,
"e": 2577,
"s": 2512,
"text": "We can iterate over two iterables in a dictionary comprehension."
},
{
"code": null,
"e": 2761,
"s": 2577,
"text": "words = ['data', 'science', 'machine', 'learning']values = [5, 3, 1, 8]dict_a = {i:j for i, j in zip(words, values)}print(dict_a){'data': 5, 'science': 3, 'machine': 1, 'learning': 8}"
},
{
"code": null,
"e": 2929,
"s": 2761,
"text": "Key-value pairs are created by iterating over separate lists for keys and values. The zip function returns an iterable of tuples by combining the items from each list."
},
{
"code": null,
"e": 3009,
"s": 2929,
"text": "We can also put a condition on the values when iterating over a list of tuples."
},
{
"code": null,
"e": 3174,
"s": 3009,
"text": "words = ['data', 'science', 'machine', 'learning']values = [5, 3, 1, 8]dict_a = {i:j for i, j in zip(words, values) if j > 4}print(dict_a){'data': 5, 'learning': 8}"
},
{
"code": null,
"e": 3228,
"s": 3174,
"text": "We can also apply transformations on key-value pairs."
},
{
"code": null,
"e": 3354,
"s": 3228,
"text": "dict_b = {i.upper():j**2 for i, j in zip(words, values)}print(dict_b){'DATA': 25, 'SCIENCE': 9, 'MACHINE': 1, 'LEARNING': 64}"
},
{
"code": null,
"e": 3417,
"s": 3354,
"text": "Both keys and values are modified using simple Python methods."
},
{
"code": null,
"e": 3494,
"s": 3417,
"text": "We can access the key-value pairs in a dictionary by using the items method."
},
{
"code": null,
"e": 3592,
"s": 3494,
"text": "print(dict_b.items())dict_items([('DATA', 25), ('SCIENCE', 9), ('MACHINE', 1), ('LEARNING', 64)])"
},
{
"code": null,
"e": 3782,
"s": 3592,
"text": "We can use the items of an existing dictionary as iterable in a dictionary comprehension. It allows us to create dictionaries based on existing dictionaries and modify both keys and values."
},
{
"code": null,
"e": 3901,
"s": 3782,
"text": "dict_c = {i.lower():j%2 for i, j in dict_b.items()}print(dict_c){'data': 1, 'science': 1, 'machine': 1, 'learning': 0}"
},
{
"code": null,
"e": 4071,
"s": 3901,
"text": "The enumerate function of Python can be used to create an iterable of tuples based on a list. Each tuple contains the items in the list with incrementing integer values."
},
{
"code": null,
"e": 4209,
"s": 4071,
"text": "names = ['John', 'Jane', 'Adam', 'Eva', 'Ashley']list(enumerate(names))[(0, 'John'), (1, 'Jane'), (2, 'Adam'), (3, 'Eva'), (4, 'Ashley')]"
},
{
"code": null,
"e": 4274,
"s": 4209,
"text": "We can use the enumerate function in a dictionary comprehension."
},
{
"code": null,
"e": 4374,
"s": 4274,
"text": "dict_names = {i:len(j) for i, j in enumerate(names)}print(dict_names){0: 4, 1: 4, 2: 4, 3: 3, 4: 6}"
},
{
"code": null,
"e": 4555,
"s": 4374,
"text": "If you just want to create a dictionary based on a list of tuples without any modification on the values, you do not need to use a comprehension. The dict function will do the job."
},
{
"code": null,
"e": 4634,
"s": 4555,
"text": "dict(enumerate(names)){0: 'John', 1: 'Jane', 2: 'Adam', 3: 'Eva', 4: 'Ashley'}"
},
{
"code": null,
"e": 4773,
"s": 4634,
"text": "This example contains a slightly more complicated conditionals than the previous ones. Consider we have the following dictionary and list."
},
{
"code": null,
"e": 4887,
"s": 4773,
"text": "lst = ['data','science','artificial', 'intelligence']dct = {'data': 5, 'science': 3, 'machine': 1, 'learning': 8}"
},
{
"code": null,
"e": 5231,
"s": 4887,
"text": "We want to create a new dictionary using the list and dictionary defined above. The keys of the new dictionary will be the elements in the list so we will iterate over the elements in list. If the element is also in the dictionary, the value will be the values of that key in the dictionary. Otherwise, the value will be the length of the key."
},
{
"code": null,
"e": 5342,
"s": 5231,
"text": "{i:dct[i] if i in dct else len(i) for i in lst}{'artificial': 10, 'data': 5, 'intelligence': 12, 'science': 3}"
},
{
"code": null,
"e": 5505,
"s": 5342,
"text": "The word artificial is not in the dictionary so its value is the length of the word. The word data is in the dictionary so its value is taken from the dictionary."
},
{
"code": null,
"e": 5670,
"s": 5505,
"text": "The keys of a dictionary must be immutable so tuples can be used as keys. Dictionary comprehensions allow for generating keys of tuples by implemented nested loops."
},
{
"code": null,
"e": 5885,
"s": 5670,
"text": "a = [1,2,3,4]b = [5,6,7]dct = {(i,j):i*j for i in a for j in b}print(dct){(1, 5): 5, (1, 6): 6, (1, 7): 7, (2, 5): 10, (2, 6): 12, (2, 7): 14, (3, 5): 15, (3, 6): 18, (3, 7): 21, (4, 5): 20, (4, 6): 24, (4, 7): 28}"
},
{
"code": null,
"e": 5992,
"s": 5885,
"text": "Each pair of items in the lists is a key in the dictionary. The value is the product of the items in keys."
},
{
"code": null,
"e": 6024,
"s": 5992,
"text": "The equivalent for loop syntax:"
},
{
"code": null,
"e": 6298,
"s": 6024,
"text": "Dictionaries are very important data structures in Python and used in many cases. The examples we did in this post will cover most of what you need to know about dictionary comprehensions. They will make you feel comfortable when working with and creating new dictionaries."
}
] |
PHP – Make an upper case string using mb_strtoupper()
|
In PHP, mb_strtoupper() is an inbuilt function that is used to change a given string to upper case.
string mb_strtoupper(str $string, str $encoding)
mb_strtoupper() accepts two parameters: $string and $encoding.
$string− The string being uppercased.
$encoding− This parameter is the character encoding. If it is absent or null, then the internal character encoding value will be used.
It returns the string with all alphabetic characters converted to uppercase.
<?php
$string = "Hello World!, Welcome to online tutorials";
$string = mb_strtoupper($string);
echo $string;
?>
It will convert the given string to upper case.
HELLO WORLD!, WELCOME TO ONLINE TUTORIALS
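For comparison only (a hypothetical aside, not part of the PHP article): Python strings are Unicode, so the plain `upper()` method handles multibyte text directly, without a separate `mb_` function.

```python
# For comparison (illustration only, not part of the PHP example):
# Python strings are Unicode, so upper() handles multibyte text directly.
s = "Hello World!, Welcome to online tutorials"
print(s.upper())  # HELLO WORLD!, WELCOME TO ONLINE TUTORIALS
```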
|
[
{
"code": null,
"e": 1162,
"s": 1062,
"text": "In PHP, mb_strtoupper() is an inbuilt function that is used to change a given string to upper case."
},
{
"code": null,
"e": 1211,
"s": 1162,
"text": "string mb_strtoupper(str $string, str $encoding)"
},
{
"code": null,
"e": 1274,
"s": 1211,
"text": "mb_strtoupper() accepts two parameters: $string and $encoding."
},
{
"code": null,
"e": 1312,
"s": 1274,
"text": "$string− The string being uppercased."
},
{
"code": null,
"e": 1350,
"s": 1312,
"text": "$string− The string being uppercased."
},
{
"code": null,
"e": 1485,
"s": 1350,
"text": "$encoding− This parameter is the character encoding. If it is absent or null, then the internal character encoding value will be used."
},
{
"code": null,
"e": 1620,
"s": 1485,
"text": "$encoding− This parameter is the character encoding. If it is absent or null, then the internal character encoding value will be used."
},
{
"code": null,
"e": 1682,
"s": 1620,
"text": "string with all alphabetic characters converted to uppercase."
},
{
"code": null,
"e": 1693,
"s": 1682,
"text": " Live Demo"
},
{
"code": null,
"e": 1814,
"s": 1693,
"text": "<?php\n $string = \"Hello World!, Welcome to online tutorials\";\n $string = mb_strtoupper($string);\n echo $string;\n?>"
},
{
"code": null,
"e": 1862,
"s": 1814,
"text": "It will convert the given string to upper case."
},
{
"code": null,
"e": 1904,
"s": 1862,
"text": "HELLO WORLD!, WELCOME TO ONLINE TUTORIALS"
}
] |
Filter Color with OpenCV - GeeksforGeeks
|
16 Feb, 2021
Colour segmentation or color filtering is widely used in OpenCV for identifying specific objects/regions having a specific color. The most widely used color space is the RGB color space; it is called an additive color space as the three color shades add up to give color to the image. To identify a region of a specific color, put the threshold and create a mask to separate the different colors. The HSV color space is much more useful for this purpose, as colors in HSV space are much more localized and can thus be easily separated. Color filtering has many applications and use cases, such as cryptography, infrared analysis, food preservation of perishable foods, etc. In such cases, the concepts of image processing can be used to find out or extract regions of a particular color. For color segmentation, all we need is the threshold values, i.e. the lower-bound and upper-bound range of colors in one of the color spaces. It works best in the Hue-Saturation-Value color space. After specifying the range of color to be segmented, we create a mask accordingly, and by using it a particular region of interest can be separated out.
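Before the full webcam example, the thresholding-and-mask idea itself can be sketched in plain Python. The `in_range` helper below is hypothetical (an illustration mimicking what `cv2.inRange` does per pixel, not OpenCV itself): a pixel passes the mask only if every channel lies inside the lower/upper bounds.

```python
# Pure-Python sketch of the cv2.inRange idea (hypothetical helper, not OpenCV):
# a pixel is kept (255) only if every channel lies inside [lower, upper].
def in_range(pixel, lower, upper):
    inside = all(lo <= p <= hi for p, lo, hi in zip(pixel, lower, upper))
    return 255 if inside else 0

lower_blue, upper_blue = (60, 35, 140), (180, 255, 255)
pixels = [(120, 200, 200), (10, 10, 10)]   # one "blue" HSV pixel, one not
mask = [in_range(p, lower_blue, upper_blue) for p in pixels]
print(mask)  # [255, 0]
```

Applying this test to every pixel of an image is exactly what produces the binary mask used below.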
Below is the code:
Python3
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()

    # It converts the BGR color space of image to HSV color space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Threshold of blue in HSV space
    lower_blue = np.array([60, 35, 140])
    upper_blue = np.array([180, 255, 255])

    # preparing the mask to overlay
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # The black region in the mask has the value of 0,
    # so when multiplied with original image removes all non-blue regions
    result = cv2.bitwise_and(frame, frame, mask = mask)

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('result', result)

    # press 'q' to stop the loop
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
cap.release()
Original Image-
Masked Image-
Blue Color segmented regions-
trevorspreadbury
Image-Processing
OpenCV
Advanced Computer Subject
Python Programs
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Naive Bayes Classifiers
Linear Regression (Python Implementation)
Removing stop words with NLTK in Python
ML | Linear Regression
Apriori Algorithm
Python program to convert a list to string
Python | Get dictionary keys as a list
Python | Split string into list of characters
Python program to check whether a number is Prime or not
Python | Convert a list to dictionary
|
[
{
"code": null,
"e": 41162,
"s": 41134,
"text": "\n16 Feb, 2021"
},
{
"code": null,
"e": 42326,
"s": 41162,
"text": "Colour segmentation or color filtering is widely used in OpenCV for identifying specific objects/regions having a specific color. The most widely used color space is RGB color space, it is called an additive color space as the three color shades add up to give color to the image. To identify a region of a specific color, put the threshold and create a mask to separate the different colors. HSV color space is much more useful for this purpose as the colors in HSV space are much more localized thus can be easily separated. Color Filtering has many applications and uses cases such as in Cryptography, infrared analysis, food preservation of perishable foods, etc. In such cases, the concepts of Image processing can be used to find out or extract out regions of a particular color. For color segmentation, all we need is the threshold values or the knowledge of the lower bound and upper bound range of colors in one of the color spaces. It works best in the Hue-Saturation-Value color space. After specifying the range of color to be segmented, it is needed to create a mask accordingly and by using it, a particular region of interest can be separated out. "
},
{
"code": null,
"e": 42347,
"s": 42326,
"text": "Below is the code: "
},
{
"code": null,
"e": 42355,
"s": 42347,
"text": "Python3"
},
{
"code": "import cv2import numpy as np cap = cv2.VideoCapture(0) while(1): _, frame = cap.read() # It converts the BGR color space of image to HSV color space hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) # Threshold of blue in HSV space lower_blue = np.array([60, 35, 140]) upper_blue = np.array([180, 255, 255]) # preparing the mask to overlay mask = cv2.inRange(hsv, lower_blue, upper_blue) # The black region in the mask has the value of 0, # so when multiplied with original image removes all non-blue regions result = cv2.bitwise_and(frame, frame, mask = mask) cv2.imshow('frame', frame) cv2.imshow('mask', mask) cv2.imshow('result', result) cv2.waitKey(0) cv2.destroyAllWindows()cap.release()",
"e": 43106,
"s": 42355,
"text": null
},
{
"code": null,
"e": 43124,
"s": 43106,
"text": "Original Image- "
},
{
"code": null,
"e": 43140,
"s": 43124,
"text": "Masked Image- "
},
{
"code": null,
"e": 43172,
"s": 43140,
"text": "Blue Color segmented regions- "
},
{
"code": null,
"e": 43191,
"s": 43174,
"text": "trevorspreadbury"
},
{
"code": null,
"e": 43208,
"s": 43191,
"text": "Image-Processing"
},
{
"code": null,
"e": 43215,
"s": 43208,
"text": "OpenCV"
},
{
"code": null,
"e": 43241,
"s": 43215,
"text": "Advanced Computer Subject"
},
{
"code": null,
"e": 43257,
"s": 43241,
"text": "Python Programs"
},
{
"code": null,
"e": 43355,
"s": 43257,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 43364,
"s": 43355,
"text": "Comments"
},
{
"code": null,
"e": 43377,
"s": 43364,
"text": "Old Comments"
},
{
"code": null,
"e": 43401,
"s": 43377,
"text": "Naive Bayes Classifiers"
},
{
"code": null,
"e": 43443,
"s": 43401,
"text": "Linear Regression (Python Implementation)"
},
{
"code": null,
"e": 43483,
"s": 43443,
"text": "Removing stop words with NLTK in Python"
},
{
"code": null,
"e": 43506,
"s": 43483,
"text": "ML | Linear Regression"
},
{
"code": null,
"e": 43524,
"s": 43506,
"text": "Apriori Algorithm"
},
{
"code": null,
"e": 43567,
"s": 43524,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 43606,
"s": 43567,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 43652,
"s": 43606,
"text": "Python | Split string into list of characters"
},
{
"code": null,
"e": 43709,
"s": 43652,
"text": "Python program to check whether a number is Prime or not"
}
] |
How to hold key down with Selenium?
|
We can hold a key down with Selenium webdriver. We mostly hold down the CONTROL/SHIFT/ALT keys and then press other keys, so merely sending the modifier keys like Keys.CONTROL, Keys.SHIFT or Keys.ALT is not sufficient.
To hold down a key simultaneously while another key is being pressed, we use the keyDown() and keyUp() methods. Both these methods accept the modifier key as a parameter.
The action of these two methods on a key yields a special functionality of a key. All these methods are a part of Actions class in Selenium. We have to add the import org.openqa.selenium.interactions.Actions package to our code for using the methods under Actions class.
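Before the Java example, the semantics of holding a modifier can be illustrated with a small simulation (plain Python, not Selenium itself; all names below are hypothetical): while SHIFT is "held", every key sent is affected by it, until the matching key-up releases it.

```python
# Illustrative simulation of keyDown/keyUp semantics (not Selenium code):
# while SHIFT is held, every key sent is affected by it.
class KeyboardSim:
    def __init__(self):
        self.shift_held = False
        self.typed = ""

    def key_down_shift(self):
        self.shift_held = True   # like a.keyDown(Keys.SHIFT)

    def key_up_shift(self):
        self.shift_held = False  # like a.keyUp(Keys.SHIFT)

    def send_keys(self, text):
        # held SHIFT uppercases the keystrokes, as in the real browser
        self.typed += text.upper() if self.shift_held else text

kb = KeyboardSim()
kb.key_down_shift()
kb.send_keys("hello")
kb.key_up_shift()
print(kb.typed)  # HELLO
```

This is why keyDown() and keyUp() must be paired: forgetting the keyUp() leaves the modifier pressed for all subsequent actions.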
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.interactions.Action;
import org.openqa.selenium.interactions.Actions;
public class MetdKeyDown{
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver","C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
String url = "https://www.tutorialspoint.com/index.htm";
driver.get(url);
driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
// identify element
WebElement l = driver.findElement(By.id("gsc-i-id1"));
// Actions class
Actions a = new Actions(driver);
// moveToElement() and then click()
a.moveToElement(l).click();
//enter text with keyDown() SHIFT key ,keyUp() then build() ,perform()
a.keyDown(Keys.SHIFT);
a.sendKeys("hello").keyUp(Keys.SHIFT).build().perform();
      driver.quit();
}
}
|
[
{
"code": null,
"e": 1297,
"s": 1062,
"text": "We can hold a key down with Selenium webdriver. We mostly utilize the CONTROL/SHIFT/ALT keys to hold down and then click on other keys. So, only mentioning the modifier keys like keys.CONTROL/ keys.SHIFT or Keys.ALT is not\nsufficient."
},
{
"code": null,
"e": 1468,
"s": 1297,
"text": "To hold down a key simultaneously while another key is being pressed, we use the keyDown() and keyUp() methods. Both these methods accept the modifier key as a parameter."
},
{
"code": null,
"e": 1739,
"s": 1468,
"text": "The action of these two methods on a key yields a special functionality of a key. All these methods are a part of Actions class in Selenium. We have to add the import org.openqa.selenium.interactions.Actions package to our code for using the methods under Actions class."
},
{
"code": null,
"e": 2857,
"s": 1739,
"text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.Keys;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.interactions.Action;\nimport org.openqa.selenium.interactions.Actions;\n\npublic class MetdKeyDown{\n public static void main(String[] args) {\nSystem.setProperty(\"webdriver.chrome.driver\",\"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"https://www.tutorialspoint.com/index.htm\";\n driver.get(url);\n driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);\n // identify element\n WebElement l = driver.findElement(By.id(\"gsc-i-id1\"));\n // Actions class\n Actions a = new Actions(driver);\n // moveToElement() and then click()\n a.moveToElement(l).click();\n //enter text with keyDown() SHIFT key ,keyUp() then build() ,perform()\n a.keyDown(Keys.SHIFT);\n a.sendKeys(\"hello\").keyUp(Keys.SHIFT).build().perform();\n driver.quit()\n }\n}"
}
] |
Monkey Patching in Ruby - GeeksforGeeks
|
22 Oct, 2020
In Ruby, a Monkey Patch (MP) refers to a dynamic modification of a class, that is, adding new methods or overwriting existing ones at runtime. Ruby provides this ability to give coders more flexibility.
Ruby, being a very powerful and agile language, gives the developer the extreme power of patching a class. In practice, its importance lies in patching a buggy class and making its methods behave in a way that solves the purpose. As the saying goes, with great power comes great responsibility: it is the developer's responsibility to use this feature to patch a class without breaking its original functionality, since patching overrides the original methods.
It is a very important feature and needs extra care while using it. There are some basic properties for monkey patching in ruby listed as follows.
If multiple libraries have the same method, the first one will get overwritten.If the class is not imported before the patch, it will lead to a redefinition of the class instead of patching it.All the patches are global in nature and can actually disrupt multiple libraries.Monkey patching is used to patch up classes that are owned by the coder and it’s not recommended to patch a class already defined in Ruby which are used frequently like Hashes, Lists, etc.
If multiple libraries have the same method, the first one will get overwritten.
If the class is not imported before the patch, it will lead to a redefinition of the class instead of patching it.
All the patches are global in nature and can actually disrupt multiple libraries.
Monkey patching is used to patch up classes that are owned by the coder and it’s not recommended to patch a class already defined in Ruby which are used frequently like Hashes, Lists, etc.
The general syntax for applying a patch is to simply define the method inside a class declaration that has the same name as the class to be patched. Syntax:
class [class_name]
def [method_to_patch]:
#do_something
end
end
Example: In this example, monkey patching is used to block the user from reversing a string.
Ruby
# Ruby program to illustrate monkey patching

# Before applying patching
puts "Before blocking reverse: " + "Geeks for Geeks".reverse

# Apply patching
class String
  def reverse
    "Reversing blocked!!"
  end
end

# After applying patching
puts "After blocking reverse: " + "Geeks for Geeks".reverse
Output:
Before blocking reverse: skeeG rof skeeG
After blocking reverse: Reversing blocked!!
In the above code, the String class of the ruby is altered in order to block the functionality of reversing the string. After defining the method for the patch, the patch actually blocks a basic functionality of the class, thus to be used with care.
Example: In this example, monkey patching is used to block the user from deleting any key from a Hash.
Ruby
# Ruby program to illustrate monkey patching

# Before applying patching
hash = { "Geeks"=>"G", "for"=>"F", "geeks"=>"g" }

puts "Before blocking reverse: "
hash.delete "for"
puts "Deleted 'for' key"
puts hash

# Apply patching
class Hash
  def delete(key)
    "Delete blocked!!"
  end
end

# After applying patching
hash = { "Geeks"=>"G", "for"=>"F", "geeks"=>"g" }

puts "Before blocking reverse: "
puts "Deleting 'for' key but " + hash.delete("for")
puts hash
Output:
Before blocking reverse:
Deleted 'for' key
{"Geeks"=>"G", "geeks"=>"g"}
Before blocking reverse:
Deleting 'for' key but Delete blocked!!
{"Geeks"=>"G", "for"=>"F", "geeks"=>"g"}
Similar to the above code, here the deletion functionality is blocked due to the patch.
Some basic tips on when to actually use monkey patch:
Cases when reopening a class is required for example in the case of code refactoring and simplification when the method is written in a dirty manner inside the class.
When patching is required for a developer’s own class methods. It is not well recommended to use a monkey patch in such a case though.
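For comparison only (a hypothetical Python aside, not part of the Ruby examples): the same kind of runtime patching can be expressed in Python by reassigning a method on a class object, with the same caveat that the original behaviour is lost globally.

```python
# Hypothetical Python analogue of a Ruby monkey patch:
# reassigning a method on a class at runtime overrides it for all instances.
class Greeter:
    def greet(self):
        return "Hello, Geeks!"

def patched_greet(self):
    return "Greeting blocked!!"

print(Greeter().greet())       # original behaviour: Hello, Geeks!
Greeter.greet = patched_greet  # apply the "patch"
print(Greeter().greet())       # patched behaviour: Greeting blocked!!
```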
Picked
Ruby
Ruby | Array reverse() function
Method Overloading In Ruby
Ruby | Array transpose() function
Instance Variables in Ruby
Ruby | Array class insert() function
Ruby Static Members
Ruby | Method overriding
Ruby | Array replace() function
Ruby | clear() function
Ruby | Array unshift() function
|
[
{
"code": null,
"e": 23595,
"s": 23567,
"text": "\n22 Oct, 2020"
},
{
"code": null,
"e": 23849,
"s": 23595,
"text": "In Ruby, a Monkey Patch (MP) is referred to as a dynamic modification to a class and by a dynamic modification to a class means to add new or overwrite existing methods at runtime. This ability is provided by ruby to give more flexibility to the coders."
},
{
"code": null,
"e": 24322,
"s": 23849,
"text": "Ruby, being a very powerful and agile language, provides this extreme power to the developer of patching a class. In real the importance of it is to patch a buggy class and make its method behave in a manner to solve the purpose. As said with great power comes great responsibilities, it’s a big responsibility on the developer to use this feature to patch the class rather than to brake its original functionality as patching leads to overriding of the original methods. "
},
{
"code": null,
"e": 24469,
"s": 24322,
"text": "It is a very important feature and needs extra care while using it. There are some basic properties for monkey patching in ruby listed as follows."
},
{
"code": null,
"e": 24932,
"s": 24469,
"text": "If multiple libraries have the same method, the first one will get overwritten.If the class is not imported before the patch, it will lead to a redefinition of the class instead of patching it.All the patches are global in nature and can actually disrupt multiple libraries.Monkey patching is used to patch up classes that are owned by the coder and it’s not recommended to patch a class already defined in Ruby which are used frequently like Hashes, Lists, etc."
},
{
"code": null,
"e": 25012,
"s": 24932,
"text": "If multiple libraries have the same method, the first one will get overwritten."
},
{
"code": null,
"e": 25127,
"s": 25012,
"text": "If the class is not imported before the patch, it will lead to a redefinition of the class instead of patching it."
},
{
"code": null,
"e": 25209,
"s": 25127,
"text": "All the patches are global in nature and can actually disrupt multiple libraries."
},
{
"code": null,
"e": 25398,
"s": 25209,
"text": "Monkey patching is used to patch up classes that are owned by the coder and it’s not recommended to patch a class already defined in Ruby which are used frequently like Hashes, Lists, etc."
},
{
"code": null,
"e": 25558,
"s": 25398,
"text": "The general syntax for applying a patch is to simply make a method inside a class, having a class name same as that on which patch has to be applied. Syntax "
},
{
"code": null,
"e": 25641,
"s": 25558,
"text": "class [class_name]\n def [method_to_patch]:\n #do_something\n end\nend \n"
},
{
"code": null,
"e": 25731,
"s": 25641,
"text": "Example:In this example, Monkey patching is used to block the user to reverse the string."
},
{
"code": null,
"e": 25736,
"s": 25731,
"text": "Ruby"
},
{
"code": "# Ruby program to illustrate monkey patching # Before applying patchingputs \"Before blocking reverse: \" + \"Geeks for Geeks\".reverse # Apply patchingclass String def reverse \"Reversing blocked!!\" endend # After applying patchingputs \"After blocking reverse: \" + \"Geeks for Geeks\".reverse",
"e": 26045,
"s": 25736,
"text": null
},
{
"code": null,
"e": 26053,
"s": 26045,
"text": "Output:"
},
{
"code": null,
"e": 26139,
"s": 26053,
"text": "Before blocking reverse: skeeG rof skeeG\nAfter blocking reverse: Reversing blocked!!\n"
},
{
"code": null,
"e": 26390,
"s": 26139,
"text": "In the above code, the String class of the ruby is altered in order to block the functionality of reversing the string. After defining the method for the patch, the patch actually blocks a basic functionality of the class, thus to be used with care. "
},
{
"code": null,
"e": 26487,
"s": 26390,
"text": "Example:In this example, Monkey patching is used to block the user to delete any key from Hash. "
},
{
"code": null,
"e": 26492,
"s": 26487,
"text": "Ruby"
},
{
"code": "# Ruby program to illustrate monkey patching # Before applying patchinghash = { \"Geeks\"=>\"G\", \"for\"=>\"F\", \"geeks\"=>\"g\" } puts \"Before blocking reverse: \" hash.delete \"for\"puts \"Deleted 'for' key\"puts hash # Apply patchingclass Hash def delete(key) \"Delete blocked!!\" endend # After applying patchinghash = { \"Geeks\"=>\"G\", \"for\"=>\"F\", \"geeks\"=>\"g\" } puts \"Before blocking reverse: \"puts \"Deleting 'for' key but \" + hash.delete(\"for\")puts hash",
"e": 27004,
"s": 26492,
"text": null
},
{
"code": null,
"e": 27012,
"s": 27004,
"text": "Output:"
},
{
"code": null,
"e": 27193,
"s": 27012,
"text": "Before blocking reverse: \nDeleted 'for' key\n{\"Geeks\"=>\"G\", \"geeks\"=>\"g\"}\nBefore blocking reverse: \nDeleting 'for' key but Delete blocked!!\n{\"Geeks\"=>\"G\", \"for\"=>\"F\", \"geeks\"=>\"g\"}\n"
},
{
"code": null,
"e": 27282,
"s": 27193,
"text": "Similar to the above code, here the deletion functionality is blocked due to the patch. "
},
{
"code": null,
"e": 27337,
"s": 27282,
"text": "Some basic tips on when to actually use monkey patch: "
},
{
"code": null,
"e": 27504,
"s": 27337,
"text": "Cases when reopening a class is required for example in the case of code refactoring and simplification when the method is written in a dirty manner inside the class."
},
{
"code": null,
"e": 27640,
"s": 27504,
"text": "When patching is required for a developer’s own class methods. It is not well recommended to use a monkey patch in such a case though. "
},
{
"code": null,
"e": 27647,
"s": 27640,
"text": "Picked"
},
{
"code": null,
"e": 27652,
"s": 27647,
"text": "Ruby"
},
{
"code": null,
"e": 27750,
"s": 27652,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27759,
"s": 27750,
"text": "Comments"
},
{
"code": null,
"e": 27772,
"s": 27759,
"text": "Old Comments"
},
{
"code": null,
"e": 27804,
"s": 27772,
"text": "Ruby | Array reverse() function"
},
{
"code": null,
"e": 27831,
"s": 27804,
"text": "Method Overloading In Ruby"
},
{
"code": null,
"e": 27865,
"s": 27831,
"text": "Ruby | Array transpose() function"
},
{
"code": null,
"e": 27892,
"s": 27865,
"text": "Instance Variables in Ruby"
},
{
"code": null,
"e": 27929,
"s": 27892,
"text": "Ruby | Array class insert() function"
},
{
"code": null,
"e": 27949,
"s": 27929,
"text": "Ruby Static Members"
},
{
"code": null,
"e": 27974,
"s": 27949,
"text": "Ruby | Method overriding"
},
{
"code": null,
"e": 28006,
"s": 27974,
"text": "Ruby | Array replace() function"
},
{
"code": null,
"e": 28030,
"s": 28006,
"text": "Ruby | clear() function"
}
] |
HandleBars Templating in ExpressJS - GeeksforGeeks
|
27 Apr, 2020
Handlebars.js is a templating engine similar to the ejs module in node.js, but more powerful and simple to use. It ensures minimum templating and is a logicless engine that keeps the view and the code separated. It can be used with express as the hbs module, available through npm. HandleBars can be used to render web pages to the client side from data on the server-side.
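The core idea of logicless templating, substituting `{{placeholder}}` markers from a context object, can be sketched in a few lines of Python (an illustration only; real Handlebars does much more, including helpers and blocks):

```python
import re

# Minimal sketch of "logicless" {{placeholder}} substitution
# (illustration only; real Handlebars supports far more).
def render(template, context):
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(context.get(m.group(1), "")),
                  template)

page = render("<p>{{name}} is {{age}} years old.</p>",
              {"name": "Rohan", "age": 26})
print(page)  # <p>Rohan is 26 years old.</p>
```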
Command to install hbs module:
npm i hbs
To use handlebars in express, we need to store HTML code into a .hbs extension in the ‘views’ folder in the source directory as hbs looks for the pages in the views folder.
The first thing we need to do in index.js file is to require the hbs module
var express = require('express')
var hbs = require('hbs')
var app = express()
Now, we need to change the default view engine.
app.set('view engine', 'hbs')
In case the views directory is undesirable, you can change the viewpath by the following command:
app.set('views', <pathname>)
Now let us create a demo.hbs file in our views directory with the following content:
<!DOCTYPE html>
<html>
  <body>
    <p>This is a Demo Page on localhost!</p>
  </body>
</html>
Now, we render our webpage through express to the local server.
app.get('/', (req, res)=>{
    res.render('demo')
})

app.listen(3000)
Now, open your browser, type localhost:3000 in the address bar, and verify the webpage on your server.
Now we will see how we can dynamically link the pages to server-side data. In index.js, we declare a demo object; in practice, the object can be the result of a request body and/or a database query.
var demo = {
    name : 'Rohan',
    age : 26
}

app.get('/', (req, res)=>{
    res.render('dynamic', {demo : demo})
})
Here we send the demo object as a demo to our hbs page. We can retrieve the information in dynamic.hbs present in the views folder.
<!DOCTYPE html>
<html>
  <body>
    <p>{{demo.name}} is {{demo.age}} years old.</p>
  </body>
</html>
Output:
Rohan is 26 years old
Given multiple values, we can iterate over all of them to perform the same functionality/display for each of the elements.
Let’s take an example, add the following code to your index.js and run the server and get a response.
var projects = {
    name : 'Rahul',
    skills : ['Data Mining', 'BlockChain Dev', 'node.js']
}

app.get('/projects', (req, res)=>{
    res.render('projects', {projects : projects});
})
where our views/projects.hbs looks something like:
<!DOCTYPE html>
<html>
  <body>
    {{projects.name}} has the following skills : <br>
    {{#each projects.skills}}
      {{this}} <br>
    {{/each}}
  </body>
</html>
Output:
Rahul has the following skills :
Data Mining
BlockChain Dev
node.js
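The `{{#each}}` block repeats a fragment once per item of the iterable. The idea can be sketched in Python (a hypothetical `render_each` helper, an illustration only):

```python
# Hypothetical sketch of the {{#each}} idea: repeat a fragment per item,
# with {this} standing in for the current element.
def render_each(items, fragment):
    return "".join(fragment.format(this=item) for item in items)

skills = ['Data Mining', 'BlockChain Dev', 'node.js']
html = render_each(skills, "{this} <br> ")
print(html)  # Data Mining <br> BlockChain Dev <br> node.js <br>
```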
Node.js-Misc
Node.js
Web Technologies
Difference between dependencies, devDependencies and peerDependencies
Node.js Export Module
Mongoose Populate() Method
Mongoose find() Function
How to connect Node.js with React.js ?
Remove elements from a JavaScript Array
Convert a string to an integer in JavaScript
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS?
Difference between var, let and const keywords in JavaScript
System.lineSeparator() method in Java With Examples - GeeksforGeeks
|
19 Jun, 2018
The lineSeparator() is a built-in method in Java which returns the system-dependent line separator string. It always returns the same value – the initial value of the system property line.separator.
Syntax:
public static String lineSeparator()
Parameters: This method does not take any parameters.
Return Value: It returns the system-dependent line separator string — “\n” on UNIX-like systems and “\r\n” on Windows systems.
Exceptions: This method does not throw any exceptions.
Below programs illustrate the System.lineSeparator() method:
Program 1: To illustrate the working of static String lineSeparator() method.
// Java program to demonstrate working
// of the static String lineSeparator() method
public class LineSeparatorExample {
    public static void main(String[] args)
    {
        String s = System.lineSeparator();

        // Print the code point of each character
        // in the separator string
        for (char c : s.toCharArray()) {
            System.out.println((int)c);
        }
    }
}
10
Note: Here the program prints 10, the character code of ‘\n’ — so “\n” is the line separator on this (UNIX-like) system.
Program 2: To illustrate the working of the static String lineSeparator() method on a Windows system.
// Java program to demonstrate working
// of the static String lineSeparator() method
class SystemDemo {
    public static void main(String args[])
    {
        Integer x = 636;  // unused; left over from the original example
        System.out.println(System.lineSeparator());
    }
}
\r\n
Note: Here it returns “\r\n” since it is a Microsoft Windows system.
Reference: https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#lineSeparator()
Largest value of K such that both K and -K exist in Array in given index range [L, R] - GeeksforGeeks
|
30 Nov, 2021
Given an array, arr[] of N integers and 2 integers L and R, the task is to return the largest integer K greater than 0 and L<=K<=R, such that both values K and -K exist in array arr[]. If there is no such integer, then return 0.
Examples:
Input: N = 5, arr[] = {3, 2, -2, 5, -3}, L = 2, R = 3Output: 3Explanation: The largest value of K in the range [2, 3] such that both K and -K exist in the array is 3 as 3 is present at arr[0] and -3 is present at arr[4].
Input: N = 4, arr[] = {1, 2, 3, -4}, L = 1, R = 4Output: 0
Approach: The idea is to traverse the array and add each element into a Set, simultaneously checking whether its negation, i.e. arr[i] * -1, is already in the Set. If it is found, push abs(arr[i]) into the vector possible[]. Follow the steps below to solve the problem:
Initialize an unordered_set<int> s to store the elements.
Initialize a vector possible[] to store the possible answers.
Iterate over the range [0, N) using the variable i and perform the following steps:
  - If -arr[i] is present in the set s, then push the value abs(arr[i]) into the vector possible[].
  - Else add the element arr[i] into the set s.
Initialize a variable ans as 0 to store the answer.
Iterate over the range [0, size), where size is the size of the vector possible[], using the variable i, and perform the following step:
  - If possible[i] is greater than or equal to L and less than or equal to R, then update the value of ans as the max of ans and possible[i].
After performing the above steps, print the value of ans as the answer.
Below is the implementation of the above approach.
C++14
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find the maximum value of K
int findMax(int N, int arr[], int L, int R)
{
    // Using a set to store the elements
    unordered_set<int> s;

    // Vector to store the possible answers
    vector<int> possible;

    // Store the answer
    int ans = 0;

    // Traverse the array
    for (int i = 0; i < N; i++) {

        // If the set has its negation,
        // record it as a candidate
        if (s.find(arr[i] * -1) != s.end())
            possible.push_back(abs(arr[i]));
        else
            s.insert(arr[i]);
    }

    // Find the maximum possible answer
    for (int i = 0; i < possible.size(); i++) {
        if (possible[i] >= L and possible[i] <= R)
            ans = max(ans, possible[i]);
    }
    return ans;
}

// Driver Code
int main()
{
    int arr[] = { 3, 2, -2, 5, -3 }, N = 5, L = 2, R = 3;
    int max = findMax(N, arr, L, R);

    // Display the output
    cout << max << endl;
    return 0;
}
// Java program for the above approach
import java.util.*;

public class GFG {

    // Function to find the maximum value of K
    static int findMax(int N, int[] arr, int L, int R)
    {
        // Using a set to store the elements
        HashSet<Integer> s = new HashSet<Integer>();

        // ArrayList to store the possible answers
        ArrayList<Integer> possible = new ArrayList<Integer>();

        // Store the answer
        int ans = 0;

        // Traverse the array
        for (int i = 0; i < N; i++) {

            // If the set has its negation,
            // record it as a candidate
            if (s.contains(arr[i] * -1))
                possible.add(Math.abs(arr[i]));
            else
                s.add(arr[i]);
        }

        // Find the maximum possible answer
        for (int i = 0; i < possible.size(); i++) {
            if (possible.get(i) >= L && possible.get(i) <= R) {
                ans = Math.max(ans, possible.get(i));
            }
        }
        return ans;
    }

    // Driver Code
    public static void main(String args[])
    {
        int[] arr = { 3, 2, -2, 5, -3 };
        int N = 5, L = 2, R = 3;
        int max = findMax(N, arr, L, R);

        // Display the output
        System.out.println(max);
    }
}

// This code is contributed by Samim Hossain Mondal.
# Python3 program for the above approach

# Function to find the maximum value of K
def findMax(N, arr, L, R):

    # Using a set to store the elements
    s = set()

    # List to store the possible answers
    possible = []

    # Store the answer
    ans = 0

    # Traverse the array
    for i in range(N):

        # If the set has its negation,
        # record it as a candidate
        if arr[i] * -1 in s:
            possible.append(abs(arr[i]))
        else:
            s.add(arr[i])

    # Find the maximum possible answer
    for i in range(len(possible)):
        if (possible[i] >= L and possible[i] <= R):
            ans = max(ans, possible[i])

    return ans

# Driver Code
if __name__ == "__main__":

    arr = [3, 2, -2, 5, -3]
    N = 5
    L = 2
    R = 3

    Max = findMax(N, arr, L, R)

    # Display the output
    print(Max)

# This code is contributed by Ankthon
// C# program for the above approach
using System;
using System.Collections;
using System.Collections.Generic;

public class GFG {

    // Function to find the maximum value of K
    static int findMax(int N, int[] arr, int L, int R)
    {
        // Using a set to store the elements
        HashSet<int> s = new HashSet<int>();

        // ArrayList to store the possible answers
        ArrayList possible = new ArrayList();

        // Store the answer
        int ans = 0;

        // Traverse the array
        for (int i = 0; i < N; i++) {

            // If the set has its negation,
            // record it as a candidate
            if (s.Contains(arr[i] * -1))
                possible.Add(Math.Abs(arr[i]));
            else
                s.Add(arr[i]);
        }

        // Find the maximum possible answer
        for (int i = 0; i < possible.Count; i++) {
            if ((int)possible[i] >= L && (int)possible[i] <= R) {
                ans = Math.Max(ans, (int)possible[i]);
            }
        }
        return ans;
    }

    // Driver Code
    public static void Main()
    {
        int[] arr = { 3, 2, -2, 5, -3 };
        int N = 5, L = 2, R = 3;
        int max = findMax(N, arr, L, R);

        // Display the output
        Console.Write(max);
    }
}

// This code is contributed by Samim Hossain Mondal.
<script>
    // JavaScript program to implement
    // the above approach

    // Function to find the maximum value of K
    function findMax(N, arr, L, R)
    {
        // Using a set to store the elements
        let s = new Set();

        // Array to store the possible answers
        let possible = [];

        // Store the answer
        let ans = 0;

        // Traverse the array
        for (let i = 0; i < N; i++) {

            // If the set has its negation,
            // record it as a candidate
            if (s.has(arr[i] * -1))
                possible.push(Math.abs(arr[i]));
            else
                s.add(arr[i]);
        }

        // Find the maximum possible answer
        for (let i = 0; i < possible.length; i++) {
            if (possible[i] >= L && possible[i] <= R)
                ans = Math.max(ans, possible[i]);
        }
        return ans;
    }

    // Driver Code
    let arr = [3, 2, -2, 5, -3];
    let N = 5, L = 2, R = 3;
    let max = findMax(N, arr, L, R);

    // Display the output
    document.write(max + '<br>');

    // This code is contributed by Potta Lokesh
</script>
3
Time Complexity: O(N)
Auxiliary Space: O(N)
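The set-based approach can be sanity-checked against a brute-force scan over [L, R] on small random inputs. The sketch below is a Python rendition with hypothetical helper names (find_max, brute_force); it assumes the same semantics as the implementations above:

```python
# Cross-check the set-based approach against a brute-force scan.
import random

def find_max(arr, L, R):
    # Same logic as the article's findMax: record abs(x) when -x was seen.
    seen, possible = set(), []
    for x in arr:
        if -x in seen:
            possible.append(abs(x))
        else:
            seen.add(x)
    return max((k for k in possible if L <= k <= R), default=0)

def brute_force(arr, L, R):
    # Directly test every K in [L, R] for membership of K and -K.
    s = set(arr)
    return max((k for k in range(L, R + 1) if k in s and -k in s), default=0)

random.seed(0)
for _ in range(1000):
    arr = [random.randint(-5, 5) for _ in range(random.randint(1, 10))]
    assert find_max(arr, 1, 5) == brute_force(arr, 1, 5)

print(find_max([3, 2, -2, 5, -3], 2, 3))  # 3
```

Both routines agree on every random trial, which supports the claim that recording a candidate only when its negation has already been seen captures exactly the pairs {K, -K} present in the array.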
How to Query PostgreSQL using Python (with SSH) in 3 Steps | by Erik Yan | Towards Data Science
|
*- This story describes how to use this script to quickly set up a connection to a remote PostgreSQL database, with or without SSH .pem authentication. -*
Often data is housed within databases like PostgreSQL on remote servers, which can make it difficult for data analysts to access the data quickly. Sometimes, there are intermediate teams that assist analysts with retrieving the data from the database. In this story we’ll talk about how you can directly query the data using Python, without the need for intermediates.
If you know how to use Python (mainly for analytics and visualizations) but don’t have experience with databases or how to interact with them, then this post is for you!
There are a variety of tools at our disposal that will enable you to interact/retrieve data yourself from a database and spend more time extracting insights.
STEP 1: Install all required packages.
STEP 2: Import or paste query.py contents into your Notebook.
STEP 3: Start querying!
This story assumes the following:
You have Python already installed on your local environment
You are using a live-code environment like Jupyter Notebooks
Have been given the necessary credentials to SSH into the remote server (.pem certificate) and query the PostgreSQL database (username and password).
First we’ll need to install several packages from terminal.
pip3 install paramiko
pip3 install sshtunnel
pip3 install SQLAlchemy
pip3 install pandas
Next, you’ll need to either save the query.py file from the repo to your working directory (where your Jupyter Notebook file is) or simply copy the contents of the file into your Notebook directly.
If you are placing the query.py file in your working directory, then include the following import line in your Notebook:
from query import *
Alternatively, simply copy and paste the code below into a code cell in your Notebook:
Now we're ready to start querying! The defined class only provides a handful of basic functions. Let's walk through how to use the class and what we can do with it.
First, we’ll need to specify our PostgreSQL connection arguments, and SSH arguments (if SSH tunneling is required to access the remote server).
We define pgres as our connection to simplify each time we want to query the database or explore the organizational structure of the database. You will also be prompted for your PostgreSQL username and password, which are stored as temporary variables (best-practice is to save these variables as environment variables instead).
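The contents of the article's query.py are not reproduced in this text. Purely as an illustration, a helper along the lines described — prompting for credentials, opening an SSH tunnel when .pem credentials are supplied, and exposing .schemas()/.tables()/.query() — might look like the sketch below. The class name PostgresConnector and all argument names are assumptions, not the actual query.py API:

```python
# Illustrative sketch only -- the article's actual query.py may differ.
import getpass

import pandas as pd
import sqlalchemy


class PostgresConnector:
    def __init__(self, db_host, db_name,
                 ssh_host=None, ssh_user=None, pem_path=None):
        # Credentials are read interactively and kept as local variables.
        user = input("PostgreSQL username: ")
        password = getpass.getpass("PostgreSQL password: ")
        port = 5432
        if ssh_host is not None:
            # Imported lazily so the class also works without sshtunnel
            # when no tunnel is needed.
            from sshtunnel import SSHTunnelForwarder
            self.tunnel = SSHTunnelForwarder(
                (ssh_host, 22),
                ssh_username=ssh_user,
                ssh_pkey=pem_path,
                remote_bind_address=(db_host, 5432),
            )
            self.tunnel.start()
            # Talk to the forwarded local port instead of the remote host.
            db_host, port = "127.0.0.1", self.tunnel.local_bind_port
        url = f"postgresql://{user}:{password}@{db_host}:{port}/{db_name}"
        self.engine = sqlalchemy.create_engine(url)

    def query(self, sql):
        # Run any SQL statement and return the result as a DataFrame.
        return pd.read_sql(sql, self.engine)

    def schemas(self):
        return self.query(
            "SELECT schema_name FROM information_schema.schemata")

    def tables(self, schema):
        return self.query(
            "SELECT table_name FROM information_schema.tables "
            f"WHERE table_schema = '{schema}'")
```

Under these assumptions, a connection would be created as, e.g., pgres = PostgresConnector('db.internal', 'database_name', ssh_host='1.2.3.4', ssh_user='ubuntu', pem_path='key.pem'), after which pgres.query(sql_statement) returns a pandas DataFrame.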
Next, we can explore the schemas of our given database named ‘database_name’ to find our schema of interest using the .schemas() function.
If we want to explore the schema named ‘schema_name’, we can return the names of the tables within the schema using the .tables() function.
Finally, we can use .query() to run standard SQL queries (for PostgreSQL). In this example, we're querying the column names and data types from the table named 'ey_test_table'.
Try replacing the contents of sql_statement with your own query, and have fun!
GATE | GATE-CS-2004 | Question 15 - GeeksforGeeks
|
28 Jun, 2021
Choose the best matching between Group 1 and Group 2.
 Group-1                 Group-2
 P. Data link            1. Ensures reliable transport of data
                            over a physical point-to-point link
 Q. Network layer        2. Encodes/decodes data for physical
                            transmission
 R. Transport layer      3. Allows end-to-end communication
                            between two processes
                         4. Routes data from one network
                            node to the next
(A) P-1, Q-4, R-3
(B) P-2, Q-4, R-1
(C) P-2, Q-3, R-1
(D) P-1, Q-3, R-2
Answer: (A)
Explanation: The data link layer is the second layer of the OSI Model. This layer is responsible for data transfer between adjacent nodes on the network and provides a point-to-point local delivery framework. So, P matches with 1.
Network layer is the third layer of the OSI Model. This layer is responsible for forwarding of data packets and routing through intermediate routers. So, Q matches with 4.
Transport layer is the fourth layer of the OSI Model. This layer is responsible for delivering data from process to process. So, R matches with 3.
Thus, A is the correct option.
Please comment below if you find anything wrong in the above post.
HTML - <blockquote> Tag
|
The HTML <blockquote> tag is used for indicating long quotations (i.e. quotations that span multiple lines). It should contain only block-level elements within it, and not just plain text.
<!DOCTYPE html>
<html>
<head>
<title>HTML blockquote Tag</title>
</head>
<body>
<blockquote>Browsers generally render blockquote text as indented text. If your
quoted text needs to display within a non-quoted paragraph, you should use the
HTML q tag. Most browsers surround q text with quotation marks.</blockquote>
<q>Browsers generally render blockquote text as indented text. If your quoted text
needs to display within a non-quoted paragraph, you should use the HTML q tag.
Most browsers surround q text with quotation marks.</q>
</body>
</html>
This will produce the following result −
This tag supports all the global attributes described in HTML Attribute Reference
The HTML <blockquote> tag also supports the following additional attributes −
This tag supports all the event attributes described in HTML Events Reference
Predicting Loan Repayment. Introduction | by Imad Dabbura | Towards Data Science
|
The two most critical questions in the lending industry are: 1) How risky is the borrower? 2) Given the borrower’s risk, should we lend to him/her? The answer to the first question determines the interest rate the borrower will have. The interest rate measures, among other things (such as the time value of money), the riskiness of the borrower, i.e. the riskier the borrower, the higher the interest rate. With the interest rate in mind, we can then determine if the borrower is eligible for the loan.
Investors (lenders) provide loans to borrowers in exchange for the promise of repayment with interest. That means the lender only makes profit (interest) if the borrower pays off the loan. However, if he/she doesn’t repay the loan, then the lender loses money.
We’ll be using publicly available data from LendingClub.com. The data covers 9,578 loans funded by the platform between May 2007 and February 2010. The interest rate is provided to us for each borrower. Therefore, we’ll address the second question indirectly by trying to predict whether the borrower will repay the loan by its maturity date. Through this exercise we’ll illustrate three modeling concepts:
What to do with missing values.
Techniques used with imbalanced classification problems.
Illustrate how to build an ensemble model using two methods: blending and stacking, which most likely gives us a boost in performance.
Below is a short description of each feature in the data set:
credit_policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.
purpose: The purpose of the loan such as: credit_card, debt_consolidation, etc.
int_rate: The interest rate of the loan (proportion).
installment: The monthly installments ($) owed by the borrower if the loan is funded.
log_annual_inc: The natural log of the annual income of the borrower.
dti: The debt-to-income ratio of the borrower.
fico: The FICO credit score of the borrower.
days_with_cr_line: The number of days the borrower has had a credit line.
revol_bal: The borrower’s revolving balance.
revol_util: The borrower’s revolving line utilization rate.
inq_last_6mths: The borrower’s number of inquiries by creditors in the last 6 months.
delinq_2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.
pub_rec: The borrower’s number of derogatory public records.
not_fully_paid: indicates whether the loan was not paid back in full (the borrower either defaulted or the borrower was deemed unlikely to pay it back).
Let’s load the data and check:
Data types of each feature
If we have missing values
If we have imbalanced data
Source code that created this post can be found here.
Positive examples = 1533
Negative examples = 8045
Proportion of positive to negative examples = 19.06%
It looks like we have only one categorical feature (“purpose”). Also, six features have missing values (there are no missing values in the labels). Moreover, the data set is pretty imbalanced, as expected, with positive examples (“not fully paid”) making up only 19%. We’ll explain in the next section how to handle all of this after giving an overview of ensemble methods.
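The three checks above can be sketched as follows (a tiny hand-made frame stands in for the real data, and the CSV file name in the comment is an assumption):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for the LendingClub data; in the real notebook this would be
# something like pd.read_csv("loan_data.csv") (file name is an assumption).
df = pd.DataFrame({
    "int_rate": [0.11, 0.13, np.nan, 0.09],
    "fico": [702, 660, 690, np.nan],
    "purpose": ["credit_card", "debt_consolidation", "credit_card", "all_other"],
    "not_fully_paid": [0, 1, 0, 0],
})

# 1) Data types of each feature
print(df.dtypes)

# 2) Count of missing values per feature
print(df.isnull().sum())

# 3) Class balance of the label
pos = (df["not_fully_paid"] == 1).sum()
neg = (df["not_fully_paid"] == 0).sum()
print(f"Proportion of positive to negative examples = {100 * pos / neg:.2f}%")
```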
Ensemble methods can be defined as combining several different models (base learners) into a final model (meta-learner) to reduce the generalization error. They rely on the assumption that each model looks at a different aspect of the data and therefore captures part of the truth. Combining well-performing models that were trained independently captures more of the truth than any single model. This results in more accurate predictions and lower generalization error.
Almost always ensemble model performance gets improved as we add more models.
Try to combine models that are as different as possible. This reduces the correlation between the models, which improves the performance of the ensemble and can lead it to significantly outperform the best single model. In the worst case, where all models are perfectly correlated, the ensemble has the same performance as the best model, and sometimes lower if some models are very bad. As a result, pick models that are as good as possible.
Different ensemble methods construct the ensemble of models in different ways. Below are the most common methods:
Blending: Averaging the predictions of all models.
Bagging: Build different models on different datasets and then take the majority vote from all the models. Given the original dataset, we sample with replacement to get a dataset of the same size as the original. Therefore, each dataset will include, on average, 2/3 of the original data and the remaining 1/3 will be duplicates. Since each model is built on a different dataset, it can be seen as a different model. Random Forest improves on default bagged trees by reducing the likelihood of strong features being picked on every split. In other words, it reduces the number of features available at each split from n features to, for example, n/2 or log(n) features. This reduces the correlation between the trees and, in turn, the variance.
Boosting: Build models sequentially, so that each model learns from the residuals of the previous model. The final output is the sum of the outputs of the individual models, each weighted by the learning rate λ. Boosting reduces bias by learning sequentially from the residuals of the previous trees (models).
Stacking: Build k models called base learners. Then fit a model to the output of the base learners to predict the final output.
Since we’ll be using Random Forest (bagging) and Gradient Boosting (boosting) classifiers as base learners in the ensemble model, we’ll illustrate only the averaging and stacking ensemble methods. Therefore, the modeling section consists of three parts:
Strategies to deal with missing values.
Strategies to deal with imbalanced datasets.
Build ensemble models.
Before going further, the following data preprocessing steps will be applicable to all models:
Create dummy variables from the feature “purpose” since it’s a nominal (not ordinal) categorical variable. It’s also a good practice to drop the first one to avoid linear dependency between the resulting features, since some algorithms may struggle with this issue.
Split the data into a training set (70%) and a test set (30%). The training set will be used to fit the models, and the test set will be used to evaluate the best model to get an estimate of the generalization error. Instead of a validation set to tune hyperparameters and evaluate different models, we’ll use 10-fold cross validation because it gives a more reliable estimate of the generalization error.
Standardize the data. We’ll use RobustScaler so that the standardization is less influenced by outliers, i.e. more robust. It centers the data around the median and scales it using the interquartile range (IQR). This step will be included in the pipelines for each model as a transformer, so we will not do it separately.
# Create dummy variables from the feature purpose
df = pd.get_dummies(df, columns=["purpose"], drop_first=True)
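The split and scaling steps can be sketched as follows (synthetic arrays stand in for the loan features; in the real setup RobustScaler is a transformer step inside each model's pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler

# Illustrative feature matrix / labels; in the notebook these come from df.
X = np.random.RandomState(0).normal(size=(100, 5))
y = np.random.RandomState(1).randint(0, 2, size=100)

# 70/30 split, stratified so both sets keep the same positive rate.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=123)

# RobustScaler centers on the median and scales by the IQR, so outliers
# have less influence than with ordinary z-scoring.
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
X_test_scaled = scaler.transform(X_test)        # reuse training statistics
```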
Almost all real-world data sets have missing values. This can happen, for example, because users didn’t fill in some part of a form or because some transformation corrupted the data while it was being collected and cleaned before it was sent to you. Sometimes missing values are informative and weren’t generated randomly. Therefore, it’s a good practice to add a binary feature per affected column that flags whether the value in each row is missing. In our case, six features have missing values, so we add six binary features, one for each. For example, the “log_annual_inc” feature has missing values, so we add a feature “is_log_annual_inc_missing” that takes values ∈ {0, 1}. The good thing is that the missing values are in the predictors only and not in the labels. Below are some of the most common strategies for dealing with missing values:
Simply delete all examples that have any missing values. This is usually done if the missing values are very few compared to the size of the data set and were generated randomly. In other words, the added binary features did not improve the model. One disadvantage of this strategy is that the model will throw an error when the test data has missing values at prediction time.
Impute the missing values using the mean of each feature separately.
Impute the missing values using the median of each feature separately.
Use Multivariate Imputation by Chained Equations (MICE). The main disadvantage of MICE is that we can’t use it as a transformer in sklearn pipelines and it requires using the full data set when imputing the missing values. This means there is a risk of data leakage, since we’re using both the training and test sets to impute the missing values. The following steps explain how MICE works:
First step: Impute the missing values using the mean of each feature separately.
2. Second step: For each feature that has missing values, we take all other features as predictors (including the ones that had missing values) and try to predict the values for this feature using, for example, linear regression. The predicted values replace the old values for that feature. We do this for all features that have missing values, i.e. each feature is used once as a target variable to predict its values and the rest of the time as a predictor for other features’ values. Therefore, one complete cycle (iteration) is done once we have run the model k times to predict the k features that have missing values. For our data set, each iteration runs the linear regression 6 times to predict the 6 features.
3. Third step: Repeat step 2 until there is not much of change between predictions.
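In recent scikit-learn versions, a MICE-style round-robin procedure is available as IterativeImputer (still flagged experimental), which — unlike the manual approach above — can be used as a pipeline transformer. A minimal sketch on toy data:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 9.0],
              [7.0, 8.0, 12.0]])

# Each feature with missing values is modeled as a function of the others,
# and the imputations are refined over several rounds, as in steps 1-3 above.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)
```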
Impute the missing values using K-Nearest Neighbors. We compute distance between all examples (excluding missing values) in the data set and take the average of k-nearest neighbors of each missing value. There’s no implementation for it yet in sklearn and it’s pretty inefficient to compute it since we’ll have to go through all examples to calculate distances. Therefore, we’ll skip this strategy in this post.
To evaluate each strategy, we’ll use Random Forest classifier with hyperparameters’ values guided by Data-driven Advice for Applying Machine Learning to Bioinformatics Problems as a starting point.
Let’s first create binary features for missing values and then prepare the data for each strategy discussed above. Next, we’ll compute the 10-folds cross validation AUC score for all the models using training data.
Original data shapes: ((7662, 24), (1916, 24))
After dropping NAs: ((7611, 18), (1905, 18))
MICE data shapes: ((7662, 24), (1916, 24))
Baseline model's average AUC: 0.651
Mean imputation model's average AUC: 0.651
Median imputation model's average AUC: 0.651
MICE imputation model's average AUC: 0.656
Let’s plot the feature importances to check if the added binary features added anything to the model.
Guided by the 10-fold cross validation AUC scores, it looks like all strategies have comparable results and missing values were generated randomly. Also, the added six binary features showed no importance when plotting feature importances from Random Forest classifier. Therefore, it’s safe to drop those features and use Median Imputation method as a transformer later on in the pipeline.
# Drop generated binary features
X_train = X_train[:, :-6]
X_test = X_test[:, :-6]
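The chosen setup — median imputation as a transformer inside a pipeline — can be sketched on toy data as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

# Median imputation is the first pipeline step, so during cross validation
# the medians are learned from the training folds only (no leakage).
pipe = Pipeline([
    ("imputer", SimpleImputer(strategy="median")),
    ("scaler", RobustScaler()),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=123)),
])

# Toy data with missing values.
X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]] * 5)
y = np.array([0, 1, 0, 1] * 5)
pipe.fit(X, y)
preds = pipe.predict(X)
```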
Classification problems in most real-world applications have imbalanced data sets. In other words, the positive examples (minority class) are far fewer than the negative examples (majority class). We see this in spam detection, ad clicks, loan approvals, etc. In our example, the positive examples (people who haven’t fully paid) were only 19% of the total. Therefore, accuracy is no longer a good measure of performance, because if we simply predict all examples as negative, we achieve 81% accuracy. Better metrics for imbalanced data sets are AUC (area under the ROC curve) and f1-score. However, that’s not enough, because class imbalance influences the learning algorithm during training by biasing the decision rule towards the majority class: the algorithm implicitly learns a model that optimizes its predictions for the majority class. As a result, we’ll explore different methods to overcome the class imbalance problem.
Under-Sample: Under-sample the majority class, with or without replacement, to make the numbers of positive and negative examples equal. One of the drawbacks of under-sampling is that it ignores a good portion of the training data that holds valuable information. In our example, it would lose around 6,500 examples. However, it’s very fast to train.
Over-Sample: Over-sample the minority class, with or without replacement, to make the numbers of positive and negative examples equal. With this strategy we add around 6,500 samples to the training data set. It’s a lot more computationally expensive than under-sampling. Also, it’s more prone to overfitting due to the repeated examples.
EasyEnsemble: Sample several subsets from the majority class, build a classifier on top of each sampled data, and combine the output of all classifiers. More details can be found here.
Synthetic Minority Oversampling Technique (SMOTE): It over-samples the minority class using synthesized examples. It operates on the feature space, not the data space. Here is how it works:
Compute the k-nearest neighbors for all minority samples.
Randomly choose a number between 1 and k.
For each feature:
a. Compute the difference between minority sample and its randomly chosen neighbor (from previous step).
b. Multiply the difference by random number between 0 and 1.
c. Add the obtained feature to the synthesized sample attributes.
4. Repeat the above until we get the number of synthesized samples needed. More information can be found here.
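The steps above can be sketched directly with scikit-learn's NearestNeighbors (a minimal illustration of the idea; the imbalanced-learn library's SMOTE is what would normally be used in practice):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, k=5, n_new=20, seed=0):
    """Synthesize n_new minority samples following the SMOTE steps above."""
    rng = np.random.RandomState(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)  # +1: self is nearest
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.randint(len(X_min))              # pick a minority sample
        j = idx[i][rng.randint(1, k + 1)]        # one of its k neighbors
        diff = X_min[j] - X_min[i]               # step (a): difference
        gap = rng.rand()                         # step (b): random in [0, 1)
        synthetic.append(X_min[i] + gap * diff)  # step (c): new sample
    return np.array(synthetic)

# Toy minority-class samples standing in for the "not fully paid" borrowers.
X_min = np.random.RandomState(1).normal(size=(10, 4))
X_new = smote_sample(X_min, k=5, n_new=20)
```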
There are other methods such as EditedNearestNeighbors and CondensedNearestNeighbors that we will not cover in this post and are rarely used in practice.
In most applications, misclassifying the minority class (a false negative) is a lot more expensive than misclassifying the majority class (a false positive). In the context of lending, losing money by lending to a risky borrower who is likely not to pay the loan back in full is a lot more costly than missing the opportunity to lend to a trustworthy (less risky) borrower. As a result, we can use class_weight, which changes the weight of misclassifying a positive example in the loss function. Also, we can use different cut-offs to assign examples to classes. By default, 0.5 is the cut-off; however, in applications such as lending, cut-offs below 0.5 are common. Note that changing the cut-off from the default 0.5 reduces the overall accuracy but may improve the accuracy of predicting positive/negative examples.
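A minimal sketch of both ideas — class_weight in the loss and a lowered cut-off — on synthetic imbalanced data (the 0.3 threshold is an arbitrary illustration, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy imbalanced data: ~81% negatives, ~19% positives.
X, y = make_classification(n_samples=500, weights=[0.81, 0.19], random_state=0)

# class_weight="balanced" scales the loss so that misclassifying the rare
# positive class (a false negative) costs more; a {0: w0, 1: w1} dict works too.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

probs = clf.predict_proba(X)[:, 1]

# Lowering the cut-off below the default 0.5 flags more borrowers as risky:
# fewer false negatives at the price of more false positives.
preds_default = (probs >= 0.5).astype(int)
preds_lenient = (probs >= 0.3).astype(int)
```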
We’ll evaluate all the above methods plus the original model without resampling as a baseline model using the same Random Forest classifier we used in the missing values section.
Original model's average AUC: 0.652
Under-sampled model's average AUC: 0.656
Over-sampled model's average AUC: 0.651
EasyEnsemble model's average AUC: 0.665
SMOTE model's average AUC: 0.641
EasyEnsemble method has the highest 10-folds CV with average AUC = 0.665.
We’ll build ensemble models using three different models as base learners:
Gradient Boosting
Support Vector Classifier
Random Forest
The ensemble models will be built using two different methods:
Blending (average) ensemble model: fit the base learners to the training data and then, at test time, average the predictions generated by all the base learners. We use VotingClassifier from sklearn, which:
Fits all the base learners on the training data
At test time, uses all base learners to predict the test data and then takes the average of all predictions.
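A minimal sketch of the blending setup with VotingClassifier, on synthetic data standing in for the loan features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.svm import SVC

# Toy imbalanced data standing in for the loan data set.
X, y = make_classification(n_samples=300, weights=[0.81, 0.19], random_state=0)

# voting="soft" averages the predicted probabilities of the base learners,
# which is the blending scheme described above.
blend = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=123)),
        ("svc", SVC(probability=True, random_state=123)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=123)),
    ],
    voting="soft",
)
blend.fit(X, y)
avg_probs = blend.predict_proba(X)[:, 1]  # blended probability of class 1
```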
Stacked ensemble model: fit the base learners to the training data. Next, use those trained base learners to generate predictions (meta-features) used by the meta-learner (assuming we have only one layer of base learners). There are a few different ways of training a stacked ensemble model:
Fit the base learners to all the training data and then generate predictions using the same training data that was used to fit those learners. This method is more prone to overfitting because the meta-learner will give more weight to the base learners that memorized the training data better, i.e. the meta-learner won’t generalize well and will overfit.
Split the training data into 2 to 3 different parts used for training, validation, and generating predictions. This is suboptimal because held-out sets usually have higher variance, different splits give different results, and the learning algorithms have less data to train on.
Use k-fold cross validation, where we split the data into k folds. We fit the base learners to the (k - 1) folds and use the fitted models to generate predictions on the held-out fold. We repeat the process until we have generated predictions for all k folds. When done, we refit the base learners to the full training data. This method is more reliable and gives models that memorize the data less weight. Therefore, it generalizes better on future data.
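The third (k-fold) scheme can be sketched with cross_val_predict, which builds the out-of-fold meta-features for us (synthetic data; recent scikit-learn also ships a ready-made StackingClassifier that automates all of this):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=300, weights=[0.81, 0.19], random_state=0)

base_learners = [
    GradientBoostingClassifier(random_state=123),
    RandomForestClassifier(n_estimators=100, random_state=123),
]

# Out-of-fold predictions: each meta-feature for a sample comes from a model
# that never saw that sample during training.
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=10, method="predict_proba")[:, 1]
    for m in base_learners
])

# The meta-learner is fit on the out-of-fold meta-features ...
meta_learner = LogisticRegression().fit(meta_features, y)

# ... and the base learners are refit on the full training data for test time.
for m in base_learners:
    m.fit(X, y)
```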
We’ll use logistic regression as the meta-learner for the stacked model. Note that we can use k-fold cross validation to validate and tune the hyperparameters of the meta-learner. We will not tune the hyperparameters of any of the base learners or the meta-learner; however, we will use some of the values recommended by the Pennsylvania Benchmarking Paper. Additionally, we won’t use EasyEnsemble in training because, after some experimentation, it didn’t improve the AUC of the ensemble model by more than 2% on average and it was computationally very expensive. In practice, we are sometimes willing to give up small improvements if the model would become much more computationally complex. Therefore, we will use RandomUnderSampler. Also, we’ll impute the missing values and standardize the data beforehand, which shortens the code of the ensemble models and allows us to avoid using Pipeline. Additionally, we will plot ROC and PR curves using the test data and evaluate the performance of all models.
As we can see from the chart above, the stacked ensemble model didn’t improve the performance. One of the major reasons is that the base learners are highly correlated, especially Random Forest and Gradient Boosting (see the correlation matrix below).
# Plot the correlation between base learners
probs_df = pd.DataFrame(meta_features, columns=["xgb", "svm", "rf"])
corrmat(probs_df.corr(), inflate=True)
In addition, in classification problems where False Negatives are a lot more expensive than False Positives, we may want a model with high recall rather than high precision. Below is the confusion matrix:
Let’s finally check the partial dependence plots to see which features are the most important and how they relate to whether the borrower is likely to pay the loan in full before the maturity date. We will plot only the top 8 features to make the plots easier to read. Note that the partial dependence plots are based on the Gradient Boosting model.
As we might have expected, borrowers with lower annual income and lower FICO scores are less likely to pay the loan fully; however, borrowers with lower interest rates (i.e. less risky) and smaller installments are more likely to pay the loan fully.
Most classification problems in the real world are imbalanced, and data sets almost always have missing values. In this post, we covered strategies to deal with both missing values and imbalanced data sets. We also explored different ways of building ensembles in sklearn. Below are some takeaway points:
There is no definitive guide for which algorithms to use in any given situation. What works on some data sets may not necessarily work on others. Therefore, always evaluate methods using cross validation to get reliable estimates.
Sometimes we may be willing to give up a small improvement to the model if it would increase the complexity much more than it improves the evaluation metrics.
In some classification problems, False Negatives are a lot more expensive than False Positives. Therefore, we can lower the cut-off point to reduce the False Negatives.
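Lowering the cut-off can be sketched as follows. The data is synthetic and the 0.3 threshold is an illustrative choice, not a value from the article:

```python
# Hedged sketch of lowering the classification cut-off to trade
# precision for recall (synthetic data; 0.3 is an illustrative choice).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]

# A lower cut-off flags more examples as positive, so recall can only
# go up (at the cost of precision).
recall_default = recall_score(y_te, (probs >= 0.5).astype(int))
recall_low = recall_score(y_te, (probs >= 0.3).astype(int))
print(recall_default, recall_low)
```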
When building ensemble models, try to use good models that are as different as possible to reduce the correlation between the base learners. We could have enhanced our stacked ensemble model by adding a dense neural network and other kinds of base learners, as well as by adding more layers to the stacked model.
EasyEnsemble usually performs better than any other resampling methods.
Missing values sometimes add more information to the model than we might expect. One way of capturing this is to add a binary feature for each feature that has missing values, indicating whether each example is missing that value or not.
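The indicator-feature idea can be sketched in pandas; the column names below are made up for illustration:

```python
# Hedged sketch of binary "is missing" indicator features; the column
# names here are illustrative, not the article's full feature set.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "log_annual_inc": [10.5, np.nan, 11.2, np.nan],
    "fico": [700, 680, np.nan, 720],
})

# One indicator column per feature that has missing values.
for col in df.columns[df.isna().any()]:
    df[f"is_{col}_missing"] = df[col].isna().astype(int)

print(df.filter(like="_missing").columns.tolist())
```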
Originally published at imaddabbura.github.io on March 15, 2018.
Python | Test for False list - GeeksforGeeks
|
04 Jan, 2019
Sometimes, we need to check if a list is completely True or False; such checks come up often in testing after the development phase. Hence, knowing how to do this is useful. Let's discuss certain ways in which this can be performed.
Method #1 : Naive MethodIn the naive method, we just run a loop from beg to end of list and check manually for each value. This is the most basic way to perform this particular task.
# Python3 code to demonstrate
# to check for False list
# using naive method

# initializing list
test_list = [False, False, False, False]

# printing original list
print("The original list is : " + str(test_list))

flag = 0

# using naive method
# to check for False list
for i in test_list:
    if i == True:
        flag = 1
        break

# printing result
print("Is List completely false ? : " + str(bool(not flag)))
The original list is : [False, False, False, False]
Is List completely false ? : True
Method #2 : Using all()
This function tests that each value is False and, if so, returns True; otherwise it returns False. The iteration is done using a generator expression that negates each element.
# Python3 code to demonstrate
# to check for False list
# using all()

# initializing list
test_list = [False, False, False, False]

# printing original list
print("The original list is : " + str(test_list))

# using all()
# to check for False list
res = all(not i for i in test_list)

# printing result
print("Is List completely false ? : " + str(res))
The original list is : [False, False, False, False]
Is List completely false ? : True
Method #3 : Using any()
This function tests whether any value is True; if one is found, it returns True, otherwise False. The negation of its result is used as the answer.
# Python3 code to demonstrate
# to check for False list
# using any()

# initializing list
test_list = [False, False, False, False]

# printing original list
print("The original list is : " + str(test_list))

# using any()
# to check for False list
res = not any(test_list)

# printing result
print("Is List completely false ? : " + str(res))
The original list is : [False, False, False, False]
Is List completely false ? : True
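The three methods above can be compared side by side on both an all-False list and a mixed list; this is a minimal sketch (the helper function name is illustrative, not from the article):

```python
# Helper mirroring Method #1: manual loop with a flag
def all_false_loop(lst):
    flag = 0
    for i in lst:
        if i == True:
            flag = 1
            break
    return bool(not flag)

all_false = [False, False, False, False]
mixed = [False, True, False]

# Method #1: manual loop
print(all_false_loop(all_false))  # True
print(all_false_loop(mixed))      # False

# Method #2: all() with negated elements
print(all(not i for i in all_false))  # True

# Method #3: negated any()
print(not any(mixed))  # False
```

All three agree; `not any(...)` is the shortest because it stops at the first True value it finds.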
Python list-programs
Python
Python Programs
|
[
{
"code": null,
"e": 24292,
"s": 24264,
"text": "\n04 Jan, 2019"
},
{
"code": null,
"e": 24563,
"s": 24292,
"text": "Sometimes, we need to check if a list is completely True of False, these occurrences come more often in testing purposes after the development phase. Hence, having a knowledge of all this is necessary and useful. Lets discuss certain ways in which this can be performed."
},
{
"code": null,
"e": 24746,
"s": 24563,
"text": "Method #1 : Naive MethodIn the naive method, we just run a loop from beg to end of list and check manually for each value. This is the most basic way to perform this particular task."
},
{
"code": "# Python3 code to demonstrate # to check for False list # using naive method # initializing list test_list = [False, False, False, False] # printing original listprint (\"The original list is : \" + str(test_list)) flag = 0 # using naive method # to check for False list for i in test_list : if i == True : flag = 1 break # printing resultprint (\"Is List completely false ? : \" + str(bool(not flag)))",
"e": 25169,
"s": 24746,
"text": null
},
{
"code": null,
"e": 25256,
"s": 25169,
"text": "The original list is : [False, False, False, False]\nIs List completely false ? : True\n"
},
{
"code": null,
"e": 25430,
"s": 25256,
"text": " Method #2 : Using all()This function tests each value to be False and if yes, returns boolean True, else returns false. The list iteration is done using list comprehension."
},
{
"code": "# Python3 code to demonstrate # to check for False list # using all() # initializing list test_list = [False, False, False, False] # printing original listprint (\"The original list is : \" + str(test_list)) flag = 0 # using all()# to check for False list res = all(not i for i in test_list) # printing resultprint (\"Is List completely false ? : \" + str(res))",
"e": 25795,
"s": 25430,
"text": null
},
{
"code": null,
"e": 25882,
"s": 25795,
"text": "The original list is : [False, False, False, False]\nIs List completely false ? : True\n"
},
{
"code": null,
"e": 26055,
"s": 25882,
"text": " Method #3 : Using any()This function tests for any one of the True value, if found returns True, else returns False value. Negation of this function is used as the result."
},
{
"code": "# Python3 code to demonstrate # to check for False list # using any() # initializing list test_list = [False, False, False, False] # printing original listprint (\"The original list is : \" + str(test_list)) # using any()# to check for False list res = not any(test_list) # printing resultprint (\"Is List completely false ? : \" + str(res))",
"e": 26399,
"s": 26055,
"text": null
},
{
"code": null,
"e": 26486,
"s": 26399,
"text": "The original list is : [False, False, False, False]\nIs List completely false ? : True\n"
},
{
"code": null,
"e": 26507,
"s": 26486,
"text": "Python list-programs"
},
{
"code": null,
"e": 26514,
"s": 26507,
"text": "Python"
},
{
"code": null,
"e": 26530,
"s": 26514,
"text": "Python Programs"
},
{
"code": null,
"e": 26628,
"s": 26530,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26660,
"s": 26628,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26702,
"s": 26660,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 26758,
"s": 26702,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 26800,
"s": 26758,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 26831,
"s": 26800,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 26853,
"s": 26831,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 26899,
"s": 26853,
"text": "Python | Split string into list of characters"
},
{
"code": null,
"e": 26938,
"s": 26899,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 26976,
"s": 26938,
"text": "Python | Convert a list to dictionary"
}
] |
MySQL query to return a substring after delimiter?
|
Use SUBSTRING() to return values after delimiter. Let us first create a table −
mysql> create table DemoTable
-> (
-> Title text
-> );
Query OK, 0 rows affected (0.56 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values('John is good in MySQL,Sam is good in MongoDB,Mike is good in Java');
Query OK, 1 row affected (0.19 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+-------------------------------------------------------------------+
| Title |
+-------------------------------------------------------------------+
| John is good in MySQL,Sam is good in MongoDB,Mike is good in Java |
+-------------------------------------------------------------------+
1 row in set (0.00 sec)
Here is the query to return substring after delimiter.
mysql> select substring(Title, instr(Title, ',') + 1) as AfterDelimiter from DemoTable;
This will produce the following output −
+---------------------------------------------+
| AfterDelimiter |
+---------------------------------------------+
| Sam is good in MongoDB,Mike is good in Java |
+---------------------------------------------+
1 row in set (0.03 sec)
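For comparison, the same "substring after the first delimiter" logic can be sketched in Python (the sample string mirrors the MySQL example above; this is an illustration, not part of the original answer):

```python
title = "John is good in MySQL,Sam is good in MongoDB,Mike is good in Java"

# str.split with maxsplit=1 mirrors SUBSTRING(Title, INSTR(Title, ',') + 1):
# everything after the first comma
after_delimiter = title.split(",", 1)[1]
print(after_delimiter)  # Sam is good in MongoDB,Mike is good in Java

# str.partition is an alternative that returns '' instead of
# raising when the delimiter is absent
print(title.partition(",")[2])
```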
|
[
{
"code": null,
"e": 1142,
"s": 1062,
"text": "Use SUBSTRING() to return values after delimiter. Let us first create a table −"
},
{
"code": null,
"e": 1234,
"s": 1142,
"text": "mysql> create table DemoTable\n-> (\n-> Title text\n-> );\nQuery OK, 0 rows affected (0.56 sec)"
},
{
"code": null,
"e": 1290,
"s": 1234,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1432,
"s": 1290,
"text": "mysql> insert into DemoTable values('John is good in MySQL,Sam is good in MongoDB,Mike is good in Java');\nQuery OK, 1 row affected (0.19 sec)"
},
{
"code": null,
"e": 1492,
"s": 1432,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1523,
"s": 1492,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 1564,
"s": 1523,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1938,
"s": 1564,
"text": "+-------------------------------------------------------------------+\n| Title |\n+-------------------------------------------------------------------+\n| John is good in MySQL,Sam is good in MongoDB,Mike is good in Java |\n+-------------------------------------------------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 1993,
"s": 1938,
"text": "Here is the query to return substring after delimiter."
},
{
"code": null,
"e": 2081,
"s": 1993,
"text": "mysql> select substring(Title, instr(Title, ',') + 1) as AfterDelimiter from DemoTable;"
},
{
"code": null,
"e": 2122,
"s": 2081,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2386,
"s": 2122,
"text": "+---------------------------------------------+\n| AfterDelimiter |\n+---------------------------------------------+\n| Sam is good in MongoDB,Mike is good in Java |\n+---------------------------------------------+\n1 row in set (0.03 sec)"
}
] |
Spark MultiLayer Perceptron Classifier for POI Classification | by Pranav Thaenraj | Towards Data Science
|
Note: This article is the third in a series of articles regarding the classification of transportation POI data. The first article looked to use various Machine Learning models to classify records as Airports, Bus stops, and Train stations. The second article was centered around the use of feature reduction algorithms as means of tuning the models from the first article to provide better accuracy. Check out these articles to see how this project has evolved in the search for the best way to classify POI data.
This article will use the Spark MLlib package, specifically the MultiLayer Perceptron Classifier, to properly classify the POI records from SafeGraph using foot traffic patterns. SafeGraph is a data provider that provides POI data for hundreds of businesses and categories. It provides data for free to academics. For this project, I have chosen to use SafeGraph patterns data to classify records as various POI’s. The schema for the patterns data can be found here: Schema Info
Apache Spark is an open-source framework developed as a means to handle Big Data. The framework's claim to fame is its ability to massively increase performance and reduce runtime through the use of parallel (or distributed) computing.
Why Spark MLlib?
Supports many of the popular Machine Learning Algorithms supported by packages such as Statsmodels and Scikit-Learn
Allows for the construction of complex Machine Learning pipelines
Allows for the saving and reuse of algorithms and ML pipelines
Supports distributed computing techniques that vastly reduce the run-time and improve performance of all algorithms by splitting tasks into chunks and assigning them to multiple nodes to perform in parallel
Spark MLlib is a genuinely interesting and straightforward package to learn, and it can provide some very interesting insights in a very short amount of time, offering the best of both worlds in terms of runtime and performance.
Spark Deep Learning
The MLlib package has very few accommodations for Deep Learning. As of Spark 3.0, the package does not support many Deep Learning models and is more focused on the notions of Regression, Classification, and Clustering. The one exception to this shortcoming is the Multilayer Perceptron Classifier.
Why the MultiLayer Perceptron Classifier?
Each Classification model in the MLlib package has its advantages and disadvantages, and different data may call for the use of a different model for maximum efficiency. The MLPC is superior to the other supervised classification algorithms that are available in the Spark MLlib package because it has the capability of finding correlations between features on its own. Furthermore, the classifier works well for multi-class classifications and requires not nearly as much preprocessing and pipeline creation as traditional algorithms such as the SVM algorithm do in the Spark interface. Lastly, with a large enough and diverse enough dataset, the MLPC algorithm can perform better than most classifiers.
Given this intro, let’s see if the Spark MLPC classifier can perform better than our classifiers from part 1 of this series.
Before we can start with the Deep Learning aspects, we must first load the data. This step has been covered in both the first and second articles of the series, and the full loading process can be found in the notebook. For our purposes, all that is required is a brief outline of the steps taken and the resulting dataframe:
Dropping unnecessary columns- [‘parent_safegraph_place_id’,’placekey’,’safegraph_place_id’,’parent_placekey’,’parent_placekey’,’safegraph_brand_ids’,’brands’, ‘poi_cbg’]
Creating a ground truth column that establishes each record as either Airport, Bus station, Train station, or Unknown
Dropping Unknown records to clear out records that cannot be identified
Horizontally exploding columns of JSON strings using pyspark
Horizontally exploding columns of arrays
Using Sklearn LabelEncoder package to transform class column
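The last preprocessing step, label encoding, maps each class name to an integer code. A minimal pure-Python sketch of what sklearn's LabelEncoder does (class names taken from this project; the exact records are illustrative):

```python
classes = ["Airport", "Bus station", "Train station", "Airport"]

# LabelEncoder assigns integer codes in sorted order of the unique labels
mapping = {label: code for code, label in enumerate(sorted(set(classes)))}
encoded = [mapping[c] for c in classes]

print(mapping)  # {'Airport': 0, 'Bus station': 1, 'Train station': 2}
print(encoded)  # [0, 1, 2, 0]
```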
As a result of these transformations the outputted data looks like this and has the following columns:
Raw_visit_counts: Number of visits in our panel to this POI during the date range.
Raw_visitor_counts: Number of unique visitors from our panel to this POI during the date range.
Distance_from_home: Median distance from home traveled by visitors (of visitors whose home we have identified) in meters.
Median_dwell: Median minimum dwell time in minutes.
Bucketed Dwells (Exploded to <5, 5–10,11–20,21–60,61–120,121–240): Key is the range of minutes and value is the number of visits that were within that duration
Popularity_by_day(Exploded to Monday-Sunday): A mapping of the day of the week to the number of visits on each day (local time) in the course of the date range
Popularity_by_hour(Exploded to popularity_1-popularity_24): A mapping of the hour of the day to the number of visits in each hour over the course of the date range in local time. The first element in the array corresponds to the hour of midnight to 1 am
Device_type(Exploded to IOS and Android): The number of visitors to the POI using Android vs. iOS. Only device_types with at least 2 devices are shown, and any category with fewer than 5 devices is reported as
Now that the data is ready to go, we can begin on the Deep Learning aspects.
The Spark MLlib package performs ML functions in a manner that is slightly different from the traditional code snippets written with packages such as Scikit-Learn. The package requires all of the features used for prediction to be compiled into one singular column of vectors using the Vector Assembler transformer. This requires the Label column to be separated from the feature columns beforehand; the code for this step of preprocessing looks like this:
Step 1: Separate the label column from the feature columns and convert the Pandas DF to a Spark DF
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

label = transportation_df['Class']
transportation_df = spark.createDataFrame(transportation_df)
transportation_df.show(5)
Step 2: Using Vector Assembler to create features column
from pyspark.ml.feature import VectorAssembler

transportation_df = transportation_df.drop('Class')
assembler = VectorAssembler().setInputCols(transportation_df.columns[1:]).setOutputCol("features")
transportation_df = assembler.transform(transportation_df)
transportation_df.show(10)
The end result of this should have all of the features compiled into one column as such:
From here the next step is to combine this data back with the label column. Since the other columns are no longer necessary (They have been compiled into the features column), all we need to run our model is the features and the label columns
transportation_df = transportation_df.toPandas()
features = transportation_df['features']
features_df = pd.DataFrame(data={'features': features})
features_df = features_df.loc[~features_df.index.duplicated(), :]
label_df = pd.DataFrame(data={'label': label})
label_df = label_df.loc[~label_df.index.duplicated(), :]
transportation_df = pd.concat([features_df, label_df], axis=1).dropna()
transportation_df.head()
Next step would be to split the data into training and testing data:
transportation_df = spark.createDataFrame(transportation_df)
trainSet, testSet = transportation_df.randomSplit([0.8, 0.2])
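Note that randomSplit assigns each row independently, so the 80/20 proportions are approximate rather than exact. A pure-Python sketch of the idea (illustrative only, not Spark's exact implementation):

```python
import random

def random_split(rows, train_fraction=0.8, seed=42):
    # Each row lands in the training set with probability train_fraction,
    # so split sizes are approximate rather than exact
    rng = random.Random(seed)
    train, test = [], []
    for row in rows:
        (train if rng.random() < train_fraction else test).append(row)
    return train, test

train, test = random_split(list(range(1000)))
print(len(train), len(test))  # roughly 800 / 200
```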
Now it’s time to create the Classifier model. The model has many hyperparameters that we need to know and tune in order to maximize model performance:
Tol: This parameter refers to the convergence tolerance, the value at which the weights begin to converge towards the predicted answer. The default value for this parameter is .000001. Smaller values for the parameter may lead to overfitting, while larger values may lead to more accurate results.
Seed: The random seed for the randomly generated values of the model
Blocksize: This parameter refers to the number of records that will be included per iteration of the model. The default value for this parameter is 128. Higher block size will lead to overfitting while a lower one will provide more accurate results at the cost of runtime
Stepsize: This parameter refers to the model’s learning rate. The default value for this parameter is .03. A smaller step size will lead to a more accurate model at the cost of a larger runtime, and a larger one will lead to overfitting.
Layers: Perhaps the most important hyperparameter of all, this parameter refers to the number of layers and nodes per layer that will be present in this model. The first layer must always be the number of features present in the data and the last layer must always be the number of outputted labels available. There must be at least one hidden layer in this array of layers.
With these requirements in mind let us look at the code for the MLPC model:
layers = [44, 50, 50, 3]
mlpc = MultilayerPerceptronClassifier(layers=layers, solver='gd', tol=.0000001, stepSize=.00001, blockSize=30) \
    .setLabelCol("label").setFeaturesCol("features").setSeed(20).setMaxIter(500)
model = mlpc.fit(trainSet)
Here we have made a Neural Net with two hidden layers, each with 50 nodes. The model is trained using Gradient Descent with a very small convergence tolerance and step size. Let us see the results:
result = model.transform(testSet)
result.show(10)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol='label', predictionCol='prediction', metricName='accuracy')
mlpacc = evaluator.evaluate(result)
mlpacc
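The "accuracy" metric computed by the evaluator is simply the fraction of rows where the prediction matches the label. A minimal sketch of that computation (the label/prediction values below are illustrative, not the model's actual output):

```python
labels      = [0, 1, 2, 0, 1, 2]
predictions = [0, 1, 1, 0, 1, 2]

# MulticlassClassificationEvaluator's "accuracy" is correct / total
accuracy = sum(l == p for l, p in zip(labels, predictions)) / len(labels)
print(accuracy)  # 0.8333...
```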
The overall accuracy is... unsatisfying, but this is the first of many experiments that we must run with this data in order to find the most effective number of layers and nodes to provide better results. Below is a table of tried layers and accuracies. If you would like to try this code, don’t hesitate to check out the notebook and run it yourself.
Immediate takeaways from this experiment:
More hidden layers do not necessarily mean better accuracy
Deeper hidden layers do not necessarily mean better accuracy
We can see from the table above that the model performs best when it remains simple rather than complex. Even so, we see that the accuracy for this particular dataset is barely exceeding that of the Gaussian Naive Bayes model from the first part of the series. Why could this be the case? Well, the simple answer is the imbalance in the data. When extracting this data from SafeGraph, three NAICS codes were utilized to retrieve data, and the data that was outputted was severely imbalanced, with the number of airport records being almost 4 times that of the number of bus stop records. This imbalance has shown itself to be an issue throughout this series and detrimentally affected the results of many models we trained on the data. To rectify the imbalance, random sampling of the different classes in the data was applied but doing so led to the data for the Airport and Train stations being underrepresented and thus still hurting the accuracy of the model.
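The random sampling mentioned above (undersampling the majority classes down to the size of the smallest class) can be sketched in pure Python; the function name, class labels, and counts below are illustrative, not the project's actual data:

```python
import random
from collections import defaultdict

def undersample(records, seed=0):
    # Group records by class, then sample each class down to the
    # size of the smallest class so all classes are equally represented
    by_class = defaultdict(list)
    for label, features in records:
        by_class[label].append((label, features))
    min_count = min(len(group) for group in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_class.values():
        balanced.extend(rng.sample(group, min_count))
    return balanced

# Imbalanced toy data: 40 airports, 10 bus stations, 25 train stations
records = [("Airport", i) for i in range(40)] + \
          [("Bus station", i) for i in range(10)] + \
          [("Train station", i) for i in range(25)]
balanced = undersample(records)
print(len(balanced))  # 30 (10 per class)
```

The trade-off, as noted above, is that the downsampled majority classes end up underrepresented, which can hurt accuracy in its own way.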
Regardless of the results, this article provides an introduction to the notions of Spark Deep Learning through the MLPC model. The article shows the ease that Spark brings to creating Deep Learning models, and furthermore shows the benefits and shortcomings of using such a complex model on a real-world dataset such as SafeGraph Patterns data.
In the next and final article of this series, we will look into the use of ensemble classification on this Patterns data and see in the use of multiple classifiers layered on top of one another helps to provide the best results.
Questions?
I invite you to ask them in the #safegraphdata channel of the SafeGraph Community, a free Slack community for data enthusiasts. Receive support, share your work, or connect with others in the GIS community. Through the SafeGraph Community, academics have free access to data on over 7 million businesses in the USA, UK, and Canada.
|
[
{
"code": null,
"e": 687,
"s": 172,
"text": "Note: This article is the third in a series of articles regarding the classification of transportation POI data. The first article looked to use various Machine Learning models to classify records as Airports, Bus stops, and Train stations. The second article was centered around the use of feature reduction algorithms as means of tuning the models from the first article to provide better accuracy. Check out these articles to see how this project has evolved in the search for the best way to classify POI data."
},
{
"code": null,
"e": 1165,
"s": 687,
"text": "This article will use the Spark MLlib package- specifically the MultiLayer Perceptron Classifier properly classifying the POI records from SafeGraph using foot traffic patterns. SafeGraph is a data provider that provides POI data for hundreds of businesses and categories. It provides data for free to academics. For this project, I have chosen to use SafeGraph patterns data to classify records as various POI’s. The schema for the patterns data can be found here: Schema Info"
},
{
"code": null,
"e": 1398,
"s": 1165,
"text": "Apache Spark is an open-source framework developed as means to handle Big Data. The framework's call to fame is its ability to massively increase performance and reduce runtime through the use of parallel (or distributed) computing."
},
{
"code": null,
"e": 1415,
"s": 1398,
"text": "Why Spark MLlib?"
},
{
"code": null,
"e": 1864,
"s": 1415,
"text": "Supports many of the popular Machine Learning Algorithms supported by packages such as Statsmodels and Scikit-LearnAllows for the construction of complex Machine Learning pipelinesAllows for the saving and reuse of algorithms and ML pipelinesSupports distributed computing techniques that vastly reduce the run-time and improve performance of all algorithms by splitting tasks into chunks and assigning them to multiple nodes to perform in parallel"
},
{
"code": null,
"e": 1980,
"s": 1864,
"text": "Supports many of the popular Machine Learning Algorithms supported by packages such as Statsmodels and Scikit-Learn"
},
{
"code": null,
"e": 2046,
"s": 1980,
"text": "Allows for the construction of complex Machine Learning pipelines"
},
{
"code": null,
"e": 2109,
"s": 2046,
"text": "Allows for the saving and reuse of algorithms and ML pipelines"
},
{
"code": null,
"e": 2316,
"s": 2109,
"text": "Supports distributed computing techniques that vastly reduce the run-time and improve performance of all algorithms by splitting tasks into chunks and assigning them to multiple nodes to perform in parallel"
},
{
"code": null,
"e": 2562,
"s": 2316,
"text": "Spark MLlib is truly a very interesting and straightforward package to have an understanding of and can provide some very interesting insights in a very short amount of time, providing the best of both worlds in terms of runtime and performance."
},
{
"code": null,
"e": 2582,
"s": 2562,
"text": "Spark Deep Learning"
},
{
"code": null,
"e": 2880,
"s": 2582,
"text": "The MLlib package has very few accommodations for Deep Learning. As of Spark 3.0, the package does not support many Deep Learning models and is more focused on the notions of Regression, Classification, and Clustering. The one exception to this shortcoming is the Multilayer Perceptron Classifier."
},
{
"code": null,
"e": 2922,
"s": 2880,
"text": "Why the MultiLayer Perceptron Classifier?"
},
{
"code": null,
"e": 3626,
"s": 2922,
"text": "Each Classification model in the MLlib package has its advantages and disadvantages and different data may call for the use of a different model for maximum efficiency. The MLPC is superior to the other supervised classification algorithms that are available in the Spark MLlib package because it has the capability of finding correlations between features on its own. furthermore, the classifier works well for multi-class classifications and requires not nearly as much preprocessing and pipeline creation as traditional algorithms such as the SVM algorithm do in the Spark interface. Lastly, with a large enough and diverse enough dataset, the MLPC algorithm can perform better than most classifiers."
},
{
"code": null,
"e": 3749,
"s": 3626,
"text": "Given this intro let’s see if the Spark MLPC classifier can perform better than our classifiers from part 1 of this series"
},
{
"code": null,
"e": 4031,
"s": 3749,
"text": "Before we can start with the Deep Learning aspects we must first load the Data. This particular step has been covered in both the first article of the series as well as the second one. The basic steps behind the data loading and preprocessing to fit our needs for this article are:"
},
{
"code": null,
"e": 4397,
"s": 4031,
"text": "Before we take our first steps into the notions of feature reduction, we must first load the data that we will use for this project: This process of loading the data can be found in the notebook and has been explained in detail in the first part of the series. For our purposes all that is required is a brief outline of the steps taken and the resulting dataframe:"
},
{
"code": null,
"e": 4906,
"s": 4397,
"text": "Dropping unnecesscay columns- [‘parent_safegraph_place_id’,’placekey’,’safegraph_place_id’,’parent_placekey’,’parent_placekey’,’safegraph_brand_ids’,’brands’, ‘poi_cbg’]Creating ground truth column that establishes each record as either Airport, Bus station, Airport, or UnkownDropping Unknown records to clear out records that cannot be identifiedHorizontally exploding columns of JSON strings using pysparkHorizontally exploding columns of arraysUsing Sklearn LabelEncoder package to transform class column"
},
{
"code": null,
"e": 5076,
"s": 4906,
"text": "Dropping unnecesscay columns- [‘parent_safegraph_place_id’,’placekey’,’safegraph_place_id’,’parent_placekey’,’parent_placekey’,’safegraph_brand_ids’,’brands’, ‘poi_cbg’]"
},
{
"code": null,
"e": 5185,
"s": 5076,
"text": "Creating ground truth column that establishes each record as either Airport, Bus station, Airport, or Unkown"
},
{
"code": null,
"e": 5257,
"s": 5185,
"text": "Dropping Unknown records to clear out records that cannot be identified"
},
{
"code": null,
"e": 5318,
"s": 5257,
"text": "Horizontally exploding columns of JSON strings using pyspark"
},
{
"code": null,
"e": 5359,
"s": 5318,
"text": "Horizontally exploding columns of arrays"
},
{
"code": null,
"e": 5420,
"s": 5359,
"text": "Using Sklearn LabelEncoder package to transform class column"
},
{
"code": null,
"e": 5523,
"s": 5420,
"text": "As a result of these transformations the outputted data looks like this and has the following columns:"
},
{
"code": null,
"e": 5606,
"s": 5523,
"text": "Raw_visit_counts: Number of visits in our panel to this POI during the date range."
},
{
"code": null,
"e": 5702,
"s": 5606,
"text": "Raw_visitor_counts: Number of unique visitors from our panel to this POI during the date range."
},
{
"code": null,
"e": 5824,
"s": 5702,
"text": "Distance_from_home: Median distance from home traveled by visitors (of visitors whose home we have identified) in meters."
},
{
"code": null,
"e": 5876,
"s": 5824,
"text": "Median_dwell: Median minimum dwell time in minutes."
},
{
"code": null,
"e": 6036,
"s": 5876,
"text": "Bucketed Dwells (Exploded to <5, 5–10,11–20,21–60,61–120,121–240): Key is the range of minutes and value is the number of visits that were within that duration"
},
{
"code": null,
"e": 6196,
"s": 6036,
"text": "Popularity_by_day(Exploded to Monday-Sunday): A mapping of the day of the week to the number of visits on each day (local time) in the course of the date range"
},
{
"code": null,
"e": 6450,
"s": 6196,
"text": "Popularity_by_hour(Exploded to popularity_1-popularity_24): A mapping of the hour of the day to the number of visits in each hour over the course of the date range in local time. The first element in the array corresponds to the hour of midnight to 1 am"
},
{
"code": null,
"e": 6666,
"s": 6450,
"text": "Device_type(Exploded to IOS and Android): The number of visitors to the POI that is using android vs. ios. Only device_type with at least 2 devices are shown and any category with less than 5 devices are reported as"
},
{
"code": null,
"e": 6743,
"s": 6666,
"text": "Now that the data is ready to go, we can begin on the Deep Learning aspects."
},
{
"code": null,
"e": 7201,
"s": 6743,
"text": "The Spark MLlib package performs ML functions in a manner that is slightly different from the traditional code snippets written with packages such as Sci-kit Learn. The package requires all of the features used for prediction to be compiled into one singular column of vectors using the Vector Assembler transformer. This requires the Label column to be separated from the feature columns beforehand, the code for this step of preprocessing looks like this:"
},
{
"code": null,
"e": 7297,
"s": 7201,
"text": "Step 1: separating of label column from features column and conversion of Pandas DF to Spark DF"
},
{
"code": null,
"e": 7552,
"s": 7297,
"text": "from pyspark.ml.classification import MultilayerPerceptronClassifierfrom pyspark.ml.evaluation import MulticlassClassificationEvaluatorlabel = transportation_df[‘Class’]transportation_df = spark.createDataFrame(transportation_df)transportation_df.show(5)"
},
{
"code": null,
"e": 7609,
"s": 7552,
"text": "Step 2: Using Vector Assembler to create features column"
},
{
"code": null,
"e": 7889,
"s": 7609,
"text": "from pyspark.ml.feature import VectorAssemblertransportation_df = transportation_df.drop(‘Class’)assembler = VectorAssembler().setInputCols(transportation_df.columns[1:]).setOutputCol(“features”)transportation_df = assembler.transform(transportation_df)transportation_df.show(10)"
},
{
"code": null,
"e": 7978,
"s": 7889,
"text": "The end result of this should have all of the features compiled into one column as such:"
},
{
"code": null,
"e": 8221,
"s": 7978,
"text": "From here the next step is to combine this data back with the label column. Since the other columns are no longer necessary (They have been compiled into the features column), all we need to run our model is the features and the label columns"
},
{
"code": null,
"e": 8632,
"s": 8221,
"text": "transportation_df = transportation_df.toPandas()features = transportation_df[‘features’]features_df = pd.DataFrame(data = {‘features’: features})features_df = features_df.loc[~features_df.index.duplicated(), :]label_df = pd.DataFrame(data = {‘label’: label})label_df = label_df.loc[~label_df.index.duplicated(), :]transportation_df = pd.concat([features_df, label_df], axis= 1).dropna()transportation_df.head()"
},
{
"code": null,
"e": 8701,
"s": 8632,
"text": "Next step would be to split the data into training and testing data:"
},
{
"code": null,
"e": 8823,
"s": 8701,
"text": "transportation_df = spark.createDataFrame(transportation_df)trainSet, testSet = transportation_df.randomSplit([0.8, 0.2])"
},
{
"code": null,
"e": 8974,
"s": 8823,
"text": "Now it’s time to create the Classifier model. The model has many hyperparameters that we need to know and tune in order to maximize model performance:"
},
{
"code": null,
"e": 9272,
"s": 8974,
"text": "Tol: This parameter refers to the convergence tolerance, the value at which the weights begin to converge towards the predicted answer. The default value for this parameter is .000001. Smaller values for the parameter may lead to overfitting, while larger values may lead to more accurate results"
},
{
"code": null,
"e": 9341,
"s": 9272,
"text": "Seed: The random seed for the randomly generated values of the model"
},
{
"code": null,
"e": 9613,
"s": 9341,
"text": "Blocksize: This parameter refers to the number of records that will be included per iteration of the model. The default value for this parameter is 128. Higher block size will lead to overfitting while a lower one will provide more accurate results at the cost of runtime"
},
{
"code": null,
"e": 9850,
"s": 9613,
"text": "Stepsize: This parameter refers to the model’s learning rate. The default value for this parameter is .03. A smaller step size will lead to a more accurate model at the cost of a longer run time, while a larger one will lead to overfitting"
},
{
"code": null,
"e": 10225,
"s": 9850,
"text": "Layers: Perhaps the most important hyperparameter of all, this parameter refers to the number of layers and nodes per layer that will be present in this model. The first layer must always be the number of features present in the data and the last layer must always be the number of outputted labels available. There must be at least one hidden layer in this array of layers."
},
{
"code": null,
"e": 10301,
"s": 10225,
"text": "With these requirements in mind let us look at the code for the MLPC model:"
},
{
"code": null,
"e": 10538,
"s": 10301,
"text": "layers = [44,50,50,3]mlpc = MultilayerPerceptronClassifier(layers = layers, solver=’gd’, tol=.0000001, stepSize=.00001, blockSize= 30).setLabelCol(“label”).setFeaturesCol(“features”).setSeed(20).setMaxIter(500)model = mlpc.fit(trainSet)"
},
{
"code": null,
"e": 10737,
"s": 10538,
"text": "Here we have made a Neural Net with two hidden layers, each with 50 nodes. The model is trained using Gradient Descent and has a very small convergence tolerance and step size. Let us see the results:"
},
{
"code": null,
"e": 10786,
"s": 10737,
"text": "result = model.transform(testSet)result.show(10)"
},
{
"code": null,
"e": 11015,
"s": 10786,
"text": "from pyspark.ml.evaluation import MulticlassClassificationEvaluatorevaluator = MulticlassClassificationEvaluator(labelCol = ‘label’, predictionCol = ‘prediction’, metricName = ‘accuracy’)mlpacc = evaluator.evaluate(result)mlpacc"
},
{
"code": null,
"e": 11367,
"s": 11015,
"text": "The overall accuracy is... unsatisfying, but this is the first of many experiments that we must run with this data in order to find the most effective number of layers and nodes to provide better results. Below is a table of tried layers and accuracies. If you would like to try this code, don’t hesitate to check out the notebook and run it yourself."
},
{
"code": null,
"e": 11409,
"s": 11367,
"text": "Immediate takeaways from this experiment:"
},
{
"code": null,
"e": 11528,
"s": 11409,
"text": "More hidden layers do not necessarily mean better accuracy. Deeper hidden layers do not necessarily mean better accuracy."
},
{
"code": null,
"e": 11587,
"s": 11528,
"text": "More hidden layers do not necessarily mean better accuracy"
},
{
"code": null,
"e": 11648,
"s": 11587,
"text": "Deeper hidden layers do not necessarily mean better accuracy"
},
{
"code": null,
"e": 12612,
"s": 11648,
"text": "We can see from the table above that the model performs best when it remains simple rather than complex. Even so, we see that the accuracy for this particular dataset is barely exceeding that of the Gaussian Naive Bayes model from the first part of the series. Why could this be the case? Well, the simple answer is the imbalance in the data. When extracting this data from SafeGraph, three NAICS codes were utilized to retrieve data, and the data that was outputted was severely imbalanced, with the number of airport records being almost 4 times that of the number of bus stop records. This imbalance has shown itself to be an issue throughout this series and detrimentally affected the results of many models we trained on the data. To rectify the imbalance, random sampling of the different classes in the data was applied but doing so led to the data for the Airport and Train stations being underrepresented and thus still hurting the accuracy of the model."
},
{
"code": null,
"e": 12956,
"s": 12612,
"text": "Regardless of the results, this article provides an introduction to the notions of Spark Deep Learning through the MLPC model. The article shows the ease that Spark brings to creating Deep Learning models and furthermore shows the benefits and shortcomings of using such a complex model on a real-world dataset such as SafeGraph Patterns data."
},
{
"code": null,
"e": 13185,
"s": 12956,
"text": "In the next and final article of this series, we will look into the use of ensemble classification on this Patterns data and see if the use of multiple classifiers layered on top of one another helps to provide the best results."
},
{
"code": null,
"e": 13196,
"s": 13185,
"text": "Questions?"
}
] |
Spring Boot JPA - Quick Guide
|
The Java Persistence API (JPA), provided by the Oracle Corporation, is a collection of classes and methods for persistently storing large amounts of data into a database.
To reduce the burden of writing code for relational object management, a programmer follows the 'JPA Provider' framework, which allows easy interaction with a database instance. Here the required framework is taken over by JPA.
Earlier versions of EJB defined the persistence layer combined with the business logic layer, using the javax.ejb.EntityBean interface.
While introducing EJB 3.0, the persistence layer was separated and specified as JPA 1.0 (Java Persistence API). The specifications of this API were released along with the specifications of JAVA EE5 on May 11, 2006 using JSR 220.
JPA 2.0 was released with the specifications of JAVA EE6 on December 10, 2009 as a part of Java Community Process JSR 317.
JPA 2.1 was released with the specification of JAVA EE7 on April 22, 2013 using JSR 338.
JPA is an open source API; therefore various enterprise vendors such as Oracle, Red Hat, Eclipse, etc. provide products by adding the JPA persistence flavor to them. Some of these products include −
Hibernate, Eclipselink, Toplink, Spring Data JPA, etc.
This chapter will guide you on how to prepare a development environment to start your work with the Spring Boot Framework. It will also teach you how to set up the JDK and Eclipse on your machine before you set up the Spring Boot Framework −
Java SE is available for download for free. To download click here, please download a version compatible with your operating system.
Follow the instructions to download Java, and run the .exe to install Java on your machine. Once you have installed Java on your machine, you would need to set environment variables to point to correct installation directories.
Assuming you have installed Java in c:\Program Files\java\jdk directory −
Right-click on 'My Computer' and select 'Properties'.
Click on the 'Environment variables' button under the 'Advanced' tab.
Now, edit the 'Path' variable and add the path to the Java executable directory at the end of it. For example, if the path is currently set to C:\Windows\System32, then edit it the following way
C:\Windows\System32;c:\Program Files\java\jdk\bin.
Assuming you have installed Java in c:\Program Files\java\jdk directory −
Edit the 'C:\autoexec.bat' file and add the following line at the end −
SET PATH=%PATH%;C:\Program Files\java\jdk\bin
Environment variable PATH should be set to point to where the Java binaries have been installed. Refer to your shell documentation if you have trouble doing this.
For example, if you use bash as your shell, then you would add the following line at the end of your .bashrc −
export PATH=/path/to/java:$PATH
Alternatively, if you use an Integrated Development Environment (IDE) like Borland JBuilder, Eclipse, IntelliJ IDEA, or Sun ONE Studio, you will have to compile and run a simple program to confirm that the IDE knows where you have installed Java. Otherwise, you will have to carry out a proper setup as given in the document of the IDE.
All the examples in this tutorial have been written using Eclipse IDE. So we would suggest you should have the latest version of Eclipse installed on your machine.
To install Eclipse IDE, download the latest Eclipse binaries from www.eclipse.org/downloads/. Once you download the installation, unpack the binary distribution into a convenient location. For example, in C:\eclipse on Windows, or /usr/local/eclipse on Linux/Unix and finally set PATH variable appropriately.
Eclipse can be started by executing the following commands on Windows machine, or you can simply double-click on eclipse.exe
%C:\eclipse\eclipse.exe
Eclipse can be started by executing the following commands on Unix (Solaris, Linux, etc.) machine −
$/usr/local/eclipse/eclipse
After a successful startup, if everything is fine then it should display the following result −
M2Eclipse is an Eclipse plugin that integrates Apache Maven into the Eclipse IDE. We are using Maven in this tutorial to build the Spring Boot project, and the examples are run within Eclipse using m2eclipse.
Install the latest M2Eclipse release by using the Install New Software dialog in the Eclipse IDE, and point it to this p2 repository −
https://download.eclipse.org/technology/m2e/releases/latest/
Now if everything is fine, then you can proceed to set up your Spring Boot. Following are the simple steps to download and install the Spring Boot Project on your machine.
Go to the Spring Initializr link to create a Spring Boot project, https://start.spring.io/.
Select project as Maven Project.
Select language as Java.
Select Spring Boot version as 2.5.3.
Set Project Metadata - Group as com.tutorialspoint, Artifact as springboot-h2, name as springboot-h2, Description as Demo project for Spring Boot and H2 Database and package name as com.tutorialspoint.springboot-h2.
Select packaging as Jar.
Select java as 11.
Add dependencies as Spring Web, Spring Data JPA, H2 Database and Spring Boot DevTools.
Now click on the GENERATE button to generate the project structure.
Once the Maven-based Spring Boot project is downloaded, import the Maven project into Eclipse, and Eclipse will handle the rest. It will download the Maven dependencies and build the project to make it ready for further development.
POSTMAN is a useful tool to test REST Based APIs. To install POSTMAN, download the latest POSTMAN binaries from www.postman.com/downloads/. Once you download the installable, follow the instructions to install and use it.
The Java Persistence API is a means to store business entities as relational entities. It shows how to define a PLAIN OLD JAVA OBJECT (POJO) as an entity and how to manage entities with relations.
The following image shows the class level architecture of JPA. It shows the core classes and interfaces of JPA.
The following table describes each of the units shown in the above architecture.
EntityManagerFactory
This is a factory class of EntityManager. It creates and manages multiple EntityManager instances.
EntityManager
It is an interface; it manages the persistence operations on objects. It works like a factory for Query instances.
Entity
Entities are the persistence objects, stored as records in the database.
EntityTransaction
It has a one-to-one relationship with EntityManager. For each EntityManager, operations are maintained by the EntityTransaction class.
Persistence
This class contains static methods to obtain an EntityManagerFactory instance.
Query
This interface is implemented by each JPA vendor to obtain relational objects that meet the criteria.
The above classes and interfaces are used for storing entities into a database as records. They help programmers by reducing the effort required to write code for storing data into a database, so that they can concentrate on more important activities such as writing code for mapping the classes with database tables.
In the above architecture, the relations between the classes and interfaces belong to the javax.persistence package. The following diagram shows the relationship between them.
The relationship between EntityManagerFactory and EntityManager is one-to-many. It is a factory class for EntityManager instances.
The relationship between EntityManager and EntityTransaction is one-to-one. For each EntityManager operation, there is an EntityTransaction instance.
The relationship between EntityManager and Query is one-to-many. Any number of queries can be executed using one EntityManager instance.
The relationship between EntityManager and Entity is one-to-many. One EntityManager instance can manage multiple Entities.
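These cardinalities can be sketched with a few plain Java objects. This is a toy model for illustration only — the class and method names below are invented stand-ins, not the real javax.persistence API:

```java
import java.util.ArrayList;
import java.util.List;

// One factory creates and tracks many managers (one-to-many).
class ToyEntityManagerFactory {
    private final List<ToyEntityManager> managers = new ArrayList<>();
    ToyEntityManager createEntityManager() {
        ToyEntityManager em = new ToyEntityManager();
        managers.add(em);
        return em;
    }
    int managerCount() { return managers.size(); }
}

// Each manager owns exactly one transaction (one-to-one).
class ToyEntityManager {
    private final ToyEntityTransaction tx = new ToyEntityTransaction();
    ToyEntityTransaction getTransaction() { return tx; }
}

class ToyEntityTransaction {
    private boolean active;
    void begin()  { active = true; }
    void commit() { active = false; }
    boolean isActive() { return active; }
}

public class JpaRelationships {
    public static void main(String[] args) {
        ToyEntityManagerFactory emf = new ToyEntityManagerFactory();
        ToyEntityManager em1 = emf.createEntityManager();
        ToyEntityManager em2 = emf.createEntityManager();
        System.out.println(emf.managerCount());                           // prints 2
        System.out.println(em1.getTransaction() == em1.getTransaction()); // prints true: same instance
        em1.getTransaction().begin();
        System.out.println(em1.getTransaction().isActive());             // prints true
    }
}
```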
JPA is a specification which specifies how to access, manage and persist information/data between java objects and relational databases. It provides a standard approach for ORM, Object Relational Mapping.
Hibernate is an implementation of JPA. It provides a lightweight framework and is one of the most popular ORM tool used.
The following table summarises the differences between JPA and Hibernate.
As in the previous chapter, Environment Setup, we've imported the generated Spring Boot project into Eclipse. Now let's create the following structure in the src/main/java folder.
com.tutorialspoint.controller.EmployeeController − A REST Based Controller to implement REST based APIs.
com.tutorialspoint.entity.Employee − An entity class representing the corresponding table in the database.
com.tutorialspoint.repository.EmployeeRepository − A Repository Interface to implement the CRUD operations on the database.
com.tutorialspoint.service.EmployeeService − A Service Class to implement the business operations over repository functions.
com.tutorialspoint.springbooth2.SprintBootH2Application − A Spring Boot Application class.
SprintBootH2Application class is already present. We need to create the above packages and relevant classes and interface as shown below −
Following is the default code of Employee. It represents an Employee table with id, name, age and email columns.
package com.tutorialspoint.entity;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
@Entity
@Table
public class Employee {
@Id
@Column
private int id;
@Column
private String name;
@Column
private int age;
@Column
private String email;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
}
Following is the default code of Repository to implement CRUD operations on above entity, Employee.
package com.tutorialspoint.repository;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
}
Following is the default code of Service to implement operations over repository functions.
package com.tutorialspoint.service;
import java.util.ArrayList;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.repository.EmployeeRepository;
@Service
public class EmployeeService {
@Autowired
EmployeeRepository repository;
public Employee getEmployeeById(int id) {
return repository.findById(id).get();
}
public List<Employee> getAllEmployees(){
List<Employee> employees = new ArrayList<Employee>();
repository.findAll().forEach(employee -> employees.add(employee));
return employees;
}
public void saveOrUpdate(Employee employee) {
repository.save(employee);
}
public void deleteEmployeeById(int id) {
repository.deleteById(id);
}
}
Following is the default code of Controller to implement REST APIs.
package com.tutorialspoint.controller;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.service.EmployeeService;
@RestController
@RequestMapping(path = "/emp")
public class EmployeeController {
@Autowired
EmployeeService employeeService;
@GetMapping("/employees")
public List<Employee> getAllEmployees(){
return employeeService.getAllEmployees();
}
@GetMapping("/employee/{id}")
public Employee getEmployee(@PathVariable("id") int id) {
return employeeService.getEmployeeById(id);
}
@DeleteMapping("/employee/{id}")
public void deleteEmployee(@PathVariable("id") int id) {
employeeService.deleteEmployeeById(id);
}
@PostMapping("/employee")
public void addEmployee(@RequestBody Employee employee) {
employeeService.saveOrUpdate(employee);
}
@PutMapping("/employee")
public void updateEmployee(@RequestBody Employee employee) {
employeeService.saveOrUpdate(employee);
}
}
Following is the updated code of Application to use above classes.
package com.tutorialspoint.sprintbooth2;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
@ComponentScan({"com.tutorialspoint.controller","com.tutorialspoint.service"})
@EntityScan("com.tutorialspoint.entity")
@EnableJpaRepositories("com.tutorialspoint.repository")
@SpringBootApplication
public class SprintBootH2Application {
public static void main(String[] args) {
SpringApplication.run(SprintBootH2Application.class, args);
}
}
Create the following Maven configuration in Eclipse to run the Spring Boot application with the goal spring-boot:run. This configuration will help run the REST APIs, and we can test them using POSTMAN.
In Eclipse, run the Employee Application configuration. The Eclipse console will show similar output.
[INFO] Scanning for projects...
...
2021-07-24 20:51:14.823 INFO 9760 --- [restartedMain] c.t.s.SprintBootH2Application:
Started SprintBootH2Application in 7.353 seconds (JVM running for 8.397)
Once the server is up and running, use POSTMAN to make a POST request to add a record first.
Set the following parameters in POSTMAN.
HTTP Method - POST
URL - http://localhost:8080/emp/employee
BODY - An employee JSON
{
"id": "1",
"age": "35",
"name": "Julie",
"email": "julie@gmail.com"
}
Click on the Send button and check that the response status is OK. Now make a GET request to get all records.
Set the following parameters in POSTMAN.
HTTP Method - GET
URL - http://localhost:8080/emp/employees
Click the send button and verify the response.
[{
   "id": 1,
   "age": 35,
   "name": "Julie",
   "email": "julie@gmail.com"
}]
To test a Repository, we need the following annotations and classes −
@ExtendWith(SpringExtension.class) − Mark the class to run as a test case using the SpringExtension class.
@SpringBootTest(classes = SprintBootH2Application.class) − Configure the Spring Boot application.
@Transactional − Mark the test as transactional, so that the repository can perform CRUD operations and changes are rolled back after each test.
@Autowired private EmployeeRepository employeeRepository − The EmployeeRepository object to be tested.
Following is the complete code of EmployeeRepositoryTest.
package com.tutorialspoint.repository;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;
@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
@Autowired
private EmployeeRepository employeeRepository;
@Test
public void testFindById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), result.getId());
}
@Test
public void testFindAll() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testSave() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee found = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), found.getId());
}
@Test
public void testDeleteById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
employeeRepository.deleteById(employee.getId());
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 0);
}
private Employee getEmployee() {
Employee employee = new Employee();
employee.setId(1);
employee.setName("Mahesh");
employee.setAge(30);
employee.setEmail("mahesh@test.com");
return employee;
}
}
Right-click on the file in Eclipse, select Run As > JUnit Test, and verify the result.
Let's now analyze the methods available in the repository interface which we've created.
Following is the default code of Repository to implement CRUD operations on above entity, Employee.
package com.tutorialspoint.repository;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
}
Now this repository contains the following methods by default.
count(): long
returns the number of entities available.
delete(Employee entity): void
deletes an entity.
deleteAll(): void
deletes all the entities.
deleteAll(Iterable<? extends Employee> entities): void
deletes the entities passed as argument.
deleteAllById(Iterable<? extends Integer> ids): void
deletes the entities identified by the ids passed as argument.
existsById(Integer id): boolean
checks if an entity exists using its id.
findAll(): Iterable<Employee>
returns all the entities.
findAllById(Iterable<Integer> ids): Iterable<Employee>
returns all the entities identified by the ids passed as argument.
findById(Integer id): Optional<Employee>
returns an entity identified using its id.
save(Employee entity): Employee
saves an entity and returns the updated one.
saveAll(Iterable<Employee> entities): Iterable<Employee>
saves all entities passed and returns the updated entities.
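To make these semantics concrete, here is a minimal in-memory sketch of the same operations, with a HashMap standing in for the database. This illustrates the method contracts only — it is not Spring's actual implementation, and the class name is invented:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal in-memory stand-in for CrudRepository<Employee, Integer>,
// storing just id -> name to keep the sketch short.
class InMemoryEmployeeRepository {
    private final Map<Integer, String> store = new HashMap<>();

    String save(int id, String name)  { store.put(id, name); return name; }
    Optional<String> findById(int id) { return Optional.ofNullable(store.get(id)); }
    Collection<String> findAll()      { return store.values(); }
    boolean existsById(int id)        { return store.containsKey(id); }
    long count()                      { return store.size(); }
    void deleteById(int id)           { store.remove(id); }
    void deleteAll()                  { store.clear(); }
}

public class CrudSemanticsDemo {
    public static void main(String[] args) {
        InMemoryEmployeeRepository repo = new InMemoryEmployeeRepository();
        repo.save(1, "Julie");
        repo.save(2, "Mahesh");
        System.out.println(repo.count());           // prints 2
        System.out.println(repo.findById(1).get()); // prints Julie
        repo.deleteById(1);
        System.out.println(repo.existsById(1));     // prints false
    }
}
```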
We've checked the methods available by default in the Repository in the JPA Methods chapter. Now let's add methods and test them.
Add methods to find an employee by name and by age.
package com.tutorialspoint.repository;
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
public List<Employee> findByName(String name);
public List<Employee> findByAge(int age);
}
Now Spring Data JPA will create the implementation of the above methods automatically, as we've followed the property-based nomenclature. Let's test the methods by adding their test cases to the test file. The last two methods of the file below test the custom methods added.
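Under the hood, the derivation works by parsing the method name into a property and matching it against the entity's getters. The following plain-Java sketch mimics that idea with reflection. It is a simplified illustration only — the real Spring Data parser also handles keywords such as And, Or and Between, and generates a database query rather than scanning objects:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// A sample entity with bean-style getters, standing in for Employee.
class Emp {
    private final String name;
    private final int age;
    Emp(String name, int age) { this.name = name; this.age = age; }
    public String getName() { return name; }
    public int getAge()     { return age; }
}

// Rough sketch of Spring Data's query derivation: "findByName" is split
// into the prefix "findBy" and the property "Name", and the matching
// getter is invoked on each row via reflection.
public class QueryDerivationSketch {
    static List<Emp> findBy(List<Emp> table, String methodName, Object value) {
        try {
            String property = methodName.substring("findBy".length()); // e.g. "Name"
            Method getter = Emp.class.getMethod("get" + property);
            List<Emp> result = new ArrayList<>();
            for (Emp e : table) {
                if (getter.invoke(e).equals(value)) result.add(e);
            }
            return result;
        } catch (ReflectiveOperationException ex) {
            throw new IllegalArgumentException("Cannot derive query from " + methodName, ex);
        }
    }

    public static void main(String[] args) {
        List<Emp> table = List.of(new Emp("Julie", 35), new Emp("Mahesh", 30));
        System.out.println(findBy(table, "findByName", "Julie").size()); // prints 1
        System.out.println(findBy(table, "findByAge", 30).size());       // prints 1
    }
}
```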
Following is the complete code of EmployeeRepositoryTest.
package com.tutorialspoint.repository;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;
@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
@Autowired
private EmployeeRepository employeeRepository;
@Test
public void testFindById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), result.getId());
}
@Test
public void testFindAll() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testSave() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee found = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), found.getId());
}
@Test
public void testDeleteById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
employeeRepository.deleteById(employee.getId());
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 0);
}
private Employee getEmployee() {
Employee employee = new Employee();
employee.setId(1);
employee.setName("Mahesh");
employee.setAge(30);
employee.setEmail("mahesh@test.com");
return employee;
}
@Test
public void testFindByName() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByAge() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
}
Right-click the file in Eclipse, select Run As → JUnit Test, and verify the result.
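One detail worth noting about the tests above: JUnit 5's assertEquals signature is assertEquals(expected, actual), while the tests call it as assertEquals(result.size(), 1), with the arguments reversed. The assertions still pass, but on failure the reported message swaps the two values. The following plain-Java sketch (no JUnit required; failureMessage is a hypothetical helper mimicking JUnit's failure text, not a real JUnit API) shows why the order matters:

```java
// Plain-Java sketch: why assertEquals(expected, actual) argument order matters.
// failureMessage is a hypothetical helper mimicking JUnit 5's failure text.
public class AssertOrderSketch {
    static String failureMessage(Object expected, Object actual) {
        return "expected: <" + expected + "> but was: <" + actual + ">";
    }
    public static void main(String[] args) {
        // Conventional order, assertEquals(1, result.size()), on an empty list reports:
        System.out.println(failureMessage(1, 0)); // expected: <1> but was: <0>
        // The reversed order used above, assertEquals(result.size(), 1), reports:
        System.out.println(failureMessage(0, 1)); // expected: <0> but was: <1>, which is misleading
    }
}
```

Both orders pass when the values are equal, so the tests in this tutorial work either way; the distinction only shows up in failure diagnostics.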
Sometimes a case arises where we need a custom query to fulfil a test case. We can use the @NamedQuery annotation to specify a named query within an entity class and then declare a matching method in the repository. Following is an example.
We've already added custom methods to the repository in the JPA Custom Methods chapter. Now let's add another method using @NamedQuery and test it.
Following is the updated code of Employee. It represents an Employee table with id, name, age and email columns.
package com.tutorialspoint.entity;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;
@Entity
@Table
@NamedQuery(name = "Employee.findByEmail",
query = "select e from Employee e where e.email = ?1")
public class Employee {
@Id
@Column
private int id;
@Column
private String name;
@Column
private int age;
@Column
private String email;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
}
Add a method to find an employee by email. The method name matches the named query Employee.findByEmail defined on the entity.
package com.tutorialspoint.repository;
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
public List<Employee> findByName(String name);
public List<Employee> findByAge(int age);
public Employee findByEmail(String email);
}
Now Spring Data JPA will create the implementation of the above method automatically, using the query provided in the named query. Let's verify it by adding a test case to the test file. The last method of the file below tests the named query method.
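To see what the generated implementation effectively does, here is an in-memory sketch of the named query select e from Employee e where e.email = ?1, with the method argument bound to the ?1 positional parameter. This is a hypothetical standalone class for illustration only, not part of the Spring project:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical in-memory model of "select e from Employee e where e.email = ?1".
public class FindByEmailSketch {
    static class Employee {
        final int id; final String email;
        Employee(int id, String email) { this.id = id; this.email = email; }
    }
    // The method argument plays the role of the ?1 positional parameter.
    static Employee findByEmail(List<Employee> table, String email) {
        return table.stream()
                    .filter(e -> e.email.equals(email)) // WHERE e.email = ?1
                    .findFirst()
                    .orElse(null);                      // no match -> null
    }
    public static void main(String[] args) {
        List<Employee> table = Arrays.asList(
            new Employee(1, "mahesh@test.com"),
            new Employee(2, "aarav@test.com"));
        System.out.println(findByEmail(table, "mahesh@test.com").id); // prints 1
    }
}
```

In the real project the filtering happens in the H2 database, not in Java, but the contract of the repository method is the same.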
Following is the complete code of EmployeeRepositoryTest.
package com.tutorialspoint.repository;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;
@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
@Autowired
private EmployeeRepository employeeRepository;
@Test
public void testFindById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), result.getId());
}
@Test
public void testFindAll() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testSave() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee found = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), found.getId());
}
@Test
public void testDeleteById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
employeeRepository.deleteById(employee.getId());
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 0);
}
private Employee getEmployee() {
Employee employee = new Employee();
employee.setId(1);
employee.setName("Mahesh");
employee.setAge(30);
employee.setEmail("mahesh@test.com");
return employee;
}
@Test
public void testFindByName() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByAge() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByEmail() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findByEmail(employee.getEmail());
assertNotNull(result);
}
}
Right-click the file in Eclipse, select Run As → JUnit Test, and verify the result.
Sometimes a case arises where we need a custom query to fulfil a test case. We can use the @Query annotation to specify a query directly on a repository method. Following is an example, written in JPQL, the Java Persistence Query Language.
We've already added a named-query method to the repository in the JPA Named Query chapter. Now let's add another method using @Query and test it.
Add a method to get the list of employees ordered by name.
package com.tutorialspoint.repository;
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
public List<Employee> findByName(String name);
public List<Employee> findByAge(int age);
public Employee findByEmail(String email);
@Query(value = "SELECT e FROM Employee e ORDER BY name")
public List<Employee> findAllSortedByName();
}
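Before running the test, it helps to pin down the ordering the JPQL query should produce. The following standalone sketch (hypothetical, outside the Spring project) mirrors in memory what SELECT e FROM Employee e ORDER BY name returns for the two employees used in the test below:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical in-memory model of "SELECT e FROM Employee e ORDER BY name".
public class OrderByNameSketch {
    static class Employee {
        final int id; final String name;
        Employee(int id, String name) { this.id = id; this.name = name; }
    }
    static List<Employee> findAllSortedByName(List<Employee> table) {
        List<Employee> sorted = new ArrayList<>(table);
        sorted.sort(Comparator.comparing(e -> e.name)); // ORDER BY name (ascending)
        return sorted;
    }
    public static void main(String[] args) {
        List<Employee> table = new ArrayList<>();
        table.add(new Employee(1, "Mahesh"));
        table.add(new Employee(2, "Aarav"));
        // "Aarav" sorts before "Mahesh", so employee 2 comes first.
        System.out.println(findAllSortedByName(table).get(0).name); // prints Aarav
    }
}
```

This is why the test asserts that employee1 ("Aarav") is the first element of the result.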
Let's test the method by adding its test case to the test file. The last method of the file below tests the custom query method.
Following is the complete code of EmployeeRepositoryTest.
package com.tutorialspoint.repository;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;
@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
@Autowired
private EmployeeRepository employeeRepository;
@Test
public void testFindById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), result.getId());
}
@Test
public void testFindAll() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testSave() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee found = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), found.getId());
}
@Test
public void testDeleteById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
employeeRepository.deleteById(employee.getId());
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 0);
}
private Employee getEmployee() {
Employee employee = new Employee();
employee.setId(1);
employee.setName("Mahesh");
employee.setAge(30);
employee.setEmail("mahesh@test.com");
return employee;
}
@Test
public void testFindByName() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByAge() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByEmail() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findByEmail(employee.getEmail());
assertNotNull(result);
}
@Test
public void testFindAllSortedByName() {
Employee employee = getEmployee();
Employee employee1 = new Employee();
employee1.setId(2);
employee1.setName("Aarav");
employee1.setAge(20);
employee1.setEmail("aarav@test.com");
employeeRepository.save(employee);
employeeRepository.save(employee1);
List<Employee> result = employeeRepository.findAllSortedByName();
assertEquals(employee1.getName(), result.get(0).getName());
}
}
Right-click the file in Eclipse, select Run As → JUnit Test, and verify the result.
Sometimes a case arises where we need a custom native query to fulfil a test case. We can again use the @Query annotation, setting its nativeQuery=true attribute to mark the query as native SQL rather than JPQL. Following is an example.
We've already added a custom query method to the repository in the JPA Custom Query chapter. Now let's add another method using a native query and test it.
Add a method to get the list of employees ordered by name, this time using a native SQL query.
package com.tutorialspoint.repository;
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;
@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
public List<Employee> findByName(String name);
public List<Employee> findByAge(int age);
public Employee findByEmail(String email);
@Query(value = "SELECT e FROM Employee e ORDER BY name")
public List<Employee> findAllSortedByName();
@Query(value = "SELECT * FROM Employee ORDER BY name", nativeQuery = true)
public List<Employee> findAllSortedByNameUsingNative();
}
Let's test the method by adding its test case to the test file. The last method of the file below tests the native query method.
Following is the complete code of EmployeeRepositoryTest.
package com.tutorialspoint.repository;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;
@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
@Autowired
private EmployeeRepository employeeRepository;
@Test
public void testFindById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), result.getId());
}
@Test
public void testFindAll() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testSave() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee found = employeeRepository.findById(employee.getId()).get();
assertEquals(employee.getId(), found.getId());
}
@Test
public void testDeleteById() {
Employee employee = getEmployee();
employeeRepository.save(employee);
employeeRepository.deleteById(employee.getId());
List<Employee> result = new ArrayList<>();
employeeRepository.findAll().forEach(e -> result.add(e));
assertEquals(result.size(), 0);
}
private Employee getEmployee() {
Employee employee = new Employee();
employee.setId(1);
employee.setName("Mahesh");
employee.setAge(30);
employee.setEmail("mahesh@test.com");
return employee;
}
@Test
public void testFindByName() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByAge() {
Employee employee = getEmployee();
employeeRepository.save(employee);
List<Employee> result = new ArrayList<>();
employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));
assertEquals(result.size(), 1);
}
@Test
public void testFindByEmail() {
Employee employee = getEmployee();
employeeRepository.save(employee);
Employee result = employeeRepository.findByEmail(employee.getEmail());
assertNotNull(result);
}
@Test
public void testFindAllSortedByName() {
Employee employee = getEmployee();
Employee employee1 = new Employee();
employee1.setId(2);
employee1.setName("Aarav");
employee1.setAge(20);
employee1.setEmail("aarav@test.com");
employeeRepository.save(employee);
employeeRepository.save(employee1);
List<Employee> result = employeeRepository.findAllSortedByName();
assertEquals(employee1.getName(), result.get(0).getName());
}
@Test
public void testFindAllSortedByNameUsingNative() {
Employee employee = getEmployee();
Employee employee1 = new Employee();
employee1.setId(2);
employee1.setName("Aarav");
employee1.setAge(20);
employee1.setEmail("aarav@test.com");
employeeRepository.save(employee);
employeeRepository.save(employee1);
List<Employee> result = employeeRepository.findAllSortedByNameUsingNative();
assertEquals(employee1.getName(), result.get(0).getName());
}
}
Right-click the file in Eclipse, select Run As → JUnit Test, and verify the result.
},
{
"code": null,
"e": 17370,
"s": 17303,
"text": "Following is the updated code of Application to use above classes."
},
{
"code": null,
"e": 18091,
"s": 17370,
"text": "package com.tutorialspoint.sprintbooth2;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.boot.autoconfigure.domain.EntityScan;\nimport org.springframework.context.annotation.ComponentScan;\nimport org.springframework.data.jpa.repository.config.EnableJpaRepositories;\n\n@ComponentScan({\"com.tutorialspoint.controller\",\"com.tutorialspoint.service\"})\n@EntityScan(\"com.tutorialspoint.entity\")\n@EnableJpaRepositories(\"com.tutorialspoint.repository\")\n@SpringBootApplication\npublic class SprintBootH2Application {\n public static void main(String[] args) {\n SpringApplication.run(SprintBootH2Application.class, args);\n }\n}"
},
{
"code": null,
"e": 18286,
"s": 18091,
"text": "Create following maven configuration in eclipse to run the springboot application with goal spring-boot:run. This configuration will help to run the REST APIs and we can test them using POSTMAN."
},
{
"code": null,
"e": 18388,
"s": 18286,
"text": "In eclipse, run the Employee Application configuration. Eclipse console will show the similar output."
},
{
"code": null,
"e": 18585,
"s": 18388,
"text": "[INFO] Scanning for projects...\n...\n2021-07-24 20:51:14.823 INFO 9760 --- [restartedMain] c.t.s.SprintBootH2Application: \nStarted SprintBootH2Application in 7.353 seconds (JVM running for 8.397)\n"
},
{
"code": null,
"e": 18674,
"s": 18585,
"text": "Once server is up and running, Use Postman to make a POST request to add a record first."
},
{
"code": null,
"e": 18715,
"s": 18674,
"text": "Set the following parameters in POSTMAN."
},
{
"code": null,
"e": 18734,
"s": 18715,
"text": "HTTP Method - POST"
},
{
"code": null,
"e": 18753,
"s": 18734,
"text": "HTTP Method - POST"
},
{
"code": null,
"e": 18794,
"s": 18753,
"text": "URL - http://localhost:8080/emp/employee"
},
{
"code": null,
"e": 18835,
"s": 18794,
"text": "URL - http://localhost:8080/emp/employee"
},
{
"code": null,
"e": 18859,
"s": 18835,
"text": "BODY - An employee JSON"
},
{
"code": null,
"e": 18883,
"s": 18859,
"text": "BODY - An employee JSON"
},
{
"code": null,
"e": 18980,
"s": 18883,
"text": "{ \n \"id\": \"1\", \n \"age\": \"35\", \n \"name\": \"Julie\", \n \"email\": \"julie@gmail.com\" \n} "
},
{
"code": null,
"e": 19084,
"s": 18980,
"text": "Click on Send Button and check the response status to be OK. Now make a GET Request to get all records."
},
{
"code": null,
"e": 19125,
"s": 19084,
"text": "Set the following parameters in POSTMAN."
},
{
"code": null,
"e": 19143,
"s": 19125,
"text": "HTTP Method - GET"
},
{
"code": null,
"e": 19161,
"s": 19143,
"text": "HTTP Method - GET"
},
{
"code": null,
"e": 19203,
"s": 19161,
"text": "URL - http://localhost:8080/emp/employees"
},
{
"code": null,
"e": 19245,
"s": 19203,
"text": "URL - http://localhost:8080/emp/employees"
},
{
"code": null,
"e": 19292,
"s": 19245,
"text": "Click the send button and verify the response."
},
{
"code": null,
"e": 19391,
"s": 19292,
"text": "[{ \n \"id\": \"1\", \n \"age\": \"35\", \n \"name\": \"Julie\", \n \"email\": \"julie@gmail.com\" \n}] "
},
{
"code": null,
"e": 19460,
"s": 19391,
"text": "To test a Repository, we need the following annotation and classes −"
},
{
"code": null,
"e": 19561,
"s": 19460,
"text": "@ExtendWith(SpringExtension.class) − Mark the class to run as test case using SpringExtension class."
},
{
"code": null,
"e": 19662,
"s": 19561,
"text": "@ExtendWith(SpringExtension.class) − Mark the class to run as test case using SpringExtension class."
},
{
"code": null,
"e": 19760,
"s": 19662,
"text": "@SpringBootTest(classes = SprintBootH2Application.class) − Configure the Spring Boot application."
},
{
"code": null,
"e": 19858,
"s": 19760,
"text": "@SpringBootTest(classes = SprintBootH2Application.class) − Configure the Spring Boot application."
},
{
"code": null,
"e": 19924,
"s": 19858,
"text": "@Transactional − To mark repository to do CRUD Operation capable."
},
{
"code": null,
"e": 19990,
"s": 19924,
"text": "@Transactional − To mark repository to do CRUD Operation capable."
},
{
"code": null,
"e": 20089,
"s": 19990,
"text": "@Autowired private EmployeeRepository employeeRepository − EmployeeRepository object to be tested."
},
{
"code": null,
"e": 20188,
"s": 20089,
"text": "@Autowired private EmployeeRepository employeeRepository − EmployeeRepository object to be tested."
},
{
"code": null,
"e": 20246,
"s": 20188,
"text": "Following is the complete code of EmployeeRepositoryTest."
},
{
"code": null,
"e": 22425,
"s": 20246,
"text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n 
employee.setAge(30);\n employee.setEmail(\"mahesh@test.com\");\n return employee;\n }\n}"
},
{
"code": null,
"e": 22511,
"s": 22425,
"text": "Right Click on the file in eclipse and select Run a JUnit Test and verify the result."
},
{
"code": null,
"e": 22596,
"s": 22511,
"text": "Let's now analyze the methods available in repository interface which we've created."
},
{
"code": null,
"e": 22696,
"s": 22596,
"text": "Following is the default code of Repository to implement CRUD operations on above entity, Employee."
},
{
"code": null,
"e": 22984,
"s": 22696,
"text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n}"
},
{
"code": null,
"e": 23043,
"s": 22984,
"text": "Now this repository contains following methods by default."
},
{
"code": null,
"e": 23057,
"s": 23043,
"text": "count(): long"
},
{
"code": null,
"e": 23099,
"s": 23057,
"text": "returns the number of entities available."
},
{
"code": null,
"e": 23129,
"s": 23099,
"text": "delete(Employee entity): void"
},
{
"code": null,
"e": 23148,
"s": 23129,
"text": "deletes an entity."
},
{
"code": null,
"e": 23165,
"s": 23148,
"text": "deleteAll():void"
},
{
"code": null,
"e": 23191,
"s": 23165,
"text": "deletes all the entities."
},
{
"code": null,
"e": 23245,
"s": 23191,
"text": "deleteAll(Iterable< extends Employee > entities):void"
},
{
"code": null,
"e": 23286,
"s": 23245,
"text": "deletes the entities passed as argument."
},
{
"code": null,
"e": 23334,
"s": 23286,
"text": "deleteAll(Iterable< extends Integer > ids):void"
},
{
"code": null,
"e": 23402,
"s": 23334,
"text": "deletes the entities identified using their ids passed as argument."
},
{
"code": null,
"e": 23433,
"s": 23402,
"text": "existsById(Integer id):boolean"
},
{
"code": null,
"e": 23474,
"s": 23433,
"text": "checks if an entity exists using its id."
},
{
"code": null,
"e": 23505,
"s": 23474,
"text": "findAll():Iterable< Employee >"
},
{
"code": null,
"e": 23531,
"s": 23505,
"text": "returns all the entities."
},
{
"code": null,
"e": 23590,
"s": 23531,
"text": "findAllByIds(Iterable< Integer > ids):Iterable< Employee >"
},
{
"code": null,
"e": 23656,
"s": 23590,
"text": "returns all the entities identified using ids passed as argument."
},
{
"code": null,
"e": 23698,
"s": 23656,
"text": "findById(Integer id):Optional< Employee >"
},
{
"code": null,
"e": 23737,
"s": 23698,
"text": "returns an entity identified using id."
},
{
"code": null,
"e": 23769,
"s": 23737,
"text": "save(Employee entity): Employee"
},
{
"code": null,
"e": 23813,
"s": 23769,
"text": "saves an entity and return the updated one."
},
{
"code": null,
"e": 23872,
"s": 23813,
"text": "saveAll(Iterable< Employee> entities): Iterable< Employee>"
},
{
"code": null,
"e": 23931,
"s": 23872,
"text": "saves all entities passed and return the updated entities."
},
{
"code": null,
"e": 24052,
"s": 23931,
"text": "We've checked the methods available by default in Repository in JPA Methods chapter. Now let's add a method and test it."
},
{
"code": null,
"e": 24098,
"s": 24052,
"text": "Add a method to find an employee by its name."
},
{
"code": null,
"e": 24482,
"s": 24098,
"text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n public List<Employee> findByName(String name);\t\n public List<Employee> findByAge(int age);\n}"
},
{
"code": null,
"e": 24744,
"s": 24482,
"text": "Now Spring JPA will create the implementation of above methods automatically as we've following the property based nomenclature. Let's test the methods added by adding their test cases in test file. Last two methods of below file tests the custom methods added."
},
{
"code": null,
"e": 24802,
"s": 24744,
"text": "Following is the complete code of EmployeeRepositoryTest."
},
{
"code": null,
"e": 27593,
"s": 24802,
"text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n 
employee.setAge(30);\n employee.setEmail(\"mahesh@test.com\");\n return employee;\n }\n @Test\n public void testFindByName() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByAge() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n}"
},
{
"code": null,
"e": 27679,
"s": 27593,
"text": "Right Click on the file in eclipse and select Run a JUnit Test and verify the result."
},
{
"code": null,
"e": 27907,
"s": 27679,
"text": "Some time case arises, where we need a custom query to fulfil one test case. We can use @NamedQuery annotation to specify a named query within an entity class and then declare that method in repository. Following is an example."
},
{
"code": null,
"e": 28039,
"s": 27907,
"text": "We've added custom methods in Repository in JPA Custom Methods chapter. Now let's add another method using @NamedQuery and test it."
},
{
"code": null,
"e": 28151,
"s": 28039,
"text": "Following is the default code of Employee. It represents a Employee table with id, name, age and email columns."
},
{
"code": null,
"e": 29090,
"s": 28151,
"text": "package com.tutorialspoint.entity;\n\nimport javax.persistence.Column;\nimport javax.persistence.Entity;\nimport javax.persistence.Id;\nimport javax.persistence.NamedQuery;\nimport javax.persistence.Table;\n\n@Entity\n@Table\n@NamedQuery(name = \"Employee.findByEmail\",\nquery = \"select e from Employee e where e.email = ?1\")\npublic class Employee {\n @Id\n @Column\n private int id;\n\n @Column\n private String name;\n\n @Column\n private int age;\n\n @Column\n private String email;\n\n public int getId() {\n return id;\n }\n public void setId(int id) {\n this.id = id;\n }\n public String getName() {\n return name;\n }\n public void setName(String name) {\n this.name = name;\n }\n public int getAge() {\n return age;\n }\n public void setAge(int age) {\n this.age = age;\n }\n public String getEmail() {\n return email;\n }\n public void setEmail(String email) {\n this.email = email;\n }\n}"
},
{
"code": null,
"e": 29144,
"s": 29090,
"text": "Add a method to find an employee by its name and age."
},
{
"code": null,
"e": 29574,
"s": 29144,
"text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n public List<Employee> findByName(String name);\t\n public List<Employee> findByAge(int age);\n public Employee findByEmail(String email);\n}"
},
{
"code": null,
"e": 29829,
"s": 29574,
"text": "Now Spring JPA will create the implementation of above methods automatically using the query provided in named query. Let's test the methods added by adding their test cases in test file. Last two methods of below file tests the named query method added."
},
{
"code": null,
"e": 29887,
"s": 29829,
"text": "Following is the complete code of EmployeeRepositoryTest."
},
{
"code": null,
"e": 32934,
"s": 29887,
"text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n 
employee.setAge(30);\n employee.setEmail(\"mahesh@test.com\");\n return employee;\n }\n @Test\n public void testFindByName() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByAge() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByEmail() {\t \n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee result = employeeRepository.findByEmail(employee.getEmail());\t \n assertNotNull(result);\t \n }\n}"
},
{
"code": null,
"e": 33020,
"s": 32934,
"text": "Right Click on the file in eclipse and select Run a JUnit Test and verify the result."
},
{
"code": null,
"e": 33260,
"s": 33020,
"text": "Some time case arises, where we need a custom query to fulfil one test case. We can use @Query annotation to specify a query within a repository. Following is an example. In this example, we are using JPQL, Java Persistence Query Language."
},
{
"code": null,
"e": 33395,
"s": 33260,
"text": "We've added name query custom methods in Repository in JPA Named Query chapter. Now let's add another method using @Query and test it."
},
{
"code": null,
"e": 33455,
"s": 33395,
"text": "Add a method to get list of employees order by their names."
},
{
"code": null,
"e": 34051,
"s": 33455,
"text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.jpa.repository.Query;\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n public List<Employee> findByName(String name);\t\n public List<Employee> findByAge(int age);\n public Employee findByEmail(String email);\n \n @Query(value = \"SELECT e FROM Employee e ORDER BY name\")\n public List<Employee> findAllSortedByName();\n}"
},
{
"code": null,
"e": 34189,
"s": 34051,
"text": "Let's test the methods added by adding their test cases in test file. Last two methods of below file tests the custom query method added."
},
{
"code": null,
"e": 34247,
"s": 34189,
"text": "Following is the complete code of EmployeeRepositoryTest."
},
{
"code": null,
"e": 37799,
"s": 34247,
"text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n 
employee.setAge(30);\n employee.setEmail(\"mahesh@test.com\");\n return employee;\n }\n @Test\n public void testFindByName() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByAge() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByEmail() {\t \n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee result = employeeRepository.findByEmail(employee.getEmail());\t \n assertNotNull(result);\t \n }\n @Test\n public void testFindAllSortedByName() {\n Employee employee = getEmployee();\n Employee employee1 = new Employee();\n employee1.setId(2);\n employee1.setName(\"Aarav\");\n employee1.setAge(20);\n employee1.setEmail(\"aarav@test.com\");\n employeeRepository.save(employee);\t \n employeeRepository.save(employee1);\n List<Employee> result = employeeRepository.findAllSortedByName();\n assertEquals(employee1.getName(), result.get(0).getName());\t \n }\n}"
},
{
"code": null,
"e": 37885,
"s": 37799,
"text": "Right Click on the file in eclipse and select Run a JUnit Test and verify the result."
},
{
"code": null,
"e": 38194,
"s": 37885,
"text": "Some time case arises, where we need a custom native query to fulfil one test case. We can use @Query annotation to specify a query within a repository. Following is an example. In this example, we are using native query, and set an attribute nativeQuery=true in Query annotation to mark the query as native."
},
{
"code": null,
"e": 38325,
"s": 38194,
"text": "We've added custom methods in Repository in JPA Custom Query chapter. Now let's add another method using native query and test it."
},
{
"code": null,
"e": 38385,
"s": 38325,
"text": "Add a method to get list of employees order by their names."
},
{
"code": null,
"e": 39122,
"s": 38385,
"text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.jpa.repository.Query;\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n public List<Employee> findByName(String name);\t\n public List<Employee> findByAge(int age);\n public Employee findByEmail(String email);\n \n @Query(value = \"SELECT e FROM Employee e ORDER BY name\")\n public List<Employee> findAllSortedByName();\n \n @Query(value = \"SELECT * FROM Employee ORDER BY name\", nativeQuery = true)\n public List<Employee> findAllSortedByNameUsingNative();\n}"
},
{
"code": null,
"e": 39260,
"s": 39122,
"text": "Let's test the methods added by adding their test cases in test file. Last two methods of below file tests the custom query method added."
},
{
"code": null,
"e": 39318,
"s": 39260,
"text": "Following is the complete code of EmployeeRepositoryTest."
},
{
"code": null,
"e": 43402,
"s": 39318,
"text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n 
employee.setAge(30);\n employee.setEmail(\"mahesh@test.com\");\n return employee;\n }\n @Test\n public void testFindByName() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByAge() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testFindByEmail() {\t \n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee result = employeeRepository.findByEmail(employee.getEmail());\t \n assertNotNull(result);\t \n }\n @Test\n public void testFindAllSortedByName() {\n Employee employee = getEmployee();\n Employee employee1 = new Employee();\n employee1.setId(2);\n employee1.setName(\"Aarav\");\n employee1.setAge(20);\n employee1.setEmail(\"aarav@test.com\");\n employeeRepository.save(employee);\t \n employeeRepository.save(employee1);\n List<Employee> result = employeeRepository.findAllSortedByName();\n assertEquals(employee1.getName(), result.get(0).getName());\t \n }\n @Test\n public void testFindAllSortedByNameUsingNative() {\n Employee employee = getEmployee();\n Employee employee1 = new Employee();\n employee1.setId(2);\n employee1.setName(\"Aarav\");\n employee1.setAge(20);\n employee1.setEmail(\"aarav@test.com\");\n employeeRepository.save(employee);\t \n employeeRepository.save(employee1);\n List<Employee> result = employeeRepository.findAllSortedByNameUsingNative();\n assertEquals(employee1.getName(), result.get(0).getName());\t \n } \n}"
},
{
"code": null,
"e": 43488,
"s": 43402,
"text": "Right Click on the file in eclipse and select Run a JUnit Test and verify the result."
},
{
"code": null,
"e": 43522,
"s": 43488,
"text": "\n 102 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 43536,
"s": 43522,
"text": " Karthikeya T"
},
{
"code": null,
"e": 43569,
"s": 43536,
"text": "\n 39 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 43584,
"s": 43569,
"text": " Chaand Sheikh"
},
{
"code": null,
"e": 43619,
"s": 43584,
"text": "\n 73 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 43631,
"s": 43619,
"text": " Senol Atac"
},
{
"code": null,
"e": 43666,
"s": 43631,
"text": "\n 62 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 43678,
"s": 43666,
"text": " Senol Atac"
},
{
"code": null,
"e": 43713,
"s": 43678,
"text": "\n 67 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 43725,
"s": 43713,
"text": " Senol Atac"
},
{
"code": null,
"e": 43758,
"s": 43725,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 43770,
"s": 43758,
"text": " Senol Atac"
},
{
"code": null,
"e": 43777,
"s": 43770,
"text": " Print"
},
{
"code": null,
"e": 43788,
"s": 43777,
"text": " Add Notes"
}
] |
Seaborn essentials for data visualization in Python | by Mahbubul Alam | Towards Data Science
|
As I said before, the purpose of data visualization is to communicate hidden information in the data. Visualization fundamentally serves 4 purposes in data science, helping us understand:
the distribution of features
relationships between two or more variables
comparisons between variables
the composition of data
There are various ways to identify and communicate that hidden information, and there are tools to do exactly that. Regardless of which programming language you choose, each language has libraries to handle data visualization efficiently.
If you are in Python you have several options, but I’ve found seaborn to be the best in town. Not because seaborn can do something that other libraries can’t, but because of its simplicity and intuitive code structure.
So the purpose of this article is to demonstrate some use cases of Seaborn with code snippets. My learning/teaching philosophy is to start with a problem and then find tools that can solve it. So I am not going to show everything under the sun, rather give an intuition on how Seaborn works and how to solve the 4 categories of problems I just mentioned.
Let’s dive right in.
seaborn comes with 17 built-in datasets. That means you don’t have to spend a whole lot of your time finding the right dataset and cleaning it up to make Seaborn-ready; rather you will focus on the core features of Seaborn visualization techniques to solve problems.
First, let’s take a look at the datasets.
# seaborn is conventionally imported as sns
import seaborn as sns

# get names of the built-in datasets
sns.get_dataset_names()
If you have been in data science for some time, you’ll recognize many of them just by their names (titanic? iris?). Otherwise, just load the dataset and call the head() function to take a peek into the data.
df = sns.load_dataset("iris")
df.head()
I’m going to use a few of those datasets to show different kinds of visualization techniques, so I’m loading them all in. If you know what each dataset is about that’s great, otherwise don’t bother. Just know that each dataset contains variables of different kinds; some of them are discrete variables (e.g. 10, 20, 30 miles/gallon), some are continuous (e.g. 5.2, 3.6 petal length) and some are categorical (e.g. days of the week). Knowing just the different variable types and when to use them should work for now.
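As a quick illustration of those three variable kinds, here is a minimal sketch using a toy pandas frame with made-up values (the column names are hypothetical, not from any seaborn dataset). Checking `dtypes` like this is a handy first step before deciding which plot fits which column.

```python
import pandas as pd

# Toy frame (hypothetical values) showing the three variable kinds
df = pd.DataFrame({
    "mpg_class": [10, 20, 30],                     # discrete (integer)
    "petal_length": [5.2, 3.6, 4.1],               # continuous (float)
    "day": pd.Categorical(["Thu", "Fri", "Sat"]),  # categorical
})
print(df.dtypes)
```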
# loading additional datasets
iris = sns.load_dataset("iris")
tips = sns.load_dataset("tips")
mpg = sns.load_dataset("mpg")
fmri = sns.load_dataset("fmri")
car_crashes = sns.load_dataset("car_crashes")
Our first kind of problem is to understand the distribution of data. By distribution I mean a few things: frequency distribution, probability distribution, or just the spread of the data with respect to central values (mean, median etc.).
Histogram
Let’s say we have a dataset containing 1000 cars and their fuel efficiency in miles per gallon (mpg). A histogram will tell us the frequency distribution of the number of cars in each mpg category.
# Bar histogram
sns.distplot(mpg["mpg"], kde=False)
There is a more sophisticated version of this frequency distribution used in statistics called the probability distribution. You can plot that as well.
# Line histogram / kernel density plot
sns.kdeplot(mpg["mpg"], shade=True)
There is another kind of distribution — better known as spread— which shows how a variable is dispersed/spread with respect to its central tendency.
The boxplot is best known for demonstrating the dispersion of a variable with values such as the median, the minimum, the maximum and the outliers — all in the same plot.
# boxplot
sns.boxplot(y=mpg["mpg"])
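The numbers a boxplot draws can also be read off directly with pandas. A minimal sketch on made-up mpg values: `describe()` returns the same five-number summary (min, 25%, 50%/median, 75%, max) that the boxplot visualizes.

```python
import pandas as pd

# Made-up mpg values, purely for illustration
mpg_vals = pd.Series([14.0, 15.0, 18.0, 22.0, 26.0, 30.5, 36.0, 44.6])

# min / 25% / 50% (median) / 75% / max mirror the boxplot's box and whiskers
print(mpg_vals.describe())
```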
Kids grow taller with age — that’s a relationship between two variables: height and age.
The bigger the house, the higher the price — that’s another kind of relationship between the variables: floor size and price.
Relational plots display how variables are associated with each other.
Visualization can examine and plot the association between variables. For continuous variables, you can pretty much visualize anything in a scatterplot and visually determine the relationship between them.
# scatterplot
sns.scatterplot(x="sepal_length", y="petal_length", data=iris)
You can see from the scatterplot below that in Iris flowers high petal length is positively associated with sepal length.
If you are having trouble visually figuring out the relationship, there’s another option — drawing a trend line.
# trend line
sns.lmplot(x="sepal_length", y="petal_length", data=iris, scatter=False)
The trend line below is applied to the same data as the previous scatter plot. Which one you want to use depends on your needs, but there are tradeoffs. In the scatterplot, you see the variability of every single data point in a two-dimensional space, but it is not easy to track the trend.
In a trend plot — created based on a linear model — you can identify the overall trend in data along with confidence intervals, but you lose individual data points (but of course you can display points in trend plots as well).
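If you want both at once, seaborn's `regplot` draws the individual points and the fitted trend line on the same axes. A sketch on synthetic data (generated here so it runs without downloading a dataset; the linear relationship is made up for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import seaborn as sns

rng = np.random.default_rng(0)
x = rng.uniform(4, 8, 50)
df = pd.DataFrame({
    "sepal_length": x,
    # synthetic linear trend plus noise, mimicking the iris relationship
    "petal_length": 1.8 * x - 5 + rng.normal(0, 0.5, 50),
})

# regplot = scatterplot + fitted regression line in one call
ax = sns.regplot(x="sepal_length", y="petal_length", data=df)
```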
The comparative analysis is key to many business decisions. Which product is selling better — product A or product B? Is there a significant difference between the two drugs in their effectiveness? These kinds of questions are asked all the time all across the decision-making landscape.
Bar charts are some of the simplest and old school yet effective visualization techniques to communicate comparative analysis. The figure below presents the total restaurant bills paid by customers on different days of the week. So this visual compares total bills between weekdays.
# bar chart
sns.barplot(x="day", y="total_bill", data=tips)
Line charts are another way to compare data. It’s commonly used in time series analysis where the temporal evolution of two sets of observations is compared.
# line chart / time series
sns.lineplot(x="timepoint", y="signal", hue="event", data=fmri)
The final pillar of data visualization is composition. The purpose of composition charts is to show the composition of one or more variables in absolute and relative terms (e.g. percentage).
Composition charts are a bit complicated to create in Seaborn; they are not one-liners like the others. So bear with me as I give two examples below.
Stacked charts show the composition of a variable with different categories in a single bar. For example, a bar showing total car crashes in the state of Alabama, which is also disaggregated by crashes that involved alcohol (see the figure below).
# Initialize the matplotlib figure
f, ax = plt.subplots(figsize=(10, 15))

# Plot the total crashes
sns.set_color_codes("pastel")
sns.barplot(x="total", y="abbrev", data=car_crashes, label="Total", color="b")

# Plot the crashes where alcohol was involved
sns.set_color_codes("muted")
sns.barplot(x="alcohol", y="abbrev", data=car_crashes, label="Alcohol-involved", color="b")

# Add a legend and informative axis label
ax.legend(ncol=2, loc="upper right", frameon=True)
Stacked area charts display the changes in values of different groups of observations in the same plot area. In this case, the values are “stacked” on top of each other.
Seaborn does not offer a direct one-liner for stacked area charts, so we are going to create a simple dataset and display a “Seaborn-style” graph using the matplotlib library.
# stacked area
import matplotlib.pyplot as plt

# Data
x = range(1, 6)
y = [[2, 4, 5, 6, 8], [2, 3, 6, 7, 10], [2, 6, 7, 8, 5]]

# Plot
plt.stackplot(x, y, labels=['X', 'Y', 'Z'])
plt.legend(loc='upper left')
plt.show()
The purpose of this article was to demonstrate some visualization techniques using the Python library seaborn. I wanted to cover two things — introduce different kinds of visualizations you will typically encounter in data science and second, some code snippets using Seaborn’s built-in datasets. Hope that was useful. Shout out if you want to see more articles like this.
|
[
{
"code": null,
"e": 355,
"s": 172,
"text": "As I said before, the purpose of data visualization is to communicate hidden information in the data. Visualization fundamentally serves 4 purposes in data science, in understanding:"
},
{
"code": null,
"e": 384,
"s": 355,
"text": "the distribution of features"
},
{
"code": null,
"e": 428,
"s": 384,
"text": "relationships between two or more variables"
},
{
"code": null,
"e": 458,
"s": 428,
"text": "comparisons between variables"
},
{
"code": null,
"e": 482,
"s": 458,
"text": "the composition of data"
},
{
"code": null,
"e": 715,
"s": 482,
"text": "Various ways you can identify and communicate that hidden information and there are tools to do exactly that. Regardless of which programming language you choose, each language has libraries to handle data visualization efficiently."
},
{
"code": null,
"e": 931,
"s": 715,
"text": "If you are in Python you have several options, but I’ve found seaborn to be best in town. Not because seaborn can do something that other libraries cann’t, but because of its simplicity and intuitive code structure."
},
{
"code": null,
"e": 1286,
"s": 931,
"text": "So the purpose of this article is to demonstrate some use cases of Seaborn with code snippets. My learning/teaching philosophy is to start with a problem and then find tools that can solve it. So I am not going to show everything under the sun, rather give an intuition on how Seaborn works and how to solve the 4 categories of problems I just mentioned."
},
{
"code": null,
"e": 1307,
"s": 1286,
"text": "Let’s dive right in."
},
{
"code": null,
"e": 1574,
"s": 1307,
"text": "seaborn comes with 17 built-in datasets. That means you don’t have to spend a whole lot of your time finding the right dataset and cleaning it up to make Seaborn-ready; rather you will focus on the core features of Seaborn visualization techniques to solve problems."
},
{
"code": null,
"e": 1616,
"s": 1574,
"text": "First, let’s take a look at the datasets."
},
{
"code": null,
"e": 1674,
"s": 1616,
"text": "# get names of the builtin datasetsns.get_dataset_names()"
},
{
"code": null,
"e": 1875,
"s": 1674,
"text": "If you are in data science for some time you’ll recognize many of them just by their names (titanic? iris?). Otherwise, just load the dataset and call the head() function to take a peek into the data."
},
{
"code": null,
"e": 1914,
"s": 1875,
"text": "df = sns.load_dataset(\"iris\")df.head()"
},
{
"code": null,
"e": 2428,
"s": 1914,
"text": "I’m going to use a few of those datasets to show different kinds of visualization techniques, so I’m loading them all in. If you know what each dataset is about that’s great, otherwise don’t bother. Just know that each dataset contains variables of different kinds; some of them are discrete variables (e.g. 10, 20 , 30 miles/gallon), some are continuous (e.g. 5.2, 3.6 petal length) and some are categorical (e.g. days of the week). Knowing just different variable types and when to use them should work for now."
},
{
"code": null,
"e": 2594,
"s": 2428,
"text": "# loading additional datasetstips = sns.load_dataset(\"tips\")mpg = sns.load_dataset(\"mpg\")fmri = sns.load_dataset(\"fmri\")car_crashes = sns.load_dataset(\"car_crashes\")"
},
{
"code": null,
"e": 2825,
"s": 2594,
"text": "Our first kind of problem is to understand the distribution of data. By distribution I mean few things: frequency distribution, probability distribution or just spread of the data with respect to central values (mean, median etc)."
},
{
"code": null,
"e": 2835,
"s": 2825,
"text": "Histogram"
},
{
"code": null,
"e": 3033,
"s": 2835,
"text": "Let’s say we have a dataset containing 1000 cars and their fuel efficiency in miles per gallon (mpg). A histogram will tell us the frequency distribution of the number of cars in each mpg category."
},
{
"code": null,
"e": 3086,
"s": 3033,
"text": "# Bar histogramsns.distplot(mpg[\"mpg\"], kde = False)"
},
{
"code": null,
"e": 3238,
"s": 3086,
"text": "There is a more sophisticated version of this frequency distribution used in statistics called the probability distribution. You can plot that as well."
},
{
"code": null,
"e": 3312,
"s": 3238,
"text": "# Line histogram/kernel density plotsns.kdeplot(mpg[\"mpg\"], shade = True)"
},
{
"code": null,
"e": 3461,
"s": 3312,
"text": "There is another kind of distribution — better known as spread— which shows how a variable is dispersed/spread with respect to its central tendency."
},
{
"code": null,
"e": 3625,
"s": 3461,
"text": "Boxplot is best known to demonstrate the dispersion of a variable with values such as the median, the minimum, the maximum and the outliers — all in the same plot."
},
{
"code": null,
"e": 3662,
"s": 3625,
"text": "# boxplotsns.boxplot(y = mpg[\"mpg\"])"
},
{
"code": null,
"e": 3751,
"s": 3662,
"text": "Kids grow taller with age — that’s a relationship between two variables: height and age."
},
{
"code": null,
"e": 3877,
"s": 3751,
"text": "The bigger the house, the higher the price — that’s another kind of relationship between the variables: floor size and price."
},
{
"code": null,
"e": 3948,
"s": 3877,
"text": "Relational plots display how variables are associated with each other."
},
{
"code": null,
"e": 4154,
"s": 3948,
"text": "Visualization can examine and plot the association between variables. For continuous variables, you can pretty much visualize anything in a scatterplot and visually determine the relationship between them."
},
{
"code": null,
"e": 4236,
"s": 4154,
"text": "# scatterplotsns.scatterplot(x = \"sepal_length\", y = \"petal_length\", data = iris)"
},
{
"code": null,
"e": 4358,
"s": 4236,
"text": "You can see from the scatterplot below that in Iris flowers high petal length is positively associated with sepal length."
},
{
"code": null,
"e": 4471,
"s": 4358,
"text": "If you are having trouble visually figuring out the relationship, there’s another option — drawing a trend line."
},
{
"code": null,
"e": 4564,
"s": 4471,
"text": "# trend linesns.lmplot(x = \"sepal_length\", y = \"petal_length\", data = iris, scatter = False)"
},
{
"code": null,
"e": 4860,
"s": 4564,
"text": "The trend line below is applied to the same data as the previous one in the scatter plot. Which one you want to use depends on your needs, but there are tradeoffs. In the scatterplot, you see the variability of every single data point in a two-dimensional space, but not easy to track the trend."
},
{
"code": null,
"e": 5087,
"s": 4860,
"text": "In a trend plot — created based on a linear model — you can identify the overall trend in data along with confidence intervals, but you lose individual data points (but of course you can display points in trend plots as well)."
},
{
"code": null,
"e": 5375,
"s": 5087,
"text": "The comparative analysis is key to many business decisions. Which product is selling better — product A or product B? Is there a significant difference between the two drugs in their effectiveness? These kinds of questions are asked all the time all across the decision-making landscape."
},
{
"code": null,
"e": 5658,
"s": 5375,
"text": "Bar charts are some of the simplest and old school yet effective visualization techniques to communicate comparative analysis. The figure below presents the total restaurant bills paid by customers on different days of the week. So this visual compares total bills between weekdays."
},
{
"code": null,
"e": 5723,
"s": 5658,
"text": "# bar chartsns.barplot(x = \"day\", y = \"total_bill\", data = tips)"
},
{
"code": null,
"e": 5881,
"s": 5723,
"text": "Line charts are another way to compare data. It’s commonly used in time series analysis where the temporal evolution of two sets of observations is compared."
},
{
"code": null,
"e": 5969,
"s": 5881,
"text": "# line chart/time seriessns.lineplot(x=\"timepoint\", y=\"signal\", hue=\"event\", data=fmri)"
},
{
"code": null,
"e": 6160,
"s": 5969,
"text": "The final pillar of data visualization is composition. The purpose of composition charts is to show the composition of one or more variables in absolute and relative terms (e.g. percentage)."
},
{
"code": null,
"e": 6312,
"s": 6160,
"text": "Composition charts are a bit complicated to create in Seaborn, it’s not a one-liner code like the others. So bear with me as I give two examples below."
},
{
"code": null,
"e": 6560,
"s": 6312,
"text": "Stacked charts show the composition of a variable with different categories in a single bar. For example, a bar showing total car crashes in the state of Alabama, which is also disaggregated by crashes that involved alcohol (see the figure below)."
},
{
"code": null,
"e": 7010,
"s": 6560,
"text": "# Initialize the matplotlib figuref, ax = plt.subplots(figsize=(10, 15))# Plot the total crashessns.set_color_codes(\"pastel\")sns.barplot(x=\"total\", y=\"abbrev\", data=crashes,label=\"Total\", color=\"b\")# Plot the crashes where alcohol was involvedsns.set_color_codes(\"muted\")sns.barplot(x=\"alcohol\", y=\"abbrev\", data=crashes, label=\"Alcohol-involved\", color=\"b\")# Add a legend and informative axis labelax.legend(ncol=2, loc=\"upper right\", frameon=True)"
},
{
"code": null,
"e": 7179,
"s": 7010,
"text": "Stacked area charts display the changes in values of different groups of observation in the same plot area. In this case, the values are “stacked” on top of each other."
},
{
"code": null,
"e": 7375,
"s": 7179,
"text": "Seaborn’s built-in datasets and codes are not quite useful to display stacked area charts, so we are going to create a simple dataset and display a “Seaborn-style” graph using matplotlib library."
},
{
"code": null,
"e": 7588,
"s": 7375,
"text": "# stacked areaimport numpy as npimport matplotlib.pytplot as plt# Datax = range(1,6)y = [[2,4,5,6,8], [2,3,6,7,10], [2,6,7,8,5]]# Plotplt.stackplot(x,y, labels=['X','Y','Z'])plt.legend(loc='upper left')plt.show()"
}
] |
PHP mysqli_begin_transaction() Function
|
The mysqli_begin_transaction() is used to start a new transaction.
mysqli_begin_transaction($con, [$flags, $name]);
con(Mandatory)
This is an object representing a connection to MySQL Server.
flags(Optional)
A constant which can be one of the following:
MYSQLI_TRANS_START_READ_ONLY
MYSQLI_TRANS_START_READ_WRITE
MYSQLI_TRANS_START_WITH_CONSISTENT_SNAPSHOT
name(Optional)
This is a string value representing the name of the save point of the transaction.
The PHP mysqli_begin_transaction() function returns a boolean value which is, true if the operation is successful and, false if not.
This function was first introduced in PHP Version 5 and works in all the later versions.
Following example demonstrates the usage of the mysqli_begin_transaction() function (in procedural style) −
<?php
//Creating a connection
$con = mysqli_connect("localhost", "root", "password", "mydb");
//Beginning the transaction
mysqli_begin_transaction($con, MYSQLI_TRANS_START_READ_ONLY);
print("Transaction Started......\n");
//Creating a table
mysqli_query($con, "CREATE TABLE Test(Name VARCHAR(255), AGE INT)");
print("Table Created......\n");
//Inserting values
mysqli_query($con, "INSERT INTO Test values('Raju', 25),('Rahman', 30),('Sarmista', 27)");
print("Records Inserted......\n");
//Committing the transaction
mysqli_commit($con);
print("Transaction Saved......\n");
//Closing the connection
mysqli_close($con);
?>
This will produce following result −
Transaction Started......
Table Created......
Records Inserted......
Transaction Saved......
The syntax of this method in object-oriented style is $con->begin_transaction(). Following is an example of this function in object-oriented mode −
<?php
//Creating a connection
$con = new mysqli("localhost", "root", "password", "mydb");
//Beginning the transaction
$con->begin_transaction(MYSQLI_TRANS_START_READ_ONLY);
print("Transaction Started......\n");
//Creating a table
$con->query("CREATE TABLE Test(Name VARCHAR(255), AGE INT)");
print("Table Created......\n");
//Inserting values
$con->query("insert into Test values('Raju', 25),('Rahman', 30),('Sarmista', 27)");
print("Records Inserted......\n");
//Committing the transaction
$con->commit();
print("Transaction Saved......\n");
//Closing the connection
$con->close();
?>
This will produce following result −
Transaction Started......
Table Created......
Records Inserted......
Transaction Saved......
|
[
{
"code": null,
"e": 2824,
"s": 2757,
"text": "The mysqli_begin_transaction() is used to start a new transaction."
},
{
"code": null,
"e": 2874,
"s": 2824,
"text": "mysqli_begin_transaction($con, [$flags, $name]);\n"
},
{
"code": null,
"e": 2889,
"s": 2874,
"text": "con(Mandatory)"
},
{
"code": null,
"e": 2950,
"s": 2889,
"text": "This is an object representing a connection to MySQL Server."
},
{
"code": null,
"e": 2966,
"s": 2950,
"text": "flags(Optional)"
},
{
"code": null,
"e": 3012,
"s": 2966,
"text": "A constant which can be on of the following :"
},
{
"code": null,
"e": 3041,
"s": 3012,
"text": "MYSQLI_TRANS_START_READ_ONLY"
},
{
"code": null,
"e": 3070,
"s": 3041,
"text": "MYSQLI_TRANS_START_READ_ONLY"
},
{
"code": null,
"e": 3100,
"s": 3070,
"text": "MYSQLI_TRANS_START_READ_WRITE"
},
{
"code": null,
"e": 3130,
"s": 3100,
"text": "MYSQLI_TRANS_START_READ_WRITE"
},
{
"code": null,
"e": 3174,
"s": 3130,
"text": "MYSQLI_TRANS_START_WITH_CONSISTENT_SNAPSHOT"
},
{
"code": null,
"e": 3218,
"s": 3174,
"text": "MYSQLI_TRANS_START_WITH_CONSISTENT_SNAPSHOT"
},
{
"code": null,
"e": 3233,
"s": 3218,
"text": "name(Optional)"
},
{
"code": null,
"e": 3314,
"s": 3233,
"text": "This is string value representing the name of the save point of the transaction."
},
{
"code": null,
"e": 3447,
"s": 3314,
"text": "The PHP mysqli_begin_transaction() function returns a boolean value which is, true if the operation is successful and, false if not."
},
{
"code": null,
"e": 3536,
"s": 3447,
"text": "This function was first introduced in PHP Version 5 and works in all the later versions."
},
{
"code": null,
"e": 3644,
"s": 3536,
"text": "Following example demonstrates the usage of the mysqli_begin_transaction() function (in procedural style) −"
},
{
"code": null,
"e": 4318,
"s": 3644,
"text": "<?php\n //Creating a connection\n $con = mysqli_connect(\"localhost\", \"root\", \"password\", \"mydb\");\n\n //Beginning the transaction\n mysqli_begin_transaction($con, MYSQLI_TRANS_START_READ_ONLY);\n print(\"Transaction Started......\\n\");\n\n //Creating a table\n mysqli_query($con, \"CREATE TABLE Test(Name VARCHAR(255), AGE INT)\");\n print(\"Table Created......\\n\");\n\n //Inserting values\n mysqli_query($con, \"INSERT INTO Test values('Raju', 25),('Rahman', 30),('Sarmista', 27)\");\n print(\"Records Inserted......\\n\");\n\n //Committing the transaction\n mysqli_commit($con);\n print(\"Transaction Saved......\\n\");\n\n //Closing the connection\n mysqli_close($con);\n?>"
},
{
"code": null,
"e": 4355,
"s": 4318,
"text": "This will produce following result −"
},
{
"code": null,
"e": 4449,
"s": 4355,
"text": "Transaction Started......\nTable Created......\nRecords Inserted......\nTransaction Saved......\n"
},
{
"code": null,
"e": 4603,
"s": 4449,
"text": "The syntax of this method in object oriented style is $con->begin_transaction(). Following is an example of this function in object oriented mode $minus;"
},
{
"code": null,
"e": 5194,
"s": 4603,
"text": "//Creating a connection\n$con = new mysqli(\"localhost\", \"root\", \"password\", \"mydb\");\n\n//Beginning the transaction\n$con->begin_transaction($con, MYSQLI_TRANS_START_READ_ONLY);\nprint(\"Transaction Started......\\n\");\n\n//Creating a table\n$con->query(\"CREATE TABLE Test(Name VARCHAR(255), AGE INT)\");\nprint(\"Table Created......\\n\");\n\n//Inserting values\n$con->query(\"insert into Test values('Raju', 25),('Rahman', 30),('Sarmista', 27)\");\nprint(\"Records Inserted......\\n\");\n\n//Committing the transaction\n$con->commit();\nprint(\"Transaction Saved......\\n\");\n\n//Closing the connection\n$con->close();\n?>"
},
{
"code": null,
"e": 5231,
"s": 5194,
"text": "This will produce following result −"
},
{
"code": null,
"e": 5325,
"s": 5231,
"text": "Transaction Started......\nTable Created......\nRecords Inserted......\nTransaction Saved......\n"
},
{
"code": null,
"e": 5358,
"s": 5325,
"text": "\n 45 Lectures \n 9 hours \n"
},
{
"code": null,
"e": 5374,
"s": 5358,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 5407,
"s": 5374,
"text": "\n 34 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 5418,
"s": 5407,
"text": " Syed Raza"
},
{
"code": null,
"e": 5453,
"s": 5418,
"text": "\n 84 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 5470,
"s": 5453,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 5503,
"s": 5470,
"text": "\n 17 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 5518,
"s": 5503,
"text": " Nivedita Jain"
},
{
"code": null,
"e": 5553,
"s": 5518,
"text": "\n 100 Lectures \n 34 hours \n"
},
{
"code": null,
"e": 5565,
"s": 5553,
"text": " Azaz Patel"
},
{
"code": null,
"e": 5600,
"s": 5565,
"text": "\n 43 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 5628,
"s": 5600,
"text": " Vijay Kumar Parvatha Reddy"
},
{
"code": null,
"e": 5635,
"s": 5628,
"text": " Print"
},
{
"code": null,
"e": 5646,
"s": 5635,
"text": " Add Notes"
}
] |
Python - Convert excel serial date to datetime - GeeksforGeeks
|
14 Sep, 2021
This article will discuss the conversion of an excel serial date to DateTime in Python.
The Excel “serial date” format is actually the number of days since January 0, 1900 (Excel’s convention for day zero, with serial number 1 being January 1st, 1900). For example, the excel serial date number 43831 represents January 1st, 2020, and after converting 43831 to a DateTime becomes 2020-01-01.
By using xlrd.xldate_as_datetime() function this can be achieved. The xlrd.xldate_as_datetime() function is used to convert excel date/time number to datetime.datetime object.
Syntax: xldate_as_datetime (xldate, datemode)
Parameters: This function accepts two parameters that are illustrated below:
xldate: This is the specified excel date that will be converted into datetime.
datemode: This is the specified datemode in which conversion will be performed.
Return values: This function returns the datetime.datetime object.
First, call the xlrd.xldate_as_datetime(date, 0) function to convert the specified Excel date to a datetime.datetime object. Then, call the datetime.datetime.date() function on the returned datetime.datetime object to return the date as a datetime.date object. Lastly, call the datetime.date.isoformat() function to convert the returned datetime.date object to an ISO format date string.
Let’s see some examples to illustrate the above algorithm:
Example: Python program to convert excel serial date to string date
Python3
# Python3 code to illustrate the conversion
# of excel serial date to datetime
# Importing xlrd module
import xlrd
# Initializing an excel serial date
xl_date = 43831
# Calling the xldate_as_datetime() function to
# convert the specified excel serial date into
# datetime.datetime object
datetime_date = xlrd.xldate_as_datetime(xl_date, 0)
# Calling the datetime_date.date() function to convert
# the above returned datetime.datetime object into
# datetime.date object
date_object = datetime_date.date()
# Calling the isoformat() function to convert the
# above returned datetime.date object into the
# ISO format date string
string_date = date_object.isoformat()
# Getting the converted date string as output
print(string_date)
# Getting the type of returned date format
print(type(string_date))
Output:
2020-01-01
<class 'str'>
Example 2: Python program to convert excel serial number to DateTime
Python3
# Python3 code to illustrate the conversion
# of excel serial date to datetime
# Importing xlrd module
import xlrd
# Initializing an excel serial date
xl_date = 43831
# Calling the xldate_as_datetime() function to
# convert the specified excel serial date into
# datetime.datetime object
datetime_date = xlrd.xldate_as_datetime(xl_date, 0)
# Calling the datetime_date.date() function to convert
# the above returned datetime.datetime object into
# datetime.date object
date_object = datetime_date.date()
# Getting the converted date object as output
print(date_object)
# Getting the type of returned date format
print(type(date_object))
Output:
2020-01-01
<class 'datetime.date'>
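If you already use pandas, the same conversion can be done without xlrd. This is an alternative sketch (not part of the original article) using `pd.to_datetime` with `origin="1899-12-30"`, the effective day-zero that reproduces Excel's datemode-0 serial numbering (it absorbs Excel's phantom February 29, 1900):

```python
import pandas as pd

# The same excel serial date as in the examples above
xl_date = 43831

# origin="1899-12-30" makes pandas count days the way Excel does
date_object = pd.to_datetime(xl_date, unit="D", origin="1899-12-30").date()
print(date_object)  # 2020-01-01, matching the xlrd result
```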
Picked
Python-datetime
Python-excel
Python
14 Sep, 2021

This article will discuss the conversion of an Excel serial date to a DateTime in Python.

The Excel "serial date" format is the number of days since the epoch 1900-01-00; that is, serial number 1 corresponds to January 1st, 1900. For example, the Excel serial date number 43831 represents January 1st, 2020, and after converting 43831 to a DateTime it becomes 2020-01-01.

This can be achieved with the xlrd.xldate_as_datetime() function, which converts an Excel date/time number into a datetime.datetime object.

Syntax: xldate_as_datetime(xldate, datemode)

Parameters: This function accepts two parameters:

xldate: the Excel serial date that will be converted into a datetime.
datemode: the date mode in which the conversion is performed (0 for the 1900-based date system, 1 for the 1904-based one).

Return value: This function returns a datetime.datetime object.

First, call xlrd.xldate_as_datetime(date, 0) to convert the Excel date to a datetime.datetime object. Then, call the .date() method on the returned object to get a datetime.date object. Lastly, call its .isoformat() method to convert the date into an ISO-format date string.

Let's see some examples illustrating the above algorithm.

Example 1: Python program to convert an Excel serial date to a string date

Python3

# Python3 code to illustrate the conversion
# of an Excel serial date to a datetime

# Importing the xlrd module
import xlrd

# Initializing an Excel serial date
xl_date = 43831

# Calling the xldate_as_datetime() function to
# convert the specified Excel serial date into
# a datetime.datetime object
datetime_date = xlrd.xldate_as_datetime(xl_date, 0)

# Calling the date() method to convert the
# datetime.datetime object into a datetime.date object
date_object = datetime_date.date()

# Calling the isoformat() method to convert the
# datetime.date object into an ISO-format date string
string_date = date_object.isoformat()

# Getting the converted date string as output
print(string_date)

# Getting the type of the returned value
print(type(string_date))

Output:

2020-01-01
<class 'str'>

Example 2: Python program to convert an Excel serial number to a DateTime

Python3

# Python3 code to illustrate the conversion
# of an Excel serial date to a datetime

import xlrd

xl_date = 43831

# Convert the specified Excel serial date into
# a datetime.datetime object
datetime_date = xlrd.xldate_as_datetime(xl_date, 0)

# Extract the datetime.date part
date_object = datetime_date.date()

print(date_object)
print(type(date_object))

Output:

2020-01-01
<class 'datetime.date'>
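For completeness, the same conversion can be sketched with only the standard library (no xlrd), assuming the common 1900-based date system. The 1899-12-30 epoch below mirrors how xlrd handles Excel's fictitious 1900-02-29 for serials of 60 and above:

```python
from datetime import datetime, timedelta

def excel_serial_to_date(serial):
    # For the 1900 date system, serials >= 60 are offset from
    # 1899-12-30 (Excel's nonexistent 1900-02-29 shifts the epoch);
    # serials below 60 would need a 1899-12-31 epoch instead.
    epoch = datetime(1899, 12, 30)
    return (epoch + timedelta(days=serial)).date()

# The article's example serial, 43831, is January 1st, 2020
print(excel_serial_to_date(43831))  # 2020-01-01
```

This avoids a third-party dependency, but note that xlrd also handles time-of-day fractions and the 1904 date mode, which this sketch does not.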
file - Unix, Linux Command
|
File tests each argument in an attempt to classify it.
There are three sets of tests, performed in this order:
filesystem tests, magic number tests, and language tests.
The
first test that succeeds causes the file type to be printed.
The type printed will usually contain one of the words
text (the file contains only
printing characters and a few common control
characters and is probably safe to read on an
ASCII terminal),
executable (the file contains the result of compiling a program
in a form understandable to some UNIX kernel or another),
or
data meaning anything else (data is usually ‘binary’ or non-printable).
Exceptions are well-known file formats (core files, tar archives)
that are known to contain binary data.
When modifying the file
/usr/share/file/magic or the program itself,
preserve these keywords. People depend on knowing that all the readable files in a directory
have the word ‘‘text’’ printed.
Don’t do as Berkeley did and change ‘‘shell commands text’’
to ‘‘shell script’’.
Note that the file
/usr/share/file/magic is built mechanically from a large number of small files in
the subdirectory
Magdir in the source distribution of this program.
The filesystem tests are based on examining the return from a
stat(2)
system call.
The program checks to see if the file is empty,
or if it’s some sort of special file.
Any known file types appropriate to the system you are running on
(sockets, symbolic links, or named pipes (FIFOs) on those systems that
implement them)
are intuited if they are defined in
the system header file
<sys/stat.h>.
The magic number tests are used to check for files with data in
particular fixed formats.
The canonical example of this is a binary executable (compiled program)
a.out file, whose format is defined in
a.out.h and possibly
exec.h in the standard include directory.
These files have a ‘magic number’ stored in a particular place
near the beginning of the file that tells the UNIX operating system
that the file is a binary executable, and which of several types thereof.
The concept of ‘magic number’ has been applied by extension to data files.
Any file with some invariant identifier at a small fixed
offset into the file can usually be described in this way.
The information identifying these files is read from the compiled
magic file
/usr/share/file/magic.mgc, or
/usr/share/file/magic if the compiled file does not exist. In addition,
file will look in
$HOME/.magic.mgc, or
$HOME/.magic for magic entries.
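The magic-number idea is easy to sketch: match fixed byte patterns at fixed offsets near the start of the file. The signatures below are well known, but the table format here is invented for illustration and is far simpler than the real magic file:

```python
# A toy magic-number test in the spirit of file(1):
# (offset, pattern, description) entries, tried in order.
MAGIC = [
    (0, b"\x7fELF", "ELF executable"),
    (0, b"\x1f\x8b", "gzip compressed data"),
    (0, b"%PDF-", "PDF document"),
]

def classify(header: bytes) -> str:
    # Return the description of the first matching entry,
    # falling back to "data" as file(1) does.
    for offset, pattern, name in MAGIC:
        if header[offset:offset + len(pattern)] == pattern:
            return name
    return "data"

print(classify(b"\x1f\x8b\x08\x00"))  # gzip compressed data
```

The real magic file supports many value types, masks (the '&' operator mentioned below), and nested continuation lines, but the core lookup is this pattern-at-offset comparison.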
If a file does not match any of the entries in the magic file,
it is examined to see if it seems to be a text file.
ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets
(such as those used on Macintosh and IBM PC systems),
UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC
character sets can be distinguished by the different
ranges and sequences of bytes that constitute printable text
in each set.
If a file passes any of these tests, its character set is reported.
ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified
as ‘‘text’’ because they will be mostly readable on nearly any terminal;
UTF-16 and EBCDIC are only ‘‘character data’’ because, while
they contain text, it is text that will require translation
before it can be read.
In addition,
file will attempt to determine other characteristics of text-type files.
If the lines of a file are terminated by CR, CRLF, or NEL, instead
of the Unix-standard LF, this will be reported.
Files that contain embedded escape sequences or overstriking
will also be identified.
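The ASCII branch of this test can be sketched in a few lines: a buffer counts as ASCII text if it contains only printing characters plus a few common control characters. The exact set of allowed control characters is an assumption here; file(1)'s real character-set tables are more elaborate:

```python
def looks_like_ascii_text(data: bytes) -> bool:
    # Printable ASCII (0x20-0x7e) plus BEL, BS, TAB, LF, VT, FF, CR, ESC.
    allowed_ctrl = {7, 8, 9, 10, 11, 12, 13, 27}
    return all(32 <= b < 127 or b in allowed_ctrl for b in data)

print(looks_like_ascii_text(b"hello, world\r\n"))  # True
print(looks_like_ascii_text(b"\x7fELF\x01\x01"))   # False
```

Analogous byte-range checks, with different allowed ranges and sequences, distinguish ISO-8859-x, UTF-8, UTF-16, and EBCDIC.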
Once
file has determined the character set used in a text-type file,
it will
attempt to determine in what language the file is written.
The language tests look for particular strings (cf.
names.h) that can appear anywhere in the first few blocks of a file.
For example, the keyword
.br indicates that the file is most likely a
troff(1)
input file, just as the keyword
struct indicates a C program.
These tests are less reliable than the previous
two groups, so they are performed last.
The language test routines also test for some miscellany
(such as
tar(1)
archives).
Any file that cannot be identified as having been written
in any of the character sets listed above is simply said to be ‘‘data’’.
The one significant difference
between this version and System V
is that this version treats any white space
as a delimiter, so that spaces in pattern strings must be escaped.
For example,
>10 string language impress (imPRESS data)
in an existing magic file would have to be changed to
>10 string language\ impress (imPRESS data)
In addition, in this version, if a pattern string contains a backslash,
it must be escaped.
For example
0 string \begindata Andrew Toolkit document
in an existing magic file would have to be changed to
0 string \\begindata Andrew Toolkit document
SunOS releases 3.2 and later from Sun Microsystems include a
file(1)
command derived from the System V one, but with some extensions.
My version differs from Sun’s only in minor ways.
It includes the extension of the ‘&’ operator, used as,
for example,
>16 long&0x7fffffff >0 not stripped
The order of entries in the magic file is significant.
Depending on what system you are using, the order that
they are put together may be incorrect.
If your old
file command uses a magic file,
keep the old magic file around for comparison purposes
(rename it to
/usr/share/file/magic.orig).
$ file file.c file /dev/{wd0a,hda}
file.c: C program text
file: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs), stripped
/dev/wd0a: block special (0/0)
/dev/hda: block special (3/0)
$ file -s /dev/wd0{b,d}
/dev/wd0b: data
/dev/wd0d: x86 boot sector
$ file -s /dev/hda{,1,2,3,4,5,6,7,8,9,10}
/dev/hda: x86 boot sector
/dev/hda1: Linux/i386 ext2 filesystem
/dev/hda2: x86 boot sector
/dev/hda3: x86 boot sector, extended partition table
/dev/hda4: Linux/i386 ext2 filesystem
/dev/hda5: Linux/i386 swap file
/dev/hda6: Linux/i386 swap file
/dev/hda7: Linux/i386 swap file
/dev/hda8: Linux/i386 swap file
/dev/hda9: empty
/dev/hda10: empty
$ file -i file.c file /dev/{wd0a,hda}
file.c: text/x-c
file: application/x-executable, dynamically linked (uses shared libs),
not stripped
/dev/hda: application/x-not-regular-file
/dev/wd0a: application/x-not-regular-file
This program, based on the System V version,
was written by Ian Darwin <ian@darwinsys.com>
without looking at anybody else’s source code.
John Gilmore revised the code extensively, making it better than
the first version.
Geoff Collyer found several inadequacies
and provided some magic file entries.
Contributions by the ‘&’ operator by Rob McMahon, cudcv@warwick.ac.uk, 1989.
Guy Harris, guy@netapp.com, made many changes from 1993 to the present.
Primary development and maintenance from 1990 to the present by
Christos Zoulas (christos@astron.com).
Altered by Chris Lowth, chris@lowth.com, 2000:
Handle the ‘‘-i’’ option to output mime type strings and using an alternative
magic file and internal logic.
Altered by Eric Fischer (enf@pobox.com), July, 2000,
to identify character codes and attempt to identify the languages
of non-ASCII files.
The list of contributors to the "Magdir" directory (source for the
/usr/share/file/magic file) is too long to include here.
You know who you are; thank you.
The files
tar.h and
is_tar.c were written by John Gilmore from his public-domain
tar program, and are not covered by the above license.
File uses several algorithms that favor speed over accuracy,
thus it can be misled about the contents of
text
files.
The support for
text
files (primarily for programming languages)
is simplistic, inefficient and requires recompilation to update.
There should be an ‘‘else’’ clause to follow a series of continuation lines.
The magic file and keywords should have regular expression support.
Their use of
ASCII TAB as a field delimiter is ugly and makes
it hard to edit the files, but is entrenched.
It might be advisable to allow upper-case letters in keywords
for e.g.,
troff(1)
commands vs man page macros.
Regular expression support would make this easy.
The program doesn’t grok FORTRAN.
It should be able to figure FORTRAN by seeing some keywords which
appear indented at the start of line.
Regular expression support would make this easy.
The list of keywords in
ascmagic probably belongs in the Magic file.
This could be done by using some keyword like ‘*’ for the offset value.
Another optimisation would be to sort
the magic file so that we can just run down all the
tests for the first byte, first word, first long, etc, once we
have fetched it.
Complain about conflicts in the magic file entries.
Make a rule that the magic entries sort based on file offset rather
than position within the magic file?
The program should provide a way to give an estimate
of ‘‘how good’’ a guess is.
We end up removing guesses (e.g. ‘‘From ’’ as first 5 chars of file) because
they are not as good as other guesses (e.g. ‘‘Newsgroups:’’ versus
‘‘Return-Path:’’).
Still, if the others don’t pan out, it should be possible to use the
first guess.
This program is slower than some vendors’ file commands.
The new support for multiple character codes makes it even slower.
This manual page, and particularly this section, is too long.
Django CRUD (Create, Retrieve, Update, Delete) Function Based Views - GeeksforGeeks
|
27 Aug, 2021
Django is a Python-based web framework that lets you quickly create web applications without the installation or dependency problems you normally find with other frameworks. Django is based on the MVT (Model View Template) architecture and revolves around CRUD (Create, Retrieve, Update, Delete) operations. CRUD is best explained as an approach to building a Django web application; in general, CRUD means performing Create, Retrieve, Update, and Delete operations on a table in a database. Let's discuss what each operation actually means:
Create – create or add new entries in a table in the database.
Retrieve – read, retrieve, search, or view existing entries, either as a list (List View) or as a particular entry in detail (Detail View).
Update – update or edit existing entries in a table in the database.
Delete – delete, deactivate, or remove existing entries in a table in the database.
Let's illustrate how to create and use CRUD views with an example. Consider a project named geeksforgeeks having an app named geeks.
Refer to the following articles to check how to create a project and an app in Django.
How to Create a Basic Project using MVT in Django?
How to Create an App in Django ?
After you have a project and an app, let's create a model whose instances we will create through our views. In geeks/models.py,
Python3
# import the standard Django Model
# from the built-in library
from django.db import models

# declare a new model with a name "GeeksModel"
class GeeksModel(models.Model):

    # fields of the model
    title = models.CharField(max_length = 200)
    description = models.TextField()

    # renames the instances of the model
    # with their title name
    def __str__(self):
        return self.title
After creating this model, we need to run two commands to create the database tables for it:

python manage.py makemigrations
python manage.py migrate
Now we will create a Django ModelForm for this model. Refer to this article for more on ModelForms – Django ModelForm – Create form from Models. Create a file forms.py in the geeks folder,
Python3
from django import forms
from .models import GeeksModel

# creating a form
class GeeksForm(forms.ModelForm):

    # create meta class
    class Meta:
        # specify model to be used
        model = GeeksModel

        # specify fields to be used
        fields = [
            "title",
            "description",
        ]
Create View refers to a view (logic) to create an instance of a table in the database. It is just like taking an input from a user and storing it in a specified table. In geeks/views.py,
Python3
from django.shortcuts import render

# relative import of forms
from .models import GeeksModel
from .forms import GeeksForm

def create_view(request):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # add the dictionary during initialization
    form = GeeksForm(request.POST or None)
    if form.is_valid():
        form.save()

    context['form'] = form
    return render(request, "create_view.html", context)
Create a template in templates/create_view.html,
html
<form method="POST" enctype="multipart/form-data"> <!-- Security token --> {% csrf_token %} <!-- Using the formset --> {{ form.as_p }} <input type="submit" value="Submit"></form>
Now visit http://localhost:8000/
To check complete implementation of Function based Create View, visit Create View – Function based Views Django.
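The tutorial does not show the URL configuration that makes the view reachable at http://localhost:8000/. A minimal geeks/urls.py sketch, assuming the project-level urls.py includes this module at the site root (the empty path is an assumption):

```python
from django.urls import path

# import the view defined above
from .views import create_view

urlpatterns = [
    # serve create_view at the app's root URL
    path('', create_view),
]
```

The later detail and delete sections add their own entries to this urlpatterns list.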
Retrieve view is basically divided into two types of views Detail View and List View.
List View refers to a view (logic) to list all or particular instances of a table from the database in a particular order. It is used to display multiple types of data on a single page or view, for example, products on an eCommerce page. In geeks/views.py,
Python3
from django.shortcuts import render

# relative import of forms
from .models import GeeksModel

def list_view(request):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # add the dictionary during initialization
    context["dataset"] = GeeksModel.objects.all()

    return render(request, "list_view.html", context)
Create a template in templates/list_view.html,
html
<div class="main"> {% for data in dataset %}. {{ data.title }}<br/> {{ data.description }}<br/> <hr/> {% endfor %} </div>
Now visit http://localhost:8000/
To check complete implementation of Function based List View, visit List View – Function based Views Django
Detail View refers to a view (logic) to display a particular instance of a table from the database with all the necessary details. It is used to display multiple types of data on a single page or view, for example, profile of a user. In geeks/views.py,
Since the detail view receives an id from the URL, we first need a URL mapping for it. In geeks/urls.py,

Python3
from django.urls import path

# importing views from views.py
from .views import detail_view

urlpatterns = [
    path('<id>', detail_view),
]
Let’s create a view and template for the same. In geeks/views.py,
Python3
from django.shortcuts import render

# relative import of forms
from .models import GeeksModel

# pass id attribute from urls
def detail_view(request, id):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # add the dictionary during initialization
    context["data"] = GeeksModel.objects.get(id = id)

    return render(request, "detail_view.html", context)
Create a template in templates/detail_view.html,
html
<div class="main"> <!-- Specify fields to be displayed --> {{ data.title }}<br/> {{ data.description }}<br/> </div>
Let’s check what is there on http://localhost:8000/1
To check complete implementation of Function based Detail View, visit Detail View – Function based Views Django
Update View refers to a view (logic) to update a particular instance of a table from the database with some extra details. It is used to update entries in the database for example, updating an article at geeksforgeeks. In geeks/views.py,
Python3
from django.shortcuts import (get_object_or_404,
                              render,
                              HttpResponseRedirect)

# relative import of forms
from .models import GeeksModel
from .forms import GeeksForm

# after updating it will redirect to detail_view
def detail_view(request, id):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # add the dictionary during initialization
    context["data"] = GeeksModel.objects.get(id = id)

    return render(request, "detail_view.html", context)

# update view for details
def update_view(request, id):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # fetch the object related to passed id
    obj = get_object_or_404(GeeksModel, id = id)

    # pass the object as instance in form
    form = GeeksForm(request.POST or None, instance = obj)

    # save the data from the form and
    # redirect to detail_view
    if form.is_valid():
        form.save()
        return HttpResponseRedirect("/" + id)

    # add form dictionary to context
    context["form"] = form

    return render(request, "update_view.html", context)
Now create the following templates in the templates folder. In geeks/templates/update_view.html,
html
<div class="main">
    <!-- Create a Form -->
    <form method="POST">
        <!-- Security token by Django -->
        {% csrf_token %}

        <!-- form as paragraph -->
        {{ form.as_p }}
        <input type="submit" value="Update">
    </form>
</div>
In geeks/templates/detail_view.html,
html
<div class="main">
    <!-- Display attributes of instance -->
    {{ data.title }} <br/>
    {{ data.description }}
</div>
Let’s check if everything is working; visit http://localhost:8000/1/update.
To check complete implementation of Function based update View, visit Update View – Function based Views Django
Delete View refers to a view (logic) to delete a particular instance of a table from the database. It is used to delete entries in the database, for example, deleting an article at geeksforgeeks. In geeks/views.py,
Python3
from django.shortcuts import (get_object_or_404, render, HttpResponseRedirect)

from .models import GeeksModel

# delete view for details
def delete_view(request, id):
    # dictionary for initial data with
    # field names as keys
    context = {}

    # fetch the object related to passed id
    obj = get_object_or_404(GeeksModel, id=id)

    if request.method == "POST":
        # delete object
        obj.delete()

        # after deleting redirect to
        # home page
        return HttpResponseRedirect("/")

    return render(request, "delete_view.html", context)
Now create a URL mapping to this view that captures the id. In geeks/urls.py,
Python3
from django.urls import path

# importing views from views.py
from .views import delete_view

urlpatterns = [
    path('<id>/delete', delete_view),
]
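In the pattern '<id>/delete', the angle-bracket segment is a path converter: Django captures that part of the URL and passes it to the view as a keyword argument. Internally, such patterns are compiled to regular expressions; a rough stdlib approximation (not Django's actual implementation) looks like this:

```python
import re

# Rough approximation of how a path pattern like '<id>/delete'
# becomes a capturing regular expression (not Django's real code).
def compile_path(pattern):
    regex = re.sub(r"<(\w+)>", r"(?P<\1>[^/]+)", pattern)
    return re.compile("^" + regex + "$")

route = compile_path("<id>/delete")
match = route.match("2/delete")
print(match.groupdict())   # {'id': '2'} -> delete_view(request, id='2')
```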
The template for the delete view includes a simple form confirming whether the user wants to delete the instance or not. In geeks/templates/delete_view.html,
html
<div class="main">
    <!-- Create a Form -->
    <form method="POST">
        <!-- Security token by Django -->
        {% csrf_token %}
        Are you sure you want to delete this item?
        <input type="submit" value="Yes" />
        <a href="/">Cancel</a>
    </form>
</div>
Everything is ready; now let’s check if it is working. Visit http://localhost:8000/2/delete
To check complete implementation of Function based Delete View, visit Delete View – Function based Views Django
Date toString() method in Java with Examples - GeeksforGeeks
02 Jan, 2019
The toString() method of the Java Date class converts this date object into a String of the form “dow mon dd hh:mm:ss zzz yyyy”. This method overrides toString in class Object. Syntax:
public String toString()
Parameters: The function does not accept any parameter.
Return Value: This method returns a string representation of this date.
Exception: The function does not throw any exception.
The programs below demonstrate the above-mentioned function:
// Java code to demonstrate
// toString() function of Date class
import java.util.Date;
import java.util.Calendar;

public class GfG {

    // main method
    public static void main(String[] args)
    {
        // creating a Calendar object
        Calendar c1 = Calendar.getInstance();

        // set Month
        // MONTH starts with 0 i.e. (0 - Jan)
        c1.set(Calendar.MONTH, 11);

        // set Date
        c1.set(Calendar.DATE, 05);

        // set Year
        c1.set(Calendar.YEAR, 1996);

        // creating a date object with specified time.
        Date dateOne = c1.getTime();
        System.out.println(dateOne.toString());
    }
}
Thu Dec 05 08:21:00 UTC 1996
// Java code to demonstrate
// toString() function of Date class
import java.util.Date;
import java.util.Calendar;

public class GfG {

    // main method
    public static void main(String[] args)
    {
        // creating a Calendar object
        Calendar c1 = Calendar.getInstance();

        // set Month
        // MONTH starts with 0 i.e. (0 - Jan)
        c1.set(Calendar.MONTH, 11);

        // set Date
        c1.set(Calendar.DATE, 15);

        // set Year
        c1.set(Calendar.YEAR, 1999);

        // creating a date object with specified time.
        Date dateOne = c1.getTime();
        System.out.println(dateOne.toString());
    }
}
Wed Dec 15 08:21:02 UTC 1999
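For comparison, the “dow mon dd hh:mm:ss zzz yyyy” layout seen in the outputs above can be reproduced with Python's standard library strftime codes. This only illustrates the format string; it does not use java.util.Date.

```python
from datetime import datetime, timezone

# Java's Calendar.MONTH is zero-based, so MONTH = 11 in the first
# example means December; Python's datetime uses 1-based months.
d = datetime(1996, 12, 5, 8, 21, 0, tzinfo=timezone.utc)

# "%a %b %d %H:%M:%S %Z %Y" mirrors "dow mon dd hh:mm:ss zzz yyyy"
print(d.strftime("%a %b %d %H:%M:%S %Z %Y"))   # Thu Dec 05 08:21:00 UTC 1996
```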
Unsafe Code in C# - GeeksforGeeks
11 Mar, 2019
Unsafe code in C# is the part of the program that runs outside the control of the Common Language Runtime (CLR) of the .NET framework. The CLR is responsible for all of the background tasks that the programmer doesn’t have to worry about, like memory allocation and release, stack management, etc. Using the keyword “unsafe” means telling the compiler that the management of this code will be done by the programmer. Marking code as unsafe introduces stability and security risks: array accesses are not bounds-checked, memory-related errors can occur and may remain undetected, and so on.
A programmer can make the following sub-programs as unsafe:
Code blocks
Methods
Types
Class
Struct
When is unsafe code needed?
When the program needs to implement pointers.
If native methods are used.
Syntax:
unsafe Context_declaration
Example: Here, we are declaring a block of code inside main as unsafe so that we can use pointers.
// C# program to demonstrate the unsafe code
using System;

namespace GFG {

class Program {

    // Main Method
    static void Main(string[] args)
    {
        // Declaring a code block as
        // unsafe to make use of pointers
        unsafe {
            int x = 10;
            int* ptr;
            ptr = &x;

            // displaying value of x using pointer
            Console.WriteLine("Inside the unsafe code block");
            Console.WriteLine("The value of x is " + *ptr);
        } // end unsafe block

        Console.WriteLine("\nOutside the unsafe code block");
    } // end main
}
}
Note: This code will not compile directly, it gives the following error.
Therefore, if you are using Visual Studio, then you need to follow the given steps:
1) Go to the project properties
2) Select the build option and check the “Allow unsafe code” option.
Output:
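For a cross-language comparison, Python has no unsafe keyword, but its standard library ctypes module provides a similar escape hatch into pointer-level code. The sketch below mirrors the C# example; it is illustrative only, and ctypes is unrelated to the CLR.

```python
import ctypes

# Mimic the C# example: take the "address" of an int and read the
# value back through the pointer. ctypes objects wrap raw C memory.
x = ctypes.c_int(10)
ptr = ctypes.pointer(x)              # roughly: int* ptr = &x;
print("The value of x is", ptr.contents.value)   # The value of x is 10

# Writes through the pointer are visible in the original object,
# bypassing the usual Python-level safety.
ptr.contents.value = 42
print(x.value)                       # 42
```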
JavaFX - Colors
To apply colors to an application, JavaFX provides various classes in the javafx.scene.paint package. This package contains an abstract class named Paint, which is the base class of all the classes that are used to apply colors.
Using these classes, you can apply colors in the following patterns −
Uniform − In this pattern, color is applied uniformly throughout the node.

Image Pattern − This lets you fill the region of the node with an image pattern.

Gradient − In this pattern, the color applied to the node varies from one point to the other. It has two kinds of gradients, namely Linear Gradient and Radial Gradient.
All the node classes to which you can apply color, such as Shape and Text (and also Scene), have methods named setFill() and setStroke(). These set the color values of the nodes and of their strokes respectively.
These methods accept an object of type Paint. Therefore, to create any of these color patterns, you need to instantiate the corresponding class and pass the object as a parameter to these methods.
To set uniform color pattern to the nodes, you need to pass an object of the class color to the setFill(), setStroke() methods as follows −
//Setting color to the text
Color color = Color.BEIGE;
text.setFill(color);

//Setting color to the stroke
Color color = Color.DARKSLATEBLUE;
circle.setStroke(color);
In the above code block, we are using the static variables of the Color class to create a color object.
In the same way, you can also use RGB values, the HSB standard of coloring, or web hash codes of colors, as shown below −
//creating color object by passing RGB values
Color c = Color.rgb(0,0,255);
//creating color object by passing HSB values
Color c = Color.hsb(270,1.0,1.0);
//creating color object by passing the hash code for web
Color c = Color.web("0x0000FF",1.0);
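The three construction styles above can be cross-checked with Python's standard library; this is shown only for comparison, since JavaFX performs these conversions internally.

```python
import colorsys

# HSB(270, 1.0, 1.0) -> RGB, mirroring Color.hsb(270, 1.0, 1.0).
# colorsys uses hue in [0, 1], so divide degrees by 360.
r, g, b = colorsys.hsv_to_rgb(270 / 360, 1.0, 1.0)
print(round(r * 255), round(g * 255), round(b * 255))   # 128 0 255

# Web hash code "0x0000FF" -> RGB components, mirroring Color.web(...).
value = int("0x0000FF", 16)
print((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)   # 0 0 255
```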
Following is an example which demonstrates how to apply colors to the nodes in JavaFX. Here, we are creating circle and text nodes and applying colors to them.
Save this code in a file with the name ColorExample.java.
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.scene.shape.Circle;
import javafx.scene.text.Font;
import javafx.scene.text.Text;
public class ColorExample extends Application {
@Override
public void start(Stage stage) {
//Drawing a Circle
Circle circle = new Circle();
//Setting the properties of the circle
circle.setCenterX(300.0f);
circle.setCenterY(180.0f);
circle.setRadius(90.0f);
//Setting color to the circle
circle.setFill(Color.DARKRED);
//Setting the stroke width
circle.setStrokeWidth(3);
//Setting color to the stroke
circle.setStroke(Color.DARKSLATEBLUE);
//Drawing a text
Text text = new Text("This is a colored circle");
//Setting the font of the text
text.setFont(Font.font("Edwardian Script ITC", 50));
//Setting the position of the text
text.setX(155);
text.setY(50);
//Setting color to the text
text.setFill(Color.BEIGE);
text.setStrokeWidth(2);
text.setStroke(Color.DARKSLATEBLUE);
//Creating a Group object
Group root = new Group(circle, text);
//Creating a scene object
Scene scene = new Scene(root, 600, 300);
//Setting title to the Stage
stage.setTitle("Color Example");
//Adding scene to the stage
stage.setScene(scene);
//Displaying the contents of the stage
stage.show();
}
public static void main(String args[]){
launch(args);
}
}
Compile and execute the saved Java file from the command prompt using the following commands.

javac ColorExample.java
java ColorExample
On executing, the above program generates a JavaFX window as follows −
To apply an image pattern to the nodes, instantiate the ImagePattern class and pass its object to the setFill(), setStroke() methods.
The constructor of this class accepts six parameters namely −
Image − The object of the image using which you want to create the pattern.
x and y − Double variables representing the (x, y) coordinates of the origin of the anchor rectangle.
width and height − Double variables representing the width and height of the image that is used to create the pattern.
isProportional − This is a Boolean variable; on setting this property to true, the start and end locations are set to be proportional.
//Creating an image pattern (image is a javafx.scene.image.Image object)
ImagePattern imagePattern = new ImagePattern(image, 20, 20, 40, 40, false);
Following is an example which demonstrates how to apply an image pattern to the nodes in JavaFX. Here, we are creating a circle and a text node and applying an image pattern to them.
Save this code in a file with name ImagePatternExample.java.
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.paint.ImagePattern;
import javafx.stage.Stage;
import javafx.scene.shape.Circle;
import javafx.scene.text.Font;
import javafx.scene.text.Text;

public class ImagePatternExample extends Application {
   @Override
   public void start(Stage stage) {
      //Drawing a Circle
      Circle circle = new Circle();

      //Setting the properties of the circle
      circle.setCenterX(300.0f);
      circle.setCenterY(180.0f);
      circle.setRadius(90.0f);

      //Drawing a text
      Text text = new Text("This is a colored circle");

      //Setting the font of the text
      text.setFont(Font.font("Edwardian Script ITC", 50));

      //Setting the position of the text
      text.setX(155);
      text.setY(50);

      //Setting the image pattern
      String link = "https://encrypted-tbn1.gstatic.com"
         + "/images?q=tbn:ANd9GcRQub4GvEezKMsiIf67U"
         + "rOxSzQuQ9zl5ysnjRn87VOC8tAdgmAJjcwZ2qM";
      Image image = new Image(link);
      ImagePattern imagePattern = new ImagePattern(image, 20, 20, 40, 40, false);

      //Setting the image pattern to the circle and text
      circle.setFill(imagePattern);
      text.setFill(imagePattern);

      //Creating a Group object
      Group root = new Group(circle, text);

      //Creating a scene object
      Scene scene = new Scene(root, 600, 300);

      //Setting title to the Stage
      stage.setTitle("Image pattern Example");

      //Adding scene to the stage
      stage.setScene(scene);

      //Displaying the contents of the stage
      stage.show();
   }
   public static void main(String args[]) {
      launch(args);
   }
}
Compile and execute the saved java file from the command prompt using the following commands.
javac ImagePatternExample.java
java ImagePatternExample
On executing, the above program generates a JavaFX window as follows −
To apply a Linear Gradient Pattern to the nodes, instantiate the LinearGradient class and pass its object to the setFill(), setStroke() methods.
The constructor of this class accepts the following parameters −
startX, startY − These double properties represent the x and y coordinates of the starting point of the gradient.
endX, endY − These double properties represent the x and y coordinates of the ending point of the gradient.
proportional − This is a Boolean variable; on setting this property to true, the start and end locations are treated as proportions of the shape being filled.
cycleMethod − This argument defines how the regions outside the color gradient bounds, defined by the starting and ending points, should be filled.
stops − This argument defines the color-stop points along the gradient line.
//Setting the linear gradient
Stop[] stops = new Stop[] {
   new Stop(0, Color.DARKSLATEBLUE),
   new Stop(1, Color.DARKRED)
};
LinearGradient linearGradient =
   new LinearGradient(0, 0, 1, 0, true, CycleMethod.NO_CYCLE, stops);
Following is an example which demonstrates how to apply a gradient pattern to the nodes in JavaFX. Here, we are creating circle and text nodes and applying a linear gradient pattern to them.
Save this code in a file with name LinearGradientExample.java.
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.paint.CycleMethod;
import javafx.scene.paint.LinearGradient;
import javafx.scene.paint.Stop;
import javafx.stage.Stage;
import javafx.scene.shape.Circle;
import javafx.scene.text.Font;
import javafx.scene.text.Text;

public class LinearGradientExample extends Application {
   @Override
   public void start(Stage stage) {
      //Drawing a Circle
      Circle circle = new Circle();

      //Setting the properties of the circle
      circle.setCenterX(300.0f);
      circle.setCenterY(180.0f);
      circle.setRadius(90.0f);

      //Drawing a text
      Text text = new Text("This is a colored circle");

      //Setting the font of the text
      text.setFont(Font.font("Edwardian Script ITC", 55));

      //Setting the position of the text
      text.setX(140);
      text.setY(50);

      //Setting the linear gradient
      Stop[] stops = new Stop[] {
         new Stop(0, Color.DARKSLATEBLUE),
         new Stop(1, Color.DARKRED)
      };
      LinearGradient linearGradient =
         new LinearGradient(0, 0, 1, 0, true, CycleMethod.NO_CYCLE, stops);

      //Setting the linear gradient to the circle and text
      circle.setFill(linearGradient);
      text.setFill(linearGradient);

      //Creating a Group object
      Group root = new Group(circle, text);

      //Creating a scene object
      Scene scene = new Scene(root, 600, 300);

      //Setting title to the Stage
      stage.setTitle("Linear Gradient Example");

      //Adding scene to the stage
      stage.setScene(scene);

      //Displaying the contents of the stage
      stage.show();
   }
   public static void main(String args[]) {
      launch(args);
   }
}
Compile and execute the saved java file from the command prompt using the following commands.
javac LinearGradientExample.java
java LinearGradientExample
On executing, the above program generates a JavaFX window as follows −
To apply a Radial Gradient Pattern to the nodes, instantiate the RadialGradient class and pass its object to the setFill(), setStroke() methods.
The constructor of this class accepts the following parameters −
focusAngle − Double property representing the angle, in degrees, of the focus point of the gradient, measured from the center.
focusDistance − Double property representing the distance of the focus point from the center of the gradient circle, as a fraction of the radius.
centerX, centerY − Double properties representing the (x, y) coordinates of the center point of the gradient circle.
radius − Double property representing the radius of the circle over which the gradient's color stops are distributed.
proportional − This is a Boolean variable; on setting this property to true, the coordinates and the radius are treated as proportions of the shape being filled.
cycleMethod − This argument defines how the regions outside the color gradient bounds should be filled.
stops − This argument defines the color-stop points along the gradient.
//Setting the radial gradient
Stop[] stops = new Stop[] {
   new Stop(0.0, Color.WHITE),
   new Stop(0.3, Color.RED),
   new Stop(1.0, Color.DARKRED)
};
RadialGradient radialGradient =
   new RadialGradient(0, 0, 300, 178, 60, false, CycleMethod.NO_CYCLE, stops);
Following is an example which demonstrates how to apply a radial gradient pattern to the nodes in JavaFX. Here, we are creating circle and text nodes and applying a radial gradient pattern to them.
Save this code in a file with the name RadialGradientExample.java.
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.paint.CycleMethod;
import javafx.scene.paint.RadialGradient;
import javafx.scene.paint.Stop;
import javafx.stage.Stage;
import javafx.scene.shape.Circle;
import javafx.scene.text.Font;
import javafx.scene.text.Text;

public class RadialGradientExample extends Application {
   @Override
   public void start(Stage stage) {
      //Drawing a Circle
      Circle circle = new Circle();

      //Setting the properties of the circle
      circle.setCenterX(300.0f);
      circle.setCenterY(180.0f);
      circle.setRadius(90.0f);

      //Drawing a text
      Text text = new Text("This is a colored circle");

      //Setting the font of the text
      text.setFont(Font.font("Edwardian Script ITC", 50));

      //Setting the position of the text
      text.setX(155);
      text.setY(50);

      //Setting the radial gradient
      Stop[] stops = new Stop[] {
         new Stop(0.0, Color.WHITE),
         new Stop(0.3, Color.RED),
         new Stop(1.0, Color.DARKRED)
      };
      RadialGradient radialGradient =
         new RadialGradient(0, 0, 300, 178, 60, false, CycleMethod.NO_CYCLE, stops);

      //Setting the radial gradient to the circle and text
      circle.setFill(radialGradient);
      text.setFill(radialGradient);

      //Creating a Group object
      Group root = new Group(circle, text);

      //Creating a scene object
      Scene scene = new Scene(root, 600, 300);

      //Setting title to the Stage
      stage.setTitle("Radial Gradient Example");

      //Adding scene to the stage
      stage.setScene(scene);

      //Displaying the contents of the stage
      stage.show();
   }
   public static void main(String args[]) {
      launch(args);
   }
}
Compile and execute the saved java file from the command prompt using the following commands.
javac RadialGradientExample.java
java RadialGradientExample
On executing, the above program generates a JavaFX window as follows −
|
Create Password Protected Zip of a file using Python - GeeksforGeeks
|
24 Jan, 2021
ZIP is an archive file format that supports lossless data compression. By lossless compression, we mean that the compression algorithm allows the original data to be perfectly reconstructed from the compressed data. So, a ZIP file is a single file containing one or more compressed files, offering an ideal way to make large files smaller and keep related files together.
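The "lossless" property described above can be demonstrated directly with Python's built-in zlib module, which implements the DEFLATE algorithm used inside ZIP archives (a small illustrative sketch, separate from the pyminizip workflow below): compressing and then decompressing returns the original bytes exactly.

import zlib

# Repetitive data compresses well under DEFLATE
data = b"GeeksforGeeks " * 100
compressed = zlib.compress(data, level=9)

# Lossless: decompression reconstructs the original bytes exactly
assert zlib.decompress(compressed) == data
assert len(compressed) < len(data)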
In this article, we will learn how to create a password-protected zip of a file using Python. For this, we are using the pyminizip module.
The pyminizip module can be installed using the below command:
pip install pyminizip
For creating the zip, we use the compress() method from pyminizip, so we first discuss its syntax and arguments.
pyminizip.compress("/srcfile/path.txt", "file_path_prefix", "/distfile/path.zip", "password", int(compress_level))
Arguments:
src file path (string)
src file prefix path (string) or None (path to prepend to file)
dst file path (string)
password (string) or None (to create no-password zip)
compress_level (int) between 1 and 9, where 1 is faster and 9 compresses more, or 0 (default)
Return value: Always returns None
Input file:
Program:
Python3
# importing module
import pyminizip

# input file path
inpt = "./Text.txt"

# prefix path
pre = None

# output zip file path
oupt = "./output.zip"

# set password value
password = "GFG"

# compress level
com_lvl = 5

# compressing file
pyminizip.compress(inpt, pre, oupt, password, com_lvl)
Output:
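A quick way to sanity-check the result is to reopen the archive with Python's built-in zipfile module, which can read (though not create) password-protected ZIP archives. The helper below is a minimal sketch; the function name extract_protected is our own, not part of pyminizip, and the password must be supplied as bytes.

import zipfile

def extract_protected(zip_path, password, dest="."):
    # zipfile can read ZipCrypto-encrypted archives but cannot create them;
    # the password is only consulted for members that are actually encrypted.
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(path=dest, pwd=password.encode())
        return zf.namelist()

For the program above, extract_protected("./output.zip", "GFG") would extract Text.txt into the current directory and return the list of member names.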
|
[
{
"code": null,
"e": 23973,
"s": 23942,
"text": " \n24 Jan, 2021\n"
},
{
"code": null,
"e": 24345,
"s": 23973,
"text": "ZIP is an archive file format that supports lossless data compression. By lossless compression, we mean that the compression algorithm allows the original data to be perfectly reconstructed from the compressed data. So, a ZIP file is a single file containing one or more compressed files, offering an ideal way to make large files smaller and keep related files together."
},
{
"code": null,
"e": 24490,
"s": 24345,
"text": "In this article, we will learn how to Create Password-Protected Zip of a file using Python. For this, we are using pyminizip module from python."
},
{
"code": null,
"e": 24554,
"s": 24490,
"text": " The pyminizip module can be installed using the below command:"
},
{
"code": null,
"e": 24576,
"s": 24554,
"text": "pip install pyminizip"
},
{
"code": null,
"e": 24688,
"s": 24576,
"text": "For creating zip, we are using compress() method from pyminizip. So, we discuss first its syntax and arguments."
},
{
"code": null,
"e": 24803,
"s": 24688,
"text": "pyminizip.compress(“/srcfile/path.txt”, “file_path_prefix”, “/distfile/path.zip”, “password”, int(compress_level))"
},
{
"code": null,
"e": 24814,
"s": 24803,
"text": "Arguments:"
},
{
"code": null,
"e": 24837,
"s": 24814,
"text": "src file path (string)"
},
{
"code": null,
"e": 24901,
"s": 24837,
"text": "src file prefix path (string) or None (path to prepend to file)"
},
{
"code": null,
"e": 24924,
"s": 24901,
"text": "dst file path (string)"
},
{
"code": null,
"e": 24978,
"s": 24924,
"text": "password (string) or None (to create no-password zip)"
},
{
"code": null,
"e": 25066,
"s": 24978,
"text": "compress_level(int) between 1 to 9, 1 (more fast) <—> 9 (more compress) or 0 (default) "
},
{
"code": null,
"e": 25100,
"s": 25066,
"text": "Return value: Always returns None"
},
{
"code": null,
"e": 25112,
"s": 25100,
"text": "Input file:"
},
{
"code": null,
"e": 25121,
"s": 25112,
"text": "Program:"
},
{
"code": null,
"e": 25129,
"s": 25121,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n# importing module \nimport pyminizip \n \n# input file path \ninpt = \"./Text.txt\"\n \n# prefix path \npre = None\n \n# output zip file path \noupt = \"./output.zip\"\n \n# set password value \npassword = \"GFG\"\n \n# compress level \ncom_lvl = 5\n \n# compressing file \npyminizip.compress(inpt, None, oupt, \n password, com_lvl) \n\n\n\n\n\n",
"e": 25485,
"s": 25139,
"text": null
},
{
"code": null,
"e": 25493,
"s": 25485,
"text": "Output:"
},
{
"code": null,
"e": 25510,
"s": 25493,
"text": "\npython-utility\n"
},
{
"code": null,
"e": 25519,
"s": 25510,
"text": "\nPython\n"
},
{
"code": null,
"e": 25724,
"s": 25519,
"text": "Writing code in comment? \n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n "
},
{
"code": null,
"e": 25760,
"s": 25724,
"text": "Box Plot in Python using Matplotlib"
},
{
"code": null,
"e": 25799,
"s": 25760,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 25822,
"s": 25799,
"text": "Bar Plot in Matplotlib"
},
{
"code": null,
"e": 25873,
"s": 25822,
"text": "Multithreading in Python | Set 2 (Synchronization)"
},
{
"code": null,
"e": 25905,
"s": 25873,
"text": "Python Dictionary keys() method"
},
{
"code": null,
"e": 25921,
"s": 25905,
"text": "loops in python"
},
{
"code": null,
"e": 25962,
"s": 25921,
"text": "Python - Call function from another file"
},
{
"code": null,
"e": 26011,
"s": 25962,
"text": "Ways to filter Pandas DataFrame by column values"
},
{
"code": null,
"e": 26044,
"s": 26011,
"text": "Python | Convert set into a list"
}
] |
ReactJS Reactstrap Breadcrumb Component - GeeksforGeeks
|
28 Jul, 2021
Reactstrap is a popular front-end library of easy-to-use React Bootstrap 4 components. This library contains stateless React components for Bootstrap 4. The Breadcrumb component provides a way to indicate the location of the current page. We can use the following approach in ReactJS to use the ReactJS Reactstrap Breadcrumb Component.
Breadcrumb Props:
tag: It is used to denote the tag for this component.
listTag: It is used to denote the list tag like ol, ul, etc.
className: It is used to denote the class name for the Breadcrumb component.
listClassName: It is used to denote the class name for list styling.
cssModule: It is used to denote the CSS module for styling.
children: It is used to pass the children element to this component.
aria-label: It is used to denote the aria-label attribute.
BreadcrumbItem Props:
tag: It is used to denote the tag for this component.
active: It is used to indicate whether the item is in the active state or not.
className: It is used to denote the class name for styling.
cssModule: It is used to denote the CSS module for styling.
Creating React Application And Installing Module:
Step 1: Create a React application using the following command:
npx create-react-app foldername
Step 2: After creating your project folder i.e. foldername, move to it using the following command:
cd foldername
Step 3: After creating the ReactJS application, Install the required module using the following command:
npm install reactstrap bootstrap
Project Structure: It will look like the following.
Project Structure
Example 1: Now write down the following code in the App.js file. Here, we have used the Breadcrumb component with active and href props.
Javascript
import React from 'react'
import 'bootstrap/dist/css/bootstrap.min.css';
import { Breadcrumb, BreadcrumbItem } from "reactstrap"

function App() {
    return (
        <div style={{ display: 'block', width: 700, padding: 30 }}>
            <h4>ReactJS Reactstrap Breadcrumb Component</h4>
            <Breadcrumb>
                <BreadcrumbItem><a href="#">Dashboard</a></BreadcrumbItem>
                <BreadcrumbItem active>Profile</BreadcrumbItem>
            </Breadcrumb>
            <Breadcrumb>
                <BreadcrumbItem><a href="#">Logout</a></BreadcrumbItem>
                <BreadcrumbItem><a href="#">Settings</a></BreadcrumbItem>
            </Breadcrumb>
        </div>
    );
}

export default App;
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Example 2: Now write down the following code in the App.js file. Here, we have used the Breadcrumb component with disabled, tag, and href props.
App.js
Javascript
import React from 'react'
import 'bootstrap/dist/css/bootstrap.min.css';
import { Breadcrumb, BreadcrumbItem } from "reactstrap"

function App() {
    return (
        <div style={{ display: 'block', width: 700, padding: 30 }}>
            <h4>ReactJS Reactstrap Breadcrumb Component</h4>
            <Breadcrumb listTag="div" tag="nav">
                <BreadcrumbItem disabled>Dashboard</BreadcrumbItem>
                <BreadcrumbItem href="#" tag="a">Profile</BreadcrumbItem>
                <BreadcrumbItem href="#" tag="a">Logout</BreadcrumbItem>
                <BreadcrumbItem disabled>Settings</BreadcrumbItem>
            </Breadcrumb>
        </div>
    );
}

export default App;
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Output
Reference: https://reactstrap.github.io/components/breadcrumbs/
Reactstrap
JavaScript
ReactJS
Web Technologies
Difference between var, let and const keywords in JavaScript
Difference Between PUT and PATCH Request
How to get character array from string in JavaScript?
Remove elements from a JavaScript Array
How to get selected value in dropdown list using JavaScript ?
How to fetch data from an API in ReactJS ?
How to redirect to another page in ReactJS ?
How to pass data from child component to its parent in ReactJS ?
How to pass data from one component to other component in ReactJS ?
ReactJS Functional Components
|
[
{
"code": null,
"e": 24921,
"s": 24893,
"text": "\n28 Jul, 2021"
},
{
"code": null,
"e": 25262,
"s": 24921,
"text": "Reactstrap is a popular front-end library that is easy to use React Bootstrap 4 components. This library contains the stateless React components for Bootstrap 4. Breadcrumb Component provides a way to indicate the location of the current page. We can use the following approach in ReactJS to use the ReactJS Reactstrap Breadcrumb Component."
},
{
"code": null,
"e": 25280,
"s": 25262,
"text": "Breadcrumb Props:"
},
{
"code": null,
"e": 25334,
"s": 25280,
"text": "tag: It is used to denote the tag for this component."
},
{
"code": null,
"e": 25395,
"s": 25334,
"text": "listTag: It is used to denote the list tag like ol, ul, etc."
},
{
"code": null,
"e": 25472,
"s": 25395,
"text": "className: It is used to denote the class name for the Breadcrumb component."
},
{
"code": null,
"e": 25541,
"s": 25472,
"text": "listClassname: It is used to denote the class name for list styling."
},
{
"code": null,
"e": 25601,
"s": 25541,
"text": "cssModule: It is used to denote the CSS module for styling."
},
{
"code": null,
"e": 25670,
"s": 25601,
"text": "children: It is used to pass the children element to this component."
},
{
"code": null,
"e": 25729,
"s": 25670,
"text": "aria-label: It is used to denote the aria-label attribute."
},
{
"code": null,
"e": 25751,
"s": 25729,
"text": "BreadcrumbItem Props:"
},
{
"code": null,
"e": 25805,
"s": 25751,
"text": "tag: It is used to denote the tag for this component."
},
{
"code": null,
"e": 25880,
"s": 25805,
"text": "active: It is used to indicate whether the Item is in active state or not."
},
{
"code": null,
"e": 25940,
"s": 25880,
"text": "className: It is used to denote the class name for styling."
},
{
"code": null,
"e": 26000,
"s": 25940,
"text": "cssModule: It is used to denote the CSS module for styling."
},
{
"code": null,
"e": 26052,
"s": 26002,
"text": "Creating React Application And Installing Module:"
},
{
"code": null,
"e": 26116,
"s": 26052,
"text": "Step 1: Create a React application using the following command:"
},
{
"code": null,
"e": 26148,
"s": 26116,
"text": "npx create-react-app foldername"
},
{
"code": null,
"e": 26248,
"s": 26148,
"text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command:"
},
{
"code": null,
"e": 26262,
"s": 26248,
"text": "cd foldername"
},
{
"code": null,
"e": 26367,
"s": 26262,
"text": "Step 3: After creating the ReactJS application, Install the required module using the following command:"
},
{
"code": null,
"e": 26400,
"s": 26367,
"text": "npm install reactstrap bootstrap"
},
{
"code": null,
"e": 26452,
"s": 26400,
"text": "Project Structure: It will look like the following."
},
{
"code": null,
"e": 26470,
"s": 26452,
"text": "Project Structure"
},
{
"code": null,
"e": 26607,
"s": 26470,
"text": "Example 1: Now write down the following code in the App.js file. Here, we have used the Breadcrumb component with active and href props."
},
{
"code": null,
"e": 26618,
"s": 26607,
"text": "Javascript"
},
{
"code": "import React from 'react'import 'bootstrap/dist/css/bootstrap.min.css';import { Breadcrumb, BreadcrumbItem } from \"reactstrap\" function App() { return ( <div style={{ display: 'block', width: 700, padding: 30 }}> <h4>ReactJS Reactstrap Breadcrumb Component</h4> <Breadcrumb> <BreadcrumbItem><a href=\"#\">Dashboard</a></BreadcrumbItem> <BreadcrumbItem active>Profile</BreadcrumbItem> </Breadcrumb> <Breadcrumb> <BreadcrumbItem><a href=\"#\">Logout</a></BreadcrumbItem> <BreadcrumbItem><a href=\"#\">Settings</a></BreadcrumbItem> </Breadcrumb> </div> );} export default App;",
"e": 27341,
"s": 26618,
"text": null
},
{
"code": null,
"e": 27454,
"s": 27341,
"text": "Step to Run Application: Run the application using the following command from the root directory of the project:"
},
{
"code": null,
"e": 27464,
"s": 27454,
"text": "npm start"
},
{
"code": null,
"e": 27565,
"s": 27466,
"text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:"
},
{
"code": null,
"e": 27712,
"s": 27567,
"text": "Example 2: Now write down the following code in the App.js file. Here, we have used the Breadcrumb component with disabled, tag, and href props."
},
{
"code": null,
"e": 27720,
"s": 27712,
"text": "App.js "
},
{
"code": null,
"e": 27731,
"s": 27720,
"text": "Javascript"
},
{
"code": "import React from 'react'import 'bootstrap/dist/css/bootstrap.min.css';import { Breadcrumb, BreadcrumbItem } from \"reactstrap\" function App() { return ( <div style={{ display: 'block', width: 700, padding: 30 }}> <h4>ReactJS Reactstrap Breadcrumb Component</h4> <Breadcrumb listTag=\"div\" tag=\"nav\" > <BreadcrumbItem disabled>Dashboard</BreadcrumbItem> <BreadcrumbItem href=\"#\" tag=\"a\" >Profile</BreadcrumbItem> <BreadcrumbItem href=\"#\" tag=\"a\" >Logout</BreadcrumbItem> <BreadcrumbItem disabled>Settings</BreadcrumbItem> </Breadcrumb> </div> );} export default App;",
"e": 28429,
"s": 27731,
"text": null
},
{
"code": null,
"e": 28542,
"s": 28429,
"text": "Step to Run Application: Run the application using the following command from the root directory of the project:"
},
{
"code": null,
"e": 28552,
"s": 28542,
"text": "npm start"
},
{
"code": null,
"e": 28651,
"s": 28552,
"text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:"
},
{
"code": null,
"e": 28658,
"s": 28651,
"text": "Output"
},
{
"code": null,
"e": 28722,
"s": 28658,
"text": "Reference: https://reactstrap.github.io/components/breadcrumbs/"
},
{
"code": null,
"e": 28733,
"s": 28722,
"text": "Reactstrap"
},
{
"code": null,
"e": 28744,
"s": 28733,
"text": "JavaScript"
},
{
"code": null,
"e": 28752,
"s": 28744,
"text": "ReactJS"
},
{
"code": null,
"e": 28769,
"s": 28752,
"text": "Web Technologies"
},
{
"code": null,
"e": 28867,
"s": 28769,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28876,
"s": 28867,
"text": "Comments"
},
{
"code": null,
"e": 28889,
"s": 28876,
"text": "Old Comments"
},
{
"code": null,
"e": 28950,
"s": 28889,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28991,
"s": 28950,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 29045,
"s": 28991,
"text": "How to get character array from string in JavaScript?"
},
{
"code": null,
"e": 29085,
"s": 29045,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 29147,
"s": 29085,
"text": "How to get selected value in dropdown list using JavaScript ?"
},
{
"code": null,
"e": 29190,
"s": 29147,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 29235,
"s": 29190,
"text": "How to redirect to another page in ReactJS ?"
},
{
"code": null,
"e": 29300,
"s": 29235,
"text": "How to pass data from child component to its parent in ReactJS ?"
},
{
"code": null,
"e": 29368,
"s": 29300,
"text": "How to pass data from one component to other component in ReactJS ?"
}
] |
Count the number of times a letter appears in a text file in Python - GeeksforGeeks
|
05 Jan, 2022
In this article, we will be learning different approaches to count the number of times a letter appears in a text file in Python. Below is the content of the text file gfg.txt that we are going to use in the below programs:
Now we will discuss various approaches to get the frequency of a letter in a text file.
Method 1: Using the in-built count() method.
Approach:
Read the file.
Store the content of the file in a variable.
Use the count() method with the argument as a letter whose frequency is required.
Display the count of the letter.
Implementation:
Python3
# Program to get letter count in a text file

# explicit function to return the letter count
def letterFrequency(fileName, letter):
    # open file in read mode
    file = open(fileName, 'r')
    # store content of the file in a variable
    text = file.read()
    # using count()
    return text.count(letter)

# call the function and display the letter count
print(letterFrequency('gfg.txt', 'g'))
Output:
Method 2: Iterate through the file content in order to compare each character with the given letter.
Approach:
Read the file.
Store the content of the file in a variable.
Assign a counter count variable with 0.
Iterate through each character, if the character is found to be the given letter then increment the counter.
Display the count of the letter.
Implementation:
Python3
# Program to get letter count in a text file

# explicit function to return the letter count
def letterFrequency(fileName, letter):
    # open file in read mode
    file = open(fileName, "r")
    # store content of the file in a variable
    text = file.read()
    # declare count variable
    count = 0
    # iterate through each character
    for char in text:
        # compare each character with
        # the given letter
        if char == letter:
            count += 1
    # return letter count
    return count

# call function and display the letter count
print(letterFrequency('gfg.txt', 'g'))
Output:
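A third approach (not shown in the article, but a common idiom) uses collections.Counter, which tallies every character in a single pass — handy when the frequencies of several letters are needed. To keep the sketch self-contained, a small stand-in for gfg.txt is written first:

```python
from collections import Counter

# write a small stand-in for gfg.txt so the sketch is runnable
with open("gfg.txt", "w") as f:
    f.write("geeksforgeeks")

# explicit function to return the letter count using Counter
def letterFrequency(fileName, letter):
    with open(fileName, 'r') as file:
        # Counter maps each character to its frequency;
        # indexing a missing key returns 0, not an error
        return Counter(file.read())[letter]

# call the function and display the letter count
print(letterFrequency('gfg.txt', 'g'))  # 2
```

Building the Counter once and querying it repeatedly avoids re-reading the file for each letter.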
as5853535
gulshankumarar231
Picked
Python file-handling-programs
python-file-handling
Technical Scripter 2020
Python
Technical Scripter
How to Install PIP on Windows ?
How To Convert Python Dictionary To JSON?
How to drop one or multiple columns in Pandas Dataframe
Check if element exists in list in Python
Python | os.path.join() method
Selecting rows in pandas DataFrame based on conditions
Defaultdict in Python
Python | Get unique values from a list
Create a directory in Python
Python | Pandas dataframe.groupby()
|
[
{
"code": null,
"e": 24292,
"s": 24264,
"text": "\n05 Jan, 2022"
},
{
"code": null,
"e": 24516,
"s": 24292,
"text": "In this article, we will be learning different approaches to count the number of times a letter appears in a text file in Python. Below is the content of the text file gfg.txt that we are going to use in the below programs:"
},
{
"code": null,
"e": 24604,
"s": 24516,
"text": "Now we will discuss various approaches to get the frequency of a letter in a text file."
},
{
"code": null,
"e": 24649,
"s": 24604,
"text": "Method 1: Using the in-built count() method."
},
{
"code": null,
"e": 24659,
"s": 24649,
"text": "Approach:"
},
{
"code": null,
"e": 24674,
"s": 24659,
"text": "Read the file."
},
{
"code": null,
"e": 24719,
"s": 24674,
"text": "Store the content of the file in a variable."
},
{
"code": null,
"e": 24801,
"s": 24719,
"text": "Use the count() method with the argument as a letter whose frequency is required."
},
{
"code": null,
"e": 24834,
"s": 24801,
"text": "Display the count of the letter."
},
{
"code": null,
"e": 24850,
"s": 24834,
"text": "Implementation:"
},
{
"code": null,
"e": 24858,
"s": 24850,
"text": "Python3"
},
{
"code": "# Program to get letter count in a text file # explicit function to return the letter countdef letterFrequency(fileName, letter): # open file in read mode file = open(fileName, 'r') # store content of the file in a variable text = file.read() # using count() return text.count(letter) # call the function and display the letter countprint(letterFrequency('gfg.txt', 'g'))",
"e": 25251,
"s": 24858,
"text": null
},
{
"code": null,
"e": 25259,
"s": 25251,
"text": "Output:"
},
{
"code": null,
"e": 25360,
"s": 25259,
"text": "Method 2: Iterate through the file content in order to compare each character with the given letter."
},
{
"code": null,
"e": 25370,
"s": 25360,
"text": "Approach:"
},
{
"code": null,
"e": 25385,
"s": 25370,
"text": "Read the file."
},
{
"code": null,
"e": 25430,
"s": 25385,
"text": "Store the content of the file in a variable."
},
{
"code": null,
"e": 25470,
"s": 25430,
"text": "Assign a counter count variable with 0."
},
{
"code": null,
"e": 25579,
"s": 25470,
"text": "Iterate through each character, if the character is found to be the given letter then increment the counter."
},
{
"code": null,
"e": 25612,
"s": 25579,
"text": "Display the count of the letter."
},
{
"code": null,
"e": 25628,
"s": 25612,
"text": "Implementation:"
},
{
"code": null,
"e": 25636,
"s": 25628,
"text": "Python3"
},
{
"code": "# Program to get letter count in a text file # explicit function to return the letter countdef letterFrequency(fileName, letter): # open file in read mode file = open(fileName, \"r\") # store content of the file in a variable text = file.read() # declare count variable count = 0 # iterate through each character for char in text: # compare each character with # the given letter if char == letter: count += 1 # return letter count return count # call function and display the letter countprint(letterFrequency('gfg.txt', 'g'))",
"e": 26230,
"s": 25636,
"text": null
},
{
"code": null,
"e": 26238,
"s": 26230,
"text": "Output:"
},
{
"code": null,
"e": 26248,
"s": 26238,
"text": "as5853535"
},
{
"code": null,
"e": 26266,
"s": 26248,
"text": "gulshankumarar231"
},
{
"code": null,
"e": 26273,
"s": 26266,
"text": "Picked"
},
{
"code": null,
"e": 26303,
"s": 26273,
"text": "Python file-handling-programs"
},
{
"code": null,
"e": 26324,
"s": 26303,
"text": "python-file-handling"
},
{
"code": null,
"e": 26348,
"s": 26324,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 26355,
"s": 26348,
"text": "Python"
},
{
"code": null,
"e": 26374,
"s": 26355,
"text": "Technical Scripter"
},
{
"code": null,
"e": 26472,
"s": 26374,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26504,
"s": 26472,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26546,
"s": 26504,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 26602,
"s": 26546,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 26644,
"s": 26602,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 26675,
"s": 26644,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 26730,
"s": 26675,
"text": "Selecting rows in pandas DataFrame based on conditions"
},
{
"code": null,
"e": 26752,
"s": 26730,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 26791,
"s": 26752,
"text": "Python | Get unique values from a list"
},
{
"code": null,
"e": 26820,
"s": 26791,
"text": "Create a directory in Python"
}
] |
How to change the <input> type? - GeeksforGeeks
|
18 Apr, 2021
The purpose of this article is to change the HTML <input> type. JavaScript provides the document.getElementById().type property to change the type of an input field.
Syntax:
The selected input type will change to text:
document.getElementById("id_of_tag_to_be_changed").type="text";
The selected input type will change to a button:
document.getElementById("id_of_tag_to_be_changed").type="button";
The selected input type will change to a date:
document.getElementById("id_of_tag_to_be_changed").type="date";
Example:
HTML
<!DOCTYPE html>
<html>
  <body>
    <p>
      Please click on Button2 to see the type of
      Button1 change from button to text
    </p>
    <input id="inputfield1" type="button" value="Input1"
      onclick="alert('I am Input1 click on Input2 to see me change my type')"/>
    <input id="inputfield2" type="button" value="Input2"
      onclick="typeChanger()"/>
  </body>
  <script>
    function typeChanger() {
      alert("The type of Input1 will now change from button to text");
      document.getElementById("inputfield1").type = "text";
    }
  </script>
</html>
Output:
Input to text
Note: Some old browsers do not allow a dynamic change in the type of input fields.
HTML-Attributes
HTML-Questions
HTML-Tags
Picked
HTML
Web Technologies
HTML
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to insert spaces/tabs in text using HTML/CSS?
How to set the default value for an HTML <select> element ?
How to update Node.js and NPM to next version ?
How to set input type date in dd-mm-yyyy format using HTML ?
Roadmap to Become a Web Developer in 2022
Installation of Node.js on Linux
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
|
[
{
"code": null,
"e": 31782,
"s": 31754,
"text": "\n18 Apr, 2021"
},
{
"code": null,
"e": 31939,
"s": 31782,
"text": "The purpose of this article is to change the HTML <input> type. Javascript provides document.getElementById().type option to change the type of input field."
},
{
"code": null,
"e": 31947,
"s": 31939,
"text": "Syntax:"
},
{
"code": null,
"e": 32056,
"s": 31947,
"text": "The selected input type will change to text.document.getElementById(\"id_of_tag_to_be_changed\").type=\"text\"; "
},
{
"code": null,
"e": 32121,
"s": 32056,
"text": "document.getElementById(\"id_of_tag_to_be_changed\").type=\"text\"; "
},
{
"code": null,
"e": 32235,
"s": 32121,
"text": "The selected input type will change to a button.document.getElementById(\"id_of_tag_to_be_changed\").type=\"button\";"
},
{
"code": null,
"e": 32301,
"s": 32235,
"text": "document.getElementById(\"id_of_tag_to_be_changed\").type=\"button\";"
},
{
"code": null,
"e": 32412,
"s": 32301,
"text": "The selected input type will change to a date.document.getElementById(\"id_of_tag_to_be_changed\").type=\"date\"; "
},
{
"code": null,
"e": 32477,
"s": 32412,
"text": "document.getElementById(\"id_of_tag_to_be_changed\").type=\"date\"; "
},
{
"code": null,
"e": 32486,
"s": 32477,
"text": "Example:"
},
{
"code": null,
"e": 32491,
"s": 32486,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <body> <p> Please click on Button2 to see the type of Button1 change from button to text </p> <input id=\"inputfield1\" type=\"button\" value=\"Input1\" onclick=\"alert('I am Input1 click on Input2 to see me change my type')\"/> <input id=\"inputfield2\" type=\"button\" value=\"Input2\" onclick=\"typeChanger()\"/> </body> <script> function typeChanger() { alert(\"The type of Input1 will now change from button to text\"); document.getElementById(\"inputfield1\").type = \"text\"; } </script></html>",
"e": 33078,
"s": 32491,
"text": null
},
{
"code": null,
"e": 33086,
"s": 33078,
"text": "Output:"
},
{
"code": null,
"e": 33100,
"s": 33086,
"text": "Input to text"
},
{
"code": null,
"e": 33183,
"s": 33100,
"text": "Note: Some Old Browsers do not allow a dynamic change in the type of input fields."
},
{
"code": null,
"e": 33320,
"s": 33183,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 33336,
"s": 33320,
"text": "HTML-Attributes"
},
{
"code": null,
"e": 33351,
"s": 33336,
"text": "HTML-Questions"
},
{
"code": null,
"e": 33361,
"s": 33351,
"text": "HTML-Tags"
},
{
"code": null,
"e": 33368,
"s": 33361,
"text": "Picked"
},
{
"code": null,
"e": 33373,
"s": 33368,
"text": "HTML"
},
{
"code": null,
"e": 33390,
"s": 33373,
"text": "Web Technologies"
},
{
"code": null,
"e": 33395,
"s": 33390,
"text": "HTML"
},
{
"code": null,
"e": 33493,
"s": 33395,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33555,
"s": 33493,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 33605,
"s": 33555,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 33665,
"s": 33605,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 33713,
"s": 33665,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 33774,
"s": 33713,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 33816,
"s": 33774,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 33849,
"s": 33816,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 33892,
"s": 33849,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 33942,
"s": 33892,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Check if list contains all unique elements in Python
|
A list in Python can contain elements, all of which may or may not be unique. But there are scenarios where we need unique elements, such as marking attendance for the different roll numbers of a class. Below are the approaches we can use.
A Python set is a collection that is unordered and unindexed, and it contains only unique elements. So we will compare the length of the set created from the list with the length of the list itself. They will be equal only if all the elements in the list are unique.
# Given List
Alist = ['Mon','Tue','Wed']
print("The given list : ",Alist)
# Compare length for unique elements
if(len(set(Alist)) == len(Alist)):
    print("All elements are unique.")
else:
    print("All elements are not unique.")
Running the above code gives us the following result −
The given list : ['Mon', 'Tue', 'Wed']
All elements are unique.
Running the same program again without having unique elements.
# Given List
Alist = ['Mon','Tue','Wed','Mon']
print("The given list : ",Alist)
# Compare length for unique elements
if(len(set(Alist)) == len(Alist)):
    print("All elements are unique.")
else:
    print("All elements are not unique.")
Running the above code gives us the following result −
The given list : ['Mon', 'Tue', 'Wed', 'Mon']
All elements are not unique.
We can also use the in-built count() method, which counts the frequency of each element in the list. If the count of any element is greater than 1, then we have duplicates in the list.
# Given List
list1 = ['Mon','Tue','Wed','Mon']
list2 = ['Mon','Tue','Wed']
def dupcheck(x):
    for elem in x:
        if x.count(elem) > 1:
            return True
    return False

if dupcheck(list1):
    print("The given list : ", list1)
    print("There are duplicates.")
else:
    print("The given list : ", list1)
    print("No duplicates.")

if dupcheck(list2):
    print("The given list : ", list2)
    print("There are duplicates.")
else:
    print("The given list : ", list2)
    print("No duplicates.")
Running the above code gives us the following result −
The given list : ['Mon', 'Tue', 'Wed', 'Mon']
There are duplicates.
The given list : ['Mon', 'Tue', 'Wed']
No duplicates.
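Both approaches above scan the whole list even when a duplicate occurs early, and count() makes the second approach quadratic. A linear-time alternative (a sketch, not from the original) tracks the elements seen so far in a set and stops at the first repeat:

```python
# Return True if all elements are unique, stopping at the first repeat
def all_unique(x):
    seen = set()
    for elem in x:
        if elem in seen:
            # duplicate found; no need to look further
            return False
        seen.add(elem)
    return True

print(all_unique(['Mon', 'Tue', 'Wed']))         # True
print(all_unique(['Mon', 'Tue', 'Wed', 'Mon']))  # False
```

This also works for very long lists where building a full set up front or calling count() per element would be wasteful.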
|
[
{
"code": null,
"e": 1292,
"s": 1062,
"text": "A list in python can contain elements all of which may or may not be unique. But for a scenario when we need unique elements like marking the attendance for different roll numbers of a class. Below is the approaches with can use."
},
{
"code": null,
"e": 1551,
"s": 1292,
"text": "A python set is a collection which is unordered, unindexed and also contains unique elements. So we will compare the length of the set created from the list with the length of the list itself. They will be equal only if there are unique elements in the list."
},
{
"code": null,
"e": 1562,
"s": 1551,
"text": " Live Demo"
},
{
"code": null,
"e": 1794,
"s": 1562,
"text": "# Given List\nAlist = ['Mon','Tue','Wed']\nprint(\"The given list : \",Alist)\n\n# Compare length for unique elements\nif(len(set(Alist)) == len(Alist)):\n\n print(\"All elements are unique.\")\nelse:\n print(\"All elements are not unique.\")"
},
{
"code": null,
"e": 1849,
"s": 1794,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 1913,
"s": 1849,
"text": "The given list : ['Mon', 'Tue', 'Wed']\nAll elements are unique."
},
{
"code": null,
"e": 1976,
"s": 1913,
"text": "Running the same program again without having unique elements."
},
{
"code": null,
"e": 1987,
"s": 1976,
"text": " Live Demo"
},
{
"code": null,
"e": 2225,
"s": 1987,
"text": "# Given List\nAlist = ['Mon','Tue','Wed','Mon']\nprint(\"The given list : \",Alist)\n\n# Compare length for unique elements\nif(len(set(Alist)) == len(Alist)):\n\n print(\"All elements are unique.\")\nelse:\n print(\"All elements are not unique.\")"
},
{
"code": null,
"e": 2280,
"s": 2225,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2355,
"s": 2280,
"text": "The given list : ['Mon', 'Tue', 'Wed', 'Mon']\nAll elements are not unique."
},
{
"code": null,
"e": 2520,
"s": 2355,
"text": "We can also use the in-built count() which will count the frequency of each element in the list. If the count is greater than 1 then we have duplicates in the list."
},
{
"code": null,
"e": 2531,
"s": 2520,
"text": " Live Demo"
},
{
"code": null,
"e": 3035,
"s": 2531,
"text": "# Given List\nlist1 = ['Mon','Tue','Wed','Mon']\nlist2 = ['Mon','Tue','Wed']\n\n\ndef dupcheck(x):\n for elem in x:\n if x.count(elem) > 1:\n return True\n return False\n\nif dupcheck(list1):\n print(\"The given list : \", list1)\n print(\"There are duplicates.\")\nelse:\n print(\"The given list : \", list1)\n print(\"No duplicates.\")\n\nif dupcheck(list2):\n print(\"The given list : \", list2)\n print(\"There are duplicates.\")\nelse:\n print(\"The given list : \", list2)\n print(\"No duplicates.\")"
},
{
"code": null,
"e": 3090,
"s": 3035,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 3212,
"s": 3090,
"text": "The given list : ['Mon', 'Tue', 'Wed', 'Mon']\nThere are duplicates.\nThe given list : ['Mon', 'Tue', 'Wed']\nNo duplicates."
}
] |
jQuery Mobile - Form Structure
|
It is used to create a form layout to collect user input. The <form> element should contain action and method attributes that specify where the form is submitted and whether it is sent via HTTP POST or GET. The following form controls can be used to create the structure.
Text Input
Select Input
Checkbox Input
Radio Input
Slider Input
Search Input
Following example demonstrates the use of form structure in jQuery Mobile.
<!DOCTYPE html>
<html>
<head>
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<link rel = "stylesheet" href = "https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css">
<script src = "https://code.jquery.com/jquery-1.11.3.min.js"></script>
<script src = "https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js"></script>
</head>
<body>
<div data-role = "page">
<div data-role = "header">
<h2>Form Structure</h2>
</div>
<div data-role = "main" class = "ui-content">
<form method = "post" action="demo.php">
<label for = "fname">Name</label>
<input type = "text" name = "fname" id = "fname"
placeholder = "Full Name">
<label for = "date">Date</label>
<input type = "date" name = "date" id = "date">
<label for = "select">Select City</label>
<select name = "select" id = "select">
<option value = "1">Belgaum</option>
<option value = "2">Pune</option>
<option value = "3">Chennai</option>
<option value = "4">Bangalore</option>
<option value = "5">Mumbai</option>
</select>
Flipswitch
<input type = "checkbox" data-role = "flipswitch"><br/>
Gender
<label for = "radio1">
<input type = "radio" name = "radio-choice-0" id = "radio1">Male</input>
</label>
<label for = "radio2">
<input type = "radio" name = "radio-choice-0" id = "radio2">Female</input>
</label>
<label for = "radio3">
<input type = "radio" name = "radio-choice-0" id = "radio3">Other</input >
</label>
Education Qualification
<label for = "checkbox1">
<input type = "checkbox" id = "checkbox1">BE
</label>
<label for = "checkbox2">
<input type = "checkbox" id = "checkbox2">BCA
</label>
<label for = "checkbox3">
<input type = "checkbox" id = "checkbox3">BBA
</label>
<label for = "checkbox4">
<input type = "checkbox" id = "checkbox4">MBA
</label>
<label for = "checkbox5">
<input type = "checkbox" id = "checkbox5">MCA
</label>
</form>
</div>
</div>
</body>
</html>
Let's carry out the following steps to see how the above code works −
Save the above html code as form_structure.html file in your server root folder.
Open this HTML file as http://localhost/form_structure.html and the following output will be displayed.
|
[
{
"code": null,
"e": 2164,
"s": 1948,
"text": "It is used to create a form layout to collect user input. The <form> element should contain action and method attribute that submit via HTTP POST or GET. Following form controls can be used to create the structure. "
},
{
"code": null,
"e": 2175,
"s": 2164,
"text": "Text Input"
},
{
"code": null,
"e": 2188,
"s": 2175,
"text": "Select Input"
},
{
"code": null,
"e": 2203,
"s": 2188,
"text": "Checkbox Input"
},
{
"code": null,
"e": 2215,
"s": 2203,
"text": "Radio Input"
},
{
"code": null,
"e": 2228,
"s": 2215,
"text": "Slider Input"
},
{
"code": null,
"e": 2241,
"s": 2228,
"text": "Search Input"
},
{
"code": null,
"e": 2316,
"s": 2241,
"text": "Following example demonstrates the use of form structure in jQuery Mobile."
},
{
"code": null,
"e": 5098,
"s": 2316,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <meta name = \"viewport\" content = \"width = device-width, initial-scale = 1\">\n <link rel = \"stylesheet\" href = \"https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css\">\n <script src = \"https://code.jquery.com/jquery-1.11.3.min.js\"></script>\n <script src = \"https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js\"></script>\n </head>\n \n <body>\n <div data-role = \"page\">\n <div data-role = \"header\">\n <h2>Form Structure</h2>\n </div>\n \n <div data-role = \"main\" class = \"ui-content\">\n <form method = \"post\" action=\"demo.php\">\n <label for = \"fname\">Name</label>\n <input type = \"text\" name = \"fname\" id = \"fname\" \n placeholder = \"Full Name\">\n\n <label for = \"date\">Date</label>\n <input type = \"date\" name = \"date\" id = \"date\">\n\n <label for = \"select\">Select City</label>\n <select name = \"select\" id = \"select\">\n <option value = \"1\">Belgaum</option>\n <option value = \"2\">Pune</option>\n <option value = \"3\">Chennai</option>\n <option value = \"4\">Bangalore</option>\n <option value = \"5\">Mumbai</option>\n </select>\n\n Flipswitch\n <input type = \"checkbox\" data-role = \"flipswitch\"><br/>\n\n Gender\n <label for = \"radio1\">\n <input type = \"radio\" name = \"radio-choice-0\" id = \"radio1\">Male</input>\n </label>\n \n <label for = \"radio2\">\n <input type = \"radio\" name = \"radio-choice-0\" id = \"radio2\">Female</input>\n </label>\n \n <label for = \"radio3\">\n <input type = \"radio\" name = \"radio-choice-0\" id = \"radio3\">Other</input >\n </label>\n\n Education Qualification\n <label for = \"checkbox1\">\n <input type = \"checkbox\" id = \"checkbox1\">BE\n </label>\n \n <label for = \"checkbox2\">\n <input type = \"checkbox\" id = \"checkbox2\">BCA\n </label>\n \n <label for = \"checkbox3\">\n <input type = \"checkbox\" id = \"checkbox3\">BBA\n </label>\n \n <label for = \"checkbox4\">\n <input type = \"checkbox\" id = 
\"checkbox4\">MBA\n </label>\n \n <label for = \"checkbox5\">\n <input type = \"checkbox\" id = \"checkbox5\">MCA\n </label>\n </form>\n </div>\n </div>\n \n </body>\n</html>"
},
{
"code": null,
"e": 5168,
"s": 5098,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 5249,
"s": 5168,
"text": "Save the above html code as form_structure.html file in your server root folder."
},
{
"code": null,
"e": 5330,
"s": 5249,
"text": "Save the above html code as form_structure.html file in your server root folder."
},
{
"code": null,
"e": 5434,
"s": 5330,
"text": "Open this HTML file as http://localhost/form_structure.html and the following output will be displayed."
},
{
"code": null,
"e": 5538,
"s": 5434,
"text": "Open this HTML file as http://localhost/form_structure.html and the following output will be displayed."
},
{
"code": null,
"e": 5571,
"s": 5538,
"text": "\n 27 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 5585,
"s": 5571,
"text": " Mahesh Kumar"
},
{
"code": null,
"e": 5620,
"s": 5585,
"text": "\n 27 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 5634,
"s": 5620,
"text": " Pratik Singh"
},
{
"code": null,
"e": 5669,
"s": 5634,
"text": "\n 72 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 5686,
"s": 5669,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 5719,
"s": 5686,
"text": "\n 60 Lectures \n 9 hours \n"
},
{
"code": null,
"e": 5747,
"s": 5719,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 5780,
"s": 5747,
"text": "\n 17 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 5801,
"s": 5780,
"text": " Sandip Bhattacharya"
},
{
"code": null,
"e": 5833,
"s": 5801,
"text": "\n 12 Lectures \n 53 mins\n"
},
{
"code": null,
"e": 5850,
"s": 5833,
"text": " Laurence Svekis"
},
{
"code": null,
"e": 5857,
"s": 5850,
"text": " Print"
},
{
"code": null,
"e": 5868,
"s": 5857,
"text": " Add Notes"
}
] |
Difference between window.location.href, window.location.replace and window.location.assign in JavaScript?
|
The window object in JavaScript includes the location object, which provides the following properties and methods −
It returns the URL of the current page.
<!DOCTYPE html>
<html>
<body>
<p>Click below to get the complete URL of the page.</p>
<button onclick = "display()">URL</button>
<script>
function display() {
var res = location.href;
document.write(res);
}
</script>
</body>
</html>
It is used to replace the current document. Because replace() does not save the current page in the session history, the user cannot navigate back to it with the Back button.
<!DOCTYPE html>
<html>
<body>
<button onclick = "display()">Replace current document</button>
<script>
function display() {
location.replace("https://www.qries.com")
}
</script>
</body>
</html>
If you want to load a new document while keeping the current page in the session history (so the Back button still works), use the JavaScript assign() method.
<!DOCTYPE html>
<html>
<body>
<button onclick = "display()">Open new document</button>
<script>
function display() {
location.assign("https://www.qries.com")
}
</script>
</body>
</html>
|
[
{
"code": null,
"e": 1163,
"s": 1062,
"text": "The window object includes the location object in JavaScript. It includes the following properties −"
},
{
"code": null,
"e": 1203,
"s": 1163,
"text": "It returns the URL of the current page."
},
{
"code": null,
"e": 1508,
"s": 1203,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <p>Click below to get the complete URL of the page.</p>\n <button onclick = \"display()\">URL</button>\n <script>\n function display() {\n var res = location.href;\n document.write(res);\n }\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 1552,
"s": 1508,
"text": "It is used to replace the current document."
},
{
"code": null,
"e": 1800,
"s": 1552,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <button onclick = \"display()\">Replace current document</button>\n <script>\n function display() {\n location.replace(\"https://www.qries.com\")\n }\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 1859,
"s": 1800,
"text": "If you want to load a new document, use JavaScript assign."
},
{
"code": null,
"e": 2099,
"s": 1859,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <button onclick = \"display()\">Open new document</button>\n <script>\n function display() {\n location.assign(\"https://www.qries.com\")\n }\n </script>\n </body>\n</html>"
}
] |
Coin game winner where every player has three choices - GeeksforGeeks
|
30 May, 2021
A and B are playing a game. At the beginning there are n coins. Given two more numbers x and y. In each move a player can pick x or y or 1 coins. A always starts the game. The player who picks the last coin wins the game or the person who is not able to pick any coin loses the game. For a given value of n, find whether A will win the game or not if both are playing optimally.
Examples:
Input : n = 5, x = 3, y = 4
Output : A
There are 5 coins, every player can pick 1 or
3 or 4 coins on his/her turn.
A can win by picking 3 coins in first chance.
Now 2 coins will be left so B will pick one
coin and now A can win by picking the last coin.
Input : 2 3 4
Output : B
Let us take a few example values of n for x = 3, y = 4:

n = 0 : A cannot pick any coin, so he loses.
n = 1 : A can pick 1 coin and win the game.
n = 2 : A can pick only 1 coin. Now B will pick 1 coin and win the game.
n = 3, 4 : A will win the game by picking 3 or 4 coins.
n = 5, 6 : A will choose 3 or 4 coins. Now B will have to choose from 2 coins, so A will win.

We can observe that A wins the game for n coins only when B loses for n-1, n-x or n-y coins.
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find winner of game
// if player can pick 1, x, y coins
#include <bits/stdc++.h>
using namespace std;

// To find winner of game
bool findWinner(int x, int y, int n)
{
    // To store results
    int dp[n + 1];

    // Initial values
    dp[0] = false;
    dp[1] = true;

    // Computing other values.
    for (int i = 2; i <= n; i++) {

        // If A loses any of the i-1, i-x
        // or i-y games then he will
        // definitely win game i
        if (i - 1 >= 0 and !dp[i - 1])
            dp[i] = true;
        else if (i - x >= 0 and !dp[i - x])
            dp[i] = true;
        else if (i - y >= 0 and !dp[i - y])
            dp[i] = true;

        // Else A loses game.
        else
            dp[i] = false;
    }

    // If dp[n] is true then A wins
    // the game, otherwise he loses
    return dp[n];
}

// Driver program to test findWinner()
int main()
{
    int x = 3, y = 4, n = 5;
    if (findWinner(x, y, n))
        cout << 'A';
    else
        cout << 'B';
    return 0;
}
// Java program to find winner of game
// if player can pick 1, x, y coins
import java.util.Arrays;

public class GFG {

    // To find winner of game
    static boolean findWinner(int x, int y, int n)
    {
        // To store results
        boolean[] dp = new boolean[n + 1];
        Arrays.fill(dp, false);

        // Initial values
        dp[0] = false;
        dp[1] = true;

        // Computing other values.
        for (int i = 2; i <= n; i++) {

            // If A loses any of the i-1, i-x
            // or i-y games then he will
            // definitely win game i
            if (i - 1 >= 0 && dp[i - 1] == false)
                dp[i] = true;
            else if (i - x >= 0 && dp[i - x] == false)
                dp[i] = true;
            else if (i - y >= 0 && dp[i - y] == false)
                dp[i] = true;

            // Else A loses game.
            else
                dp[i] = false;
        }

        // If dp[n] is true then A wins
        // the game, otherwise he loses
        return dp[n];
    }

    // Driver program to test findWinner()
    public static void main(String args[])
    {
        int x = 3, y = 4, n = 5;
        if (findWinner(x, y, n) == true)
            System.out.println('A');
        else
            System.out.println('B');
    }
}
// This code is contributed by Sumit Ghosh
# Python3 program to find winner of game
# if player can pick 1, x, y coins

# To find winner of game
def findWinner(x, y, n):

    # To store results
    dp = [False for i in range(n + 1)]

    # Initial values
    dp[0] = False
    dp[1] = True

    # Computing other values.
    for i in range(2, n + 1):

        # If A loses any of the i-1, i-x
        # or i-y games then he will
        # definitely win game i
        if (i - 1 >= 0 and not dp[i - 1]):
            dp[i] = True
        elif (i - x >= 0 and not dp[i - x]):
            dp[i] = True
        elif (i - y >= 0 and not dp[i - y]):
            dp[i] = True

        # Else A loses game.
        else:
            dp[i] = False

    # If dp[n] is true then A wins
    # the game, otherwise he loses
    return dp[n]

# Driver Code
x = 3; y = 4; n = 5
if (findWinner(x, y, n)):
    print('A')
else:
    print('B')

# This code is contributed by Azkia Anam
// C# program to find winner of game
// if player can pick 1, x, y coins
using System;

public class GFG {

    // To find winner of game
    static bool findWinner(int x, int y, int n)
    {
        // To store results
        bool[] dp = new bool[n + 1];
        for (int i = 0; i < n + 1; i++)
            dp[i] = false;

        // Initial values
        dp[0] = false;
        dp[1] = true;

        // Computing other values.
        for (int i = 2; i <= n; i++) {

            // If A loses any of the i-1, i-x
            // or i-y games then he will
            // definitely win game i
            if (i - 1 >= 0 && dp[i - 1] == false)
                dp[i] = true;
            else if (i - x >= 0 && dp[i - x] == false)
                dp[i] = true;
            else if (i - y >= 0 && dp[i - y] == false)
                dp[i] = true;

            // Else A loses game.
            else
                dp[i] = false;
        }

        // If dp[n] is true then A wins
        // the game, otherwise he loses
        return dp[n];
    }

    // Driver program to test findWinner()
    public static void Main()
    {
        int x = 3, y = 4, n = 5;
        if (findWinner(x, y, n) == true)
            Console.WriteLine('A');
        else
            Console.WriteLine('B');
    }
}

// This code is contributed by vt_m.
<?php
// PHP program to find winner of game
// if player can pick 1, x, y coins

// To find winner of game
function findWinner($x, $y, $n)
{
    // To store results
    $dp = array();

    // Initial values
    $dp[0] = false;
    $dp[1] = true;

    // Computing other values.
    for ($i = 2; $i <= $n; $i++)
    {
        // If A loses any of the i-1, i-x
        // or i-y games then he will
        // definitely win game i
        if ($i - 1 >= 0 and !$dp[$i - 1])
            $dp[$i] = true;
        else if ($i - $x >= 0 and !$dp[$i - $x])
            $dp[$i] = true;
        else if ($i - $y >= 0 and !$dp[$i - $y])
            $dp[$i] = true;

        // Else A loses game.
        else
            $dp[$i] = false;
    }

    // If dp[n] is true then A wins
    // the game, otherwise he loses
    return $dp[$n];
}

// Driver program to test findWinner()
$x = 3; $y = 4; $n = 5;
if (findWinner($x, $y, $n))
    echo 'A';
else
    echo 'B';

// This code is contributed by anuj_67.
?>
<script>

// Javascript program to find winner of game
// if player can pick 1, x, y coins

// To find winner of game
function findWinner(x, y, n)
{
    // To store results
    var dp = Array(n + 1).fill(false);

    // Initial values
    dp[0] = false;
    dp[1] = true;

    // Computing other values.
    for (var i = 2; i <= n; i++)
    {
        // If A loses any of the i-1, i-x
        // or i-y games then he will
        // definitely win game i
        if (i - 1 >= 0 && !dp[i - 1])
            dp[i] = true;
        else if (i - x >= 0 && !dp[i - x])
            dp[i] = true;
        else if (i - y >= 0 && !dp[i - y])
            dp[i] = true;

        // Else A loses game.
        else
            dp[i] = false;
    }

    // If dp[n] is true then A wins
    // the game, otherwise he loses
    return dp[n];
}

// Driver code
var x = 3, y = 4, n = 5;
if (findWinner(x, y, n))
    document.write('A');
else
    document.write('B');

// This code is contributed by noob2000

</script>
Output:
A
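The same recurrence can also be written top-down with memoization. A minimal Python sketch (the helper name find_winner and the use of functools.lru_cache are ours, not from the article): position i is winning exactly when at least one of the positions i-1, i-x, i-y is losing for the opponent.

```python
from functools import lru_cache

def find_winner(x, y, n):
    @lru_cache(maxsize=None)
    def wins(i):
        # With no coins left, the player to move loses.
        if i == 0:
            return False
        # i is winning iff some legal move leaves a losing position.
        return any(i - m >= 0 and not wins(i - m) for m in (1, x, y))
    return 'A' if wins(n) else 'B'

print(find_winner(3, 4, 5))  # A
print(find_winner(3, 4, 2))  # B
```

This mirrors the tabulated solutions above but only evaluates the positions actually reachable from n.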
This article is contributed by nuclode.
vt_m
noob2000
arpitprasad928
jvishal1968
Dynamic Programming
|
[
{
"code": null,
"e": 25216,
"s": 25188,
"text": "\n30 May, 2021"
},
{
"code": null,
"e": 25595,
"s": 25216,
"text": "A and B are playing a game. At the beginning there are n coins. Given two more numbers x and y. In each move a player can pick x or y or 1 coins. A always starts the game. The player who picks the last coin wins the game or the person who is not able to pick any coin loses the game. For a given value of n, find whether A will win the game or not if both are playing optimally."
},
{
"code": null,
"e": 25606,
"s": 25595,
"text": "Examples: "
},
{
"code": null,
"e": 25888,
"s": 25606,
"text": "Input : n = 5, x = 3, y = 4\nOutput : A\nThere are 5 coins, every player can pick 1 or\n3 or 4 coins on his/her turn.\nA can win by picking 3 coins in first chance.\nNow 2 coins will be left so B will pick one \ncoin and now A can win by picking the last coin.\n\nInput : 2 3 4\nOutput : B"
},
{
"code": null,
"e": 26330,
"s": 25888,
"text": "Let us take few example values of n for x = 3, y = 4. n = 0 A can not pick any coin so he losses n = 1 A can pick 1 coin and win the game n = 2 A can pick only 1 coin. Now B will pick 1 coin and win the game n = 3 4 A will win the game by picking 3 or 4 coins n = 5, 6 A will choose 3 or 4 coins. Now B will have to choose from 2 coins so A will win.We can observe that A wins game for n coins only when B loses for coins n-1 or n-x or n-y. "
},
{
"code": null,
"e": 26334,
"s": 26330,
"text": "C++"
},
{
"code": null,
"e": 26339,
"s": 26334,
"text": "Java"
},
{
"code": null,
"e": 26347,
"s": 26339,
"text": "Python3"
},
{
"code": null,
"e": 26350,
"s": 26347,
"text": "C#"
},
{
"code": null,
"e": 26354,
"s": 26350,
"text": "PHP"
},
{
"code": null,
"e": 26365,
"s": 26354,
"text": "Javascript"
},
{
"code": "// C++ program to find winner of game// if player can pick 1, x, y coins#include <bits/stdc++.h>using namespace std; // To find winner of gamebool findWinner(int x, int y, int n){ // To store results int dp[n + 1]; // Initial values dp[0] = false; dp[1] = true; // Computing other values. for (int i = 2; i <= n; i++) { // If A losses any of i-1 or i-x // or i-y game then he will // definitely win game i if (i - 1 >= 0 and !dp[i - 1]) dp[i] = true; else if (i - x >= 0 and !dp[i - x]) dp[i] = true; else if (i - y >= 0 and !dp[i - y]) dp[i] = true; // Else A loses game. else dp[i] = false; } // If dp[n] is true then A will // game otherwise he losses return dp[n];} // Driver program to test findWinner();int main(){ int x = 3, y = 4, n = 5; if (findWinner(x, y, n)) cout << 'A'; else cout << 'B'; return 0;}",
"e": 27344,
"s": 26365,
"text": null
},
{
"code": "// Java program to find winner of game// if player can pick 1, x, y coinsimport java.util.Arrays; public class GFG { // To find winner of game static boolean findWinner(int x, int y, int n) { // To store results boolean[] dp = new boolean[n + 1]; Arrays.fill(dp, false); // Initial values dp[0] = false; dp[1] = true; // Computing other values. for (int i = 2; i <= n; i++) { // If A losses any of i-1 or i-x // or i-y game then he will // definitely win game i if (i - 1 >= 0 && dp[i - 1] == false) dp[i] = true; else if (i - x >= 0 && dp[i - x] == false) dp[i] = true; else if (i - y >= 0 && dp[i - y] == false) dp[i] = true; // Else A loses game. else dp[i] = false; } // If dp[n] is true then A will // game otherwise he losses return dp[n]; } // Driver program to test findWinner(); public static void main(String args[]) { int x = 3, y = 4, n = 5; if (findWinner(x, y, n) == true) System.out.println('A'); else System.out.println('B'); }}// This code is contributed by Sumit Ghosh",
"e": 28677,
"s": 27344,
"text": null
},
{
"code": "# Python3 program to find winner of game# if player can pick 1, x, y coins # To find winner of gamedef findWinner(x, y, n): # To store results dp = [0 for i in range(n + 1)] # Initial values dp[0] = False dp[1] = True # Computing other values. for i in range(2, n + 1): # If A losses any of i-1 or i-x # or i-y game then he will # definitely win game i if (i - 1 >= 0 and not dp[i - 1]): dp[i] = True elif (i - x >= 0 and not dp[i - x]): dp[i] = True elif (i - y >= 0 and not dp[i - y]): dp[i] = True # Else A loses game. else: dp[i] = False # If dp[n] is true then A will # game otherwise he losses return dp[n] # Driver Codex = 3; y = 4; n = 5if (findWinner(x, y, n)): print('A')else: print('B') # This code is contributed by Azkia Anam",
"e": 29562,
"s": 28677,
"text": null
},
{
"code": "// C# program to find winner of game// if player can pick 1, x, y coinsusing System; public class GFG { // To find winner of game static bool findWinner(int x, int y, int n) { // To store results bool[] dp = new bool[n + 1]; for(int i = 0; i < n+1; i++) dp[i] =false; // Initial values dp[0] = false; dp[1] = true; // Computing other values. for (int i = 2; i <= n; i++) { // If A losses any of i-1 or i-x // or i-y game then he will // definitely win game i if (i - 1 >= 0 && dp[i - 1] == false) dp[i] = true; else if (i - x >= 0 && dp[i - x] == false) dp[i] = true; else if (i - y >= 0 && dp[i - y] == false) dp[i] = true; // Else A loses game. else dp[i] = false; } // If dp[n] is true then A will // game otherwise he losses return dp[n]; } // Driver program to test findWinner(); public static void Main() { int x = 3, y = 4, n = 5; if (findWinner(x, y, n) == true) Console.WriteLine('A'); else Console.WriteLine('B'); }} // This code is contributed by vt_m.",
"e": 30901,
"s": 29562,
"text": null
},
{
"code": "<?php// PHP program to find winner of game// if player can pick 1, x, y coins // To find winner of gamefunction findWinner( $x, $y, $n){ // To store results $dp= array(); // Initial values $dp[0] = false; $dp[1] = true; // Computing other values. for ($i = 2; $i <= $n; $i++) { // If A losses any of i-1 or i-x // or i-y game then he will // definitely win game i if ($i - 1 >= 0 and !$dp[$i - 1]) $dp[$i] = true; else if ($i - $x >= 0 and !$dp[$i - $x]) $dp[$i] = true; else if ($i - $y >= 0 and !$dp[$i - $y]) $dp[$i] = true; // Else A loses game. else $dp[$i] = false; } // If dp[n] is true then A will // game otherwise he losses return $dp[$n];} // Driver program to test findWinner(); $x = 3; $y = 4; $n = 5; if (findWinner($x, $y, $n)) echo 'A'; else echo 'B'; // This code is contributed by anuj_67.?>",
"e": 31888,
"s": 30901,
"text": null
},
{
"code": "<script> // Javascript program to find winner of game// if player can pick 1, x, y coins // To find winner of gamefunction findWinner(x, y, n){ // To store results var dp = Array(n + 1).fill(0); // Initial values dp[0] = false; dp[1] = true; // Computing other values. for(var i = 2; i <= n; i++) { // If A losses any of i-1 or i-x // or i-y game then he will // definitely win game i if (i - 1 >= 0 && !dp[i - 1]) dp[i] = true; else if (i - x >= 0 && !dp[i - x]) dp[i] = true; else if (i - y >= 0 && !dp[i - y]) dp[i] = true; // Else A loses game. else dp[i] = false; } // If dp[n] is true then A will // game otherwise he losses return dp[n];} // Driver codevar x = 3, y = 4, n = 5;if (findWinner(x, y, n)) document.write('A');else document.write('B'); // This code is contributed by noob2000 </script>",
"e": 32853,
"s": 31888,
"text": null
},
{
"code": null,
"e": 32862,
"s": 32853,
"text": "Output: "
},
{
"code": null,
"e": 32864,
"s": 32862,
"text": "A"
},
{
"code": null,
"e": 33285,
"s": 32864,
"text": "This article is contributed by nuclode. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 33290,
"s": 33285,
"text": "vt_m"
},
{
"code": null,
"e": 33299,
"s": 33290,
"text": "noob2000"
},
{
"code": null,
"e": 33314,
"s": 33299,
"text": "arpitprasad928"
},
{
"code": null,
"e": 33326,
"s": 33314,
"text": "jvishal1968"
},
{
"code": null,
"e": 33346,
"s": 33326,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 33366,
"s": 33346,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 33464,
"s": 33366,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33495,
"s": 33464,
"text": "Bellman–Ford Algorithm | DP-23"
},
{
"code": null,
"e": 33528,
"s": 33495,
"text": "Floyd Warshall Algorithm | DP-16"
},
{
"code": null,
"e": 33596,
"s": 33528,
"text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)"
},
{
"code": null,
"e": 33617,
"s": 33596,
"text": "Edit Distance | DP-5"
},
{
"code": null,
"e": 33680,
"s": 33617,
"text": "Overlapping Subproblems Property in Dynamic Programming | DP-1"
},
{
"code": null,
"e": 33743,
"s": 33680,
"text": "Efficient program to print all prime factors of a given number"
},
{
"code": null,
"e": 33796,
"s": 33743,
"text": "Find minimum number of coins that make a given value"
},
{
"code": null,
"e": 33833,
"s": 33796,
"text": "Minimum number of jumps to reach end"
},
{
"code": null,
"e": 33859,
"s": 33833,
"text": "Tabulation vs Memoization"
}
] |
Difference between RGB vs RGBA color format - GeeksforGeeks
|
07 Apr, 2021
In this article, we will discuss the differences between RGB and RGBA color schemes in detail. We will also see how these schemes can be used in CSS.
RGB Color Scheme: It is a three-channel format containing data for red, green, and blue colors. In CSS, the RGB color format can be specified using:
rgb(red, green, blue)
Each parameter in the rgb() function defines the intensity of that color in a range of 0 to 255. The value 0 means none of that color is used, while 255 is the maximum intensity of that color.
RGBA Color Scheme: The RGBA color format is an extension of the RGB scheme with an added alpha channel that specifies the opacity of the color. In CSS, the RGBA color format can be specified using:
rgba(red, green, blue, alpha)
The alpha value is declared as a decimal number from 0 to 1, where 0 is fully transparent and 1 is fully opaque.
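As a hedged aside (the helper composite_over is ours, not part of the article), the color a browser effectively paints for an rgba() value over a solid background can be approximated by linear interpolation with alpha: out = alpha * foreground + (1 - alpha) * background, per channel.

```python
def composite_over(fg, alpha, bg=(255, 255, 255)):
    # Blend each channel of the foreground over the background by alpha.
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# rgba(255, 0, 0, 0.1) over a white page looks almost white:
print(composite_over((255, 0, 0), 0.1))  # (255, 230, 230)
```

With alpha = 1 the background contributes nothing, which is why rgb() and a fully opaque rgba() render identically.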
The below example demonstrates the difference between these two color schemes.
Example:
HTML
<html>
<head>
    <style>
        div {
            height: 75px;
            width: 500px;
            padding: 10px;
            font-size: 1.5rem;
        }
        #div1 {
            background-color: rgb(255, 0, 0);
        }
        #div2 {
            background-color: rgb(0, 192, 192);
        }
        #div3 {
            background-color: rgba(255, 0, 0, 0.1);
        }
        #div4 {
            background-color: rgba(0, 192, 192, 0.6);
        }
    </style>
</head>
<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>
    <h3>
        Difference between RGB and RGBA color scheme
    </h3>
    <p>An RGB color value is specified with
        the rgb() function:
        rgb(red, green, blue)
    </p>
    <p>An RGBA color value is specified with
        the rgba() function:
        rgba(red,green,blue,opacity)
    </p>
    <div id="div1">Red with rgb()</div>
    <div id="div2">Color with rgb()</div>
    <div id="div3">
        Red with rgba() and alpha
    </div>
    <div id="div4">
        Color with rgba() and alpha
    </div>
</body>
</html>
Output:
Key Differences between RGB Color Format and RGBA Color Format:
CSS-Basics
CSS-Questions
Picked
CSS
Web Technologies
|
[
{
"code": null,
"e": 25402,
"s": 25374,
"text": "\n07 Apr, 2021"
},
{
"code": null,
"e": 25552,
"s": 25402,
"text": "In this article, we will discuss the differences between RGB and RGBA color schemes in detail. We will also see how these schemes can be used in CSS."
},
{
"code": null,
"e": 25701,
"s": 25552,
"text": "RGB Color Scheme: It is a three-channel format containing data for red, green, and blue colors. In CSS, the RGB color format can be specified using:"
},
{
"code": null,
"e": 25723,
"s": 25701,
"text": "rgb(red, green, blue)"
},
{
"code": null,
"e": 25927,
"s": 25723,
"text": "Each parameter in rgb() function defines the intensity of colors in a range of 0 to 255. The value 0 defines no color of that type being used while 255 defines the highest value of that color being used."
},
{
"code": null,
"e": 26125,
"s": 25927,
"text": "RGBA Color Scheme: The RGBA color format is an extension of the RGB scheme with an added alpha channel that specifies the opacity of the color. In CSS, the RGBA color format can be specified using:"
},
{
"code": null,
"e": 26155,
"s": 26125,
"text": "rgba(red, green, blue, alpha)"
},
{
"code": null,
"e": 26268,
"s": 26155,
"text": "The alpha value is declared as a decimal number from 0 to 1, where 0 is fully transparent and 1 is fully opaque."
},
{
"code": null,
"e": 26347,
"s": 26268,
"text": "The below example demonstrates the difference between these two color schemes."
},
{
"code": null,
"e": 26356,
"s": 26347,
"text": "Example:"
},
{
"code": null,
"e": 26361,
"s": 26356,
"text": "HTML"
},
{
"code": "<html><head> <style> div { height: 75px; width: 500px; padding: 10px; font-size: 1.5rem; } #div1 { background-color: rgb(255, 0, 0); } #div2 { background-color: rgb(0, 192, 192); } #div3 { background-color: rgb(255, 0, 0, 0.1); } #div4 { background-color: rgb(0, 192, 192, 0.6); } </style></head><body> <h1 style=\"color: green\"> GeeksforGeeks </h1> <h3> Difference between RGB and RGBA color scheme </h3> <p>An RGB color value is specified with the rgb() function: rgb(red, green, blue) </p> <p>An RGBA color value is specified with the rgba() function: rgba(red,green,blue,opacity) </p> <div id=\"div1\">Red with rgb()</div> <div id=\"div2\">Color with rgb()</div> <div id=\"div3\"> Red with rgba() and alpha </div> <div id=\"div4\"> Color with rgba() and alpha </div></body></html>",
"e": 27279,
"s": 26361,
"text": null
},
{
"code": null,
"e": 27287,
"s": 27279,
"text": "Output:"
},
{
"code": null,
"e": 27351,
"s": 27287,
"text": "Key Differences between RGB Color Format and RGBA Color Format:"
},
{
"code": null,
"e": 27362,
"s": 27351,
"text": "CSS-Basics"
},
{
"code": null,
"e": 27376,
"s": 27362,
"text": "CSS-Questions"
},
{
"code": null,
"e": 27383,
"s": 27376,
"text": "Picked"
},
{
"code": null,
"e": 27387,
"s": 27383,
"text": "CSS"
},
{
"code": null,
"e": 27404,
"s": 27387,
"text": "Web Technologies"
},
{
"code": null,
"e": 27502,
"s": 27404,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27539,
"s": 27502,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 27568,
"s": 27539,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 27607,
"s": 27568,
"text": "How to set space between the flexbox ?"
},
{
"code": null,
"e": 27649,
"s": 27607,
"text": "Search Bar using HTML, CSS and JavaScript"
},
{
"code": null,
"e": 27696,
"s": 27649,
"text": "How to Create Time-Table schedule using HTML ?"
},
{
"code": null,
"e": 27738,
"s": 27696,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27771,
"s": 27738,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27814,
"s": 27771,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 27859,
"s": 27814,
"text": "Convert a string to an integer in JavaScript"
}
] |
Design a web page using HTML and CSS - GeeksforGeeks
|
04 Aug, 2021
Creating an attractive page is difficult for anyone who is not comfortable with CSS: without CSS, you cannot make a web page visually appealing. So in order to design a web page, we need knowledge of both HTML and CSS. In this article, the main focus is on the CSS. To design a web page, we first need to create its HTML structure.
Creating structure: In this section, we will create a simple web page structure using <li> and <section> tags. This produces a simple interface, which you can check by running the following code.
HTML code:
html
<!DOCTYPE html> <html> <head> <title> Simple web Development Template </title></head> <body> <nav class="navbar background"> <ul class="nav-list"> <div class="logo"> <img src="logo.png"> </div> <li><a href="#web">Web Technology</a></li> <li><a href="#program">C Programming</a></li> <li><a href="#course">Courses</a></li> </ul> <div class="rightNav"> <input type="text" name="search" id="search"> <button class="btn btn-sm">Search</button> </div> </nav> <section class="firstsection"> <div class="box-main"> <div class="firstHalf"> <h1 class="text-big" id="web"> Web Technology </h1> <p class="text-small"> HTML stands for HyperText Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the link between the web pages. A markup language is used to define the text document within tag which defines the structure of web pages. HTML is a markup language that is used by the browser to manipulate text, images, and other content to display it in the required format. </p> </div> </div> </section> <section class="secondsection"> <div class="box-main"> <div class="secondHalf"> <h1 class="text-big" id="program"> C Programming </h1> <p class="text-small"> C is a procedural programming language. It was initially developed by Dennis Ritchie as a system programming language to write operating system. The main features of C language include low-level access to memory, simple set of keywords, and clean style, these features make C language suitable for system programming like operating system or compiler development. </p> </div> </div> </section> <section class="section"> <div class="paras"> <h1 class="sectionTag text-big">Java</h1> <p class="sectionSubTag text-small"> Java has been one of the most popular programming language for many years. Java is Object Oriented. 
However it is not considered as pure object oriented as it provides support for primitive data types (like int, char, etc) The Java codes are first compiled into byte code (machine independent code). Then the byte code is run on Java Virtual Machine (JVM) regardless of the underlying architecture. </p> </div> <div class="thumbnail"> <img src="img.png" alt="laptop image"> </div> </section> <footer class="background"> <p class="text-footer"> Copyright ©-All rights are reserved </p> </footer></body> </html>
We have used classes like section and section-left, which are referenced in the CSS to apply proper styling and make the web page more attractive.
CSS design: We will use CSS to give proper design effects to the HTML structure created above. The most difficult part is displaying the picture and its text in opposite directions. Suppose the image sits on the right and the accompanying text on the left: with flex-direction: row-reverse, the image that would normally appear on the right is shown on the left, and the text is shown on the right.
CSS code:
CSS
<style> * { margin: 0; padding: 0; } .navbar { display: flex; align-items: center; justify-content: center; position: sticky; top: 0; cursor: pointer; } .background { background: black; background-blend-mode: darken; background-size: cover; } .nav-list { width: 70%; display: flex; align-items: center; } .logo { display: flex; justify-content: center; align-items: center; } .logo img { width: 180px; border-radius: 50px; } .nav-list li { list-style: none; padding: 26px 30px; } .nav-list li a { text-decoration: none; color: white; } .nav-list li a:hover { color: grey; } .rightnav { width: 30%; text-align: right; } #search { padding: 5px; font-size: 17px; border: 2px solid grey; border-radius: 9px; } .firstsection { background-color: green; height: 400px; } .secondsection { background-color: blue; height: 400px; } .box-main { display: flex; justify-content: center; align-items: center; color: black; max-width: 80%; margin: auto; height: 80%; } .firsthalf { width: 100%; display: flex; flex-direction: column; justify-content: center; } .secondhalf { width: 30%; } .secondhalf img { width: 70%; border: 4px solid white; border-radius: 150px; display: block; margin: auto; } .text-big { font-family: 'Piazzolla', serif; font-weight: bold; font-size: 35px; } .text-small { font-size: 18px; } .btn { padding: 8px 20px; margin: 7px 0; border: 2px solid white; border-radius: 8px; background: none; color: white; cursor: pointer; } .btn-sm { padding: 6px 10px; vertical-align: middle; } .section { height: 400px; display: flex; align-items: center; justify-content: center; max-width: 90%; margin: auto; } .section-Left { flex-direction: row-reverse; } .paras { padding: 0px 65px; } .thumbnail img { width: 250px; border: 2px solid black; border-radius: 26px; margin-top: 19px; } .center { text-align: center; } .text-footer { text-align: center; padding: 30px 0; font-family: 'Ubuntu', sans-serif; display: flex; justify-content: center; color: white; }</style>
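The row-reverse behavior described above can be seen in isolation with a minimal sketch (this snippet is illustrative and not part of the template itself):

```html
<!-- The image is written first in the markup, but row-reverse places the
     first child at the main-axis end, so it renders on the right and the
     paragraph moves to the left. -->
<div style="display: flex; flex-direction: row-reverse;">
  <img src="img.png" alt="thumbnail">
  <p>Written second in the markup, rendered on the left.</p>
</div>
```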
Final code: We will combine both HTML and CSS in order to create the web page.
html
<!DOCTYPE html> <html> <head> <title>Simple web Development Template</title> <style> * { margin: 0; padding: 0; } .navbar { display: flex; align-items: center; justify-content: center; position: sticky; top: 0; cursor: pointer; } .background { background: black; background-blend-mode: darken; background-size: cover; } .nav-list { width: 70%; display: flex; align-items: center; } .logo { display: flex; justify-content: center; align-items: center; } .logo img { width: 180px; border-radius: 50px; } .nav-list li { list-style: none; padding: 26px 30px; } .nav-list li a { text-decoration: none; color: white; } .nav-list li a:hover { color: grey; } .rightnav { width: 30%; text-align: right; } #search { padding: 5px; font-size: 17px; border: 2px solid grey; border-radius: 9px; } .firstsection { background-color: green; height: 400px; } .secondsection { background-color: blue; height: 400px; } .box-main { display: flex; justify-content: center; align-items: center; color: black; max-width: 80%; margin: auto; height: 80%; } .firsthalf { width: 100%; display: flex; flex-direction: column; justify-content: center; } .secondhalf { width: 30%; } .secondhalf img { width: 70%; border: 4px solid white; border-radius: 150px; display: block; margin: auto; } .text-big { font-family: 'Piazzolla', serif; font-weight: bold; font-size: 35px; } .text-small { font-size: 18px; } .btn { padding: 8px 20px; margin: 7px 0; border: 2px solid white; border-radius: 8px; background: none; color: white; cursor: pointer; } .btn-sm { padding: 6px 10px; vertical-align: middle; } .section { height: 400px; display: flex; align-items: center; justify-content: center; max-width: 90%; margin: auto; } .section-Left { flex-direction: row-reverse; } .paras { padding: 0px 65px; } .thumbnail img { width: 250px; border: 2px solid black; border-radius: 26px; margin-top: 19px; } .center { text-align: center; } .text-footer { text-align: center; padding: 30px 0; font-family: 'Ubuntu', sans-serif; display: flex; 
justify-content: center; color: white; } </style></head> <body> <nav class="navbar background"> <ul class="nav-list"> <div class="logo"> <img src= "logo.png"> </div> <li><a href="#web">Web Technology</a></li> <li><a href="#program">C Programming</a></li> <li><a href="#course">Courses</a></li> </ul> <div class="rightNav"> <input type="text" name="search" id="search"> <button class="btn btn-sm">Search</button> </div> </nav> <section class="firstsection"> <div class="box-main"> <div class="firstHalf"> <h1 class="text-big" id="web">Web Technology</h1> <p class="text-small"> HTML stands for HyperText Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the link between the web pages. A markup language is used to define the text document within tag which defines the structure of web pages. HTML is a markup language that is used by the browser to manipulate text, images, and other content to display it in the required format. </p> </div> </div> </section> <section class="secondsection"> <div class="box-main"> <div class="firstHalf"> <h1 class="text-big" id="program"> C Programming </h1> <p class="text-small"> C is a procedural programming language. It was initially developed by Dennis Ritchie as a system programming language to write operating system. The main features of C language include low-level access to memory, simple set of keywords, and clean style, these features make C language suitable for system programming like operating system or compiler development. </p> </div> </div> </section> <section class="section"> <div class="paras"> <h1 class="sectionTag text-big">Java</h1> <p class="sectionSubTag text-small"> Java has been one of the most popular programming language for many years. Java is Object Oriented. 
However it is not considered as pure object oriented as it provides support for primitive data types (like int, char, etc) The Java codes are first compiled into byte code (machine independent code). Then the byte code is run on Java Virtual Machine (JVM) regardless of the underlying architecture. </p> </div> <div class="thumbnail"> <img src= "img.png" alt="laptop image"> </div> </section> <footer class="background"> <p class="text-footer"> Copyright ©-All rights are reserved </p> </footer></body> </html>
Output:
Supported Browsers:
Google Chrome
Microsoft Edge
Firefox
Opera
Safari
|
[
{
"code": null,
"e": 24364,
"s": 24336,
"text": "\n04 Aug, 2021"
},
{
"code": null,
"e": 24739,
"s": 24364,
"text": "Creating an attractive page will be difficult for those who are not experts in CSS. Without using CSS, you will not be able to make the web page, more attractive. So in order to make a web page, we need to have a knowledge of HTML and CSS. In this article, the main focus will be implementing CSS. In order to design a web page we need to first create an HTML web structure."
},
{
"code": null,
"e": 24947,
"s": 24739,
"text": "Creating structure: In this section, we will create a simple structure of web page by using <li> and <section> tags. So this will create a simple interface which you can check by running the following code ."
},
{
"code": null,
"e": 24960,
"s": 24947,
"text": "HTML code: "
},
{
"code": null,
"e": 24965,
"s": 24960,
"text": "html"
},
{
"code": "<!DOCTYPE html> <html> <head> <title> Simple web Development Template </title></head> <body> <nav class=\"navbar background\"> <ul class=\"nav-list\"> <div class=\"logo\"> <img src=\"logo.png\"> </div> <li><a href=\"#web\">Web Technology</a></li> <li><a href=\"#program\">C Programming</a></li> <li><a href=\"#course\">Courses</a></li> </ul> <div class=\"rightNav\"> <input type=\"text\" name=\"search\" id=\"search\"> <button class=\"btn btn-sm\">Search</button> </div> </nav> <section class=\"firstsection\"> <div class=\"box-main\"> <div class=\"firstHalf\"> <h1 class=\"text-big\" id=\"web\"> Web Technology </h1> <p class=\"text-small\"> HTML stands for HyperText Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the link between the web pages. A markup language is used to define the text document within tag which defines the structure of web pages. HTML is a markup language that is used by the browser to manipulate text, images, and other content to display it in the required format. </p> </div> </div> </section> <section class=\"secondsection\"> <div class=\"box-main\"> <div class=\"secondHalf\"> <h1 class=\"text-big\" id=\"program\"> C Programming </h1> <p class=\"text-small\"> C is a procedural programming language. It was initially developed by Dennis Ritchie as a system programming language to write operating system. The main features of C language include low-level access to memory, simple set of keywords, and clean style, these features make C language suitable for system programming like operating system or compiler development. </p> </div> </div> </section> <section class=\"section\"> <div class=\"paras\"> <h1 class=\"sectionTag text-big\">Java</h1> <p class=\"sectionSubTag text-small\"> Java has been one of the most popular programming language for many years. Java is Object Oriented. 
However it is not considered as pure object oriented as it provides support for primitive data types (like int, char, etc) The Java codes are first compiled into byte code (machine independent code). Then the byte code is run on Java Virtual Machine (JVM) regardless of the underlying architecture. </p> </div> <div class=\"thumbnail\"> <img src=\"img.png\" alt=\"laptop image\"> </div> </section> <footer class=\"background\"> <p class=\"text-footer\"> Copyright ©-All rights are reserved </p> </footer></body> </html>",
"e": 28499,
"s": 24965,
"text": null
},
{
"code": null,
"e": 28640,
"s": 28499,
"text": "We have used classes like section, section-left which is used in CSS to give a proper styling, as it will make the web page more attractive."
},
{
"code": null,
"e": 29089,
"s": 28640,
"text": "CSS design: We will use CSS to give proper design effects to the HTML web structure that we have created in HTML code. The most difficult part is to display the picture in a different direction. Consider the picture is in the right direction and the text along with it is in the left direction. When we use flex-direction:row-reverse, the image which is on the right side will be shown on the left side and the text will be shown on the right side."
},
{
"code": null,
"e": 29101,
"s": 29089,
"text": "CSS code: "
},
{
"code": null,
"e": 29105,
"s": 29101,
"text": "CSS"
},
{
"code": "<style> * { margin: 0; padding: 0; } .navbar { display: flex; align-items: center; justify-content: center; position: sticky; top: 0; cursor: pointer; } .background { background: black; background-blend-mode: darken; background-size: cover; } .nav-list { width: 70%; display: flex; align-items: center; } .logo { display: flex; justify-content: center; align-items: center; } .logo img { width: 180px; border-radius: 50px; } .nav-list li { list-style: none; padding: 26px 30px; } .nav-list li a { text-decoration: none; color: white; } .nav-list li a:hover { color: grey; } .rightnav { width: 30%; text-align: right; } #search { padding: 5px; font-size: 17px; border: 2px solid grey; border-radius: 9px; } .firstsection { background-color: green; height: 400px; } .secondsection { background-color: blue; height: 400px; } .box-main { display: flex; justify-content: center; align-items: center; color: black; max-width: 80%; margin: auto; height: 80%; } .firsthalf { width: 100%; display: flex; flex-direction: column; justify-content: center; } .secondhalf { width: 30%; } .secondhalf img { width: 70%; border: 4px solid white; border-radius: 150px; display: block; margin: auto; } .text-big { font-family: 'Piazzolla', serif; font-weight: bold; font-size: 35px; } .text-small { font-size: 18px; } .btn { padding: 8px 20px; margin: 7px 0; border: 2px solid white; border-radius: 8px; background: none; color: white; cursor: pointer; } .btn-sm { padding: 6px 10px; vertical-align: middle; } .section { height: 400px; display: flex; align-items: center; justify-content: center; max-width: 90%; margin: auto; } .section-Left { flex-direction: row-reverse; } .paras { padding: 0px 65px; } .thumbnail img { width: 250px; border: 2px solid black; border-radius: 26px; margin-top: 19px; } .center { text-align: center; } .text-footer { text-align: center; padding: 30px 0; font-family: 'Ubuntu', sans-serif; display: flex; justify-content: center; color: white; }</style>",
"e": 31841,
"s": 29105,
"text": null
},
{
"code": null,
"e": 31920,
"s": 31841,
"text": "Final code: We will combine both HTML and CSS in order to create the web page."
},
{
"code": null,
"e": 31925,
"s": 31920,
"text": "html"
},
{
"code": "<!DOCTYPE html> <html> <head> <title>Simple web Development Template</title> <style> * { margin: 0; padding: 0; } .navbar { display: flex; align-items: center; justify-content: center; position: sticky; top: 0; cursor: pointer; } .background { background: black; background-blend-mode: darken; background-size: cover; } .nav-list { width: 70%; display: flex; align-items: center; } .logo { display: flex; justify-content: center; align-items: center; } .logo img { width: 180px; border-radius: 50px; } .nav-list li { list-style: none; padding: 26px 30px; } .nav-list li a { text-decoration: none; color: white; } .nav-list li a:hover { color: grey; } .rightnav { width: 30%; text-align: right; } #search { padding: 5px; font-size: 17px; border: 2px solid grey; border-radius: 9px; } .firstsection { background-color: green; height: 400px; } .secondsection { background-color: blue; height: 400px; } .box-main { display: flex; justify-content: center; align-items: center; color: black; max-width: 80%; margin: auto; height: 80%; } .firsthalf { width: 100%; display: flex; flex-direction: column; justify-content: center; } .secondhalf { width: 30%; } .secondhalf img { width: 70%; border: 4px solid white; border-radius: 150px; display: block; margin: auto; } .text-big { font-family: 'Piazzolla', serif; font-weight: bold; font-size: 35px; } .text-small { font-size: 18px; } .btn { padding: 8px 20px; margin: 7px 0; border: 2px solid white; border-radius: 8px; background: none; color: white; cursor: pointer; } .btn-sm { padding: 6px 10px; vertical-align: middle; } .section { height: 400px; display: flex; align-items: center; justify-content: center; max-width: 90%; margin: auto; } .section-Left { flex-direction: row-reverse; } .paras { padding: 0px 65px; } .thumbnail img { width: 250px; border: 2px solid black; border-radius: 26px; margin-top: 19px; } .center { text-align: center; } .text-footer { text-align: center; padding: 30px 0; font-family: 'Ubuntu', sans-serif; display: 
flex; justify-content: center; color: white; } </style></head> <body> <nav class=\"navbar background\"> <ul class=\"nav-list\"> <div class=\"logo\"> <img src= \"logo.png\"> </div> <li><a href=\"#web\">Web Technology</a></li> <li><a href=\"#program\">C Programming</a></li> <li><a href=\"#course\">Courses</a></li> </ul> <div class=\"rightNav\"> <input type=\"text\" name=\"search\" id=\"search\"> <button class=\"btn btn-sm\">Search</button> </div> </nav> <section class=\"firstsection\"> <div class=\"box-main\"> <div class=\"firstHalf\"> <h1 class=\"text-big\" id=\"web\">Web Technology</h1> <p class=\"text-small\"> HTML stands for HyperText Markup Language. It is used to design web pages using a markup language. HTML is the combination of Hypertext and Markup language. Hypertext defines the link between the web pages. A markup language is used to define the text document within tag which defines the structure of web pages. HTML is a markup language that is used by the browser to manipulate text, images, and other content to display it in the required format. </p> </div> </div> </section> <section class=\"secondsection\"> <div class=\"box-main\"> <div class=\"firstHalf\"> <h1 class=\"text-big\" id=\"program\"> C Programming </h1> <p class=\"text-small\"> C is a procedural programming language. It was initially developed by Dennis Ritchie as a system programming language to write operating system. The main features of C language include low-level access to memory, simple set of keywords, and clean style, these features make C language suitable for system programming like operating system or compiler development. </p> </div> </div> </section> <section class=\"section\"> <div class=\"paras\"> <h1 class=\"sectionTag text-big\">Java</h1> <p class=\"sectionSubTag text-small\"> Java has been one of the most popular programming language for many years. Java is Object Oriented. 
However it is not considered as pure object oriented as it provides support for primitive data types (like int, char, etc) The Java codes are first compiled into byte code (machine independent code). Then the byte code is run on Java Virtual Machine (JVM) regardless of the underlying architecture. </p> </div> <div class=\"thumbnail\"> <img src= \"img.png\" alt=\"laptop image\"> </div> </section> <footer class=\"background\"> <p class=\"text-footer\"> Copyright ©-All rights are reserved </p> </footer></body> </html>",
"e": 38577,
"s": 31925,
"text": null
},
{
"code": null,
"e": 38587,
"s": 38577,
"text": "Output: "
},
{
"code": null,
"e": 38607,
"s": 38587,
"text": " Supported Browser:"
},
{
"code": null,
"e": 38621,
"s": 38607,
"text": "Google Chrome"
},
{
"code": null,
"e": 38636,
"s": 38621,
"text": "Microsoft Edge"
},
{
"code": null,
"e": 38644,
"s": 38636,
"text": "Firefox"
},
{
"code": null,
"e": 38650,
"s": 38644,
"text": "Opera"
},
{
"code": null,
"e": 38657,
"s": 38650,
"text": "Safari"
},
{
"code": null,
"e": 38794,
"s": 38657,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 38812,
"s": 38794,
"text": "harshittiwari7397"
},
{
"code": null,
"e": 38824,
"s": 38812,
"text": "ysachin2314"
},
{
"code": null,
"e": 38833,
"s": 38824,
"text": "CSS-Misc"
},
{
"code": null,
"e": 38843,
"s": 38833,
"text": "HTML-Misc"
},
{
"code": null,
"e": 38847,
"s": 38843,
"text": "CSS"
},
{
"code": null,
"e": 38852,
"s": 38847,
"text": "HTML"
},
{
"code": null,
"e": 38869,
"s": 38852,
"text": "Web Technologies"
},
{
"code": null,
"e": 38896,
"s": 38869,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 38901,
"s": 38896,
"text": "HTML"
},
{
"code": null,
"e": 38999,
"s": 38901,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 39057,
"s": 38999,
"text": "How to create footer to stay at the bottom of a Web page?"
},
{
"code": null,
"e": 39094,
"s": 39057,
"text": "Types of CSS (Cascading Style Sheet)"
},
{
"code": null,
"e": 39158,
"s": 39094,
"text": "How to position a div at the bottom of its container using CSS?"
},
{
"code": null,
"e": 39199,
"s": 39158,
"text": "Create a Responsive Navbar using ReactJS"
},
{
"code": null,
"e": 39260,
"s": 39199,
"text": "How to Upload Image into Database and Display it using PHP ?"
},
{
"code": null,
"e": 39320,
"s": 39260,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 39381,
"s": 39320,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 39434,
"s": 39381,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 39484,
"s": 39434,
"text": "How to Insert Form Data into Database using PHP ?"
}
] |
Null List in C#
|
A null list exists in C#. To check whether a list is null, compare it against the null literal.
Set a null like this −
List<string> myList = null;
Now, to check for null, use the equality operator −
myList == null;
The following is an example −
using System;
using System.Collections.Generic;
using System.Linq;
public class Demo {
public static void Main() {
List<string> myList = null;
// checking for null
Console.WriteLine(myList == null);
}
}
Output:
True
|
[
{
"code": null,
"e": 1188,
"s": 1062,
"text": "A null list exists in C#. To check whether a list is null or not, check them against the null literal.\nSet a null like this −"
},
{
"code": null,
"e": 1216,
"s": 1188,
"text": "List<string> myList = null;"
},
{
"code": null,
"e": 1268,
"s": 1216,
"text": "Now, to check for null, use the equality operator −"
},
{
"code": null,
"e": 1284,
"s": 1268,
"text": "myList == null;"
},
{
"code": null,
"e": 1314,
"s": 1284,
"text": "The following is an example −"
},
{
"code": null,
"e": 1325,
"s": 1314,
"text": " Live Demo"
},
{
"code": null,
"e": 1552,
"s": 1325,
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\npublic class Demo {\n public static void Main() {\n List<string> myList = null;\n // checking for null\n Console.WriteLine(myList == null);\n }\n}"
},
{
"code": null,
"e": 1557,
"s": 1552,
"text": "True"
}
] |
Comparing String objects using Relational Operators in C++ - GeeksforGeeks
|
28 Jun, 2017
If strings are compared using relational operators, their characters are compared lexicographically according to the current character traits. That is, the comparison proceeds character by character, starting from the first character, until a mismatch is found or the end of a string (a NULL character) is reached.
Parameters : The two strings to be compared — on the left, the string being compared; on the right, the string with respect to which the comparison is performed.
Return type : Relational operators return either true or false, i.e. boolean values: true if the corresponding comparison holds, false otherwise.
List of Relational Operators:
> : Greater than
< : Less than
== : Equal to
!= : Not equal to
>= : Greater than and equal to
<= : Less than and equal to
Important Conditions:
s1 < s2 : String s1 is smaller than string s2 if either s1 is a shorter prefix of s2 or s1's first mismatched character is smaller.
s1 > s2 : String s1 is greater than string s2 if either s2 is a shorter prefix of s1 or s1's first mismatched character is larger.
<= and >= have almost the same implementation, with the additional case of equality.
If, after comparing lexicographically, both strings are found to be the same, they are said to be equal.
If either of points 1 or 2 holds, the strings are said to be unequal.
// CPP code to implement relational
// operators on String objects
#include <iostream>
#include <string>
using namespace std;

void relational_operation(string s1, string s2)
{
    string s3 = s1 + s2;

    if (s1 != s2)
        cout << s1 << " is not equal to " << s2 << endl;

    if (s1 > s2)
        cout << s1 << " is greater than " << s2 << endl;
    else if (s1 < s2)
        cout << s1 << " is smaller than " << s2 << endl;

    if (s3 == s1 + s2)
        cout << s3 << " is equal to " << s1 + s2 << endl;
}

// Main function
int main()
{
    string s1("Geeks");
    string s2("forGeeks");
    relational_operation(s1, s2);
    return 0;
}
Output:
Geeks is not equal to forGeeks
Geeks is smaller than forGeeks
GeeksforGeeks is equal to GeeksforGeeks
This article is contributed by Sakshi Tiwari. If you like GeeksforGeeks (We know you do!) and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.
|
[
{
"code": null,
"e": 24098,
"s": 24070,
"text": "\n28 Jun, 2017"
},
{
"code": null,
"e": 24413,
"s": 24098,
"text": "If strings are compared using relational operators then, their characters are compared lexicographically according to the current character traits, means it starts comparison character by character starting from the first character until the characters in both strings are equal or a NULL character is encountered."
},
{
"code": null,
"e": 24582,
"s": 24413,
"text": "Parameters : Two Strings required to be compared. At left, one which is being compared and at right, another string with respect to which comparison is to be performed."
},
{
"code": null,
"e": 24744,
"s": 24582,
"text": "Return type : Relational operator return either true or false value i.e. they return boolean values, true if the corresponding comparison holds, false otherwise."
},
{
"code": null,
"e": 24774,
"s": 24744,
"text": "List of Relational Operators:"
},
{
"code": null,
"e": 24791,
"s": 24774,
"text": "> : Greater than"
},
{
"code": null,
"e": 24805,
"s": 24791,
"text": "< : Less than"
},
{
"code": null,
"e": 24819,
"s": 24805,
"text": "== : Equal to"
},
{
"code": null,
"e": 24837,
"s": 24819,
"text": "!= : Not equal to"
},
{
"code": null,
"e": 24868,
"s": 24837,
"text": ">= : Greater than and equal to"
},
{
"code": null,
"e": 24896,
"s": 24868,
"text": "<= : Less than and equal to"
},
{
"code": null,
"e": 24918,
"s": 24896,
"text": "Important Conditions:"
},
{
"code": null,
"e": 25451,
"s": 24918,
"text": "s1 < s2 : A string s1 is smaller than s2 string, if either, length of s1 is shorter than s2 or first mismatched character is smaller.s1 > s2 : A string s1 is greater than s2 string, if either, length of s1 is longer than s2 or first mismatched character is larger.<= and >= have almost same implementation with additional feature of being equal as well.If after comparing lexicographically, both strings are found same, then they are said to be equal.If any of the points from 1 to 3 follows up then, strings are said to be unequal."
},
{
"code": null,
"e": 25585,
"s": 25451,
"text": "s1 < s2 : A string s1 is smaller than s2 string, if either, length of s1 is shorter than s2 or first mismatched character is smaller."
},
{
"code": null,
"e": 25717,
"s": 25585,
"text": "s1 > s2 : A string s1 is greater than s2 string, if either, length of s1 is longer than s2 or first mismatched character is larger."
},
{
"code": null,
"e": 25807,
"s": 25717,
"text": "<= and >= have almost same implementation with additional feature of being equal as well."
},
{
"code": null,
"e": 25906,
"s": 25807,
"text": "If after comparing lexicographically, both strings are found same, then they are said to be equal."
},
{
"code": null,
"e": 25988,
"s": 25906,
"text": "If any of the points from 1 to 3 follows up then, strings are said to be unequal."
},
{
"code": "// CPP code to implement relational // operators on String objects#include<iostream>using namespace std; void relational_operation(string s1, string s2){ string s3 = s1 + s2; if(s1 != s2) cout << s1 << \" is not equal to \" << s2 << endl; if(s1 > s2) cout << s1 << \" is greater than \" << s2 << endl; else if(s1 < s2) cout << s1 << \" is smaller than \" << s2 << endl; if(s3 == s1 + s2) cout << s3 << \" is equal to \" << s1 + s2 << endl; } // Main functionint main(){ string s1(\"Geeks\"); string s2(\"forGeeks\"); relational_operation(s1, s2); return 0; }",
"e": 26621,
"s": 25988,
"text": null
},
{
"code": null,
"e": 26629,
"s": 26621,
"text": "Output:"
},
{
"code": null,
"e": 26732,
"s": 26629,
"text": "Geeks is not equal to forGeeks\nGeeks is smaller than forGeeks\nGeeksforGeeks is equal to GeeksforGeeks\n"
},
{
"code": null,
"e": 27051,
"s": 26732,
"text": "This article is contributed by Sakshi Tiwari. If you like GeeksforGeeks (We know you do!) and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 27176,
"s": 27051,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 27187,
"s": 27176,
"text": "cpp-string"
},
{
"code": null,
"e": 27191,
"s": 27187,
"text": "C++"
},
{
"code": null,
"e": 27195,
"s": 27191,
"text": "CPP"
},
{
"code": null,
"e": 27293,
"s": 27195,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27302,
"s": 27293,
"text": "Comments"
},
{
"code": null,
"e": 27315,
"s": 27302,
"text": "Old Comments"
},
{
"code": null,
"e": 27343,
"s": 27315,
"text": "Operator Overloading in C++"
},
{
"code": null,
"e": 27363,
"s": 27343,
"text": "Polymorphism in C++"
},
{
"code": null,
"e": 27396,
"s": 27363,
"text": "Friend class and function in C++"
},
{
"code": null,
"e": 27417,
"s": 27396,
"text": "Iterators in C++ STL"
},
{
"code": null,
"e": 27441,
"s": 27417,
"text": "Sorting a vector in C++"
},
{
"code": null,
"e": 27477,
"s": 27441,
"text": "Convert string to char array in C++"
},
{
"code": null,
"e": 27501,
"s": 27477,
"text": "Inline Functions in C++"
},
{
"code": null,
"e": 27545,
"s": 27501,
"text": "List in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 27596,
"s": 27545,
"text": "new and delete operators in C++ for dynamic memory"
}
] |
Getting inverse cosine and inverse hyperbolic cosine in Julia - acos(), acosh() and acosd() Methods - GeeksforGeeks
|
26 Mar, 2020
The acos() is an inbuilt function in Julia which is used to calculate the inverse cosine of the specified value; the output is in radians.
Syntax: acos(x)
Parameters:
x: Specified value.
Returns: It returns the calculated inverse cosine of the specified value; the output is in radians.
Example:
# Julia program to illustrate
# the use of acos() method

# Getting inverse cosine of the specified
# values; output is in radians.
println(acos(0))
println(acos(-0.444))
println(acos(0.7774))
println(acos(1))
Output:
1.5707963267948966
2.0308542405657466
0.6802746363624282
0.0
The acosh() is an inbuilt function in Julia which is used to calculate the inverse hyperbolic cosine of the specified value.
Syntax: acosh(x)
Parameters:
x: Specified value.
Returns: It returns the calculated inverse hyperbolic cosine of the specified value.
Example:
# Julia program to illustrate
# the use of acosh() method

# Getting inverse hyperbolic cosine of the
# specified values.
println(acosh(2))
println(acosh(10))
println(acosh(74))
println(acosh(45))
Output:
1.3169578969248166
2.993222846126381
4.997166616875528
4.499686190671499
The acosd() is an inbuilt function in Julia which is used to calculate the inverse cosine of the specified value; the output is in degrees.
Syntax: acosd(x)
Parameters:
x: Specified value.
Returns: It returns the calculated inverse cosine of the specified value; the output is in degrees.
Example:
# Julia program to illustrate
# the use of acosd() method

# Getting inverse cosine of the specified
# values; output is in degrees.
println(acosd(0))
println(acosd(-0.444))
println(acosd(0.7774))
println(acosd(1))
Output:
90.0
116.35937679066326
38.976865573363945
0.0
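The relationship between the three methods can be sketched outside Julia as well. Below is a minimal check using Python's math module (an illustration, not part of the article): acosd(x) is simply acos(x) converted from radians to degrees, and acosh is the hyperbolic counterpart.

```python
import math

# acos: inverse cosine in radians (mirrors Julia's acos()).
print(math.acos(0))                 # 1.5707963267948966

# acosd is acos converted to degrees (mirrors Julia's acosd()).
print(math.degrees(math.acos(0)))   # 90.0

# acosh: inverse hyperbolic cosine (mirrors Julia's acosh()).
print(math.acosh(2))                # 1.3169578969248166
```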
Julia
Get array dimensions and size of a dimension in Julia - size() Method
Searching in Array for a given element in Julia
Find maximum element along with its index in Julia - findmax() Method
Get number of elements of array in Julia - length() Method
Exception handling in Julia
Getting the maximum value from a list in Julia - max() Method
Working with Excel Files in Julia
Decision Making in Julia (if, if-else, Nested-if, if-elseif-else ladder)
Getting the absolute value of a number in Julia - abs() Method
Working with Date and Time in Julia
|
[
{
"code": null,
"e": 24544,
"s": 24516,
"text": "\n26 Mar, 2020"
},
{
"code": null,
"e": 24678,
"s": 24544,
"text": "The acos() is an inbuilt function in julia which is used to calculate inverse cosine of the specified value and output is in radians."
},
{
"code": null,
"e": 24694,
"s": 24678,
"text": "Syntax: acos(x)"
},
{
"code": null,
"e": 24706,
"s": 24694,
"text": "Parameters:"
},
{
"code": null,
"e": 24727,
"s": 24706,
"text": "x: Specified values."
},
{
"code": null,
"e": 24826,
"s": 24727,
"text": "Returns: It returns the calculated inverse cosine of the specified value and output is in radians."
},
{
"code": null,
"e": 24835,
"s": 24826,
"text": "Example:"
},
{
"code": "# Julia program to illustrate # the use of acos() method # Getting inverse cosine of the specified# values and output is in radians.println(acos(0))println(acos(-0.444))println(acos(0.7774))println(acos(1))",
"e": 25043,
"s": 24835,
"text": null
},
{
"code": null,
"e": 25051,
"s": 25043,
"text": "Output:"
},
{
"code": null,
"e": 25112,
"s": 25051,
"text": "1.5707963267948966\n2.0308542405657466\n0.6802746363624282\n0.0"
},
{
"code": null,
"e": 25233,
"s": 25112,
"text": "The acosh() is an inbuilt function in julia which is used to calculate inverse hyperbolic cosine of the specified value."
},
{
"code": null,
"e": 25250,
"s": 25233,
"text": "Syntax: acosh(x)"
},
{
"code": null,
"e": 25262,
"s": 25250,
"text": "Parameters:"
},
{
"code": null,
"e": 25283,
"s": 25262,
"text": "x: Specified values."
},
{
"code": null,
"e": 25368,
"s": 25283,
"text": "Returns: It returns the calculated inverse hyperbolic cosine of the specified value."
},
{
"code": null,
"e": 25377,
"s": 25368,
"text": "Example:"
},
{
"code": "# Julia program to illustrate # the use of acosh() method # Getting inverse hyperbolic cosine of the# specified values.println(acosh(2))println(acosh(10))println(acosh(74))println(acosh(45))",
"e": 25569,
"s": 25377,
"text": null
},
{
"code": null,
"e": 25577,
"s": 25569,
"text": "Output:"
},
{
"code": null,
"e": 25650,
"s": 25577,
"text": "1.3169578969248166\n2.993222846126381\n4.997166616875528\n4.499686190671499"
},
{
"code": null,
"e": 25785,
"s": 25650,
"text": "The acosd() is an inbuilt function in julia which is used to calculate inverse cosine of the specified value and output is in degrees."
},
{
"code": null,
"e": 25802,
"s": 25785,
"text": "Syntax: acosd(x)"
},
{
"code": null,
"e": 25814,
"s": 25802,
"text": "Parameters:"
},
{
"code": null,
"e": 25835,
"s": 25814,
"text": "x: Specified values."
},
{
"code": null,
"e": 25934,
"s": 25835,
"text": "Returns: It returns the calculated inverse cosine of the specified value and output is in degrees."
},
{
"code": null,
"e": 25943,
"s": 25934,
"text": "Example:"
},
{
"code": "# Julia program to illustrate # the use of acosd() method # Getting inverse cosine of the specified# values and output is in degrees.println(acosd(0))println(acosd(-0.444))println(acosd(0.7774))println(acosd(1))",
"e": 26156,
"s": 25943,
"text": null
},
{
"code": null,
"e": 26164,
"s": 26156,
"text": "Output:"
},
{
"code": null,
"e": 26211,
"s": 26164,
"text": "90.0\n116.35937679066326\n38.976865573363945\n0.0"
},
{
"code": null,
"e": 26217,
"s": 26211,
"text": "Julia"
},
{
"code": null,
"e": 26315,
"s": 26217,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26385,
"s": 26315,
"text": "Get array dimensions and size of a dimension in Julia - size() Method"
},
{
"code": null,
"e": 26433,
"s": 26385,
"text": "Searching in Array for a given element in Julia"
},
{
"code": null,
"e": 26503,
"s": 26433,
"text": "Find maximum element along with its index in Julia - findmax() Method"
},
{
"code": null,
"e": 26562,
"s": 26503,
"text": "Get number of elements of array in Julia - length() Method"
},
{
"code": null,
"e": 26590,
"s": 26562,
"text": "Exception handling in Julia"
},
{
"code": null,
"e": 26652,
"s": 26590,
"text": "Getting the maximum value from a list in Julia - max() Method"
},
{
"code": null,
"e": 26686,
"s": 26652,
"text": "Working with Excel Files in Julia"
},
{
"code": null,
"e": 26759,
"s": 26686,
"text": "Decision Making in Julia (if, if-else, Nested-if, if-elseif-else ladder)"
},
{
"code": null,
"e": 26822,
"s": 26759,
"text": "Getting the absolute value of a number in Julia - abs() Method"
}
] |
Find elements which are present in first array and not in second - GeeksforGeeks
|
11 Aug, 2021
Given two arrays, the task is to find the numbers which are present in the first array but not in the second array. Examples:
Input : a[] = {1, 2, 3, 4, 5, 10};
b[] = {2, 3, 1, 0, 5};
Output : 4 10
4 and 10 are present in first array, but
not in second array.
Input : a[] = {4, 3, 5, 9, 11};
b[] = {4, 9, 3, 11, 10};
Output : 5
Method 1 (Simple) A naive approach is to use two loops and, for each element of the first array, check whether it is present in the second array. This takes O(n * m) time.
C++
Java
Python 3
C#
PHP
Javascript
// C++ simple program to
// find elements which are
// not present in second array
#include <bits/stdc++.h>
using namespace std;

// Function for finding
// elements which are there
// in a[] but not in b[].
void findMissing(int a[], int b[], int n, int m)
{
    for (int i = 0; i < n; i++) {
        int j;
        for (j = 0; j < m; j++)
            if (a[i] == b[j])
                break;

        if (j == m)
            cout << a[i] << " ";
    }
}

// Driver code
int main()
{
    int a[] = { 1, 2, 6, 3, 4, 5 };
    int b[] = { 2, 4, 3, 1, 0 };
    int n = sizeof(a) / sizeof(a[0]);
    int m = sizeof(b) / sizeof(b[0]);
    findMissing(a, b, n, m);
    return 0;
}
// Java simple program to
// find elements which are
// not present in second array
class GFG {

    // Function for finding elements
    // which are there in a[] but not
    // in b[].
    static void findMissing(int a[], int b[], int n, int m)
    {
        for (int i = 0; i < n; i++) {
            int j;
            for (j = 0; j < m; j++)
                if (a[i] == b[j])
                    break;

            if (j == m)
                System.out.print(a[i] + " ");
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        int a[] = { 1, 2, 6, 3, 4, 5 };
        int b[] = { 2, 4, 3, 1, 0 };
        int n = a.length;
        int m = b.length;
        findMissing(a, b, n, m);
    }
}

// This code is contributed
// by Anant Agarwal.
# Python 3 simple program to find elements
# which are not present in second array

# Function for finding elements which
# are there in a[] but not in b[].
def findMissing(a, b, n, m):
    for i in range(n):
        for j in range(m):
            if a[i] == b[j]:
                break
        else:
            # Inner loop ran without a break,
            # so a[i] is not present in b[].
            print(a[i], end = " ")

# Driver code
if __name__ == "__main__":
    a = [ 1, 2, 6, 3, 4, 5 ]
    b = [ 2, 4, 3, 1, 0 ]
    n = len(a)
    m = len(b)
    findMissing(a, b, n, m)

# This code is contributed
# by ChitraNayal
// C# simple program to find elements
// which are not present in second array
using System;

class GFG {

    // Function for finding elements
    // which are there in a[] but not
    // in b[].
    static void findMissing(int []a, int []b, int n, int m)
    {
        for (int i = 0; i < n; i++) {
            int j;
            for (j = 0; j < m; j++)
                if (a[i] == b[j])
                    break;

            if (j == m)
                Console.Write(a[i] + " ");
        }
    }

    // Driver code
    public static void Main()
    {
        int []a = { 1, 2, 6, 3, 4, 5 };
        int []b = { 2, 4, 3, 1, 0 };
        int n = a.Length;
        int m = b.Length;
        findMissing(a, b, n, m);
    }
}

// This code is contributed by vt_m.
<?php
// PHP simple program to find
// elements which are not
// present in second array

// Function for finding
// elements which are there
// in a[] but not in b[].
function findMissing($a, $b, $n, $m)
{
    for ($i = 0; $i < $n; $i++)
    {
        for ($j = 0; $j < $m; $j++)
            if ($a[$i] == $b[$j])
                break;

        if ($j == $m)
            echo $a[$i], " ";
    }
}

// Driver code
$a = array(1, 2, 6, 3, 4, 5);
$b = array(2, 4, 3, 1, 0);
$n = count($a);
$m = count($b);
findMissing($a, $b, $n, $m);

// This code is contributed by anuj_67.
?>
<script>
// Javascript simple program to
// find elements which are
// not present in second array

// Function for finding elements
// which are there in a[] but not
// in b[].
function findMissing(a, b, n, m)
{
    for (let i = 0; i < n; i++) {
        let j;
        for (j = 0; j < m; j++)
            if (a[i] == b[j])
                break;

        if (j == m)
            document.write(a[i] + " ");
    }
}

// Driver Code
let a = [ 1, 2, 6, 3, 4, 5 ];
let b = [ 2, 4, 3, 1, 0 ];
let n = a.length;
let m = b.length;
findMissing(a, b, n, m);

// This code is contributed by avanitrachhadiya2155
</script>
Output :
6 5
Method 2 (Use Hashing) In this method, we store all elements of the second array in a hash table (unordered_set). Then, one by one, we check all elements of the first array and print those which are not present in the hash table.
C++
Java
Python3
C#
Javascript
// C++ efficient program to
// find elements which are not
// present in second array
#include <bits/stdc++.h>
using namespace std;

// Function for finding
// elements which are there
// in a[] but not in b[].
void findMissing(int a[], int b[], int n, int m)
{
    // Store all elements of
    // second array in a hash table
    unordered_set<int> s;
    for (int i = 0; i < m; i++)
        s.insert(b[i]);

    // Print all elements of
    // first array that are not
    // present in hash table
    for (int i = 0; i < n; i++)
        if (s.find(a[i]) == s.end())
            cout << a[i] << " ";
}

// Driver code
int main()
{
    int a[] = { 1, 2, 6, 3, 4, 5 };
    int b[] = { 2, 4, 3, 1, 0 };
    int n = sizeof(a) / sizeof(a[0]);
    int m = sizeof(b) / sizeof(b[0]);
    findMissing(a, b, n, m);
    return 0;
}
// Java efficient program to find elements
// which are not present in second array
import java.util.HashSet;

public class GfG {

    // Function for finding elements which
    // are there in a[] but not in b[].
    static void findMissing(int a[], int b[], int n, int m)
    {
        // Store all elements of
        // second array in a hash table
        HashSet<Integer> s = new HashSet<>();
        for (int i = 0; i < m; i++)
            s.add(b[i]);

        // Print all elements of first array
        // that are not present in hash table
        for (int i = 0; i < n; i++)
            if (!s.contains(a[i]))
                System.out.print(a[i] + " ");
    }

    public static void main(String []args)
    {
        int a[] = { 1, 2, 6, 3, 4, 5 };
        int b[] = { 2, 4, 3, 1, 0 };
        int n = a.length;
        int m = b.length;
        findMissing(a, b, n, m);
    }
}

// This code is contributed by Rituraj Jain
# Python3 efficient program to find elements
# which are not present in second array

# Function for finding elements which
# are there in a[] but not in b[].
def findMissing(a, b, n, m):

    # Store all elements of second
    # array in a hash table
    s = dict()
    for i in range(m):
        s[b[i]] = 1

    # Print all elements of first array
    # that are not present in hash table
    for i in range(n):
        if a[i] not in s:
            print(a[i], end = " ")

# Driver code
a = [ 1, 2, 6, 3, 4, 5 ]
b = [ 2, 4, 3, 1, 0 ]
n = len(a)
m = len(b)
findMissing(a, b, n, m)

# This code is contributed by mohit kumar
// C# efficient program to find elements
// which are not present in second array
using System;
using System.Collections.Generic;

class GfG {

    // Function for finding elements which
    // are there in a[] but not in b[].
    static void findMissing(int []a, int []b, int n, int m)
    {
        // Store all elements of
        // second array in a hash table
        HashSet<int> s = new HashSet<int>();
        for (int i = 0; i < m; i++)
            s.Add(b[i]);

        // Print all elements of first array
        // that are not present in hash table
        for (int i = 0; i < n; i++)
            if (!s.Contains(a[i]))
                Console.Write(a[i] + " ");
    }

    // Driver code
    public static void Main(String []args)
    {
        int []a = { 1, 2, 6, 3, 4, 5 };
        int []b = { 2, 4, 3, 1, 0 };
        int n = a.Length;
        int m = b.Length;
        findMissing(a, b, n, m);
    }
}

/* This code contributed by PrinciRaj1992 */
<script>
// Javascript efficient program to find elements
// which are not present in second array

// Function for finding elements which
// are there in a[] but not in b[].
function findMissing(a, b, n, m)
{
    // Store all elements of
    // second array in a hash table
    let s = new Set();
    for (let i = 0; i < m; i++)
        s.add(b[i]);

    // Print all elements of first array
    // that are not present in hash table
    for (let i = 0; i < n; i++)
        if (!s.has(a[i]))
            document.write(a[i] + " ");
}

let a = [ 1, 2, 6, 3, 4, 5 ];
let b = [ 2, 4, 3, 1, 0 ];
let n = a.length;
let m = b.length;
findMissing(a, b, n, m);

// This code is contributed by patel2127
</script>
Output :
6 5
Time complexity: O(n + m). Auxiliary space: O(m), for the hash table that stores the second array.
This article is contributed by DANISH_RAZA. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
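In idiomatic Python, the hashing idea collapses to a single set-membership scan. The sketch below (an illustration using a hypothetical helper name, not part of the article's code tabs) preserves the order of the first array and returns a list instead of printing:

```python
# Build a set from the second array once, then keep elements of the
# first array that are missing from it.
def find_missing(a, b):
    present = set(b)                              # O(m) build
    return [x for x in a if x not in present]     # O(n) scan

print(find_missing([1, 2, 6, 3, 4, 5], [2, 4, 3, 1, 0]))  # [6, 5]
```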
vt_m
ukasp
mohit kumar 29
rituraj_jain
princiraj1992
avanitrachhadiya2155
patel2127
mohammedsadathkhan123
Accolite
Snapdeal
Zoho
Arrays
Hash
Zoho
Accolite
Snapdeal
Arrays
Hash
Stack Data Structure (Introduction and Program)
Top 50 Array Coding Problems for Interviews
Introduction to Arrays
Multidimensional Arrays in Java
Linear Search
Given an array A[] and a number x, check for pair in A[] with sum as x (aka Two Sum)
Hashing | Set 1 (Introduction)
Hashing | Set 3 (Open Addressing)
Count pairs with given sum
Hashing | Set 2 (Separate Chaining)
|
[
{
"code": null,
"e": 24478,
"s": 24450,
"text": "\n11 Aug, 2021"
},
{
"code": null,
"e": 24613,
"s": 24478,
"text": "Given two arrays, the task is that we find numbers which are present in first array, but not present in the second array. Examples : "
},
{
"code": null,
"e": 24834,
"s": 24613,
"text": "Input : a[] = {1, 2, 3, 4, 5, 10};\n b[] = {2, 3, 1, 0, 5};\nOutput : 4 10 \n4 and 10 are present in first array, but\nnot in second array.\n\nInput : a[] = {4, 3, 5, 9, 11};\n b[] = {4, 9, 3, 11, 10};\nOutput : 5 "
},
{
"code": null,
"e": 24945,
"s": 24836,
"text": "Method 1 (Simple) A Naive Approach is to use two loops and check element which not present in second array. "
},
{
"code": null,
"e": 24949,
"s": 24945,
"text": "C++"
},
{
"code": null,
"e": 24954,
"s": 24949,
"text": "Java"
},
{
"code": null,
"e": 24963,
"s": 24954,
"text": "Python 3"
},
{
"code": null,
"e": 24966,
"s": 24963,
"text": "C#"
},
{
"code": null,
"e": 24970,
"s": 24966,
"text": "PHP"
},
{
"code": null,
"e": 24981,
"s": 24970,
"text": "Javascript"
},
{
"code": "// C++ simple program to// find elements which are// not present in second array#include<bits/stdc++.h>using namespace std; // Function for finding// elements which are there// in a[] but not in b[].void findMissing(int a[], int b[], int n, int m){ for (int i = 0; i < n; i++) { int j; for (j = 0; j < m; j++) if (a[i] == b[j]) break; if (j == m) cout << a[i] << \" \"; }} // Driver codeint main(){ int a[] = { 1, 2, 6, 3, 4, 5 }; int b[] = { 2, 4, 3, 1, 0 }; int n = sizeof(a) / sizeof(a[0]); int m = sizeof(b) / sizeof(b[1]); findMissing(a, b, n, m); return 0;}",
"e": 25646,
"s": 24981,
"text": null
},
{
"code": "// Java simple program to// find elements which are// not present in second arrayclass GFG{ // Function for finding elements // which are there in a[] but not // in b[]. static void findMissing(int a[], int b[], int n, int m) { for (int i = 0; i < n; i++) { int j; for (j = 0; j < m; j++) if (a[i] == b[j]) break; if (j == m) System.out.print(a[i] + \" \"); } } // Driver Code public static void main(String[] args) { int a[] = { 1, 2, 6, 3, 4, 5 }; int b[] = { 2, 4, 3, 1, 0 }; int n = a.length; int m = b.length; findMissing(a, b, n, m); }} // This code is contributed// by Anant Agarwal.",
"e": 26468,
"s": 25646,
"text": null
},
{
"code": "# Python 3 simple program to find elements# which are not present in second array # Function for finding elements which# are there in a[] but not in b[].def findMissing(a, b, n, m): for i in range(n): for j in range(m): if (a[i] == b[j]): break if (j == m - 1): print(a[i], end = \" \") # Driver codeif __name__ == \"__main__\": a = [ 1, 2, 6, 3, 4, 5 ] b = [ 2, 4, 3, 1, 0 ] n = len(a) m = len(b) findMissing(a, b, n, m) # This code is contributed# by ChitraNayal",
"e": 27005,
"s": 26468,
"text": null
},
{
"code": "// C# simple program to find elements// which are not present in second arrayusing System; class GFG { // Function for finding elements // which are there in a[] but not // in b[]. static void findMissing(int []a, int []b, int n, int m) { for (int i = 0; i < n; i++) { int j; for (j = 0; j < m; j++) if (a[i] == b[j]) break; if (j == m) Console.Write(a[i] + \" \"); } } // Driver code public static void Main() { int []a = {1, 2, 6, 3, 4, 5}; int []b = {2, 4, 3, 1, 0}; int n = a.Length; int m = b.Length; findMissing(a, b, n, m); }} // This code is contributed by vt_m.",
"e": 27807,
"s": 27005,
"text": null
},
{
"code": "<?php// PHP simple program to find// elements which are not// present in second array // Function for finding// elements which are there// in a[] but not in b[].function findMissing( $a, $b, $n, $m){ for ( $i = 0; $i < $n; $i++) { $j; for ($j = 0; $j < $m; $j++) if ($a[$i] == $b[$j]) break; if ($j == $m) echo $a[$i] , \" \"; }} // Driver code$a = array( 1, 2, 6, 3, 4, 5 );$b = array( 2, 4, 3, 1, 0 );$n = count($a);$m = count($b);findMissing($a, $b, $n, $m); // This code is contributed by anuj_67.?>",
"e": 28377,
"s": 27807,
"text": null
},
{
"code": "<script> // Javascript simple program to// find elements which are// not present in second array // Function for finding elements // which are there in a[] but not // in b[]. function findMissing(a,b,n,m) { for (let i = 0; i < n; i++) { let j; for (j = 0; j < m; j++) if (a[i] == b[j]) break; if (j == m) document.write(a[i] + \" \"); } } // Driver Code let a=[ 1, 2, 6, 3, 4, 5 ]; let b=[2, 4, 3, 1, 0]; let n = a.length; let m = b.length; findMissing(a, b, n, m); // This code is contributed by avanitrachhadiya2155 </script>",
"e": 29087,
"s": 28377,
"text": null
},
{
"code": null,
"e": 29097,
"s": 29087,
"text": "Output : "
},
{
"code": null,
"e": 29101,
"s": 29097,
"text": "6 5"
},
{
"code": null,
"e": 29327,
"s": 29101,
"text": "Method 2 (Use Hashing) In this method, we store all elements of second array in a hash table (unordered_set). One by one check all elements of first array and print all those elements which are not present in the hash table. "
},
{
"code": null,
"e": 29331,
"s": 29327,
"text": "C++"
},
{
"code": null,
"e": 29336,
"s": 29331,
"text": "Java"
},
{
"code": null,
"e": 29344,
"s": 29336,
"text": "Python3"
},
{
"code": null,
"e": 29347,
"s": 29344,
"text": "C#"
},
{
"code": null,
"e": 29358,
"s": 29347,
"text": "Javascript"
},
{
"code": "// C++ efficient program to// find elements which are not// present in second array#include<bits/stdc++.h>using namespace std; // Function for finding// elements which are there// in a[] but not in b[].void findMissing(int a[], int b[], int n, int m){ // Store all elements of // second array in a hash table unordered_set <int> s; for (int i = 0; i < m; i++) s.insert(b[i]); // Print all elements of // first array that are not // present in hash table for (int i = 0; i < n; i++) if (s.find(a[i]) == s.end()) cout << a[i] << \" \";} // Driver codeint main(){ int a[] = { 1, 2, 6, 3, 4, 5 }; int b[] = { 2, 4, 3, 1, 0 }; int n = sizeof(a) / sizeof(a[0]); int m = sizeof(b) / sizeof(b[1]); findMissing(a, b, n, m); return 0;}",
"e": 30165,
"s": 29358,
"text": null
},
{
"code": "// Java efficient program to find elements// which are not present in second arrayimport java.util.HashSet;import java.util.Set; public class GfG{ // Function for finding elements which // are there in a[] but not in b[]. static void findMissing(int a[], int b[], int n, int m) { // Store all elements of // second array in a hash table HashSet<Integer> s = new HashSet<>(); for (int i = 0; i < m; i++) s.add(b[i]); // Print all elements of first array // that are not present in hash table for (int i = 0; i < n; i++) if (!s.contains(a[i])) System.out.print(a[i] + \" \"); } public static void main(String []args){ int a[] = { 1, 2, 6, 3, 4, 5 }; int b[] = { 2, 4, 3, 1, 0 }; int n = a.length; int m = b.length; findMissing(a, b, n, m); }} // This code is contributed by Rituraj Jain",
"e": 31133,
"s": 30165,
"text": null
},
{
"code": "# Python3 efficient program to find elements# which are not present in second array # Function for finding elements which# are there in a[] but not in b[].def findMissing(a, b, n, m): # Store all elements of second # array in a hash table s = dict() for i in range(m): s[b[i]] = 1 # Print all elements of first array # that are not present in hash table for i in range(n): if a[i] not in s.keys(): print(a[i], end = \" \") # Driver codea = [ 1, 2, 6, 3, 4, 5 ]b = [ 2, 4, 3, 1, 0 ]n = len(a)m = len(b)findMissing(a, b, n, m) # This code is contributed by mohit kumar",
"e": 31750,
"s": 31133,
"text": null
},
{
"code": "// C# efficient program to find elements// which are not present in second arrayusing System;using System.Collections.Generic; class GfG{ // Function for finding elements which // are there in a[] but not in b[]. static void findMissing(int []a, int []b, int n, int m) { // Store all elements of // second array in a hash table HashSet<int> s = new HashSet<int>(); for (int i = 0; i < m; i++) s.Add(b[i]); // Print all elements of first array // that are not present in hash table for (int i = 0; i < n; i++) if (!s.Contains(a[i])) Console.Write(a[i] + \" \"); } // Driver code public static void Main(String []args) { int []a = { 1, 2, 6, 3, 4, 5 }; int []b = { 2, 4, 3, 1, 0 }; int n = a.Length; int m = b.Length; findMissing(a, b, n, m); }} /* This code contributed by PrinciRaj1992 */",
"e": 32715,
"s": 31750,
"text": null
},
{
"code": "<script>// Javascript efficient program to find elements// which are not present in second array // Function for finding elements which // are there in a[] but not in b[]. function findMissing(a,b,n,m) { // Store all elements of // second array in a hash table let s = new Set(); for (let i = 0; i < m; i++) s.add(b[i]); // Print all elements of first array // that are not present in hash table for (let i = 0; i < n; i++) if (!s.has(a[i])) document.write(a[i] + \" \"); } let a=[1, 2, 6, 3, 4, 5 ]; let b=[2, 4, 3, 1, 0]; let n = a.length; let m = b.length; findMissing(a, b, n, m); // This code is contributed by patel2127</script>",
"e": 33495,
"s": 32715,
"text": null
},
{
"code": null,
"e": 33506,
"s": 33495,
"text": "Output : "
},
{
"code": null,
"e": 33510,
"s": 33506,
"text": "6 5"
},
{
"code": null,
"e": 33978,
"s": 33510,
"text": "Time complexity : O(n+m) Auxiliary Space : O(n)This article is contributed by DANISH_RAZA . If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 33983,
"s": 33978,
"text": "vt_m"
},
{
"code": null,
"e": 33989,
"s": 33983,
"text": "ukasp"
},
{
"code": null,
"e": 34004,
"s": 33989,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 34017,
"s": 34004,
"text": "rituraj_jain"
},
{
"code": null,
"e": 34031,
"s": 34017,
"text": "princiraj1992"
},
{
"code": null,
"e": 34052,
"s": 34031,
"text": "avanitrachhadiya2155"
},
{
"code": null,
"e": 34062,
"s": 34052,
"text": "patel2127"
},
{
"code": null,
"e": 34084,
"s": 34062,
"text": "mohammedsadathkhan123"
},
{
"code": null,
"e": 34093,
"s": 34084,
"text": "Accolite"
},
{
"code": null,
"e": 34102,
"s": 34093,
"text": "Snapdeal"
},
{
"code": null,
"e": 34107,
"s": 34102,
"text": "Zoho"
},
{
"code": null,
"e": 34114,
"s": 34107,
"text": "Arrays"
},
{
"code": null,
"e": 34119,
"s": 34114,
"text": "Hash"
},
{
"code": null,
"e": 34124,
"s": 34119,
"text": "Zoho"
},
{
"code": null,
"e": 34133,
"s": 34124,
"text": "Accolite"
},
{
"code": null,
"e": 34142,
"s": 34133,
"text": "Snapdeal"
},
{
"code": null,
"e": 34149,
"s": 34142,
"text": "Arrays"
},
{
"code": null,
"e": 34154,
"s": 34149,
"text": "Hash"
},
{
"code": null,
"e": 34252,
"s": 34154,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34261,
"s": 34252,
"text": "Comments"
},
{
"code": null,
"e": 34274,
"s": 34261,
"text": "Old Comments"
},
{
"code": null,
"e": 34322,
"s": 34274,
"text": "Stack Data Structure (Introduction and Program)"
},
{
"code": null,
"e": 34366,
"s": 34322,
"text": "Top 50 Array Coding Problems for Interviews"
},
{
"code": null,
"e": 34389,
"s": 34366,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 34421,
"s": 34389,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 34435,
"s": 34421,
"text": "Linear Search"
},
{
"code": null,
"e": 34520,
"s": 34435,
"text": "Given an array A[] and a number x, check for pair in A[] with sum as x (aka Two Sum)"
},
{
"code": null,
"e": 34551,
"s": 34520,
"text": "Hashing | Set 1 (Introduction)"
},
{
"code": null,
"e": 34585,
"s": 34551,
"text": "Hashing | Set 3 (Open Addressing)"
},
{
"code": null,
"e": 34612,
"s": 34585,
"text": "Count pairs with given sum"
}
] |
Find the pattern of dataset by visualizing frequency distribution | Towards Data Science
|
“Visualization gives you answers to questions you didn’t know you had.” — Ben Shneiderman
Data science is the art of interpreting data and extracting useful information from it, while data visualization is concerned with representing that data. Though the two are distinct disciplines, they share the same context: data science is not a single process, method, or workflow, and data visualization can be regarded as a subset of it.
A familiar everyday example of data science is the product recommendations an e-commerce site shows a user while shopping. The system continuously learns from the user's activity and uses it to recommend items matched to their interests and shopping habits. Raw data is very hard to analyze directly; data scientists need to visualize the processed data and analyze it to provide the best choices for users. And this is where data visualization comes into the picture.
There are several statistical and mathematical approaches for gaining insight into a dataset. Among them, the frequency distribution is one of the simplest yet most important techniques.
Why graphical representation of frequency distribution is needed?
Barplot for frequency distribution
Pie Charts for frequency distribution
Histogram for frequency distribution
Skewed Frequency Distribution
Symmetrical Distribution of frequency
Let’s start our journey...
Data visualization is the practice of translating information into a visual context, such as a map or graph, to make data easier for the human brain to understand and draw insights from. The main goal of data visualization is to find patterns in data. Data visualization is one part of data science: after collecting data, we have to clean and model it, and finally visualize it to draw conclusions.
To understand it deeply, you need prior knowledge of frequency distributions. To learn what a frequency distribution is and how to build a frequency distribution table, you can read our previous article on the topic.
towardsdatascience.com
In our previous article, we looked at how to create a frequency distribution table from raw data. A new problem arises when we try to find patterns in the data: how do you analyze a frequency table to find them? You have to look up the frequency of each unique value or class interval — but looking at one frequency at a time reveals no pattern; you have to compare the frequencies of all values at once. That is easy with a few unique values or class intervals, or when the frequencies are small and simple to compare, but it becomes confusing when there are many unique values. Visualizing the data solves this problem. Graphs make it much easier to scan and compare frequencies, giving a single picture of the entire distribution of a variable, and they are easy to grasp and eye-catching. They are also the better choice when you need to present your findings to a non-technical audience. In this article, we'll discuss three kinds of graphs for representing a distribution table:
1. Bar Plots
2. Pie Charts
3. Histograms
We are considering a dataset for better demonstration
Throughout this article, we use the wnba.csv dataset. The Women's National Basketball Association (WNBA) is the professional basketball league in the USA and currently comprises twelve teams. Our dataset contains stats from all games of the 2016–2017 season; it has 143 rows and 32 columns. An overview of the dataset is given below.
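A minimal sketch of loading and inspecting such a dataset with pandas (the wnba.csv file isn't bundled here, so a tiny stand-in frame with two of the columns used later, 'Pos' and 'PTS', is built instead; the values are made up):

```python
import pandas as pd

# The article loads the real dataset with: wnba = pd.read_csv('wnba.csv')
# Since the file isn't available here, build a small stand-in frame
# just to demonstrate the workflow used in the rest of the article.
wnba = pd.DataFrame({
    'Pos': ['G', 'F', 'C', 'G', 'F/C', 'G', 'F', 'G'],
    'PTS': [93, 217, 151, 584, 32, 277, 406, 2],
})

print(wnba.shape)   # (8, 2) for the stand-in; (143, 32) for the real file
print(wnba.head())
```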
A bar plot or bar chart is a graph, representing the category of data with rectangular bars where lengths and heights are proportional to the values they represent.
Bar plots are used for variables measured on an ordinal or nominal scale. Below is an example of a bar plot showing the population change in Edge from 1801 to 1961.
How to generate bar plot
To generate a bar plot we need two sets of values.
(i) One set containing the list of unique values.
(ii) Another set containing the frequency of each unique value.
We can get both sets of values easily from a frequency table. To get a frequency table, we can use the pandas Series.value_counts() method, and then the Series.plot.bar() method. We can do that for the Pos (player position) variable in our dataset.
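As a quick sketch on a synthetic 'Pos'-like series (the real dataset isn't bundled here, so the values below are made up), value_counts() yields both required sets at once — the unique values as the index and their frequencies as the values:

```python
import pandas as pd

# Synthetic stand-in for the 'Pos' column
pos = pd.Series(['G', 'F', 'C', 'G', 'F/C', 'G', 'F', 'G'])

freq = pos.value_counts()
print(list(freq.index))   # set (i): the unique values
print(list(freq.values))  # set (ii): the frequency of each unique value
```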
We can use Series.value_counts().plot.bar() to create a bar plot with a single line of code.
wnba['Pos'].value_counts().plot.bar()
Output:
The Series.plot.bar() method generates a vertical bar plot with frequencies on the y-axis and each unique value on the x-axis. Sometimes you need a horizontal bar plot instead; for that, you can use the Series.plot.barh() method.
wnba['Pos'].value_counts().plot.barh()
Output:
How to Customize Bar Plot
The Series.value_counts().plot.bar() method has several parameters.
(i) x — to set a label for the x-axis.
(ii) y — to set a label for the y-axis.
(iii) color — to set a custom color.
(iv) rot — to rotate the tick labels of the x-axis.
(v) and so on.
By changing these parameters, you can decorate your bar plot as you need.
For details, you can read the bar plot documentation.
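A hedged sketch of a few of these parameters on a synthetic series (the Agg backend is used so no display window is needed; 'steelblue' and the 45° rotation are arbitrary choices, not values from the article):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, no window required
import matplotlib.pyplot as plt
import pandas as pd

# Synthetic stand-in for the 'Pos' column
pos = pd.Series(['G', 'F', 'C', 'G', 'F/C', 'G', 'F', 'G'])

# color sets a custom bar color; rot rotates the x tick labels
ax = pos.value_counts().plot.bar(color='steelblue', rot=45)
ax.set_xlabel('Player position')   # label for the x-axis
ax.set_ylabel('Frequency')         # label for the y-axis

n_bars = len(ax.patches)  # one rectangle per unique value
print(n_bars)
plt.close('all')
```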
A pie chart is a type of chart that displays data visually in a circular graph. Circular, spherical, and angular representations are among the most common ways to depict real-world data. The shape of a pie chart is circular: the whole pie represents the whole of the data, and each slice represents a discrete part of it.
You can think of it as a regular pie. Let us look at the following example, which represents the ingredients used to prepare a butter cake.
The whole pie represents a value of 100. The various colors represent the ingredients used to prepare the cake. We can understand division just by seeing the slice color.
Why do we use the pie chart?
The main advantage of a pie chart over a bar plot is that it gives a better sense of the relative frequencies (proportions) in the distribution. Looking at a bar plot, we can see which categories are more or less numerous, but we cannot tell their proportions.
Just by looking at a pie chart, we can read off the proportion of each category.
How to generate Pie Chart
We can generate pie charts using the Series.plot.pie() method. We can apply the method on the ‘Pos’ column in our dataset.
wnba['Pos'].value_counts().plot.pie()
Output:
How to Customize Pie Chart
In most cases, we want the percentage of each slice to be displayed inside the slice. We can easily do this by setting the autopct parameter. This parameter accepts a Python format string; for instance, '%.1f%%' displays each percentage with a precision of one decimal place. Below, we break down this string.
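The format string itself is plain Python %-formatting, so its pieces can be checked in isolation: '%' starts the placeholder, '.1f' means a float with one decimal place, and the trailing '%%' is a literal percent sign. For example:

```python
# '%.1f%%' applied to a raw percentage value
print('%.1f%%' % 35.278)   # one decimal place  -> 35.3%
print('%.2f%%' % 35.278)   # two decimal places -> 35.28%
```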
Below is how it looks for the 'Pos' variable with a precision of two decimal places.
wnba['Pos'].value_counts().plot.pie(figsize = (6, 6), autopct = '%.2f%%')
Output:
A frequency distribution shows how often each different value in a set of data occurs. The histogram is the most widely used technique to visualize frequency distribution. It looks like a bar plot, but there is a significant difference.
Why do we use the Histogram?
Histograms describe the distribution in more detail when the variable is measured on an interval or ratio scale. Let's try to understand this with a real-world example.
Let's use the 'PTS' (total points) variable, which is measured on a ratio scale.
print(wnba['PTS'].describe())
Output:
If we analyze the output above, we find that 75% of the values fall within a relatively narrow interval, while the remaining 25% are spread over a much larger one. To visualize this fact, we need a graph that shows the distribution at a glance.
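This kind of claim can be checked numerically. Here is a sketch on a synthetic, right-tailed stand-in for 'PTS' (the real dataset isn't bundled here, so the numbers are illustrative only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic right-tailed points totals, clipped to the article's 2-584 range
pts = pd.Series(np.clip(rng.exponential(scale=120, size=143), 2, 584))

q75 = pts.quantile(0.75)
# Roughly 75% of the values sit in the narrow interval [min, q75];
# the top 25% are spread over the much wider interval [q75, max].
print(round(q75 - pts.min(), 1), round(pts.max() - q75, 1))
```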
How to generate Histogram
We can generate a histogram using the Series.plot.hist() method. We can apply the method on the 'PTS' (total points) column in our dataset.
wnba['PTS'].plot.hist()
Output:
How to Customize Histogram
Sometimes you need a fixed number of bars, such as 2, 5, 10, 20, or 30. What happens behind the scenes if you want 5 bars in the histogram of the 'PTS' variable? The following steps do the job.
(i) Generate a grouped frequency distribution table for the 'PTS' variable with 5 class intervals.
(ii) For each class interval, plot a bar with a height corresponding to the frequency of the interval.
print(wnba['PTS'].value_counts(bins = 5).sort_index())
Output:
Here, the bins parameter denotes the number of class intervals.
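The same call can be sketched on a synthetic 'PTS'-like series (the real dataset isn't bundled here): value_counts(bins = 5) splits the overall value range into 5 equal-width class intervals and counts how many values fall into each:

```python
import pandas as pd

# Synthetic stand-in for the 'PTS' column
pts = pd.Series([2, 32, 93, 151, 217, 277, 406, 584])

grouped = pts.value_counts(bins=5).sort_index()
print(grouped)            # an IntervalIndex of 5 class intervals
print(len(grouped))       # 5
```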
Now, we try to represent the data with the help of a histogram.
wnba['PTS'].plot.hist(bins = 5)
Output:
You can also use the arange() function from numpy to generate x-tick values for a specific range and interval, the grid parameter to distinguish the bars clearly, and the rot parameter for better tick-label readability.
from numpy import arange
wnba['PTS'].plot.hist(grid = True, xticks = arange(2, 585, 58.2), rot = 30)
Output:
If you analyze the graph, you can find the pattern we were looking for: 75% of the values lie between 2 and 277, and the remaining values are spread between 277 and 584. The distribution therefore has a long tail to the right — it is right skewed; we will learn more about skewness in the latter part of the article.
A skewed distribution is one where one tail is longer than the other. Such distributions are sometimes called asymmetric or asymmetrical distributions. There are two types of skewed distributions:
(i) Right-skewed distribution
If the tail points to the right, then the distribution is said to be right skewed.
(ii) Left-skewed distribution
If the tail points to the left, then the distribution is said to be left-skewed.
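The skew direction can also be read off numerically: pandas' Series.skew() is positive for a right-skewed sample and negative for a left-skewed one. A sketch on synthetic data:

```python
import pandas as pd

right_tailed = pd.Series([1, 1, 2, 2, 3, 3, 4, 5, 9, 15])  # long tail to the right
left_tailed = -right_tailed                                 # mirror image: tail to the left

print(right_tailed.skew() > 0)  # positive skew for a right-skewed sample
print(left_tailed.skew() < 0)   # negative skew for a left-skewed sample
```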
A distribution is said to be symmetrical when the distribution on either side of the mean is a mirror image of the other.
A very common symmetrical distribution is one where the values are high in the middle and gradually decrease in frequency toward both ends of the histogram. This pattern is also called Normal Distribution.
Throughout the article, we have learned techniques for visualizing frequency distributions. When variables are measured on a nominal or ordinal scale, we should use a bar or pie chart; when they are measured on an interval or ratio scale, we should use a histogram.
If you are a data science enthusiast, please stay connected with me. I will come back shortly with another interesting article. Don’t forget to put some claps, if you like the article.
Previous series of articles about the basics of data science
towardsdatascience.com
towardsdatascience.com
Interesting article which will help you to know how to embed the interactive dataset with articles
towardsdatascience.com
If you enjoy the article, follow me on medium for more.
Connect me on LinkedIn for collaboration.
JSF - h:setPropertyActionListener
|
The h:setPropertyActionListener tag adds an action listener to a component that sets a bean property to a given value.
<h:commandButton id = "submit" action = "result" value = "Show Message">
<f:setPropertyActionListener target = "#{userData.data}"
value = "JSF 2.0 User" />
</h:commandButton>
Let us create a test JSF application to test the above tag.
package com.tutorialspoint.test;
import java.io.Serializable;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
@ManagedBean(name = "userData", eager = true)
@SessionScoped
public class UserData implements Serializable {
private static final long serialVersionUID = 1L;
public String data = "1";
public String getData() {
return data;
}
public void setData(String data) {
this.data = data;
}
}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns = "http://www.w3.org/1999/xhtml">
<head>
<title>JSF Tutorial!</title>
</head>
<body>
<h2>f:setPropertyActionListener example</h2>
<hr />
<h:form>
<h:commandButton id = "submit" action = "result" value = "Show Message">
<f:setPropertyActionListener
target = "#{userData.data}" value = "JSF 2.0 User" />
</h:commandButton>
</h:form>
</body>
</html>
<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns = "http://www.w3.org/1999/xhtml"
xmlns:f = "http://java.sun.com/jsf/core"
xmlns:h = "http://java.sun.com/jsf/html"
xmlns:ui = "http://java.sun.com/jsf/facelets">
<head>
<title>JSF Tutorial!</title>
</head>
<h:body>
<h2>Result</h2>
<hr />
#{userData.data}
</h:body>
</html>
Once you are ready with all the changes done, let us compile and run the application as we did in JSF - First Application chapter. If everything is fine with your application, this will produce the following result.
Press Show Message button and you'll see the following result.
Matplotlib.colors.to_rgba() in Python - GeeksforGeeks
|
27 Jan, 2022
Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack.
The matplotlib.colors.to_rgba() function is used to convert c (a color) to an RGBA color. It converts the color specification into an RGBA tuple of four floats, each in the range 0-1.
Syntax: matplotlib.colors.to_rgba(c, alpha=None)
Parameters:
c: It is a matplotlib color or a np.ma.masked color.
alpha: It is an optional parameter that accepts a scalar. It forces the alpha value if alpha is not None. But if c is “none” (case-sensitive), the result maps to (0, 0, 0, 0).
Returns: It returns a tuple of scalars in the form of (r, g, b, a).
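As a quick illustration of the return value described above, the following minimal sketch converts a standard matplotlib named color directly (the color names used are standard matplotlib colors; nothing here is specific to the larger examples below):

```python
import matplotlib.colors as mcolors

# A named color is converted to an RGBA tuple of floats in [0, 1]
red = mcolors.to_rgba('red')
print(red)   # (1.0, 0.0, 0.0, 1.0)

# The optional alpha parameter forces the alpha channel
semi = mcolors.to_rgba('red', alpha=0.5)
print(semi)  # (1.0, 0.0, 0.0, 0.5)

# "none" (case-sensitive) maps to fully transparent black
print(mcolors.to_rgba('none'))  # (0.0, 0.0, 0.0, 0.0)
```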
Example 1:
Python3
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib import colors as mcolors
import numpy as np

# simple example showing many
# lines in a single set of axes
x_axis = np.arange(100)

# Here are different sets of
# y to plot vs x
y_axis = x_axis[:50, np.newaxis] + x_axis[np.newaxis, :]

segments = np.zeros((50, 100, 2))
segments[:, :, 1] = y_axis
segments[:, :, 0] = x_axis

# some supported values to test
# masked array :
segments = np.ma.masked_where((segments > 50) & (segments < 60), segments)

# setting the plot limits.
figure, axes = plt.subplots()
axes.set_xlim(x_axis.min(), x_axis.max())
axes.set_ylim(y_axis.min(), y_axis.max())

# colors is sequence of rgba
# tuples and .rgba implementation
colors = [mcolors.to_rgba(c)
          for c in plt.rcParams['axes.prop_cycle'].by_key()['color']]

line_segments = LineCollection(segments,
                               linewidths = (0.5, 1, 1.5, 2),
                               colors = colors,
                               linestyle = 'solid')

axes.add_collection(line_segments)
axes.set_title(' With masked arrays')
plt.show()
Output:
Example 2:
Python3
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

# helper function to plot a color table
def colortable(colors, title, colors_sort = True, emptycols = 0):

    # cell dimensions
    width = 212
    height = 22
    swatch_width = 48
    margin = 12
    topmargin = 40

    # Sorting colors based on hue,
    # saturation, value and name.
    # implementation of to_rgb
    if colors_sort is True:
        to_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])),
                         name)
                        for name, color in colors.items())

        names = [name for hsv, name in to_hsv]
    else:
        names = list(colors)

    length_of_names = len(names)
    length_cols = 4 - emptycols
    length_rows = length_of_names // length_cols + int(length_of_names % length_cols > 0)

    width2 = width * 4 + 2 * margin
    height2 = height * length_rows + margin + topmargin
    dpi = 72

    figure, axes = plt.subplots(figsize =(width2 / dpi, height2 / dpi), dpi = dpi)
    figure.subplots_adjust(margin / width2, margin / height2,
                           (width2-margin)/width2, (height2-topmargin)/height2)

    axes.set_xlim(0, width * 4)
    axes.set_ylim(height * (length_rows-0.5), -height / 2.)
    axes.yaxis.set_visible(False)
    axes.xaxis.set_visible(False)
    axes.set_axis_off()
    axes.set_title(title, fontsize = 24, loc ="left", pad = 10)

    for i, name in enumerate(names):
        rows = i % length_rows
        cols = i // length_rows
        y = rows * height

        swatch_start_x = width * cols
        swatch_end_x = width * cols + swatch_width
        text_pos_x = width * cols + swatch_width + 7

        axes.text(text_pos_x, y, name, fontsize = 14,
                  horizontalalignment ='left',
                  verticalalignment ='center')

        axes.hlines(y, swatch_start_x, swatch_end_x,
                    color = colors[name], linewidth = 18)

    return figure

colortable(mcolors.BASE_COLORS, "Base Colors",
           colors_sort = False, emptycols = 1)
colortable(mcolors.TABLEAU_COLORS, "Tableau Palette",
           colors_sort = False, emptycols = 2)
colortable(mcolors.CSS4_COLORS, "CSS Colors")

plt.show()
Output:
kapoorsagar226
simmytarika5
saurabh1990aror
Python-matplotlib
Python
Write From Home
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
Check if element exists in list in Python
How To Convert Python Dictionary To JSON?
Python Classes and Objects
How to drop one or multiple columns in Pandas Dataframe
Convert string to integer in Python
How to set input type date in dd-mm-yyyy format using HTML ?
Python infinity
Matplotlib.pyplot.title() in Python
Factory method design pattern in Java
|
[
{
"code": null,
"e": 25537,
"s": 25509,
"text": "\n27 Jan, 2022"
},
{
"code": null,
"e": 25750,
"s": 25537,
"text": "Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack. "
},
{
"code": null,
"e": 25950,
"s": 25750,
"text": "The matplotlib.colors.to_rgba() function is used convert c(color) to an RGBA color. It converts the color name into an array of RGBA encoded colors. It returns an RGBA tuple of four floats from 0-1. "
},
{
"code": null,
"e": 26012,
"s": 25950,
"text": "Syntax: matplotlib.colors.to_rgba(c, alpha=None)Parameters: "
},
{
"code": null,
"e": 26067,
"s": 26012,
"text": "c: It is a matplotlib color or a np.ma.masked color. "
},
{
"code": null,
"e": 26235,
"s": 26067,
"text": "alpha: It is an optional parameter that accepts a scalar. It forces the alpha value if alpha is not None. But if c is “none”(case-sensitive) it maps to (0, 0, 0, 0). "
},
{
"code": null,
"e": 26305,
"s": 26235,
"text": "Returns: It returns a tuple of scalars in the form of (r, g, b, a). "
},
{
"code": null,
"e": 26318,
"s": 26305,
"text": "Example 1: "
},
{
"code": null,
"e": 26326,
"s": 26318,
"text": "Python3"
},
{
"code": "import matplotlib.pyplot as pltfrom matplotlib.collections import LineCollectionfrom matplotlib import colors as mcolorsimport numpy as np # simple example showing many# lines in a single set of axesx_axis = np.arange(100) # Here are different sets of# y to plot vs xy_axis = x_axis[:50, np.newaxis] + x_axis[np.newaxis, :] segments = np.zeros((50, 100, 2))segments[:, :, 1] = y_axissegments[:, :, 0] = x_axis #some supported values to test# masked array :segments = np.ma.masked_where((segments > 50) & (segments < 60), segments) # setting the plot limits.figure, axes = plt.subplots()axes.set_xlim(x_axis.min(), x_axis.max())axes.set_ylim(y_axis.min(), y_axis.max()) # colors is sequence of rgba# tuples and .rgba implementationcolors = [mcolors.to_rgba(c) for c in plt.rcParams['axes.prop_cycle'].by_key()['color']] line_segments = LineCollection(segments, linewidths = (0.5, 1, 1.5, 2), colors = colors, linestyle = 'solid') axes.add_collection(line_segments)axes.set_title(' With masked arrays')plt.show()",
"e": 27472,
"s": 26326,
"text": null
},
{
"code": null,
"e": 27482,
"s": 27472,
"text": "Output: "
},
{
"code": null,
"e": 27495,
"s": 27482,
"text": "Example 2: "
},
{
"code": null,
"e": 27503,
"s": 27495,
"text": "Python3"
},
{
"code": "import matplotlib.pyplot as pltimport matplotlib.colors as mcolors # helper function to plot a color tabledef colortable(colors, title, colors_sort = True, emptycols = 0): # cell dimensions width = 212 height = 22 swatch_width = 48 margin = 12 topmargin = 40 # Sorting colors based on hue, # saturation, value and name. # implementation of to_rgb if colors_sort is True: to_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name) for name, color in colors.items()) names = [name for hsv, name in to_hsv] else: names = list(colors) length_of_names = len(names) length_cols = 4 - emptycols length_rows = length_of_names // length_cols + int(length_of_names % length_cols > 0) width2 = width * 4 + 2 * margin height2 = height * length_rows + margin + topmargin dpi = 72 figure, axes = plt.subplots(figsize =(width2 / dpi, height2 / dpi), dpi = dpi) figure.subplots_adjust(margin / width2, margin / height2, (width2-margin)/width2, (height2-topmargin)/height2) axes.set_xlim(0, width * 4) axes.set_ylim(height * (length_rows-0.5), -height / 2.) axes.yaxis.set_visible(False) axes.xaxis.set_visible(False) axes.set_axis_off() axes.set_title(title, fontsize = 24, loc =\"left\", pad = 10) for i, name in enumerate(names): rows = i % length_rows cols = i // length_rows y = rows * height swatch_start_x = width * cols swatch_end_x = width * cols + swatch_width text_pos_x = width * cols + swatch_width + 7 axes.text(text_pos_x, y, name, fontsize = 14, horizontalalignment ='left', verticalalignment ='center') axes.hlines(y, swatch_start_x, swatch_end_x, color = colors[name], linewidth = 18) return figure colortable(mcolors.BASE_COLORS, \"Base Colors\", colors_sort = False, emptycols = 1)colortable(mcolors.TABLEAU_COLORS, \"Tableau Palette\", colors_sort = False, emptycols = 2)colortable(mcolors.CSS4_COLORS, \"CSS Colors\") plt.show()",
"e": 29726,
"s": 27503,
"text": null
},
{
"code": null,
"e": 29736,
"s": 29726,
"text": "Output: "
},
{
"code": null,
"e": 29755,
"s": 29740,
"text": "kapoorsagar226"
},
{
"code": null,
"e": 29768,
"s": 29755,
"text": "simmytarika5"
},
{
"code": null,
"e": 29784,
"s": 29768,
"text": "saurabh1990aror"
},
{
"code": null,
"e": 29802,
"s": 29784,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 29809,
"s": 29802,
"text": "Python"
},
{
"code": null,
"e": 29825,
"s": 29809,
"text": "Write From Home"
},
{
"code": null,
"e": 29923,
"s": 29825,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29955,
"s": 29923,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 29997,
"s": 29955,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 30039,
"s": 29997,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 30066,
"s": 30039,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 30122,
"s": 30066,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 30158,
"s": 30122,
"text": "Convert string to integer in Python"
},
{
"code": null,
"e": 30219,
"s": 30158,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 30235,
"s": 30219,
"text": "Python infinity"
},
{
"code": null,
"e": 30271,
"s": 30235,
"text": "Matplotlib.pyplot.title() in Python"
}
] |
jQuery | Add Elements with Examples - GeeksforGeeks
|
26 Feb, 2019
Adding elements in jQuery is used to append content to the document. The methods which are used to add content are listed below:
append(): Inserts content at the end of the selected elements.
prepend(): Inserts content at the beginning of the selected elements.
after(): Inserts content after the selected elements.
before(): Inserts content before the selected elements.
Using append() method: The append() method in jQuery is used to add a new element at the end of the selected element.
Syntax:
$(selector).append("element_to_be_inserted")
Parameter: This method accepts a single parameter, the element which needs to be inserted.
Return value: It does not return anything.
Example: This example uses the append() method to add a new element.
<html>

<head>
    <title>Append Elements</title>
</head>

<body>
    <ol>
        <li></li>
        <li></li>
        <li></li>
    </ol>

    <button type="button" id="add_li" name="Add">
        Add List
    </button>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>

    <!-- Script to use append method to add list -->
    <script type="text/javascript">
        $(document).ready( function() {
            $("#add_li").click( function() {
                $("ol").append("<li></li>")
            })
        })
    </script>
</body>

</html>
Output:
Using prepend() method: The prepend() method in jQuery is used to add a new element at the beginning of the selected element.
Syntax:
$(selector).prepend("element_to_be_inserted")
Parameter: This method accepts a single parameter, the content to be inserted into the DOM.
Return value: It does not return any value.
Example: This example uses prepend() method to add a new paragraph.
<!DOCTYPE html>
<html>

<head>
    <title>
        prepend() method
    </title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body>
    <div id="container">
        <p id="p-first">The first paragraph</p>
        <p id="p-second">The second paragraph</p>
        <p id="p-third">The third paragraph</p>
    </div>

    <button type="button" id="add_p" name="Add Elements">
        Add Element
    </button>

    <!-- Script to use prepend() method to add elements-->
    <script type="text/javascript">
        $(document).ready( function() {
            $("#add_p").click( function() {
                $("#container").prepend("<p>prepended paragraph</p>")
            })
        })
    </script>
</body>

</html>
Output:
Using after() method: The after() method in jQuery is used to insert content after the selected element.
Syntax:
$(selector).after("element_to_be_inserted")
Parameter: This method accepts a single parameter, the content to be inserted into the DOM.
Return value: It does not return anything.
Example: This example uses after() method to add a word after the geeksforgeeks image.
<!DOCTYPE html>
<html>

<head>
    <title>
        Add Elements using after() method
    </title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body>
    <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190222001705/images1.png"
         alt="jQuery" width="100" height="140"><br><br>

    <button id="btn1">Insert After</button>

    <!-- Script to use after() method to append content -->
    <script type="text/javascript">
        $(document).ready( function() {
            $("#btn1").click(function() {
                $("img").after("<i>After</i>");
            });
        })
    </script>
</body>

</html>
Output:
Using before() method: The before() method in jQuery is used to insert content before the selected element.
Syntax:
$(selector).before("element_to_be_inserted")
Parameter: This method accepts a single parameter, the content to be inserted into the DOM.
Return value: It does not return anything.
Example: This example uses the before() method to add an element before the geeksforgeeks image.
<!DOCTYPE html>
<html>

<head>
    <title>
        Add Element using before() method
    </title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body>
    <img src="https://media.geeksforgeeks.org/wp-content/uploads/20190222001705/images1.png"
         alt="jQuery" width="100" height="140"><br><br>

    <button id="btn1">Insert before</button>

    <!-- Script to use before() method to add element -->
    <script type="text/javascript">
        $(document).ready( function() {
            $("#btn1").click(function() {
                $("img").before("<i>Before</i>");
            });
        })
    </script>
</body>

</html>
Output:
jQuery-HTML/CSS
JQuery
Web Technologies
How to Show and Hide div elements using radio buttons?
How to prevent Body from scrolling when a modal is opened using jQuery ?
jQuery | ajax() Method
jQuery | removeAttr() with Examples
How to get the value in an input text box using jQuery ?
Remove elements from a JavaScript Array
Installation of Node.js on Linux
Convert a string to an integer in JavaScript
How to fetch data from an API in ReactJS ?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
|
[
{
"code": null,
"e": 26978,
"s": 26950,
"text": "\n26 Feb, 2019"
},
{
"code": null,
"e": 27115,
"s": 26978,
"text": "The add elements in jQuery is used to append the content in the document. The methods which is used to add the content are listed below:"
},
{
"code": null,
"e": 27178,
"s": 27115,
"text": "append(): Inserts content at the end of the selected elements."
},
{
"code": null,
"e": 27248,
"s": 27178,
"text": "prepend(): Inserts content at the beginning of the selected elements."
},
{
"code": null,
"e": 27302,
"s": 27248,
"text": "after(): Inserts content after the selected elements."
},
{
"code": null,
"e": 27358,
"s": 27302,
"text": "before(): Inserts content before the selected elements."
},
{
"code": null,
"e": 27476,
"s": 27358,
"text": "Using append() method: The append() method in jQuery is used to add a new element at the end of the selected element."
},
{
"code": null,
"e": 27484,
"s": 27476,
"text": "Syntax:"
},
{
"code": null,
"e": 27529,
"s": 27484,
"text": "$(selector).append(\"element_to_be_inserted\")"
},
{
"code": null,
"e": 27612,
"s": 27529,
"text": "Parameter: This method accepts single parameter element which need to be inserted."
},
{
"code": null,
"e": 27655,
"s": 27612,
"text": "Return value: It does not return anything."
},
{
"code": null,
"e": 27718,
"s": 27655,
"text": "Example: This example uses append() method to add new element."
},
{
"code": "<html> <head> <title>Append Elements</title> <head> <body> <ol> <li></li> <li></li> <li></li> </ol> <button type=\"button\" id=\"add_li\" name=\"Add\"> Add List </button> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> <!-- Script to use append method to add list --> <script type=\"text/javascript\"> $(document).ready( function() { $(\"#add_li\").click( function() { $(\"ol\").append(\"<li></li>\") }) }) </script> </body></html>",
"e": 28411,
"s": 27718,
"text": null
},
{
"code": null,
"e": 28419,
"s": 28411,
"text": "Output:"
},
{
"code": null,
"e": 28545,
"s": 28419,
"text": "Using prepend() method: The prepend() method in jQuery is used to add a new element at the beginning of the selected element."
},
{
"code": null,
"e": 28553,
"s": 28545,
"text": "Syntax:"
},
{
"code": null,
"e": 28599,
"s": 28553,
"text": "$(selector).prepend(\"element_to_be_inserted\")"
},
{
"code": null,
"e": 28701,
"s": 28599,
"text": "Parameter: This method accepts single parameter which is to be inserted into the DOM as a parameters."
},
{
"code": null,
"e": 28745,
"s": 28701,
"text": "Return value: It does not return any value."
},
{
"code": null,
"e": 28813,
"s": 28745,
"text": "Example: This example uses prepend() method to add a new paragraph."
},
{
"code": "<!DOCTYPE html><html> <head> <title> prepend() method </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script><head> <body> <div id=\"container\"> <p id=\"p-first\">The first paragraph</p> <p id=\"p-second\">The second paragraph</p> <p id=\"p-third\">The third paragraph</p> </div> <button type=\"button\" id=\"add_p\" name=\"Add Elements\"> Add Element </button> <!-- Script to use prepend() method to add elements--> <script type=\"text/javascript\"> $(document).ready( function() { $(\"#add_p\").click( function() { $(\"#container\").prepend(\"<p>prepended paragraph</p>\") }) }) </script> Prepend </body></html>",
"e": 29600,
"s": 28813,
"text": null
},
{
"code": null,
"e": 29608,
"s": 29600,
"text": "Output:"
},
{
"code": null,
"e": 29712,
"s": 29608,
"text": "Using after method: The after() method in jQuery is used to inserts content after the selected element."
},
{
"code": null,
"e": 29720,
"s": 29712,
"text": "Syntax:"
},
{
"code": null,
"e": 29764,
"s": 29720,
"text": "$(selector).after(\"element_to_be_inserted\")"
},
{
"code": null,
"e": 29872,
"s": 29764,
"text": "Parameter: This method accepts a single parameter which is used to be inserted into the DOM as a parameter."
},
{
"code": null,
"e": 29915,
"s": 29872,
"text": "Return value: It does not return anything."
},
{
"code": null,
"e": 30002,
"s": 29915,
"text": "Example: This example uses after() method to add a word after the geeksforgeeks image."
},
{
"code": "<!DOCTYPE html><html> <head> <title> Add Elements using after() method </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script><head> <body> <img src=\"https://media.geeksforgeeks.org/wp-content/uploads/20190222001705/images1.png\" alt=\"jQuery\" width=\"100\" height=\"140\"><br><br> <button id=\"btn1\">Insert After</button> <!-- Script to use after() method to append content --> <script type=\"text/javascript\"> $(document).ready( function() { $(\"#btn1\").click(function() { $(\"img\").after(\"<i>After</i>\"); }); }) </script></body> </html>",
"e": 30677,
"s": 30002,
"text": null
},
{
"code": null,
"e": 30685,
"s": 30677,
"text": "Output:"
},
{
"code": null,
"e": 30791,
"s": 30685,
"text": "Using before method: The before() method in jQuery is used to insert content before the selected element."
},
{
"code": null,
"e": 30799,
"s": 30791,
"text": "Syntax:"
},
{
"code": null,
"e": 30844,
"s": 30799,
"text": "$(selector).before(\"element_to_be_inserted\")"
},
{
"code": null,
"e": 30952,
"s": 30844,
"text": "Parameter: This method accepts a single parameter which is used to be inserted into the DOM as a parameter."
},
{
"code": null,
"e": 30995,
"s": 30952,
"text": "Return value: It does not return anything."
},
{
"code": null,
"e": 31083,
"s": 30995,
"text": "Example: This example uses before method to add element before the geeksforgeeks image."
},
{
"code": "<!DOCTYPE html><html> <head> <title> Add Element using before() method </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script><head> <body> <img src=\"https://media.geeksforgeeks.org/wp-content/uploads/20190222001705/images1.png\" alt=\"jQuery\" width=\"100\" height=\"140\"><br><br> <button id=\"btn1\">Insert before</button> <!-- Script to use before() method to add element --> <script type=\"text/javascript\"> $(document).ready( function() { $(\"#btn1\").click(function() { $(\"img\").before(\"<i>Before</i>\"); }); }) </script></body> </html>",
"e": 31769,
"s": 31083,
"text": null
},
{
"code": null,
"e": 31777,
"s": 31769,
"text": "Output:"
},
{
"code": null,
"e": 31793,
"s": 31777,
"text": "jQuery-HTML/CSS"
},
{
"code": null,
"e": 31800,
"s": 31793,
"text": "JQuery"
},
{
"code": null,
"e": 31817,
"s": 31800,
"text": "Web Technologies"
},
{
"code": null,
"e": 31915,
"s": 31817,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31970,
"s": 31915,
"text": "How to Show and Hide div elements using radio buttons?"
},
{
"code": null,
"e": 32043,
"s": 31970,
"text": "How to prevent Body from scrolling when a modal is opened using jQuery ?"
},
{
"code": null,
"e": 32066,
"s": 32043,
"text": "jQuery | ajax() Method"
},
{
"code": null,
"e": 32102,
"s": 32066,
"text": "jQuery | removeAttr() with Examples"
},
{
"code": null,
"e": 32159,
"s": 32102,
"text": "How to get the value in an input text box using jQuery ?"
},
{
"code": null,
"e": 32199,
"s": 32159,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 32232,
"s": 32199,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 32277,
"s": 32232,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 32320,
"s": 32277,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Hadoop - copyFromLocal Command - GeeksforGeeks
|
27 Dec, 2021
Hadoop copyFromLocal command is used to copy a file from your local file system to HDFS (Hadoop Distributed File System). The copyFromLocal command has an optional switch -f which is used to replace an already existing file in the system, meaning it can be used to update that file. The -f switch is equivalent to first deleting the file and then copying the new one. If the file is already present in the destination folder, copying it into the same folder without -f will throw an error.
Syntax to copy a file from your local file system to HDFS is given below:
hdfs dfs -copyFromLocal /path 1 /path 2 .... /path n /destination
The copyFromLocal command is similar to the -put command used in HDFS. We can also use hadoop fs as a synonym for hdfs dfs. The command can take multiple arguments, where all the paths provided are sources from which we want to copy files, except the last one, which is the destination where the files are copied. Make sure that the destination is a directory.
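For instance, copying two local files into a single HDFS directory in one call might look like the following sketch (the file names and destination path here are only illustrative, not from the walkthrough below):

```
hdfs dfs -copyFromLocal notes.txt records.csv /user/demo_dir
```

Here notes.txt and records.csv are the source paths and /user/demo_dir is the destination directory.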
Our objective is to copy the file from our local file system to HDFS. In my case, I want to copy the file named Salaries.csv, which is present in the /home/dikshant/Documents/hadoop_file directory.
Let’s see the current view of my Root directory in HDFS.
Step 1: Make a directory in HDFS where you want to copy this file with the below command.
hdfs dfs -mkdir /Hadoop_File
Step 2: Use copyFromLocal command as shown below to copy it to HDFS /Hadoop_File directory.
hdfs dfs -copyFromLocal /home/dikshant/Documents/hadoop_file/Salaries.csv /Hadoop_File
Step 3: Check whether the file is copied successfully or not by moving to its directory location with below command.
hdfs dfs -ls /Hadoop_File
From the below image, you can observe that the copyFromLocal command by itself does not copy a file of the same name to the same location; it says that the file already exists.
To update the content of the file or to overwrite it, you should use the -f switch as shown below.
hdfs dfs -copyFromLocal -f /home/dikshant/Documents/hadoop_file/Salaries.csv /Hadoop_File
Now you can easily observe that using copyFromLocal with -f switch does not produce any error or it will easily update or modify your file in HDFS.
varshagumber28
Hadoop
Hadoop
Hadoop
Difference Between Hadoop and Spark
Anatomy of File Read and Write in HDFS
Introduction to Apache Pig
Hadoop MapReduce - Data Flow
What is Big Data?
Architecture of HBase
Hadoop - Pros and Cons
Hive - Alter Table
Architecture and Working of Hive
Applications of Big Data
|
[
{
"code": null,
"e": 25373,
"s": 25345,
"text": "\n27 Dec, 2021"
},
{
"code": null,
"e": 25836,
"s": 25373,
"text": "Hadoop copyFromLocal command is used to copy the file from your local file system to the HDFS(Hadoop Distributed File System). copyFromLocal command has an optional switch –f which is used to replace the already existing file in the system, means it can be used to update that file. -f switch is similar to first delete a file and then copying it. If the file is already present in the folder then copy it into the same folder will automatically throw an error. "
},
{
"code": null,
"e": 25911,
"s": 25836,
"text": "Syntax to copy a file from your local file system to HDFS is given below: "
},
{
"code": null,
"e": 25979,
"s": 25913,
"text": "hdfs dfs -copyFromLocal /path 1 /path 2 .... /path n /destination"
},
{
"code": null,
"e": 26359,
"s": 25979,
"text": "The copyFromLocal local command is similar to the -put command used in HDFS. we can also use hadoop fs as a synonym for hdfs dfs. The command can take multiple arguments where all the paths provided are of the source from where we want to copy the file except the last one which is the destination, where the file is copied. Make sure that the destination should be a directory. "
},
{
"code": null,
"e": 26552,
"s": 26359,
"text": "Our objective is to copy the file from our local file system to HDFS. In my case, I want to copy the file name Salaries.csv which is present at /home/dikshant/Documents/hadoop_file directory. "
},
{
"code": null,
"e": 26615,
"s": 26556,
"text": "Let’s see the current view of my Root directory in HDFS. "
},
{
"code": null,
"e": 26706,
"s": 26615,
"text": "Step 1: Make a directory in HDFS where you want to copy this file with the below command. "
},
{
"code": null,
"e": 26737,
"s": 26708,
"text": "hdfs dfs -mkdir /Hadoop_File"
},
{
"code": null,
"e": 26834,
"s": 26741,
"text": "Step 2: Use copyFromLocal command as shown below to copy it to HDFS /Hadoop_File directory. "
},
{
"code": null,
"e": 26923,
"s": 26836,
"text": "hdfs dfs -copyFromLocal /home/dikshant/Documents/hadoop_file/Salaries.csv /Hadoop_File"
},
{
"code": null,
"e": 27043,
"s": 26925,
"text": "Step 3: Check whether the file is copied successfully or not by moving to its directory location with below command. "
},
{
"code": null,
"e": 27071,
"s": 27045,
"text": "hdfs dfs -ls /Hadoop_File"
},
{
"code": null,
"e": 27239,
"s": 27077,
"text": "From below Image, you can observe that copyFromLocal command itself does not copy the same name file at the same location. it says that the file already exists. "
},
{
"code": null,
"e": 27337,
"s": 27241,
"text": "To update the content of the file or to Overwrite it, you should use -f switch as shown below. "
},
{
"code": null,
"e": 27429,
"s": 27339,
"text": "hdfs dfs -copyFromLocal -f /home/dikshant/Documents/hadoop_file/Salaries.csv /Hadoop_File"
},
{
"code": null,
"e": 27580,
"s": 27431,
"text": "Now you can easily observe that using copyFromLocal with -f switch does not produce any error or it will easily update or modify your file in HDFS. "
},
{
"code": null,
"e": 27595,
"s": 27580,
"text": "varshagumber28"
},
{
"code": null,
"e": 27602,
"s": 27595,
"text": "Hadoop"
},
{
"code": null,
"e": 27609,
"s": 27602,
"text": "Hadoop"
},
{
"code": null,
"e": 27616,
"s": 27609,
"text": "Hadoop"
},
{
"code": null,
"e": 27714,
"s": 27616,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27750,
"s": 27714,
"text": "Difference Between Hadoop and Spark"
},
{
"code": null,
"e": 27789,
"s": 27750,
"text": "Anatomy of File Read and Write in HDFS"
},
{
"code": null,
"e": 27816,
"s": 27789,
"text": "Introduction to Apache Pig"
},
{
"code": null,
"e": 27845,
"s": 27816,
"text": "Hadoop MapReduce - Data Flow"
},
{
"code": null,
"e": 27863,
"s": 27845,
"text": "What is Big Data?"
},
{
"code": null,
"e": 27885,
"s": 27863,
"text": "Architecture of HBase"
},
{
"code": null,
"e": 27908,
"s": 27885,
"text": "Hadoop - Pros and Cons"
},
{
"code": null,
"e": 27927,
"s": 27908,
"text": "Hive - Alter Table"
},
{
"code": null,
"e": 27960,
"s": 27927,
"text": "Architecture and Working of Hive"
}
] |
Chat Bot in Python with ChatterBot Module - GeeksforGeeks
|
27 Apr, 2022
Nobody likes to be alone always, but sometimes loneliness could be a better medicine to hunch the thirst for a peaceful environment. Even during such lonely quarantines, we may ignore humans but not humanoids. Yes, if you have guessed this article for a chatbot, then you have cracked it right. We won’t require 6000 lines of code to create a chatbot but just a six-letter word “Python” is enough. Let us have a quick glance at Python’s ChatterBot to create our bot. ChatterBot is a Python library built based on machine learning with an inbuilt conversational dialog flow and training engine. The bot created using this library will get trained automatically with the response it gets from the user.
Quick resolution for a complaint or a problem.
Improve business branding thereby achieving great customer satisfaction.
Answering questions and answers for customers.
Making a reservation at hotel or at restaurant.
Save human effort 24×7.
Enhance business revenue by providing ideas and inspirations.
Finding details about business such as hours of operation, phone number and address.
Automate sales and lead generation process.
Reduce customer agents waiting time answering phone calls.
24×7 availability.
Instant answers to queries.
Support multi-language to enhance businesses.
Simple and Easy to Use UI to engage more customers.
Cost effective and user interactive.
Avoid communication with call agents thereby reducing the time consuming tasks.
Understand the Customer behavior
Increase sales of business by offering promo codes or gifts.
Chatbots deliver instantly by understanding the user requests with pre-defined rules and AI based chatbots. There are two types of chatbots.
Rule Based Chatbots: This type of chatbots answer the customer queries using the pre-defined rules. These bots answer common queries such as hours of operation of business, addresses, phone numbers and tracking status.
Conversational AI Chatbots: This type of chatbot uses Natural Language Processing (NLP) to understand the context and intent of a user's input before providing a response. These bots train themselves on user inputs; the more they learn, the more interactive they become.
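To make the rule-based idea concrete, here is a minimal sketch in plain Python (not the ChatterBot library); the RULES table, its entries, and the reply function are hypothetical names used only for illustration:

```python
# Minimal rule-based chatbot sketch: answer common queries via keyword rules.
# The rules below are made-up examples, not real business data.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "address": "123 Example Street.",
    "phone": "Call us at 555-0100.",
}

def reply(message: str) -> str:
    # Match the first rule whose keyword appears in the user's message.
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I can only answer questions about hours, address, or phone."

print(reply("What are your hours?"))  # We are open 9am-5pm, Monday to Friday.
```

A conversational AI bot would replace the keyword lookup with an NLP model that infers intent, which is what ChatterBot's trainers provide.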
Install chatterbot from the Python Package Index (PyPI) with this command:
pip install chatterbot
Below is the implementation.
Python3
# Import "chatbot" from# chatterbot package.from chatterbot import ChatBot # Inorder to train our bot, we have# to import a trainer package# "ChatterBotCorpusTrainer"from chatterbot.trainers import ChatterBotCorpusTrainer # Give a name to the chatbot “corona bot”# and assign a trainer component.chatbot=ChatBot('corona bot') # Create a new trainer for the chatbottrainer = ChatterBotCorpusTrainer(chatbot) # Now let us train our bot with multiple corpustrainer.train("chatterbot.corpus.english.greetings", "chatterbot.corpus.english.conversations" ) response = chatbot.get_response('What is your Number')print(response) response = chatbot.get_response('Who are you?')print(response)
Output:
surindertarika1234
deepeshnagpal
python-modules
Python
Write From Home
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
Check if element exists in list in Python
How To Convert Python Dictionary To JSON?
Python Classes and Objects
How to drop one or multiple columns in Pandas Dataframe
Convert string to integer in Python
How to set input type date in dd-mm-yyyy format using HTML ?
Python infinity
Matplotlib.pyplot.title() in Python
Factory method design pattern in Java
|
[
{
"code": null,
"e": 25562,
"s": 25534,
"text": "\n27 Apr, 2022"
},
{
"code": null,
"e": 26264,
"s": 25562,
"text": "Nobody likes to be alone always, but sometimes loneliness could be a better medicine to hunch the thirst for a peaceful environment. Even during such lonely quarantines, we may ignore humans but not humanoids. Yes, if you have guessed this article for a chatbot, then you have cracked it right. We won’t require 6000 lines of code to create a chatbot but just a six-letter word “Python” is enough. Let us have a quick glance at Python’s ChatterBot to create our bot. ChatterBot is a Python library built based on machine learning with an inbuilt conversational dialog flow and training engine. The bot created using this library will get trained automatically with the response it gets from the user. "
},
{
"code": null,
"e": 26311,
"s": 26264,
"text": "Quick resolution for a complaint or a problem."
},
{
"code": null,
"e": 26384,
"s": 26311,
"text": "Improve business branding thereby achieving great customer satisfaction."
},
{
"code": null,
"e": 26431,
"s": 26384,
"text": "Answering questions and answers for customers."
},
{
"code": null,
"e": 26479,
"s": 26431,
"text": "Making a reservation at hotel or at restaurant."
},
{
"code": null,
"e": 26503,
"s": 26479,
"text": "Save human effort 24×7."
},
{
"code": null,
"e": 26566,
"s": 26503,
"text": "Enhance business revenue by providing ideas and inspirations. "
},
{
"code": null,
"e": 26651,
"s": 26566,
"text": "Finding details about business such as hours of operation, phone number and address."
},
{
"code": null,
"e": 26695,
"s": 26651,
"text": "Automate sales and lead generation process."
},
{
"code": null,
"e": 26755,
"s": 26695,
"text": "Reduce customer agents waiting time answering phone calls. "
},
{
"code": null,
"e": 26774,
"s": 26755,
"text": "24×7 availability."
},
{
"code": null,
"e": 26802,
"s": 26774,
"text": "Instant answers to queries."
},
{
"code": null,
"e": 26848,
"s": 26802,
"text": "Support multi-language to enhance businesses."
},
{
"code": null,
"e": 26900,
"s": 26848,
"text": "Simple and Easy to Use UI to engage more customers."
},
{
"code": null,
"e": 26937,
"s": 26900,
"text": "Cost effective and user interactive."
},
{
"code": null,
"e": 27017,
"s": 26937,
"text": "Avoid communication with call agents thereby reducing the time consuming tasks."
},
{
"code": null,
"e": 27050,
"s": 27017,
"text": "Understand the Customer behavior"
},
{
"code": null,
"e": 27112,
"s": 27050,
"text": "Increase sales of business by offering promo codes or gifts. "
},
{
"code": null,
"e": 27254,
"s": 27112,
"text": "Chatbots deliver instantly by understanding the user requests with pre-defined rules and AI based chatbots. There are two types of chatbots. "
},
{
"code": null,
"e": 27477,
"s": 27254,
"text": "Rule Based Chatbots: This type of chatbots answer the customer queries using the pre-defined rules. These bots answer common queries such as hours of operation of business, addresses, phone numbers and tracking status. "
},
{
"code": null,
"e": 27758,
"s": 27477,
"text": "Conversational AI Chatbots: This type of chatbots using Natural language Processing(NLP) to understand the context and intent of a user input before providing the response. These Bots train themselves as per the user inputs and more they learn, more they become user interactive."
},
{
"code": null,
"e": 27830,
"s": 27758,
"text": "Install chatterbot using Python Package Index(PyPi) with this command "
},
{
"code": null,
"e": 27853,
"s": 27830,
"text": "pip install chatterbot"
},
{
"code": null,
"e": 27883,
"s": 27853,
"text": "Below is the implementation. "
},
{
"code": null,
"e": 27891,
"s": 27883,
"text": "Python3"
},
{
"code": "# Import \"chatbot\" from# chatterbot package.from chatterbot import ChatBot # Inorder to train our bot, we have# to import a trainer package# \"ChatterBotCorpusTrainer\"from chatterbot.trainers import ChatterBotCorpusTrainer # Give a name to the chatbot “corona bot”# and assign a trainer component.chatbot=ChatBot('corona bot') # Create a new trainer for the chatbottrainer = ChatterBotCorpusTrainer(chatbot) # Now let us train our bot with multiple corpustrainer.train(\"chatterbot.corpus.english.greetings\", \"chatterbot.corpus.english.conversations\" ) response = chatbot.get_response('What is your Number')print(response) response = chatbot.get_response('Who are you?')print(response)",
"e": 28593,
"s": 27891,
"text": null
},
{
"code": null,
"e": 28601,
"s": 28593,
"text": "Output:"
},
{
"code": null,
"e": 28622,
"s": 28603,
"text": "surindertarika1234"
},
{
"code": null,
"e": 28636,
"s": 28622,
"text": "deepeshnagpal"
},
{
"code": null,
"e": 28651,
"s": 28636,
"text": "python-modules"
},
{
"code": null,
"e": 28658,
"s": 28651,
"text": "Python"
},
{
"code": null,
"e": 28674,
"s": 28658,
"text": "Write From Home"
},
{
"code": null,
"e": 28772,
"s": 28674,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28804,
"s": 28772,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28846,
"s": 28804,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 28888,
"s": 28846,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 28915,
"s": 28888,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 28971,
"s": 28915,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 29007,
"s": 28971,
"text": "Convert string to integer in Python"
},
{
"code": null,
"e": 29068,
"s": 29007,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 29084,
"s": 29068,
"text": "Python infinity"
},
{
"code": null,
"e": 29120,
"s": 29084,
"text": "Matplotlib.pyplot.title() in Python"
}
] |
Flip the text using CSS - GeeksforGeeks
|
10 Mar, 2021
The flipping effect creates the mirror image of the text. You can flip your text both horizontally and vertically. CSS3 allows adding various effects, including text flipping, thanks to its transform functions. You can flip text without any JavaScript code.
Given below is an example of flipping the text without using JavaScript; it uses only HTML and CSS.
There are various types of text flipping:
Horizontal Flip
Vertical Flip
Upside Down Flip
Mirror Image of text
Follow the steps:
Create the HTML file: use a <span> element with a class name of your choice (e.g. "abc").
Create the CSS file: specify the display and margin properties of <span>; use the transform property to set the flip you require (vertical, horizontal, upside-down, or mirrored); add a colour if you want the flipped text to have a different colour.
Specify the display and margin properties of <span>.
Use the transform property to set the flip you require (vertical text flip, horizontal text flip, upside-down text flip, or mirroring of text).
Add a colour if you want your flipped text to have a different colour.
The below examples illustrate the approach:
Example 1: HTML CSS code to flip the Text Horizontally
HTML
<!DOCTYPE html>
<html>
<head>
    <title>Title you want</title>
    <style>
        span {
            display: inline-block;
            margin: 50px;
        }
        .GFG {
            color: #000080;
            -moz-transform: scale(-1, 1);
            -webkit-transform: scale(-1, 1);
            -o-transform: scale(-1, 1);
            -ms-transform: scale(-1, 1);
            transform: scale(-1, 1);
        }
    </style>
</head>
<body>
    <!-- here write your text you want to flip -->
    <span>GeeksforGeeks</span>

    <!-- your class name must be as you above written with .class name -->
    <span class="GFG">GeeksforGeeks</span>
</body>
</html>
Output:
Flip text horizontally
Example 2: HTML CSS code to flip the text upside-down.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>Title as you want</title>
    <style>
        .container {
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }
        .backwards {
            display: inline;
            font-size: 100px;
            font-weight: bold;
            -moz-transform: scale(-1, -1);
            -webkit-transform: scale(-1, -1);
            -o-transform: scale(-1, -1);
            -ms-transform: scale(-1, -1);
            transform: scale(-1, -1);
        }
    </style>
</head>
<body>
    <ul class="container">
        <li class="backwards">G</li>
        <li class="backwards">e</li>
        <li class="backwards">e</li>
        <li class="backwards">k</li>
        <li class="backwards">S</li>
    </ul>
</body>
</html>
Output:
Flip upside-down
Example 3: HTML CSS code to flip the text vertically.
HTML
<!DOCTYPE html>
<html>
<head>
    <!-- write your title between title tags -->
    <title>Title you want</title>
    <style>
        span {
            display: inline-block;
            margin: 50px;
        }
        .GFG {
            color: #000080;
            -moz-transform: scale(1, -1);
            -webkit-transform: scale(1, -1);
            -o-transform: scale(1, -1);
            -ms-transform: scale(1, -1);
            transform: scale(1, -1);
        }
    </style>
</head>
<body>
    <!-- here write your text you want to flip -->
    <span>GeeksforGeeks</span>

    <!-- your class name must be as you above written with .class name -->
    <span class="GFG">GeeksforGeeks</span>
</body>
</html>
Output:
flip text vertically
Example 4:
HTML
<!DOCTYPE html>
<html>
<head>
    <title>Title as you want</title>
    <style>
        body {
            display: flex;
            justify-content: center;
        }
        .main {
            display: inline-flex;
        }
        .box {
            margin-top: 100px;
            font-size: 5em;
            color: #000;
            font-weight: 900;
        }
        #box1::after {
            content: "GeeksforGeeks";
            display: flex;
            transform: rotateX(180deg);
            -webkit-background-clip: text;
            color: #ddd;
        }
    </style>
</head>
<body>
    <div class="main">
        <div class="box" id="box1">GeeksforGeeks</div>
    </div>
</body>
</html>
Output:
mirroring the text
CSS-Properties
CSS-Questions
CSS
Web Technologies
How to apply style to parent if it has child with CSS?
How to position a div at the bottom of its container using CSS?
How to set space between the flexbox ?
Design a web page using HTML and CSS
How to Upload Image into Database and Display it using PHP ?
Remove elements from a JavaScript Array
Installation of Node.js on Linux
Convert a string to an integer in JavaScript
How to fetch data from an API in ReactJS ?
Difference between var, let and const keywords in JavaScript
|
[
{
"code": null,
"e": 25937,
"s": 25909,
"text": "\n10 Mar, 2021"
},
{
"code": null,
"e": 26198,
"s": 25937,
"text": "The Flipping effect creates the mirror image of the text. You can flip your text both horizontally and vertically. CSS3 allows adding various effects, including text flipping due to its transformation functions. You can flip a text without any JavaScript code."
},
{
"code": null,
"e": 26302,
"s": 26198,
"text": "Given below is the example of flipping the text without using JavaScript it includes only HTML and CSS."
},
{
"code": null,
"e": 26344,
"s": 26302,
"text": "There are various types of text flipping:"
},
{
"code": null,
"e": 26360,
"s": 26344,
"text": "Horizontal Flip"
},
{
"code": null,
"e": 26374,
"s": 26360,
"text": "Vertical Flip"
},
{
"code": null,
"e": 26391,
"s": 26374,
"text": "Upside Down Flip"
},
{
"code": null,
"e": 26412,
"s": 26391,
"text": "Mirror Image of text"
},
{
"code": null,
"e": 26430,
"s": 26412,
"text": "Follow the steps:"
},
{
"code": null,
"e": 26506,
"s": 26430,
"text": "Create HTML file:Use <span> element with class name “abc” (as your choice)."
},
{
"code": null,
"e": 26803,
"s": 26506,
"text": "Create CSS file:Specify the Display and Margin properties of <span>.Use the transform properties to set the flip you required ( like vertical text flip , Horizontal text flip, Upside down text flip , Mirroring of text )Add the colour if you want that your flip text should have different colour."
},
{
"code": null,
"e": 27084,
"s": 26803,
"text": "Specify the Display and Margin properties of <span>.Use the transform properties to set the flip you required ( like vertical text flip , Horizontal text flip, Upside down text flip , Mirroring of text )Add the colour if you want that your flip text should have different colour."
},
{
"code": null,
"e": 27137,
"s": 27084,
"text": "Specify the Display and Margin properties of <span>."
},
{
"code": null,
"e": 27290,
"s": 27137,
"text": "Use the transform properties to set the flip you required ( like vertical text flip , Horizontal text flip, Upside down text flip , Mirroring of text )"
},
{
"code": null,
"e": 27367,
"s": 27290,
"text": "Add the colour if you want that your flip text should have different colour."
},
{
"code": null,
"e": 27408,
"s": 27367,
"text": "Below examples illustrates the approach:"
},
{
"code": null,
"e": 27464,
"s": 27408,
"text": "Example 1: HTML CSS code to flip the Text Horizontally"
},
{
"code": null,
"e": 27469,
"s": 27464,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <title> Title you want </title> <style> span{ display: Inline-block; margin: 50px; } .GFG{ transform: scale(-1, 1); color: #000080; -moz-transform: scale(-1, 1); -webkit-transform: scale(-1, 1); -o-transform: scale(-1, 1); -ms-transform: scale(-1, 1); transform: scale(-1, 1); } </style> </head> <body> <!-- here write your text you want to flip --> <span>GeeksforGeeks</span> <!-- your class name must be as you above written with .class name --> <span class=\"GFG\">GeeksforGeeks</span> </body> </html>",
"e": 28145,
"s": 27469,
"text": null
},
{
"code": null,
"e": 28153,
"s": 28145,
"text": "Output:"
},
{
"code": null,
"e": 28176,
"s": 28153,
"text": "Flip text horizontally"
},
{
"code": null,
"e": 28231,
"s": 28176,
"text": "Example 2: HTML CSS code to flip the text upside-down."
},
{
"code": null,
"e": 28236,
"s": 28231,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <title>Title as you want</title> <style> .container { display: flex; justify-content: center; align-items: center; height: 100vh; } .backwards { display: inline; font-size: 100px; font-style: bold; -moz-transform: scale(-1, -1); -webkit-transform: scale(-1, -1); -o-transform: scale(-1, -1); -ms-transform: scale(-1, -1); transform: scale(-1, -1); } </style> </head> <body> <ul class=\"container\"> <li class=\"backwards\">G</li> <li class=\"backwards\">e</li> <li class=\"backwards\">e</li> <li class=\"backwards\">k</li> <li class=\"backwards\">S</li> </ul> </body> </html>",
"e": 28980,
"s": 28236,
"text": null
},
{
"code": null,
"e": 28988,
"s": 28980,
"text": "Output:"
},
{
"code": null,
"e": 29005,
"s": 28988,
"text": "Flip upside-down"
},
{
"code": null,
"e": 29059,
"s": 29005,
"text": "Example 3: HTML CSS code to flip the text vertically."
},
{
"code": null,
"e": 29064,
"s": 29059,
"text": "HTML"
},
{
"code": "<!DOCTYPE html> <html> <head> <title> Title you want </title> <!-- write your title between title tag --> <style> span { display: Inline-block; margin: 50px; } .GFG { transform: scale(1, -1); color: #000080; -moz-transform: scale(1, -1); -webkit-transform: scale(1, -1); -o-transform: scale(1, -1); -ms-transform: scale(1, -1); transform: scale(1, -1); } </style> </head> <body> <span>GeeksforGeeks</span> <!-- here write your text you want to flip --> <span class=\"GFG\">GeeksforGeeks</span> <!-- your class name must be as you above written with .class name --> </body> </html>",
"e": 29818,
"s": 29064,
"text": null
},
{
"code": null,
"e": 29826,
"s": 29818,
"text": "Output:"
},
{
"code": null,
"e": 29847,
"s": 29826,
"text": "flip text vertically"
},
{
"code": null,
"e": 29858,
"s": 29847,
"text": "Example 4:"
},
{
"code": null,
"e": 29863,
"s": 29858,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <title>Title as you want </title> <style> body { display: flex; justify-content: center; } .main { display: inline-flex; } .box { margin-top: 100px; font-size: 5em; color: #000; font-weight: 900; } #box1::after { content: \"GeeksforGeeks\"; display: flex; transform: rotateX(180deg); -webkit-background-clip: text; color: #ddd; } </style></head> <body> <div class=\"main\"> <div class=\"box\" id=\"box1\">GeeksforGeeks</div> </div></body> </html>",
"e": 30574,
"s": 29863,
"text": null
},
{
"code": null,
"e": 30582,
"s": 30574,
"text": "Output:"
},
{
"code": null,
"e": 30601,
"s": 30582,
"text": "mirroring the text"
},
{
"code": null,
"e": 30616,
"s": 30601,
"text": "CSS-Properties"
},
{
"code": null,
"e": 30630,
"s": 30616,
"text": "CSS-Questions"
},
{
"code": null,
"e": 30634,
"s": 30630,
"text": "CSS"
},
{
"code": null,
"e": 30651,
"s": 30634,
"text": "Web Technologies"
},
{
"code": null,
"e": 30749,
"s": 30651,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30804,
"s": 30749,
"text": "How to apply style to parent if it has child with CSS?"
},
{
"code": null,
"e": 30868,
"s": 30804,
"text": "How to position a div at the bottom of its container using CSS?"
},
{
"code": null,
"e": 30907,
"s": 30868,
"text": "How to set space between the flexbox ?"
},
{
"code": null,
"e": 30944,
"s": 30907,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 31005,
"s": 30944,
"text": "How to Upload Image into Database and Display it using PHP ?"
},
{
"code": null,
"e": 31045,
"s": 31005,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 31078,
"s": 31045,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 31123,
"s": 31078,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 31166,
"s": 31123,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Installing MongoDB on Windows with Python - GeeksforGeeks
|
06 Oct, 2021
We will explain the installation of MongoDB in steps. Before you install, I would suggest using the Spyder IDE from Anaconda.
Step 1 -> Install the Community Edition (Installation Link).
Step 2 -> Run the MongoDB Windows installer package that you just downloaded.
MongoDB gets installed here ->
C:\Program Files\MongoDB\Server\3.4\
Step 3 -> Let’s set MongoDB environment
(a) Create the data directory where all data is stored. On the C: drive create a folder data and inside it create a folder db, or run:
md C:\data\db
(b) To start MongoDB, run ->
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe"
Wait till the connection message appears.
(c) Verify the environment Path, or set it if not correctly set. Open Environment Variables (you can find this via Windows search). Under the System variables section, open Path, and add the path of the bin folder as shown in the image above.
(d) To connect to MongoDB, open another command prompt and run ->
"C:\Program Files\MongoDB\Server\3.4\bin\mongo.exe"
Step 4 -> Ready MongoDB. Open Command Prompt (Admin mode) and type ->
mongod
NOTE: Till step 4, MongoDB works only while the Command Prompt is open and listening. Now we'll see extensions to make it better.
The below steps, from step 5 to step 8, are optional.
Step 5 -> Open command prompt and run:
mkdir c:\data\db
mkdir c:\data\log
Step 6-> Create a configuration file at C:\Program Files\MongoDB\Server\3.4\mongod.cfg (name of file mongod.cfg)
systemLog:
destination: file
path: c:\data\log\mongod.log
storage:
dbPath: c:\data\db
This can be created and saved in Admin mode of Notepad or Notepad++ or any other editor to run notepad admin mode press Ctrl + Shift + Enter. Admin mode of notepad will let you create mongod.cfg and save above text file.
Step 7 -> Install the MongoDB service by starting mongod.exe with the --install option and the --config option to specify the previously created configuration file. Now run this command on the command prompt:
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe"
--config "C:\Program Files\MongoDB\Server\3.4\mongod.cfg" --install
Step 8 -> To start & stop MongoDB, run the following.
To start :
net start MongoDB
To stop :
net stop MongoDB
NOTE: ALL commands are run in Command Prompt Admin mode. To open it, either open a normal command prompt and press Ctrl+Shift+Enter, or right-click the Windows start button and choose it from the options shown.
Step 9 -> Open Anaconda Command Prompt as shown in the image.
Step 10 -> Install the package to use MongoDB. To install this package with conda, run:
conda install -c anaconda pymongo
Congratulations!! Installation completed. (Pymongo works only when MongoDB is started; use net start MongoDB to start it and then work in Spyder.) You can study and understand MongoDB in Python here.
This article is contributed by SHAURYA UPPAL. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
how-to-install
Python-mongoDB
GBlog
Installation Guide
Python
Technical Scripter
Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ...
DSA Sheet by Love Babbar
Socket Programming in C/C++
GET and POST requests using Python
Must Do Coding Questions for Product Based Companies
How to Install PIP on Windows ?
Installation of Node.js on Linux
How to install Jupyter Notebook on Windows?
How to Install FFmpeg on Windows?
How to Install Anaconda on Windows?
|
[
{
"code": null,
"e": 42677,
"s": 42649,
"text": "\n06 Oct, 2021"
},
{
"code": null,
"e": 42803,
"s": 42677,
"text": "We would explain the installation of MongoDB in steps. Before you install, I would suggest everyone use ide spyder, Anaconda."
},
{
"code": null,
"e": 42860,
"s": 42803,
"text": "Step 1 -> Install the community EditionInstallation Link"
},
{
"code": null,
"e": 42948,
"s": 42860,
"text": "Step 2 -> Run the installed MongoDB windows installer package that you just downloaded."
},
{
"code": null,
"e": 42977,
"s": 42948,
"text": "MongoDB get installed here->"
},
{
"code": null,
"e": 43014,
"s": 42977,
"text": "C:\\Program Files\\MongoDB\\Server\\3.4\\"
},
{
"code": null,
"e": 43054,
"s": 43014,
"text": "Step 3 -> Let’s set MongoDB environment"
},
{
"code": null,
"e": 43185,
"s": 43054,
"text": "(a) Create data directory where all data is stored.On C: drive create a folder data inside it create a folder dborRunmd C:\\data\\db"
},
{
"code": null,
"e": 43199,
"s": 43185,
"text": "md C:\\data\\db"
},
{
"code": null,
"e": 43319,
"s": 43199,
"text": "(b) To start MongoDBRun ->\"C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\mongod.exe\"\nWait till the connection message appears"
},
{
"code": null,
"e": 43373,
"s": 43319,
"text": "\"C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\mongod.exe\"\n"
},
{
"code": null,
"e": 43414,
"s": 43373,
"text": "Wait till the connection message appears"
},
{
"code": null,
"e": 43693,
"s": 43414,
"text": "(c) Verify Environment Path or set path if not correctly setOpen environment variables, you can search this by windows search.Open Environment Variable under the System variables section open Path.This would look like this.Add the path of bin folder as shown in the image above."
},
{
"code": null,
"e": 43820,
"s": 43693,
"text": "(c) Verify Environment Path or set path if not correctly setOpen environment variables, you can search this by windows search."
},
{
"code": null,
"e": 43973,
"s": 43820,
"text": "Open Environment Variable under the System variables section open Path.This would look like this.Add the path of bin folder as shown in the image above."
},
{
"code": null,
"e": 44085,
"s": 43973,
"text": "(d) To Connect to MongoDBOpen other command prompt and run->\"C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\mongo.exe\n"
},
{
"code": null,
"e": 44146,
"s": 44085,
"text": "(d) To Connect to MongoDBOpen other command prompt and run->"
},
{
"code": null,
"e": 44198,
"s": 44146,
"text": "\"C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\mongo.exe\n"
},
{
"code": null,
"e": 44259,
"s": 44198,
"text": "Step 4-> Ready MongoDBOpen Command Prompt(Admin mode) type->"
},
{
"code": null,
"e": 44266,
"s": 44259,
"text": "mongod"
},
{
"code": null,
"e": 44402,
"s": 44266,
"text": "NOTE : Till step 4 MongoDB will work only when the Command Prompt is open and it’s listening.Now we’ll see Extension to make it better."
},
{
"code": null,
"e": 44488,
"s": 44402,
"text": "Below steps from step 5 to step 8 are optional :Step 5-> Open command prompt and run-"
},
{
"code": null,
"e": 44524,
"s": 44488,
"text": "mkdir c:\\data\\db\nmkdir c:\\data\\log\n"
},
{
"code": null,
"e": 44637,
"s": 44524,
"text": "Step 6-> Create a configuration file at C:\\Program Files\\MongoDB\\Server\\3.4\\mongod.cfg (name of file mongod.cfg)"
},
{
"code": null,
"e": 44736,
"s": 44637,
"text": "systemLog:\n destination: file\n path: c:\\data\\log\\mongod.log\nstorage:\n dbPath: c:\\data\\db\n"
},
{
"code": null,
"e": 44957,
"s": 44736,
"text": "This can be created and saved in Admin mode of Notepad or Notepad++ or any other editor to run notepad admin mode press Ctrl + Shift + Enter. Admin mode of notepad will let you create mongod.cfg and save above text file."
},
{
"code": null,
"e": 45158,
"s": 44957,
"text": "Step 7 -> Install the MongoDB service by starting mongod.exe with the –install option and the -config option to specify the previously created configuration file.Now run this command on command prompt"
},
{
"code": null,
"e": 45280,
"s": 45158,
"text": "\"C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\mongod.exe\" \n--config \"C:\\Program Files\\MongoDB\\Server\\3.4\\mongod.cfg\" --install"
},
{
"code": null,
"e": 45327,
"s": 45280,
"text": "Step 8-> To start & stop MongoDB runTo start :"
},
{
"code": null,
"e": 45345,
"s": 45327,
"text": "net start MongoDB"
},
{
"code": null,
"e": 45355,
"s": 45345,
"text": "To stop :"
},
{
"code": null,
"e": 45373,
"s": 45355,
"text": "net stop MongoDB\n"
},
{
"code": null,
"e": 45667,
"s": 45373,
"text": "NOTE : ALL commands are run on Command Prompt Admin mode, to open command prompt Admin Mode either open normal command prompt and press Ctrl+Shift+Enter or Right click on left windows icon start button where you can see the options.Step 9 -> Open Anaconda Command Prompt as shown in the image."
},
{
"code": null,
"e": 45748,
"s": 45667,
"text": "Step 10 -> Install package to use MongoDBTo install this package with conda run:"
},
{
"code": null,
"e": 45784,
"s": 45748,
"text": "conda install -c anaconda pymongo \n"
},
{
"code": null,
"e": 45982,
"s": 45784,
"text": "Congratulations!! Installation completed.( Pymongo works only when MongoDB is started, use net start MongoDB to start it and then work on spyder)You can study and understand MongoDB in python here."
},
{
"code": null,
"e": 46279,
"s": 45982,
"text": "This article is contributed by SHAURYA UPPAL. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 46404,
"s": 46279,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 46419,
"s": 46404,
"text": "how-to-install"
},
{
"code": null,
"e": 46434,
"s": 46419,
"text": "Python-mongoDB"
},
{
"code": null,
"e": 46440,
"s": 46434,
"text": "GBlog"
},
{
"code": null,
"e": 46459,
"s": 46440,
"text": "Installation Guide"
},
{
"code": null,
"e": 46466,
"s": 46459,
"text": "Python"
},
{
"code": null,
"e": 46485,
"s": 46466,
"text": "Technical Scripter"
},
{
"code": null,
"e": 46583,
"s": 46485,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 46657,
"s": 46583,
"text": "Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ..."
},
{
"code": null,
"e": 46682,
"s": 46657,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 46710,
"s": 46682,
"text": "Socket Programming in C/C++"
},
{
"code": null,
"e": 46745,
"s": 46710,
"text": "GET and POST requests using Python"
},
{
"code": null,
"e": 46798,
"s": 46745,
"text": "Must Do Coding Questions for Product Based Companies"
},
{
"code": null,
"e": 46830,
"s": 46798,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 46863,
"s": 46830,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 46907,
"s": 46863,
"text": "How to install Jupyter Notebook on Windows?"
},
{
"code": null,
"e": 46941,
"s": 46907,
"text": "How to Install FFmpeg on Windows?"
}
] |
Z-Buffer or Depth-Buffer method - GeeksforGeeks
|
06 Mar, 2018
When viewing a picture containing non-transparent objects and surfaces, it is not possible to see, from the current viewpoint, objects that lie behind objects closer to the eye. To get a realistic screen image, removing these hidden surfaces is a must. The identification and removal of these surfaces is called the hidden-surface problem.
Z-buffer, also known as the depth-buffer method, is one of the commonly used methods for hidden-surface detection. It is an image-space method: image-space methods work on the pixels to be drawn in 2D. For these methods, the running time complexity is the number of pixels times the number of objects, and the space complexity is twice the number of pixels, because two arrays of pixels are required: one for the frame buffer and the other for the depth buffer.
The Z-buffer method compares surface depths at each pixel position on the projection plane. Normally the z-axis represents the depth. The algorithm for the Z-buffer method is given below:
Algorithm :
First of all, initialize the depth of each pixel.
i.e, d(i, j) = infinite (max length)
Initialize the color value for each pixel
as c(i, j) = background color
for each polygon, do the following steps :
for (each pixel in polygon's projection)
{
find depth i.e, z of polygon
at (x, y) corresponding to pixel (i, j)
if (z < d(i, j))
{
d(i, j) = z;
c(i, j) = color;
}
}
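The pseudocode above can be turned into runnable code. The following is a minimal Python sketch, assuming (as a simplification) that each polygon has already been rasterized into a list of (i, j, z) pixel samples with a color; a real renderer would compute these samples during scan conversion.

```python
import math

def z_buffer(width, height, polygons, background=" "):
    """Rasterize pre-sampled polygons with a depth buffer.

    Each polygon is a (color, pixels) pair, where pixels is a list of
    (i, j, z) samples. This input format is a hypothetical
    simplification for illustration.
    """
    # Step 1: the depth of every pixel starts at infinity,
    # and the color starts at the background color.
    d = [[math.inf] * width for _ in range(height)]
    c = [[background] * width for _ in range(height)]

    # Step 2: for each polygon, keep a sample only if it is
    # closer (smaller z) than what is already stored.
    for color, pixels in polygons:
        for i, j, z in pixels:
            if z < d[j][i]:
                d[j][i] = z
                c[j][i] = color
    return c

# Two overlapping one-pixel "polygons": the one with smaller z wins.
frame = z_buffer(2, 1, [("A", [(0, 0, 3.0)]), ("B", [(0, 0, 1.0)])])
print(frame[0][0])  # B, since the closer surface determines the color
```

Note how the nested loop mirrors the pseudocode exactly: one depth comparison per pixel per polygon, with no object-to-object comparison.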
Let’s consider an example to understand the algorithm better. Assume the polygon given is as below:
Initially, assume that the depth of each pixel is infinite.
Since the z value, i.e. the depth value, at every position in the given polygon is 3, applying the algorithm gives:
Now, let’s change the z values. In the figure given below, the z values go from 0 to 3.
Initially, the depth of each pixel will again be infinite:
Now the z values generated at each pixel will differ, as shown below:
Therefore, in the Z-buffer method, each surface is processed separately, one position at a time across the surface. The depth values, i.e. the z values, for a pixel are then compared, and the closest (smallest z) surface determines the color to be displayed in the frame buffer. The z values are usually normalized to the range [0, 1]: with the convention used here (smaller z is closer), z = 0 corresponds to the front clipping plane and z = 1 to the back clipping plane.
In this method, 2 buffers are used :
Frame buffer
Depth buffer
Calculation of depth: As we know, the equation of the plane is:
ax + by + cz + d = 0, this implies
z = -(ax + by + d)/c, c!=0
Calculating each depth directly could be very expensive, but the computation can be reduced to a single addition per pixel by using an incremental method, as shown in the figure below:
Let’s denote the depth at point A as Z and at point B as Z’. Therefore :
AX + BY + CZ + D = 0 implies
Z = (-AX - BY - D)/C ------------(1)
Similarly, Z' = (-A(X + 1) - BY -D)/C ----------(2)
Hence from (1) and (2), we conclude :
Z' = Z - A/C ------------(3)
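Equation (3) can be checked numerically. The plane coefficients below are arbitrary values chosen for illustration:

```python
# Plane AX + BY + CZ + D = 0, with arbitrary example coefficients.
A, B, C, D = 2.0, -1.0, 4.0, 8.0

def depth(x, y):
    # Direct evaluation: Z = (-A*x - B*y - D) / C
    return (-A * x - B * y - D) / C

x, y = 3.0, 5.0
z = depth(x, y)

# Moving one pixel to the right needs only one subtraction (eq. 3):
z_next = z - A / C
print(z_next, depth(x + 1, y))  # both values agree
```

This is why a scanline renderer stores A/C once per polygon and updates the depth incrementally instead of re-evaluating the plane equation at every pixel.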
Hence, the depth can be calculated by recording the plane equation of each polygon in the (normalized) viewing coordinate system and then using the incremental method to find the depth Z. To summarize, this approach compares surface depths at each pixel position on the projection plane. Object depth is usually measured from the view plane along the z-axis of the viewing system.
Example:
Let S1, S2, S3 be the surfaces. The surface closest to the projection plane is called the visible surface. The computer would start (arbitrarily) with surface S1 and put its depth value into the buffer. It would do the same for the next surface. It would then check each overlapping pixel to see which surface is closer to the viewer and display the appropriate color. Since at view-plane position (x, y) surface S1 has the smallest depth from the view plane, it is visible at that position.
Points to remember:
1) The Z-buffer method does not require pre-sorting of polygons.
2) It can be executed quickly even with many polygons.
3) It can be implemented in hardware to overcome the speed problem.
4) No object-to-object comparison is required.
5) It can be applied to non-polygonal objects.
6) Hardware implementations of this algorithm are available in some graphics workstations.
7) The method is simple to use and does not require an additional data structure.
8) The z-value of a polygon can be calculated incrementally.
9) It cannot be applied to transparent surfaces, i.e. it only deals with opaque surfaces.
10) If only a few objects in the scene are to be rendered, the method is less attractive because of the additional buffer and the overhead involved in updating it.
11) Time may be wasted drawing hidden objects.
computer-graphics
Technical Scripter
|
PHP | Encapsulation - GeeksforGeeks
|
10 Oct, 2019
In today’s technical world, maintaining privacy has become one of the pressing needs for the protection of important data. Whenever data modified in one function affects other functions, it causes a lot of problems in any software. To overcome this problem, object-oriented programming in PHP uses the concept of encapsulation.
The OOP concept of encapsulation in PHP means enclosing the internal details of an object to protect them from external sources. It describes combining the class, its data variables, and the member functions that work on that data into a single unit to form an object. In other words, it is the bundling of properties and behavior in a single class unit.
Data is not accessed directly; instead, it is accessed through functions (GET or SET) written inside the class. Attributes are kept private, but getter (GET) and setter (SET) methods are kept public for manipulating these attributes.
PHP program for encapsulation: The methods in the following program update a password and check a course name. The GFG class defines all the operations related to GFG users.
<?php
// PHP program to implement encapsulation
class GFG {
    private $userId;
    private $pwd;

    // Update GFG password
    public function updatePwd($userId, $pwd) {
        // Write function body
        echo("Function to update password '" . $pwd . "' for user " . $userId);
        echo "<br>";
    }

    // Check course name
    public function courseName($userId) {
        // Write function body
        echo("Function to check course name of user " . $userId);
        echo "<br>";
    }
}

$obj = new GFG();
$obj->updatePwd('GFG12', 'geeks54321');
$obj->courseName('GFG06');
?>
Output:
Function to update password 'geeks54321' for user GFG12
Function to check course name of user GFG06
Note: The data members and class properties are not accessible to the outer end-user. So they cannot change the properties.
Program to Access Variables
<?php
class Student {
    private $firstname;
    private $gender;

    public function getFirstName() {
        return $this->firstname;
    }

    public function setFirstName($firstname) {
        $this->firstname = $firstname;
        echo("First name is set to " . $firstname);
        echo("<br>");
    }

    public function getGender() {
        return $this->gender;
    }

    public function setGender($gender) {
        if ('Male' !== $gender and 'Female' !== $gender) {
            echo('Set gender as Male or Female for gender');
        }
        $this->gender = $gender;
        echo("Gender is set to " . $gender);
        echo("<br>");
    }
}

$student = new Student();
$student->setFirstName('Meena');
$student->setGender('Female');
?>
Output:
First name is set to Meena
Gender is set to Female
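For comparison, the same getter/setter idea can be sketched in Python. This example is an illustration added here, not part of the original PHP program; Python has no private keyword, so a leading underscore marks the attribute as internal by convention.

```python
class Student:
    """Encapsulation sketch: the attribute is kept internal and is
    accessed only through getter/setter methods."""

    def __init__(self):
        self._firstname = None  # underscore marks it internal by convention

    def get_first_name(self):
        return self._firstname

    def set_first_name(self, firstname):
        # The setter can validate new values before storing them.
        if not firstname:
            raise ValueError("first name must be non-empty")
        self._firstname = firstname

s = Student()
s.set_first_name("Meena")
print(s.get_first_name())  # Meena
```

As in the PHP version, validation logic lives in one place (the setter) instead of being scattered across every piece of code that touches the attribute.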
Note:
Encapsulation can be used if the properties of the object are private and are updated through public methods.
Encapsulation in PHP can be achieved using access specifiers.
Be careful when combining encapsulation with inheritance, as inheritance can often undermine encapsulation.
Inheritance exposes some details of a parent class, effectively breaking encapsulation.
Advantages of Encapsulation:
Data hiding and abstraction: Unnecessary details, internal representation, and implementation are hidden from end users to protect the data structure and the class. Data access from members of other classes is prohibited by creating private members. This protects the internal state of any object, keeping member variables private and preventing any inconsistent state. It is the enclosing of data and related operations into that object.
Note: Encapsulation is used to hide internal views from the client.
Data security: Encapsulation helps in making data very robust and secured, as the data and member functions are wrapped together to form an object. All the tasks are done inside without any external worry and it also makes life very easy.
Reduces complexity: Encapsulation helps in reducing development complexity of the software by hiding the details of implementation and exposing the methods or operations.
Reusability: In many cases you do not have to rewrite functionality that you inherited from the parent class.
Reliability: You can make the class read-only or write-only by writing SET or GET methods.
Easier testing of code: Encapsulated PHP code is easy to test as the functions used for testing child class ensures the testing of parent class functions also.
Increased flexibility: Class variables can be accessed by GET or SET methods increasing flexibility. It is easy to maintain as the internal implementation can be changed without changing the code.
Conclusion: Object-oriented programming in PHP uses encapsulation for information hiding. It restricts direct access to the attributes of a class; getter and setter methods prevent unwanted external access and help validate new values assigned to the properties. In short, encapsulation in PHP is the process of hiding the internal details of an object that do not contribute to its essential characteristics.
PHP-OOP
PHP
Web Technologies
|
Getting the maximum value from a list in Julia - max() Method - GeeksforGeeks
|
21 Apr, 2020
max() is a built-in function in Julia which returns the maximum value of its parameters.
Syntax: max(x, y, ...)
Parameters:
x: Specified 1st value.
y: Specified 2nd value and so on.
Returns: It returns the maximum value of the parameters.
Example 1:
# Julia program to illustrate
# the use of max() method

# Getting the maximum value of the parameters
println(max(1, 2, 3, 4))
println(max(1, 3, 5))
println(max(2, 4, 6))
Output:
4
5
6
Example 2:
# Julia program to illustrate
# the use of max() method

# Getting the maximum value of the parameters
println(max(0.6, 0.7, 0.2))
println(max(2.5, 3.5, 0.3, 1.7))
Output:
0.7
3.5
Julia
|
Python - Find first element by second in tuple List - GeeksforGeeks
|
09 May, 2022
Sometimes, while working with Python records, we can have a problem in which we need to find the first element of a tuple given its second element. This kind of problem can occur in domains such as web development. Let’s discuss certain ways in which this task can be performed.
Input : test_list = [(4, 5), (5, 6), (1, 3), (6, 6)], K = 6
Output : [5, 6]

Input : test_list = [(4, 5), (5, 7), (1, 3), (6, 8)], K = 6
Output : []
Method #1: Using list comprehension. This is one of the ways in which this task can be performed. Here we iterate over each tuple, and if its second element matches the key, we store the first element in the result list.
Python3
# Python3 code to demonstrate working of
# Find first element by second in tuple List
# Using list comprehension

# initializing list
test_list = [(4, 5), (5, 6), (1, 3), (6, 9)]

# printing original list
print("The original list is : " + str(test_list))

# initializing K
K = 6

# Find first element by second in tuple List
# Using list comprehension
res = [x for (x, y) in test_list if y == K]

# printing result
print("The key from value : " + str(res))
The original list is : [(4, 5), (5, 6), (1, 3), (6, 9)]
The key from value : [5]
Method #2: Using next() + generator expression. This is yet another way in which this task can be solved. Here, next() fetches the first matching element from a generator expression that checks the condition, returning None if no match is found (which is why the result is a single value rather than a list).
Python3
# Python3 code to demonstrate working of
# Find first element by second in tuple List
# Using next() + generator expression

# initializing list
test_list = [(4, 5), (5, 6), (1, 3), (6, 9)]

# printing original list
print("The original list is : " + str(test_list))

# initializing K
K = 6

# Find first element by second in tuple List
# Using next() + generator expression
res = next((x for x, y in test_list if y == K), None)

# printing result
print("The key from value : " + str(res))
The original list is : [(4, 5), (5, 6), (1, 3), (6, 9)]
The key from value : 5
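When many lookups by second element are needed, rescanning the list for every query is wasteful. One possible alternative, added here for illustration and not one of the article's original methods, is to build a mapping once and then answer each lookup in constant time:

```python
from collections import defaultdict

test_list = [(4, 5), (5, 6), (1, 3), (6, 9)]

# Build a second-element -> list-of-first-elements map once, in O(n).
by_second = defaultdict(list)
for x, y in test_list:
    by_second[y].append(x)

print(by_second[6])   # [5]
print(by_second[42])  # [], no tuple has 42 as its second element
```

The one-time build costs O(n), after which each lookup is O(1) on average, compared with O(n) per query for the list-scanning methods above.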
singghakshay
sumitgumber28
Python List-of-Tuples
Python list-programs
Python
|
[
{
"code": null,
"e": 27979,
"s": 27957,
"text": "Defaultdict in Python"
}
] |
response.json() - Python requests - GeeksforGeeks
|
22 Nov, 2021
response.json() returns a JSON object of the result (if the result was written in JSON format, if not it raises an error). Python requests are generally used to fetch the content from a particular resource URI. Whenever we make a request to a specified URI through Python, it returns a response object. Now, this response object would be used to access certain features such as content, headers, etc. This article revolves around how to check the response.json() out of a response object. It is one of the most used methods in the requests module.
To illustrate the use of response.json(), let’s ping geeksforgeeks.org. To run this script, you need to have Python and requests installed on your PC.
Download and Install Python 3 Latest Version
How to install requests in Python – For windows, linux, mac
Example code:
Python3
# import requests moduleimport requests # Making a get requestresponse = requests.get('https://api.github.com') # print responseprint(response) # print json contentprint(response.json())
Example Implementation:
Save the above file as request.py and run using
Python request.py
Output:
Check the json content at the terminal output. It returns a Python dictionary.
There are many libraries to make an HTTP request in Python, which are httplib, urllib, httplib2, treq, etc., but requests is the one of the best with cool features. If any attribute of requests shows NULL, check the status code using below attribute.
requests.status_code
If status_code doesn’t lie in range of 200-29. You probably need to check method begin used for making a request + the url you are requesting for resources.
vw23kz0zj19z181y27qygo2n5135r7k77r78h55l
Python-requests
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
Enumerate() in Python
Different ways to create Pandas Dataframe
Python String | replace()
*args and **kwargs in Python
Convert integer to string in Python
Check if element exists in list in Python
Create a Pandas DataFrame from Lists
sum() function in Python
isupper(), islower(), lower(), upper() in Python and their applications
|
[
{
"code": null,
"e": 25713,
"s": 25685,
"text": "\n22 Nov, 2021"
},
{
"code": null,
"e": 26262,
"s": 25713,
"text": "response.json() returns a JSON object of the result (if the result was written in JSON format, if not it raises an error). Python requests are generally used to fetch the content from a particular resource URI. Whenever we make a request to a specified URI through Python, it returns a response object. Now, this response object would be used to access certain features such as content, headers, etc. This article revolves around how to check the response.json() out of a response object. It is one of the most used methods in the requests module. "
},
{
"code": null,
"e": 26414,
"s": 26262,
"text": "To illustrate the use of response.json(), let’s ping geeksforgeeks.org. To run this script, you need to have Python and requests installed on your PC. "
},
{
"code": null,
"e": 26459,
"s": 26414,
"text": "Download and Install Python 3 Latest Version"
},
{
"code": null,
"e": 26520,
"s": 26459,
"text": "How to install requests in Python – For windows, linux, mac "
},
{
"code": null,
"e": 26534,
"s": 26520,
"text": "Example code:"
},
{
"code": null,
"e": 26542,
"s": 26534,
"text": "Python3"
},
{
"code": "# import requests moduleimport requests # Making a get requestresponse = requests.get('https://api.github.com') # print responseprint(response) # print json contentprint(response.json())",
"e": 26729,
"s": 26542,
"text": null
},
{
"code": null,
"e": 26753,
"s": 26729,
"text": "Example Implementation:"
},
{
"code": null,
"e": 26802,
"s": 26753,
"text": "Save the above file as request.py and run using "
},
{
"code": null,
"e": 26820,
"s": 26802,
"text": "Python request.py"
},
{
"code": null,
"e": 26828,
"s": 26820,
"text": "Output:"
},
{
"code": null,
"e": 26909,
"s": 26830,
"text": "Check the json content at the terminal output. It returns a Python dictionary."
},
{
"code": null,
"e": 27161,
"s": 26909,
"text": "There are many libraries to make an HTTP request in Python, which are httplib, urllib, httplib2, treq, etc., but requests is the one of the best with cool features. If any attribute of requests shows NULL, check the status code using below attribute. "
},
{
"code": null,
"e": 27182,
"s": 27161,
"text": "requests.status_code"
},
{
"code": null,
"e": 27339,
"s": 27182,
"text": "If status_code doesn’t lie in range of 200-29. You probably need to check method begin used for making a request + the url you are requesting for resources."
},
{
"code": null,
"e": 27380,
"s": 27339,
"text": "vw23kz0zj19z181y27qygo2n5135r7k77r78h55l"
},
{
"code": null,
"e": 27396,
"s": 27380,
"text": "Python-requests"
},
{
"code": null,
"e": 27403,
"s": 27396,
"text": "Python"
},
{
"code": null,
"e": 27501,
"s": 27403,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27533,
"s": 27501,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27555,
"s": 27533,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 27597,
"s": 27555,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 27623,
"s": 27597,
"text": "Python String | replace()"
},
{
"code": null,
"e": 27652,
"s": 27623,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 27688,
"s": 27652,
"text": "Convert integer to string in Python"
},
{
"code": null,
"e": 27730,
"s": 27688,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 27767,
"s": 27730,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 27792,
"s": 27767,
"text": "sum() function in Python"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.