Students can see what assignments they have submitted using `nbgrader list --inbound`:
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader list --inbound
[ListApp | INFO] Submitted assignments:
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC
BSD-3-Clause-Clear
nbgrader/docs/source/user_guide/managing_assignment_files.ipynb
dechristo/nbgrader
Importantly, students can run `nbgrader submit` as many times as they want, and all submitted copies of the assignment will be preserved:
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader submit "ps1"
[SubmitApp | INFO] Source: /private/tmp/student_home/ps1
[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:43.070290 UTC
[SubmitApp | INFO] Submitted as: example_course ps1 2018-04-22 14:29:43.070290 UTC
We can see all versions that have been submitted by again running `nbgrader list --inbound`:
%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader list --inbound
[ListApp | INFO] Submitted assignments:
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC
Note that the `nbgrader submit` command (like `nbgrader fetch`) does not rely on having access to the nbgrader database -- the database is only used by instructors. ``nbgrader`` requires that the submitted notebook names match the released notebook names for each assignment. For example, if a student were to ...
%%bash
export HOME=/tmp/student_home && cd $HOME

# assume the student renamed the assignment file
mv ps1/problem1.ipynb ps1/myproblem1.ipynb

nbgrader submit "ps1"
[SubmitApp | INFO] Source: /private/tmp/student_home/ps1
[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:46.167901 UTC
[SubmitApp | WARNING] Possible missing notebooks and/or extra notebooks submitted for assignment ps1:
    Expected: problem1.ipynb: MISSING ...
By default this assignment will still be submitted; however, only the "FOUND" notebooks (for the given assignment) can be ``autograded`` and will appear in the ``formgrade`` extension. "EXTRA" notebooks will not be ``autograded`` and will not appear in the ``formgrade`` extension. To ensure that students cannot submit an...
%%file /tmp/student_home/nbgrader_config.py

c = get_config()
c.Exchange.root = '/tmp/exchange'
c.Exchange.course_id = "example_course"
c.ExchangeSubmit.strict = True

%%bash
export HOME=/tmp/student_home && cd $HOME
nbgrader submit "ps1"
[SubmitApp | INFO] Source: /private/tmp/student_home/ps1
[SubmitApp | INFO] Destination: /tmp/exchange/example_course/inbound/jhamrick+ps1+2018-04-22 14:29:47.497419 UTC
[SubmitApp | CRITICAL] Assignment ps1 not submitted. There are missing notebooks for the submission:
    Expected: problem1.ipynb: MISSING p...
Collecting assignments
.. seealso::

    :doc:`creating_and_grading_assignments`
        Details on grading assignments after they have been collected

    :doc:`/command_line_tools/nbgrader-collect`
        Command line options for ``nbgrader collect``

    :doc:`/command_line_tools/nbgrader-list`
        Command line options for ``nbgrader l...
First, as a reminder, here is what the instructor's `nbgrader_config.py` file looks like:
%%bash
cat nbgrader_config.py
c = get_config()
c.Exchange.course_id = "example_course"
c.Exchange.root = "/tmp/exchange"
From the formgrader

From the formgrader extension, we can collect submissions by clicking on the "collect" button:

![](images/manage_assignments7.png)

As with releasing, this will display a pop-up window when the operation is complete, telling you how many submissions were collected:

![](images/collect_assignment.png)

Fro...
%%bash
nbgrader list --inbound
[ListApp | INFO] Submitted assignments:
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:40.397476 UTC
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:43.070290 UTC
[ListApp | INFO] example_course jhamrick ps1 2018-04-22 14:29:46.167901 UTC
The instructor can then collect all submitted assignments by running `nbgrader collect` and passing the name of the assignment (as with the other instructor-facing nbgrader commands, this must be run from the root of the course directory):
%%bash
nbgrader collect "ps1"
[CollectApp | INFO] Processing 1 submissions of 'ps1' for course 'example_course'
[CollectApp | INFO] Collecting submission: jhamrick ps1
This will copy the student submissions to the `submitted` folder in a way that is automatically compatible with `nbgrader autograde`:
%%bash
ls -l submitted
total 0
drwxr-xr-x  3 jhamrick  staff  96 May 31  2017 bitdiddle
drwxr-xr-x  3 jhamrick  staff  96 May 31  2017 hacker
drwxr-xr-x  3 jhamrick  staff  96 Apr 22 15:29 jhamrick
A common way of speeding up a machine learning algorithm is to use Principal Component Analysis (PCA). If your learning algorithm is too slow because the input dimension is too high, then using PCA to speed it up can be a reasonable choice.
X3 = pid2  # X denotes the input features and here class defines whether the person is ill or not
print(X1)
y3 = pid['class']  # y denotes the output labels
print(y1)

from sklearn.model_selection import train_test_split
# As given we are assigning 70% of data for training and 30%...
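The cell above is truncated and `pid2` is defined elsewhere in the notebook, so here is a minimal, self-contained sketch of the PCA speed-up described in the text, using synthetic data (all names and sizes are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the notebook's feature matrix: 100 samples, 50 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

# Keep just enough components to explain 95% of the variance,
# shrinking the input dimension before feeding a slow learning algorithm
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)
```

The reduced matrix can then be passed to `train_test_split` and the classifier in place of the raw features.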
MIT
IIT Mandi/3rd Week/IITMANDI.Assignment2(Week 3)-checkpoint.ipynb
thechiragthakur/Data-Science-Using-Python
The Transformer Decoder: Ungraded Lab Notebook

In this notebook, you'll explore the transformer decoder and how to implement it with Trax.

Background

In the last lecture notebook, you saw how to translate the mathematics of attention into NumPy code. Here, you'll see how multi-head causal attention fits into a GPT-2 tr...
import sys
import os
import time
import numpy as np
import gin
import textwrap
wrapper = textwrap.TextWrapper(width=70)

import trax
from trax import layers as tl
from trax.fastmath import numpy as jnp

# to print the entire np array
np.set_printoptions(threshold=sys.maxsize)
INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0
MIT
NLP/Attention/2/C4_W2_lecture_notebook_Transformer_Decoder.ipynb
verneh/DataSci
Sentence gets embedded, add positional encoding

Embed the words, then create vectors representing each word's position in each sentence, $\in \{0, 1, 2, \ldots, K\}$ = `range(max_len)`, where `max_len` $= K + 1$.
def PositionalEncoder(vocab_size, d_model, dropout, max_len, mode):
    """Returns a list of layers that:
    1. takes a block of text as input,
    2. embeds the words in that text, and
    3. adds positional encoding, i.e. associates a number in range(max_len) with each word in each sentence of emb...
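As a toy illustration of the idea (not the actual Trax layers), each token's position index in `range(max_len)` selects one positional vector, which is added to the word embedding; the shapes below are made up for the example:

```python
import numpy as np

max_len, d_model = 6, 4                      # illustrative sizes
word_emb = np.random.rand(max_len, d_model)  # embedded sentence (one row per word)
pos_emb = np.random.rand(max_len, d_model)   # one vector per position 0..K
positions = np.arange(max_len)               # 0, 1, 2, ..., K = range(max_len)

# Positional encoding: add the position vector to each word's embedding
encoded = word_emb + pos_emb[positions]
print(encoded.shape)  # (6, 4)
```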
Multi-head causal attention

The layers and array dimensions involved in multi-head causal attention (which looks at previous words in the input text) are summarized in the figure below: `tl.CausalAttention()` does all of this for you! You might be wondering, though, whether you need to pass in your input text 3 times, ...
def FeedForward(d_model, d_ff, dropout, mode, ff_activation):
    """Returns a list of layers that implements a feed-forward block.

    The input is an activation tensor.

    Args:
        d_model (int): depth of embedding.
        d_ff (int): depth of feed-forward layer.
        dropout (float): dropout rate (how m...
Decoder block

Here, we return a list containing two residual blocks. The first wraps around the causal attention layer, whose inputs are normalized and to which we apply dropout regularization. The second wraps around the feed-forward layer. You may notice that the second call to `tl.Residual()` doesn't call a normalizatio...
def DecoderBlock(d_model, d_ff, n_heads, dropout, mode, ff_activation):
    """Returns a list of layers that implements a Transformer decoder block.

    The input is an activation tensor.

    Args:
        d_model (int): depth of embedding.
        d_ff (int): depth of feed-forward layer.
        n_...
The transformer decoder: putting it all together

A.k.a. repeat N times, dense layer and softmax for output.
def TransformerLM(vocab_size=33300,
                  d_model=512,
                  d_ff=2048,
                  n_layers=6,
                  n_heads=8,
                  dropout=0.1,
                  max_len=4096,
                  mode='train',
                  ff_activation=tl.Relu):
    """Returns a Transformer...
Summary statistics in VCF format

Modified from the create_vcf function of the mrcieu/gwasvcf package, to transform the mash output matrices from the rds format into a vcf file, with an effect size equal to the coef and an se = 1, stored as EF:SE.

Input: a collection of gene-level rds files, each file a matrix of mash output, with colnames...
[global]
import glob
# single column file each line is the data filename
parameter: analysis_units = path
# Path to data directory
parameter: data_dir = "/"
# data file suffix
parameter: data_suffix = ""
# Path to work directory where output locates
parameter: wd = path("./output")
# An identifier for your run of analy...
MIT
pipeline/misc/rds_to_vcf.ipynb
floutt/xqtl-pipeline
Convolutional Dictionary Learning
=================================

This example demonstrates the use of [prlcnscdl.ConvBPDNDictLearn_Consensus](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.prlcnscdl.html#sporco.dictlrn.prlcnscdl.ConvBPDNDictLearn_Consensus) for learning a convolutional dictionary from a set of...
from __future__ import print_function
from builtins import input

import pyfftw   # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np

from sporco.dictlrn import prlcnscdl
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
BSD-3-Clause
cdl/cbpdndl_parcns_clr.ipynb
bwohlberg/sporco-notebooks
Load training images.
exim = util.ExampleImages(scaled=True, zoom=0.25)
S1 = exim.image('barbara.png', idxexp=np.s_[10:522, 100:612])
S2 = exim.image('kodim23.png', idxexp=np.s_[:, 60:572])
S3 = exim.image('monarch.png', idxexp=np.s_[:, 160:672])
S4 = exim.image('sail.png', idxexp=np.s_[:, 210:722])
S5 = exim.image('tulips.png', idxexp=np.s...
Highpass filter training images.
npd = 16
fltlmbd = 5
sl, sh = signal.tikhonov_filter(S, fltlmbd, npd)
Construct initial dictionary.
np.random.seed(12345)
D0 = np.random.randn(8, 8, 3, 64)
Set regularization parameter and options for dictionary learning solver.
lmbda = 0.2
opt = prlcnscdl.ConvBPDNDictLearn_Consensus.Options({
    'Verbose': True, 'MaxMainIter': 200,
    'CBPDN': {'rho': 50.0*lmbda + 0.5},
    'CCMOD': {'rho': 1.0, 'ZeroMean': True}})
Create solver object and solve.
d = prlcnscdl.ConvBPDNDictLearn_Consensus(D0, sh, lmbda, opt)
D1 = d.solve()
print("ConvBPDNDictLearn_Consensus solve time: %.2fs" % d.timer.elapsed('solve'))
Itn   Fnc       DFid      Regℓ1
----------------------------------
Display initial and final dictionaries.
D1 = D1.squeeze()
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(util.tiledict(D0), title='D0', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(util.tiledict(D1), title='D1', fig=fig)
fig.show()
Get iteration statistics from the solver object and plot the functional value.
its = d.getitstat()
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional')
A little notebook to help visualise the official numbers, for personal use. Absolutely no guarantees are made.

**This is not a replacement for expert advice. Please listen to your local health authorities.**

The data is dynamically loaded from: https://github.com/CSSEGISandData/COVID-19
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt
import pandas as pd
from jhu_helpers import *

jhu = aggregte_jhu_by_state(*get_jhu_data())
#jhu.confirmed.columns.tolist()  # print a list of all countries in the data set

# look at recent numbers from highly affected count...
MIT
international_cases.ipynb
debsankha/covid-19
Question 1:
Create a checker board generator, which takes as inputs n and 2 elements to generate an n x n checkerboard with those two elements as alternating squares.

Examples

checker_board(2, 7, 6)
[
  [7, 6],
  [6, 7]
]

checker_board(3, "A", "B")
[
  ["A", "B", "A"],
  ["B", "A", "B"],
  ["A", "B", "A"]
]

checker_board(4, "c", "d...
MIT
Python Advance Programming Assignment/Assignment_19.ipynb
kpsanjeet/Python-Programming-Basic-Assignment
Answer:
def checker_board(n, a, b):
    if a == b:
        return "invalid"
    board = []
    for i in range(n):
        temp = []
        for j in range(n):
            temp.append(a)
            a, b = b, a
        b, a = temp[0:2]
        board.append(temp)
    return board

for i in checker_boa...
[7, 6]
[6, 7]
['A', 'B', 'A']
['B', 'A', 'B']
['A', 'B', 'A']
['c', 'd', 'c', 'd']
['d', 'c', 'd', 'c']
['c', 'd', 'c', 'd']
['d', 'c', 'd', 'c']
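An alternative sketch of the same generator using (row + column) parity, which avoids the swap bookkeeping (the function name `checker_board_mod` is mine):

```python
def checker_board_mod(n, a, b):
    # The parity of (row + column) decides which element fills each square
    if a == b:
        return "invalid"
    return [[a if (i + j) % 2 == 0 else b for j in range(n)] for i in range(n)]

print(checker_board_mod(2, 7, 6))  # [[7, 6], [6, 7]]
print(checker_board_mod(3, "A", "B"))
```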
Question 2:
A string is an almost-palindrome if, by changing only one character, you can make it a palindrome. Create a function that returns True if a string is an almost-palindrome and False otherwise.

Examples

almost_palindrome("abcdcbg")  True  # Transformed to "abcdcba" by changing "g" to "a".
almost_palindrome("abccia")  Tr...
Answer:
import string

def isPalindrome(str_):
    return str_ == str_[::-1]

def almost_palindrome(str_):
    check = string.ascii_lowercase + "0123456789"
    for i in str_:
        for j in check:
            temp = str_.replace(i, j, 1)
            if isPalindrome(temp):
                return True
    ...
True
True
False
False
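An alternative O(n) sketch that counts mismatched end pairs instead of trying every single-character replacement; note that, unlike the version above, it returns False for a string that is already a palindrome (zero mismatches), so treat it as a variant rather than a drop-in replacement:

```python
def almost_palindrome_fast(s):
    # Compare characters pairwise from both ends; exactly one mismatched
    # pair means changing one character yields a palindrome.
    mismatches = sum(s[i] != s[-1 - i] for i in range(len(s) // 2))
    return mismatches == 1

print(almost_palindrome_fast("abcdcbg"))  # True
print(almost_palindrome_fast("abccia"))   # True
print(almost_palindrome_fast("abcdaaa"))  # False
```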
Question 3:
Create a function that finds how many prime numbers there are, up to the given integer.

Examples

prime_numbers(10)  4   # 2, 3, 5 and 7
prime_numbers(20)  8   # 2, 3, 5, 7, 11, 13, 17 and 19
prime_numbers(30)  10  # 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29
Answer:
def isPrime(n):
    if n <= 1:
        return False
    for i in range(2, int(n**(1/2))+1):
        if n % i == 0:
            return False
    return True

def prime_numbers(n):
    if n < 2:
        return 0
    return sum([isPrime(i) for i in range(2, n+1)])

print(prime_numbers(10))
print(prime_numbers(20))
print(...
4
8
10
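For larger inputs, a Sieve of Eratosthenes avoids repeating trial division for every number; this is a sketch, and the function name `prime_numbers_sieve` is mine:

```python
def prime_numbers_sieve(n):
    # Mark multiples of each prime as composite, then count the survivors
    if n < 2:
        return 0
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return sum(is_prime)

print(prime_numbers_sieve(10))  # 4
print(prime_numbers_sieve(30))  # 10
```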
Question 4:
If today was Monday, in two days, it would be Wednesday. Create a function that takes in a list of days as input and the number of days to increment by. Return a list of days after n number of days has passed.

Examples

after_n_days(["Thursday", "Monday"], 4)  ["Monday", "Friday"]
after_n_days(["Sunday", "Sunday", "S...
Answer:
def after_n_days(lst, n):
    new_lst = []
    if n >= 7:
        _, n = divmod(n, 7)
    for i in lst:
        days = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
        idx = days.index(i)
        days = [days[idx]] + days[idx+1::] + days[0:idx]
        new_lst.append(day...
['Monday', 'Friday']
['Monday', 'Monday', 'Monday']
['Tuesday', 'Wednesday', 'Saturday']
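The same behaviour can also be sketched with plain index arithmetic modulo 7, avoiding rebuilding the week list for each day (the name `after_n_days_mod` is mine):

```python
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

def after_n_days_mod(lst, n):
    # Advance each day's index by n, wrapping around the 7-day week
    return [DAYS[(DAYS.index(day) + n) % 7] for day in lst]

print(after_n_days_mod(["Thursday", "Monday"], 4))  # ['Monday', 'Friday']
```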
Question 5:
You are in the process of creating a chat application and want to add an anonymous name feature. This anonymous name feature will create an alias that consists of two capitalized words beginning with the same letter as the user's first name. Create a function that determines if the list of users is mapped to a list of ...
Answer:
def is_correct_aliases(lst1, lst2):
    bool_ = []
    for i, j in zip(lst1, lst2):
        temp = j.split()
        bool_.append(i[0] == temp[0][0] and i[0] == temp[1][0])
    return all(bool_)

print(is_correct_aliases(["Adrian M.", "Harriet S.", "Mandy T."], ["Amazing Artichoke", "Hopeful Hedge...
True
True
False
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/enterprise/healthcare/Disambiguation.ipynb)
import json

with open('251keys.json') as f:
    license_keys = json.load(f)
license_keys.keys()

# Install java
import os
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PAT...
Apache-2.0
jupyter/enterprise/healthcare/Disambiguation.ipynb
richardclarus/spark-nlp-workshop
Supply chain physics

*This notebook illustrates methods to investigate the physics of a supply chain*

Alessandro Tufano 2020

Import packages
import numpy as np
import matplotlib.pyplot as plt
MIT
examples/Supply chain physics.ipynb
aletuf93/logproj
Generate empirical demand and production

We define a yearly sample of production quantity $x$ and demand quantity $d$.
number_of_sample = 365  # days

mu_production = 105  # units per day
sigma_production = 1  # units per day

mu_demand = 100  # units per day
sigma_demand = 0.3  # units per day

x = np.random.normal(mu_production, sigma_production, number_of_sample)
#d = np.random.normal(mu_demand, sigma_demand, number_of_sample)
d = brownian(x0...
Define the inventory function $q$

The empirical inventory function $q$ is defined as the difference between production and demand, plus the residual inventory:

$q_t = q_{t-1} + x_t - d_t$
q = [mu_production]  # initial inventory with production mean value
for i in range(0, len(d)):
    inventory_value = q[i] + x[i] - d[i]
    if inventory_value < 0:
        inventory_value = 0
    q.append(inventory_value)

plt.plot(q)
plt.xlabel('days')
plt.ylabel('Inventory quantity $q$')
plt.title('Inventory functio...
Define pull and push forces (the momentum $p=\dot{q}$)

By using continuous notation we obtain the derivative $\dot{q}=p=x-d$. The derivative of the inventory represents the *momentum* of the supply chain, i.e. the speed at which the inventory value goes up (production) and down (demand). We use the term **productivi...
p1 = [q[i] - q[i-1] for i in range(1, len(q))]
p2 = [x[i] - d[i] for i in range(1, len(d))]

plt.plot(p1)
plt.plot(p2)
plt.xlabel('days')
plt.ylabel('Value')
plt.title('Momentum function $p$')

p = np.array(p1)
Define a linear potential $V(q)$

We introduce a linear potential to describe the amount of *energy* associated with a given quantity of the inventory $q$.
F0 = 0.1
#eta = 1.2
#lam = mu_demand
#F0 = eta*lam
print(F0)

V_q = -F0*q
V_q = V_q[0:-1]
Define the energy conservation function using the Lagrangian and the Hamiltonian

We use the Hamiltonian to describe the energy conservation equation:

$H = \frac{1}{2}\dot{q}^2 + V(q)$
H = (p**2)/2 - F0*q[0:-1]

plt.plot(H)
plt.xlabel('days')
plt.ylabel('value')
plt.title('Function $H$')
Obtain the inventory $q$, given $H$
S_q = [H[i-1] + H[i] for i in range(1, len(H))]

plt.plot(S_q)
plt.xlabel('days')
plt.ylabel('value')
plt.title('Function $S[q]$')

#compare with q
plt.plot(q)
plt.xlabel('days')
plt.ylabel('Inventory quantity $q$')
plt.title('Inventory function $q$')
plt.legend(['Model inventory', 'Empirical inventory'])
Inventory control

Define the Brownian process
from math import sqrt
from scipy.stats import norm
import numpy as np

def brownian(x0, n, dt, delta, out=None):
    """
    Generate an instance of Brownian motion (i.e. the Wiener process):

        X(t) = X(0) + N(0, delta**2 * t; 0, t)

    where N(a,b; t0, t1) is a normally distributed random variable with mean a...
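Since the `brownian` helper above is truncated, here is a minimal, self-contained sketch of the same idea: a Wiener process built as a cumulative sum of independent normal increments (the function name and signature are mine and may differ from the notebook's):

```python
import numpy as np

def brownian_simple(x0, n, dt, delta, rng=None):
    # X(t) advances by N(0, delta**2 * dt) at each step; the path is the
    # cumulative sum of those increments, started from x0.
    if rng is None:
        rng = np.random.default_rng(0)
    steps = rng.normal(0.0, delta * np.sqrt(dt), size=n)
    return x0 + np.cumsum(steps)

path = brownian_simple(x0=100.0, n=365, dt=1.0, delta=0.3)
print(path.shape)  # (365,)
```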
Define the supply chain control model
# supply chain control model
def supply_chain_control_model(p, beta, eta, F0):
    # p is the productivity function, defined as the derivative of q
    # beta is the diffusion coefficient, i.e. the delta of the Brownian process; the std of the demand can be used
    # eta represents the flexibility of the production. It is the...
Ingest Data

Original data was from Inside Airbnb; to secure the files, they were copied to Google Drive.
import os
from google_drive_downloader import GoogleDriveDownloader as gdd
MIT
variable_exploration/mk/1_Ingestion_Wrangling/0_data_pull.ipynb
georgetown-analytics/Airbnb-Price-Prediction
Pull files from Google Drive

listings shared url: https://drive.google.com/file/d/1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO/view?usp=sharing
calendar shared url: https://drive.google.com/file/d/1VjlSWEr4vaJHdT9o2OF9N2Ga0X2b22v9/view?usp=sharing
reviews shared url: https://drive.google.com/file/d/1_ojDocAs_LtcBLNxDHqH_TSBWjPz-Zme...
# gdd.download_file_from_google_drive(file_id='1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO',
#                                     dest_path='../data/gdrive/listings.csv.gz'

#source_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO'}
source_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzg...
Demo for paper "First Order Motion Model for Image Animation"

---

**Clone repository**
!git clone https://github.com/hamdirhibi/Deep-fake
cd Deep-fake
MIT
deep_fake.ipynb
hamdirhibi/Deep-fake
**Mount your Google drive folder on Colab**
from google.colab import drive
drive.mount('/content/gdrive')
**Add the folder https://drive.google.com/drive/folders/157-wifsuylAkO1E4hBGO_QXyn22mDXET?usp=sharing to your Google Drive. Alternatively you can use this mirror link: https://drive.google.com/drive/folders/157-wifsuylAkO1E4hBGO_QXyn22mDXET?usp=sharing**

**Load driving video and source image**
import imageio
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from skimage.transform import resize
from IPython.display import HTML
import warnings
warnings.filterwarnings("ignore")

source_image = imageio.imread('/content/gdrive/My Drive/first-order-motion-model/rayen.jpg')...
**Create a model and load checkpoints**
from demo import load_checkpoints

generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                          checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')
**Perform image animation**
from demo import make_animation
from skimage import img_as_ubyte

predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True)

#save resulting video
imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions], fps=fps)
#video can be downloaded from /content fo...
**In the cell above we use relative keypoint displacement to animate the objects. We can use absolute coordinates instead, but in that case all the object proportions will be inherited from the driving video. For example, Putin's haircut will be extended to match Trump's haircut.**
predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=False, adapt_movement_scale=True)
HTML(display(source_image, driving_video, predictions).to_html5_video())
Running on your data

**First we need to crop a face from both the source image and the video. A simple graphics editor like Paint can be used for cropping from an image; cropping from video is more complicated. You can use ffmpeg for this.**
!ffmpeg -i /content/gdrive/My\ Drive/first-order-motion-model/07.mkv -ss 00:08:57.50 -t 00:00:08 -filter:v "crop=600:600:760:50" -async 1 hinton.mp4
**Another possibility is to use some screen recording tool, or if you need to crop many images at once, use a face detector (https://github.com/1adrianb/face-alignment); see https://github.com/AliaksandrSiarohin/video-preprocessing for preprocessing of VoxCeleb.**
source_image = imageio.imread('/content/gdrive/My Drive/first-order-motion-model/09.png')
driving_video = imageio.mimread('hinton.mp4', memtest=False)

#Resize image and video to 256x256
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_v...
Cleaning the data

By Adriano Santos

Among the tasks a data scientist must perform, the cleaning and preparation process is one of the most important.

In this lesson we will learn to:
* Remove information from a DataFrame;
# Load modules
import pandas as pd

# Import the data for manipulation
df = pd.read_csv('../Dados/WHO.csv', delimiter=',')
print(df.head())
       Country                 Region  Population  Under15  Over60  \
0  Afghanistan  Eastern Mediterranean       29825    47.42    3.82
1      Albania                 Europe        3162    21.33   14.93
2      Algeria                 Africa       38482    27.42    7.17
3      Andorra                 Europe  ...
MIT
scripts/Introd. ciencia de dados - Parte 3.ipynb
adrianosantospb/jatic2017
Checking for missing data (NaN)
# any() lets us know, column by column, whether any of the values is missing.
df.isnull().any()

# Shows whether any column is entirely blank.
print(df.isnull().all())

print('Number of records:', df.shape)

# dropna() removes from the DataFrame any row that has at least one NaN.
df.dro...
Number of records: 194
Commands for removing a column
# To remove, do:
df.drop('CellularSubscribers', axis=1, inplace=True)  # axis=1 = column; axis=0 = row.
print(df.columns)

# Checking whether duplicates exist
print(df.duplicated('Region').head())
0    False
1    False
2    False
3     True
4     True
dtype: bool
To determine the distance between local maxima and minima

We need to calculate both, then concatenate the arrays, and use that to select the data.
# determine the days of minimum electricity consumption
# throughout the 5 months, that is the local minima

# we use peak values but we turn the series upside down with the
# reciprocal function
valleys, _ = find_peaks(1 / elec_pday.values, height=(-np.Inf, 1/60))
valleys

# compare the number of observations in the ...
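The "calculate both and concatenate" step described above can be sketched on a toy series (the variable names mirror the notebook's; the data is made up):

```python
import numpy as np
from scipy.signal import find_peaks

# Toy daily consumption series with obvious peaks and troughs
series = np.array([60, 90, 55, 95, 50, 100, 58], dtype=float)

peaks, _ = find_peaks(series)        # local maxima
valleys, _ = find_peaks(1 / series)  # local minima via the reciprocal trick
peaksandvalleys = np.sort(np.concatenate([peaks, valleys]))
print(peaksandvalleys)  # [1 2 3 4 5] -- all local extrema, in time order
```

`series[peaksandvalleys]` then selects the alternating maxima and minima whose spacing we want to measure.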
MIT
Chapter10/R4-Calculating-distance-between-events.ipynb
paulorobertolds/Python-Feature-Engineering-Cookbook
To determine the time elapsed between local maxima and minima, we need to create a dataframe with those values by executing: `tmp = pd.DataFrame(elec_pday[peaksandvalleys]).reset_index(drop=False)` and then, 1) add the year, 2) reconstitute the date, and 3) calculate the time between the local maxima and minima, as we have d...
import featuretools as ft

# load data set from feature tools
data_dict = ft.demo.load_mock_customer()
data = data_dict["transactions"].merge(
    data_dict["sessions"]).merge(data_dict["customers"])

cols = ['customer_id', 'transaction_id', 'transaction_time', 'amount', ]
data = data[...
**Table of Contents**

- Feature engineering - quantifying access to facilities - batch mode
- Read the shortlisted properties
- Loop through each property and build the neighborhood facility table
- Feature engineer with access to amenities
- Plot the distribution of facility access
- Store to disk

Feature engineering - quantifying acc...
import pandas as pd
import matplotlib.pyplot as plt
from pprint import pprint
%matplotlib inline

from arcgis.gis import GIS
from arcgis.geocoding import geocode, batch_geocode
from arcgis.features import Feature, FeatureLayer, FeatureSet, GeoAccessor, GeoSeriesAccessor
from arcgis.features import SpatialDataFrame
from...
Apache-2.0
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
Connect to GIS
gis = GIS(profile='')
route_service_url = gis.properties.helperServices.route.url
route_service = RouteLayer(route_service_url, gis=gis)
Read the shortlisted properties
prop_list_df = pd.read_csv('resources/houses_for_sale_att_filtered.csv')
prop_list_df.shape

prop_list_df = pd.DataFrame.spatial.from_xy(prop_list_df, 'LONGITUDE', 'LATITUDE')
type(prop_list_df)
Loop through each property and build the neighborhood facility table
groceries_count = []
restaurants_count = []
hospitals_count = []
coffee_count = []
bars_count = []
gas_count = []
shops_service_count = []
travel_transport_count = []
parks_count = []
education_count = []
route_length = []
route_duration = []

destination_address = '309 SW 6th Ave #600, Portland, OR 97204'
count = 0
for ...
1: 18517652 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route
2: 18465613 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route
3: 18005102 : Groc : Rest : Hosp : Coffee : Bars : Gas : Shops : Travel : Parks : Edu : Route
4: 18216924 : Groc : Rest : Hosp : Cof...
Apache-2.0
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
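The full loop above (truncated) relies on ArcGIS geocoding and network-analysis services. As a service-free sketch of the core idea, counting how many facilities fall within a radius of each property, a plain haversine distance check works; the Portland coordinates and the 5-mile radius below are illustrative assumptions, not values taken from the notebook:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # great-circle distance between two lat/lon points, in miles
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def count_within(prop, facilities, radius_miles=5):
    # count facilities whose straight-line distance falls inside the radius
    return sum(1 for f in facilities
               if haversine_miles(prop[0], prop[1], f[0], f[1]) <= radius_miles)

# illustrative coordinates: downtown Portland and two made-up grocery stores
home = (45.5202, -122.6742)
groceries = [(45.523, -122.681), (45.60, -122.50)]
print(count_within(home, groceries))
```

Straight-line distance is a rough proxy; the notebook's service-based approach uses actual drive routes, which is why it needs the route service.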
Feature engineer with access to amenities
prop_list_df['grocery_count'] = groceries_count prop_list_df['restaurant_count']= restaurants_count prop_list_df['hospitals_count']= hospitals_count prop_list_df['coffee_count']= coffee_count prop_list_df['bars_count']=bars_count prop_list_df['gas_count']=gas_count prop_list_df['shops_count']=shops_service_count prop_l...
_____no_output_____
Apache-2.0
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
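The cell above attaches each Python list of counts as a new DataFrame column, one entry per property in row order. A minimal pandas sketch of the same pattern (the MLS ids and counts here are made up for illustration):

```python
import pandas as pd

# toy stand-in for prop_list_df: one row per property
df = pd.DataFrame({'MLS': [18517652, 18465613, 18005102]})

# each list must be in the same row order as the DataFrame
df['grocery_count'] = [4, 7, 2]
df['bars_count'] = [0, 3, 1]

print(df.shape)
```

If the lists were built in a different order than the rows, the counts would be attached to the wrong properties, which is why the notebook appends to the lists inside the same loop that iterates over the DataFrame.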
Plot the distribution of facility access
prop_list_df.columns facility_list = ['grocery_count', 'restaurant_count', 'hospitals_count', 'coffee_count', 'bars_count', 'gas_count', 'shops_count', 'travel_count', 'parks_count', 'edu_count', 'commute_length', 'commute_duration'] axes = prop_list_df[facility_list].hist(bins=25, layout=(3,4), figsize=...
_____no_output_____
Apache-2.0
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
From the histograms above, most houses don't have very many bars within 5 miles of them. The commute length and duration appear tightly clustered around the lower end of the spectrum. Most houses have at least 1 hospital or medical center near them and a large number of parks, restaurants, educational institutio...
prop_list_df.to_csv('resources/houses_facility_counts.csv') prop_list_df.spatial.to_featureclass('resources/shp/houses_facility_counts.shp')
_____no_output_____
Apache-2.0
talks/GeoDevPDX2018/04_feature-engineering-neighboring-facilities-batch.ipynb
nitz21/arcpy
**The aim of this lab is to introduce DATA and FEATURES.** Extracting features from data (FMML Module 1, Lab 2). Module Coordinator: amit.pandey@research.iiit.ac.in
! pip install wikipedia import wikipedia import nltk from nltk.util import ngrams from collections import Counter import matplotlib.pyplot as plt import numpy as np import re import unicodedata import plotly.express as px import pandas as pd
Collecting wikipedia Downloading wikipedia-1.4.0.tar.gz (27 kB) Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (4.6.3) Requirement already satisfied: requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (2.23.0) Requirement already...
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
**What are features?** Features are individual, independent variables that act as inputs to your system.
import matplotlib.pyplot as plt from matplotlib import cm import numpy as np from mpl_toolkits.mplot3d.axes3d import get_test_data # set up a figure twice as wide as it is tall fig = plt.figure(figsize=plt.figaspect(0.9)) # ============= # First subplot # ============= # set up the axes for the first plot ax = fig...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
**Part 2: Features of text**How do we apply machine learning on text? We can't directly use the text as input to our algorithms. We need to convert them to features. In this notebook, we will explore a simple way of converting text to features.Let us download a few documents off Wikipedia.
topic1 = 'Giraffe' topic2 = 'Elephant' wikipedia.set_lang('en') eng1 = wikipedia.page(topic1).content eng2 = wikipedia.page(topic2).content wikipedia.set_lang('fr') fr1 = wikipedia.page(topic1).content fr2 = wikipedia.page(topic2).content fr2
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
We need to clean this up a bit. Let us remove all the special characters and keep only the 26 lowercase letters. Note that this will also remove accented characters in French, as well as all numbers and spaces, so it is not an ideal solution.
def cleanup(text):
    text = text.lower()                 # make it lowercase
    text = re.sub('[^a-z]+', '', text)  # keep only the letters a-z
    return text

print(eng1)
The giraffe is a tall African mammal belonging to the genus Giraffa. Specifically, It is an even-toed ungulate. It is the tallest living terrestrial animal and the largest ruminant on Earth. Traditionally, giraffes were thought to be one species, Giraffa camelopardalis, with nine subspecies. Most recently, researchers ...
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
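A quick check of what cleanup keeps and drops (the sample string is my own, not from the Wikipedia pages):

```python
import re

def cleanup(text):
    text = text.lower()                 # make it lowercase
    return re.sub('[^a-z]+', '', text)  # keep only the letters a-z

# accents, digits, punctuation and spaces are all removed
print(cleanup("Héllo, World 123!"))  # hlloworld
```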
Instead of directly using characters as the features, we may consider groups of tokens, i.e. n-grams, as features to understand a text better. For this example let us consider that each character is one word, and let us see how n-grams work. **The nltk library provides many tools for text processing, please explore them.** No...
# convert a tuple of characters to a string def tuple2string(tup): st = '' for ii in tup: st = st + ii return st # convert a tuple of tuples to a list of strings def key2string(keys): return [tuple2string(i) for i in keys] # plot the histogram def plothistogram(ngram): keys = key2string(ngram.keys()) ...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
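nltk's `ngrams` is essentially a sliding window over the sequence. A pure-Python equivalent makes the feature explicit (no nltk needed for this sketch):

```python
from collections import Counter

def char_ngrams(text, n):
    # slide a window of length n across the text
    return [tuple(text[i:i + n]) for i in range(len(text) - n + 1)]

bigrams = Counter(char_ngrams("banana", 2))
print(bigrams[("a", "n")], bigrams[("n", "a")], bigrams[("b", "a")])  # 2 2 1
```

Counting these tuples is exactly what `Counter(ngrams(text, n))` does in the cells below.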
Let us compare the histograms of English pages and French pages. Can you spot a difference?
## we passed ngrams 'n' as 1 to get unigrams. Unigram is nothing but single token (in this case character). unigram_eng1 = Counter(ngrams(eng1,1)) plothistogram(unigram_eng1) plt.title('English 1') plt.show() unigram_eng2 = Counter(ngrams(eng2,1)) plothistogram(unigram_eng2) plt.title('English 2') plt.show() unigram_f...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
A good feature is one that helps in easy prediction and classification. For example, if you wish to differentiate between grapes and apples, size can be one of the useful features. We can see that the unigrams for French and English are very similar. So this is not a good feature if we want to distinguish between English an...
## Now instead of unigram, we will use bigrams as features, and see how useful bigrams are as features. bigram_eng1 = Counter(ngrams(eng1,2)) # bigrams plothistogram(bigram_eng1) plt.title('English 1') plt.show() bigram_eng2 = Counter(ngrams(eng2,2)) plothistogram(bigram_eng2) plt.title('English 2') plt.show() bigra...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
Another way to visualize bigrams is to use a 2-dimensional graph.
## let's have a look at bigrams. bigram_eng1 def plotbihistogram(ngram): freq = np.zeros((26,26)) for ii in range(26): for jj in range(26): freq[ii,jj] = ngram[(chr(ord('a')+ii), chr(ord('a')+jj))] plt.imshow(freq, cmap = 'jet') return freq bieng1 = plotbihistogram(bigram_eng1) plt.show() bieng2 = plot...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
Let us look at the top 10 ngrams for each text.
from IPython.core.debugger import set_trace def ind2tup(ind): ind = int(ind) i = int(ind/26) j = int(ind%26) return (chr(ord('a')+i), chr(ord('a')+j)) def ShowTopN(bifreq, n=10): f = bifreq.flatten() arg = np.argsort(-f) for ii in range(n): print(f'{ind2tup(arg[ii])} : {f[arg[ii]]}') print('\nEngli...
English 1: ('t', 'h') : 714.0 ('h', 'e') : 705.0 ('i', 'n') : 577.0 ('e', 's') : 546.0 ('a', 'n') : 541.0 ('e', 'r') : 457.0 ('r', 'e') : 445.0 ('r', 'a') : 418.0 ('a', 'l') : 407.0 ('n', 'd') : 379.0 English 2: ('a', 'n') : 1344.0 ('t', 'h') : 1271.0 ('h', 'e') : 1163.0 ('i', 'n') : 946.0 ('e', 'r') : 744.0 ('l', 'e...
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
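The `ind2tup` helper above undoes the flattening of the 26×26 frequency matrix: a flat index k maps back to row k // 26 and column k % 26. A small sketch of the top-N lookup, with one made-up count:

```python
import numpy as np

freq = np.zeros((26, 26))
freq[19, 7] = 714.0          # pretend ('t', 'h') occurred 714 times

f = freq.flatten()
arg = np.argsort(-f)         # indices of the largest counts first
i, j = divmod(int(arg[0]), 26)
print(chr(ord('a') + i), chr(ord('a') + j))  # t h
```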
**At times, we need to reduce the number of features. We will discuss this more in the upcoming sessions, but a small example has been discussed here. Instead of using each unique token (a word) as a feature, we reduced the number of features by using 1-gram and 2-gram of characters as features.** We observe that the ...
from keras.datasets import mnist #loading the dataset (train_X, train_y), (test_X, test_y) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step 11501568/11490434 [==============================] - 0s 0us/step
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
Extract a subset of the data for our experiment:
no1 = train_X[train_y==1,:,:] ## dataset corresponding to number = 1. no0 = train_X[train_y==0,:,:] ## dataset corresponding to number = 0.
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
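`train_X[train_y == 1]` is NumPy boolean indexing: it keeps exactly the images whose label matches. A tiny sketch with toy data:

```python
import numpy as np

X = np.arange(12).reshape(4, 3)   # four toy "images" of three pixels each
y = np.array([1, 0, 1, 0])        # their labels

ones = X[y == 1]                  # keeps rows 0 and 2
print(ones.shape)                 # (2, 3)
```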
Let us visualize a few images here
for ii in range(5):
    plt.subplot(1, 5, ii+1)
    plt.imshow(no1[ii,:,:])
plt.show()

for ii in range(5):
    plt.subplot(1, 5, ii+1)
    plt.imshow(no0[ii,:,:])
plt.show()
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
We can even use the value of each pixel as a feature. But let us see how to derive other features. Now, let us start with a simple feature, the count of non-zero pixels, and see how good this feature is.
## count of non-zero pixels per image
sum1 = np.sum(no1 > 0, (1, 2))  # threshold before adding up
sum0 = np.sum(no0 > 0, (1, 2))
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
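Note that `np.sum(no1 > 0, (1, 2))` counts non-zero pixels per image rather than summing intensities; the axes `(1, 2)` reduce over each image while keeping the image axis. A tiny sketch with 2×2 toy images:

```python
import numpy as np

imgs = np.array([[[0, 120], [255, 0]],   # 2 non-zero pixels
                 [[0,   0], [0,   9]]])  # 1 non-zero pixel

counts = np.sum(imgs > 0, (1, 2))
print(counts)  # [2 1]
```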
Let us visualize how good this feature is: the overlapping histograms show the distribution of the feature value for each digit class.
plt.hist(sum1, alpha=0.7); plt.hist(sum0, alpha=0.7);
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
We can already see that this feature separates the two classes quite well. Let us look at another, more complicated feature. We will count the number of black pixels that are surrounded on all four sides by non-black pixels, or "hole pixels".
def cumArray(img):
    img2 = img.copy()
    for ii in range(1, img2.shape[0]):  # rows; the original used shape[1], which only works for square images
        img2[ii,:] = img2[ii,:] + img2[ii-1,:]  # for every row, add up all the rows above it
    img2 = img2 > 0
    return img2

def getHolePixels(img):
    im1 = cumArray(img)
    im2 = np.rot90(cumArray(np.rot90(img)), 3) #...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
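`cumArray` marks, for each column, every position at or below the first ink pixel; intersecting the four rotated versions and excluding the ink itself yields the enclosed "hole" pixels. A tiny demonstration of the column-marking step (the 3×3 toy image is my own, and square, as MNIST images are):

```python
import numpy as np

def cumArray(img):
    img2 = img.copy()
    for ii in range(1, img2.shape[0]):
        img2[ii, :] += img2[ii - 1, :]  # running sum down each column
    return img2 > 0                     # True from the first ink pixel downward

img = np.array([[0, 1, 0],
                [0, 0, 0],
                [0, 1, 0]])
marked = cumArray(img)
print(marked.astype(int))  # only the middle column is marked, top to bottom
```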
Visualize a few:
imgs = [no1[456,:,:], no0[456,:,:]]
for img in imgs:
    plt.subplot(1, 2, 1)
    plt.imshow(getHolePixels(img))
    plt.subplot(1, 2, 2)
    plt.imshow(img)
    plt.show()
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
Now let us plot the number of hole pixels and see how this feature behaves
hole1 = np.array([getHolePixels(i).sum() for i in no1]) hole0 = np.array([getHolePixels(i).sum() for i in no0]) plt.hist(hole1, alpha=0.7); plt.hist(hole0, alpha=0.7);
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
This feature works even better to distinguish between one and zero. Now let us try one more feature: the number of boundary pixels in each image. (We could similarly use the number of pixels in the 'hull', i.e. the image with its holes filled in.)
def minus(a, b): return a & ~ b def getBoundaryPixels(img): img = img.copy()>0 # binarize the image rshift = np.roll(img, 1, 1) lshift = np.roll(img, -1 ,1) ushift = np.roll(img, -1, 0) dshift = np.roll(img, 1, 0) boundary = minus(img, rshift) | minus(img, lshift) | minus(img, ushift) | minus(img, dshif...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
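`getBoundaryPixels` uses `np.roll` to compare each pixel with its four neighbours: a pixel is on the boundary if at least one neighbour is background. A self-contained run on a solid 3×3 block (my own toy image) shows that only the interior pixel is excluded:

```python
import numpy as np

def minus(a, b):
    return a & ~b

def getBoundaryPixels(img):
    img = img.copy() > 0                 # binarize the image
    rshift = np.roll(img, 1, 1)
    lshift = np.roll(img, -1, 1)
    ushift = np.roll(img, -1, 0)
    dshift = np.roll(img, 1, 0)
    # boundary: ink pixels with at least one background neighbour
    return minus(img, rshift) | minus(img, lshift) | minus(img, ushift) | minus(img, dshift)

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1                        # a solid 3x3 block
boundary = getBoundaryPixels(img)
print(int(boundary.sum()))  # 8: every block pixel except the centre
```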
What will happen if we plot two features together? Feel free to explore the above graph with your mouse. We have seen that we extracted four features from a 28×28 dimensional image. Some questions to explore:
- Which is the best combination of features?
- How would you test or visualize four or more features?
- Can you come up wi...
import pandas as pd df = pd.read_csv('/content/sample_data/california_housing_train.csv') df.head() df.columns df = df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}) import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from mpl_toolkits.mplot3d import Axes3D sns.set(style = "da...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
**Task:** Download a CSV file from the internet and upload it to your Google Drive. Read the CSV file, plot graphs using different combinations of features, and write your analysis. Ex: the Iris flower dataset.
import pandas as pd from google.colab import drive drive.mount('/content/drive') iris = pd.read_csv('/content/drive/MyDrive/iris_csv.csv') iris.head() iris.columns iris_trimmed = iris[['sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'class']] import seaborn as sns import matplotlib.pyplot as plt sns.pairplo...
_____no_output_____
Apache-2.0
module1_lab_2.ipynb
Ishitha2003/-fmml20211052
Variables and data types. A variable is a container: a name given to a memory location allocated by the program. Data types describe the kinds of values variables hold, like the int data type, the string data type, etc. How can Python identify variables and data types? Basically, if you write a = 30, Python sees no double quotes (") ...
p = "harry"
a = 348
b = 3434.334343
print(type(p))
print(type(a))
print(type(b))
<class 'str'> <class 'int'> <class 'float'>
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
variable = used to store a value
keyword = a reserved word in Python
identifier = a class / function / variable name
Example: def and class are reserved words in Python.

What are data types? Some data types: int = -34, -3, -1, 0, 3, 4, 6, 7... are the int data type; float = decimal withi...
if 89798 > 3:
    print(True)
else:
    print(False)
True
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
None simply represents the absence of a value. If you want a variable to hold no value, you can assign a = None in your code.
d = None
print(type(d))
<class 'NoneType'>
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
What is type? Python has classes and objects; we will discuss that later. type is a function which we call, and it returns the class of the value held by the name or variable you created. Rules for creating variable names: 1. A variable name can contain letters, underscores, and digits. 2. A variable can start with al...
a = 343
b = 38437498
print("sum of a+b", a+b)
_____no_output_____
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
assignment operator.py: if you want to add 3 to an int variable, just follow these steps using an assignment operator.
i = 8
i += 3
print(i)

i = 34
i -= 34
print(i)

p = 3
p *= 4
print(p)

o = 344
o /= 34
print(o)
10.117647058823529
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
Comparison operators: a comparison operator compares two entities and evaluates to True or False, like a boolean.
b = 4 > 6
print(b)
b = 34 > 33
print(b)
b = (34 >= 3)
print(b)
n = (3434 == 34343)
print(n)
p = 24 != 98
print(p)
True
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
Logical operators: and, or, and not are among the most used operators; they come from boolean algebra. not is used with only one operand.
bool1 = True
bool2 = False
print("the value of bool1 and bool2", bool1 and bool2)
print("the value of bool1 or bool2", bool1 or bool2)
print("the value of not bool2", not bool2)
the value of bool1 and bool2 False the value of bool1 or bool2 True the value of not bool2 True
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
The type function and typecasting. type is used to find the data type of a given variable in Python, and typecasting is used to change one data type to another, like an int variable to a float variable. typecasting.py
a = 43
a = float(a)
print(a)
43.0
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
String to int literal, int to string literal. What is the input function? The input function allows you to take input values from the user through the keyboard; whatever the user types is returned as a string, which you can then convert to int, etc.

input function.py
a = input("enter a number")
a = int(a)
print(a)

PRACTICE SET
add.py
# write a program to add two numbers
a = 34
b = 34
print("the sum is a+b", a+b)
the sum is a+b 68
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
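The string-to-int and int-to-string conversions mentioned above, as a minimal sketch (the values are illustrative):

```python
s = "30"
n = int(s)       # string to int
back = str(n)    # int back to string

print(n + 5, back + "!")  # 35 30!
```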
a = 34
b = 34
print("the sum is a+b", a+b)
# write a program to find the remainder when one number is divided by another
a = 45
b = 15
print("the remainder when a is divisible by b is", a % b)
the remainder when a is divisible by b is 0
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
a = 45
b = 15
print("the remainder when a is divisible by b is", a % b)
# check the type of a value read using the input function
a = input("enter a number ")
print(type(a))
<class 'str'>
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
a = input("enter a number ")
print(type(a))
# use a comparison between two variables, a=34 and b=80, and check whether a is greater or not
a = 34
b = 80
print("a is greater than b is", a > b)
_____no_output_____
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania