An incredibly neat trick, which you will see again soon when transposing NumPy arrays, is using argument unpacking with zip() to effectively transpose a list of lists.

list_of_lists = [list(range(10))] * 5
list_of_lists

list(zip(*list_of_lists))  # the * "splat" operator unpacks the rows as separate arguments
tutorial/jupyter python numpy plotting/2_Hands-on_Learning.ipynb
ivukotic/ML_platform_tests
gpl-3.0
Section 1: Introduction & Contents
Section 2: How it works - BTT is trained on the entire 100 Mb corpus of biotech business articles from Fiercebiotech.com.
Section 3: Performance - a query result is returned in about one second.
Section 4: Business takeaways - digital health topics are gaining visibility and prominence in the life science business world.

Section 2: How BTT works
Basic operation: BTT input: a search query. Output: an interactive scatterplot that identifies prominent individuals and companies related to the query. Most code is wrapped up in the Topics class. After creating an instance, tf-idf representations and TextRank keywords are loaded from JSON and pickle files using the load() function. See above for an example of this code.
Step by step:
1) tf-idf is used to find documents related to the query.
2) pandas is used to search a JSON file for the pre-computed named entities of each document. Named entities are only returned if they are in the top 50th percentile of TextRank score. Call these TextRank-weighted named entities "prominent entities".
3) Each prominent entity is given a score equal to (cosine similarity of the document that entity is found in) * (TextRank score). The top 200 are plotted in Bokeh. The y-axis value is equal to the product of the TextRank and tf-idf scores.
User experience: I currently interact with this app through a Bokeh server. The user can type in a query and see the results in real time (see the screenshot below). Soon this will be on Heroku.

Section 3: Performance speed
I pickled the tf-idf representation and put all of the TextRank keywords into a JSON file (read with pandas) so that identification of prominent individuals could be done very quickly.
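The prominence score described in step 3 can be sketched as follows. This is a minimal illustration with hypothetical names and toy numbers, not the actual code of the Topics class.

```python
def score_entities(doc_similarities, entity_textrank):
    """Sketch of the scoring step: each entity's prominence score is
    (cosine similarity of its source document to the query) * (TextRank score).

    doc_similarities: {doc_id: cosine similarity to the query}
    entity_textrank: {entity: (doc_id, textrank_score)}
    Returns entities sorted by descending prominence score."""
    scores = {
        entity: doc_similarities[doc_id] * textrank
        for entity, (doc_id, textrank) in entity_textrank.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: two documents, two entities
top = score_entities(
    {'d1': 0.9, 'd2': 0.4},
    {'Acme Bio': ('d1', 0.8), 'J. Doe': ('d2', 0.9)},
)
```

In the real app the top 200 such entities are then handed to Bokeh for plotting.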
start = time.time()

# These next two lines submit a query to the algorithm
t.ww2('antibody')  # "Who's who?" function - does the information retrieval
data_scatter_dict = t.formatSearchResults(format='tfidf_tf_product', return_top_n=200)  # the user can format the data in various ways

end = time.time()
print('Query took ' + str(end - start) + ' seconds to execute')
print('Some hits:')
print([str(data_scatter_dict['keywords'][x]) for x in [0, 11, 20]])
.ipynb_checkpoints/Data-Incubator-Final-Interview-ryanmdavis-Copy1-checkpoint.ipynb
ryanmdavis/BioTechTopics
mit
Fast named entity return is possible because all NLP (TextRank, tf-idf, named entity recognition) is done offline, stored in pickle or JSON format, and loaded later. The user can mouse over the scatterplot data to identify the named entity, and can track trends and named entities relevant to their query as a function of time.
Key packages: scikit-learn, pandas, nltk, Bokeh, Scrapy

Section 4: Business Take-away
The plot below (bottom half) shows that digital health is gaining visibility and attention in the life science industry. For each year, that plot shows the sum of the cosine similarities between the phrase "digital health" and each document in that year, normalized to the total number of documents in that year. Thus, the plot can be interpreted as showing that digital health is occupying more and more attention among life science business professionals. This is therefore an exciting time for biomedical researchers with data-handling skills like myself! I believe that my unique combination of programming and life science industry exposure would compound well with the additional training from the Data Incubator, resulting in a quick offer from one of the Data Incubator's partner companies after completing the program.
Data points in the upper right quadrant include:
- IBM Watson: health AI
- England's National Health Service: recently launched the NHS Digital Academy, a health informatics training program
- Launchpad Digital Health: an incubator/VC for digital health companies
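The yearly attention measure described above (summed cosine similarity normalised by document count per year) can be sketched like this. The function and variable names are hypothetical, for illustration only.

```python
from collections import defaultdict

def yearly_topic_attention(doc_years, doc_similarities):
    """For each year, sum the cosine similarities of that year's documents
    to the topic query, normalised by the number of documents in that year."""
    totals, counts = defaultdict(float), defaultdict(int)
    for doc_id, year in doc_years.items():
        totals[year] += doc_similarities[doc_id]
        counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

# Toy data: three documents over two years
attention = yearly_topic_attention(
    {'a': 2016, 'b': 2016, 'c': 2017},
    {'a': 0.2, 'b': 0.4, 'c': 0.9},
)
```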
#plotBokehInJpnb(t,'digital health')
.ipynb_checkpoints/Data-Incubator-Final-Interview-ryanmdavis-Copy1-checkpoint.ipynb
ryanmdavis/BioTechTopics
mit
There also look to be sentences that might be standard boilerplate, such as "Any additional payments are listed below."

From Web Scraping to Text-Scraping Using Natural Language Processing
Within the text are things that we might recognise as company names, dates, or addresses. Entity recognition refers to a natural language processing technique that attempts to extract words that describe "things", that is, entities, as well as identifying what sorts of "thing", or entity, they are. One powerful Python natural language processing package, spacy, has an entity recognition capability. Let's see how to use it and what sort of output it produces:
# Import the spacy package
import spacy

# The package parses language according to different statistically trained models
# Let's load in the basic English model:
nlp = spacy.load('en')

# Generate a version of the text annotated using features detected by the model
doc = nlp(bigtext)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
The parsed text is annotated in a variety of ways. For example, we can directly access all the sentences in the original text:
list(doc.sents)

ents = list(doc.ents)

entTypes = []
for entity in ents:
    entTypes.append(entity.label_)
    print(entity, '::', entity.label_)

for entType in set(entTypes):
    print(entType, spacy.explain(entType))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
We can also look at each of the tokens in the text and identify whether it is part of an entity, and if so, what sort. The .ent_iob_ attribute identifies O as not part of an entity, B as the first token of an entity, and I as a continuing part of an entity.
for token in doc[:15]:
    print('::'.join([token.text, token.ent_type_, token.ent_iob_]))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Looking at the extracted entities, we see we get some good hits: Averbrook Ltd. is an ORG; 20 January 2016 and 14 October 2016 are both instances of a DATE.
Some near misses: Zeus Publishing isn't a PERSON, although we might see why it has been recognised as such. (Could we overlay the model with an additional mapping along the lines of: if PERSON and the name ends with 'Publishing' or 'Holdings', relabel as ORG?)
And some things are mis-categorised: 52 Doughty Street isn't really meaningful as a QUANTITY.
Several things we might usefully want to categorise - such as a UK postcode, for example, which might be useful in and of itself, or when helping us to identify an address - are not recognised as entities.
Things recognised as dates we might want to then further parse as date object types:
from dateutil import parser as dtparser

[(d, dtparser.parse(d.string)) for d in ents if d.label_ == 'DATE']

# See also https://github.com/akoumjian/datefinder
# datefinder - find dates inside text using Python and get back datetime objects
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Token Shapes
As well as identifying entities, spacy analyses texts at several other levels. One such level of abstraction is the "shape" of each token. This identifies whether each character is an upper- or lower-case alphabetic character, a digit, or a punctuation character (which appears as itself):
for token in doc[:15]:
    print(token, '::', token.shape_)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Scraping a Text Based on Its Shape Structure and Adding New Entity Types
The "shape" of a token provides an additional structural item that we might be able to make use of in scrapers of the raw text. For example, writing an efficient regular expression to identify a UK postcode can be a difficult task, but we can start to cobble one together from the shapes of different postcodes written in "standard" postcode form:
[pc.shape_ for pc in nlp('MK7 6AA, SW1A 1AA, N7 6BB')]
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
We can define a matcher function that will identify the tokens in a document that match a particular ordered combination of shape patterns. For example, the postcode-like things described above have the shapes:
XXd dXX
XXdX dXX
Xd dXX
We can use these structural patterns to identify token pairs as possible postcodes.
from spacy.matcher import Matcher

nlp = spacy.load('en')
matcher = Matcher(nlp.vocab)
matcher.add('POSTCODE', None,
            [{'SHAPE': 'XXdX'}, {'SHAPE': 'dXX'}],
            [{'SHAPE': 'XXd'}, {'SHAPE': 'dXX'}],
            [{'SHAPE': 'Xd'}, {'SHAPE': 'dXX'}])
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Let's test that:
pcdoc = nlp('pc is WC1N 4CC okay, as is MK7 4AA and Sir James Smith and Lady Jane Grey are presumably persons.')
matches = matcher(pcdoc)

# See what we matched, and what entities we have detected
print('Matches: {}\nEntities: {}'.format([pcdoc[m[1]:m[2]] for m in matches],
                                         [(m, m.label_) for m in pcdoc.ents]))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Adding a new entity type with a matcher callback
The matcher seems to have matched the postcodes, but it is not identifying them as entities. (We also note that the entity recogniser has missed the "Sir" title. In some cases, it might also match a postcode as a person.) To add the matched items to the entity list, we need to add a callback function to the matcher.
## Define a POSTCODE as a new entity type by adding matched postcodes to doc.ents
# https://stackoverflow.com/a/47799669
nlp = spacy.load('en')
matcher = Matcher(nlp.vocab)

def add_entity_label(matcher, doc, i, matches):
    match_id, start, end = matches[i]
    doc.ents += ((match_id, start, end),)

# Recognise postcodes from different shapes
matcher.add('POSTCODE', add_entity_label,
            [{'SHAPE': 'XXdX'}, {'SHAPE': 'dXX'}],
            [{'SHAPE': 'XXd'}, {'SHAPE': 'dXX'}])

pcdoc = nlp('pc is WC1N 4CC okay, as is MK7 4AA and James Smith is presumably a person')
matches = matcher(pcdoc)

print('Matches: {}\nEntities: {}'.format([pcdoc[m[1]:m[2]] for m in matches],
                                         [(m, m.label_) for m in pcdoc.ents]))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Let's put those pieces together more succinctly:
bigtext

# Generate the base tagged doc
doc = nlp(bigtext)

# Run the postcode tagger over the doc
_ = matcher(doc)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
The tagged document should now include POSTCODE entities. One of the easiest ways to check the effectiveness of a new entity tagger is to view the document with the recognised entities visualised within it. The displacy module, included as part of spacy, has a Jupyter-enabled visualiser for doing just that.
from spacy import displacy

displacy.render(doc, jupyter=True, style='ent')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Matching A Large Number of Phrases
If we have a large number of phrases that are examples of a particular (new) entity type, we can match them using a PhraseMatcher. For example, suppose we have a table of MP data:
import pandas as pd

mpdata = pd.read_csv('members_mar18.csv')
mpdata.head(5)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
From this, we can extract a list of MP names, albeit in reverse word order.
term_list = mpdata['list_name'].tolist()
term_list[:5]
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
If we wanted to match those names as "MP" entities, we could use the following recipe to add an MP entity type that will be returned if any of the MP names are matched:
from spacy.matcher import PhraseMatcher

nlp = spacy.load('en')
matcher = PhraseMatcher(nlp.vocab)

patterns = [nlp(text) for text in term_list]
matcher.add('MP', add_entity_label, *patterns)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Let's test that new entity on a test string:
doc = nlp("The MPs were Adams, Nigel, Afolami, Bim and Abbott, Ms Diane.")
matches = matcher(doc)

displacy.render(doc, jupyter=True, style='ent')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Matching a Regular Expression
Sometimes we may want to use a regular expression as an entity detector. For example, we might want to tighten up the postcode entity detection by using a regular expression rather than shape matching.
import re

# https://stackoverflow.com/a/164994/454773
regex_ukpc = r'([Gg][Ii][Rr] 0[Aa]{2})|((([A-Za-z][0-9]{1,2})|(([A-Za-z][A-Ha-hJ-Yj-y][0-9]{1,2})|(([A-Za-z][0-9][A-Za-z])|([A-Za-z][A-Ha-hJ-Yj-y][0-9]?[A-Za-z]))))\s?[0-9][A-Za-z]{2})'

# Based on https://spacy.io/usage/linguistic-features
nlp = spacy.load('en')
doc = nlp("The postcodes were MK1 6AA and W1A 1AA.")
for match in re.finditer(regex_ukpc, doc.text):
    start, end = match.span()  # get the matched indices
    entity = doc.char_span(start, end, label='POSTCODE')  # create a Span from the indices
    doc.ents = list(doc.ents) + [entity]
    entity.merge()

displacy.render(doc, jupyter=True, style='ent')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Updating the training of already existing Entities
We noted previously that the entity recogniser was missing the "Sir" title on matched persons.
nlp('pc is WC1N 4CC okay, as is MK7 4AA and Sir James Smith and Lady Jane Grey are presumably persons').ents
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Let's see if we can update the training of the model so that it does recognise the "Sir" title as part of a person's name. We can do that by creating some new training data and using it to update the model. The entities dict identifies the index values in the test string that delimit the entity we want to extract.
# Training data
TRAIN_DATA = [
    ('Received from Sir John Smith last week.', {
        'entities': [(14, 28, 'PERSON')]
    }),
    ('Sir Richard Jones is another person', {
        'entities': [(0, 18, 'PERSON')]
    })
]
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
In this case, we are going to let spacy learn its own patterns, as a statistical model, that will - if the learning pays off correctly - identify things like "Sir Bimble Bobs" as a PERSON entity.
import random

#model='en' #'en_core_web_sm'
#nlp = spacy.load(model)

cycles = 20
optimizer = nlp.begin_training()
for i in range(cycles):
    random.shuffle(TRAIN_DATA)
    for txt, annotations in TRAIN_DATA:
        nlp.update([txt], [annotations], sgd=optimizer)

nlp('pc is WC1N 4CC okay, as is MK7 4AA and Sir James Smith and Lady Jane Grey are presumably persons').ents
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
One of the things that can be a bit fiddly is generating the training strings. We can produce a little utility function that will help us create a training pattern by identifying the index value(s) associated with a particular substring, which we wish to identify as an example of a particular entity type, inside a text string. The first thing we need to do is find the index values within a string that show where a particular substring can be found. The Python find() and index() methods will find the first location of a substring in a string. However, where a substring appears several times in a string, we need a new function to identify all the locations. There are several ways of doing this...
# Find multiple matches using .find()
# https://stackoverflow.com/a/4665027/454773
def _find_all(string, substring):
    # Generator to return the index of each substring match
    start = 0
    while True:
        start = string.find(substring, start)
        if start == -1:
            return
        yield start
        start += len(substring)

def find_all(string, substring):
    return list(_find_all(string, substring))

# Find multiple matches using a regular expression
# https://stackoverflow.com/a/4664889/454773
import re

def refind_all(string, substring):
    return [m.start() for m in re.finditer(substring, string)]

txt = 'This is a string.'
substring = 'is'
print(find_all(txt, substring))
print(refind_all(txt, substring))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
We can use either of these functions to find the location of a substring in a string, and then use these index values to help us create our training data.
def trainingTupleBuilder(string, substring, typ, entities=None):
    ixs = refind_all(string, substring)
    offset = len(substring)
    if entities is None:
        entities = {'entities': []}
    for ix in ixs:
        entities['entities'].append((ix, ix + offset, typ))
    return (string, entities)

# Expected: ('Received from Sir John Smith last week.', {'entities': [(14, 28, 'PERSON')]})
trainingTupleBuilder('Received from Sir John Smith last week.', 'Sir John Smith', 'PERSON')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Training a Simple Model to Recognise Addresses
As well as extracting postcodes as entities, could we also train a simple model to extract addresses?
TRAIN_DATA = []
TRAIN_DATA.append(trainingTupleBuilder("He lives at 27, Oswaldtwistle Way, Birmingham",
                                       '27, Oswaldtwistle Way, Birmingham', 'B-ADDRESS'))
TRAIN_DATA.append(trainingTupleBuilder("Payments from Boondoggle Limited, 377, Hope Street, Little Village, Halifax. Received: October, 2017",
                                       '377, Hope Street, Little Village, Halifax', 'B-ADDRESS'))
TRAIN_DATA
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
The B- prefix identifies the entity as a (potentially) multi-token entity, following the IOB convention.
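The IOB-style labelling behind that prefix can be illustrated with a small standalone function. This is a hypothetical helper for illustration, not part of spacy: B marks the first token of an entity, I marks its continuation, and O marks tokens outside any entity.

```python
def iob_tags(tokens, entity_spans):
    """Assign IOB tags to a token list.

    entity_spans: list of (start, end, label) token index ranges, end exclusive.
    Returns one (token, iob, label) triple per token."""
    tags = [(tok, 'O', '') for tok in tokens]
    for start, end, label in entity_spans:
        for i in range(start, end):
            tags[i] = (tokens[i], 'B' if i == start else 'I', label)
    return tags

# A multi-token ADDRESS entity spanning tokens 2..5
tagged = iob_tags(
    ['Payments', 'from', '377', ',', 'Hope', 'Street'],
    [(2, 6, 'ADDRESS')],
)
```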
# https://spacy.io/usage/training
from pathlib import Path

def spacytrainer(model=None, output_dir=None, n_iter=100, debug=False):
    """Load the model, set up the pipeline and train the entity recognizer."""
    if model is not None:
        if isinstance(model, str):
            nlp = spacy.load(model)  # load an existing spaCy model
            print("Loaded model '%s'" % model)
        # Else we assume we have been passed an nlp model
        else:
            nlp = model
    else:
        nlp = spacy.blank('en')  # create a blank Language class
        print("Created blank 'en' model")

    # Create the built-in pipeline components and add them to the pipeline;
    # nlp.create_pipe works for built-ins that are registered with spaCy
    if 'ner' not in nlp.pipe_names:
        ner = nlp.create_pipe('ner')
        nlp.add_pipe(ner, last=True)
    # Otherwise, get it so we can add labels
    else:
        ner = nlp.get_pipe('ner')

    # Add the labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    # Get the names of the other pipes so we can disable them during training
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        optimizer = nlp.begin_training()
        for itn in range(n_iter):
            random.shuffle(TRAIN_DATA)
            losses = {}
            for text, annotations in TRAIN_DATA:
                nlp.update(
                    [text],  # batch of texts
                    [annotations],  # batch of annotations
                    drop=0.5,  # dropout - make it harder to memorise the data
                    sgd=optimizer,  # callable to update the weights
                    losses=losses)
            if debug:
                print(losses)

    # Test the trained model
    if debug:
        for text, _ in TRAIN_DATA:
            doc = nlp(text)
            print('Entities', [(ent.text, ent.label_) for ent in doc.ents])
            print('Tokens', [(t.text, t.ent_type_, t.ent_iob) for t in doc])

    # Save the model to the output directory
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)

        # Test the saved model
        print("Loading from", output_dir)
        nlp2 = spacy.load(output_dir)
        for text, _ in TRAIN_DATA:
            doc = nlp2(text)
            print('Entities', [(ent.text, ent.label_) for ent in doc.ents])
            print('Tokens', [(t.text, t.ent_type_, t.ent_iob) for t in doc])

    return nlp
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Let's update the en model to include a really crude address parser based on the two lines of training data described above.
nlp = spacytrainer('en')

# See if we can identify the address
addr_doc = nlp(text)
displacy.render(addr_doc, jupyter=True, style='ent')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Parts of Speech (POS)
As well as recognising different types of entity, which may be identified across several different words, the spacy parser also marks up each separate word (or token) as a particular "part-of-speech" (POS), such as a noun, verb, or adjective. Parts of speech are identified as .pos_ or .tag_ token attributes.
tags = []
for token in doc[:15]:
    print(token, '::', token.pos_, '::', token.tag_)
    tags.append(token.tag_)
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
An explain() function describes each POS type in natural language terms:
for tag in set(tags):
    print(tag, '::', spacy.explain(tag))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
We can also get a list of "noun chunks" identified in the text, as well as other words they relate to in a sentence:
for chunk in doc.noun_chunks:
    print(' :: '.join([chunk.text, chunk.root.text, chunk.root.dep_, chunk.root.head.text]))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Scraping a Text Based on Its POS Structure - textacy
As well as the basic spacy functionality, packages exist that build on spacy to provide further tools for working with abstractions identified using spacy. For example, the textacy package provides a way of parsing sentences using regular expressions defined over (Ontonotes5?) POS tags:
import textacy

list(textacy.extract.pos_regex_matches(nlp(text), r'<NOUN> <ADP> <PROPN|ADP>+'))

textacy.constants.POS_REGEX_PATTERNS

xx = 'A sum of £2000-3000 last or £2,000 or £2000-£3000 or £2,000-£3,000 year'
for t in nlp(xx):
    print(t, t.tag_, t.pos_)
for e in nlp(xx).ents:
    print(e, e.label_)

list(textacy.extract.pos_regex_matches(nlp(xx), r'<SYM><NUM><SYM>?<NUM>?'))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
If we can define an appropriate POS pattern, we can extract terms from an arbitrary text based on that pattern, an approach that is far more general than trying to write a regular expression matcher over just the raw text.
# Parse approximate amounts, e.g. £10,000-£15,000 or £10,000-15,000
from parse import parse

parse('{}£{a}-£{b:g}{}', 'eg £10,000-£15,000 or £14,000-£16,000'.replace(',', ''))
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
More Complex Matching Rules
Matchers can be created over a wide range of attributes (docs), including POS tags and entity labels. For example, we can start trying to build an address tagger by looking for things that end with a postcode.
nlp = spacy.load('en')
matcher = Matcher(nlp.vocab)
matcher.add('POSTCODE', add_entity_label,
            [{'SHAPE': 'XXdX'}, {'SHAPE': 'dXX'}],
            [{'SHAPE': 'XXd'}, {'SHAPE': 'dXX'}])
matcher.add('ADDRESS', add_entity_label,
            [{'POS': 'NUM', 'OP': '+'}, {'POS': 'PROPN', 'OP': '+'}, {'ENT_TYPE': 'POSTCODE', 'OP': '+'}],
            [{'ENT_TYPE': 'GPE', 'OP': '+'}, {'ENT_TYPE': 'POSTCODE', 'OP': '+'}])

addr_doc = nlp(text)
matcher(addr_doc)

displacy.render(addr_doc, jupyter=True, style='ent')

for m in matcher(addr_doc):
    print(addr_doc[m[1]:m[2]])

print([(e, e.label_) for e in addr_doc.ents])
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
In this case, we note that the visualiser cannot cope with rendering multiple entity types over one or more words. In the above example, the POSTCODE entities are highlighted, but we note from the matcher that ADDRESS ranges are also identified that extend across entities defined over fewer terms.

Visualising - displaCy
We can look at the structure of a text by printing out the child elements associated with each token in a sentence:
for sent in nlp(text).sents:
    print(sent, '\n')
    for token in sent:
        print(token, ': ', str(list(token.children)))
    print()
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
However, the displaCy toolset, included as part of spacy, provides a more appealing way of visualising parsed documents, in two different ways: as a dependency graph, showing POS tags for each token and how they relate to each other; and as a text display with extracted entities highlighted. The dependency graph identifies POS tags as well as how tokens are related in natural language grammatical phrases:
from spacy import displacy

displacy.render(doc, jupyter=True, style='dep')
displacy.render(doc, jupyter=True, style='dep', options={'distance': 85, 'compact': True})
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
We can also use displaCy to highlight, inline, the entities extracted from a text.
displacy.render(pcdoc, jupyter=True, style='ent')
displacy.render(doc, jupyter=True, style='ent')
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Extending Entities
For example, add a flag to say that a person is an MP.
mpdata = pd.read_csv('members_mar18.csv')
tmp = mpdata.to_dict(orient='records')
mpdatadict = {k['list_name']: k for k in tmp}

# via https://spacy.io/usage/processing-pipelines

"""Example of a spaCy v2.0 pipeline component to annotate MP records with MNIS data"""
from spacy.tokens import Doc, Span, Token

class RESTMPComponent(object):
    """spaCy v2.0 pipeline component that annotates MP entities with MP data."""
    name = 'mp_annotator'  # component name, will show up in the pipeline

    def __init__(self, nlp, label='MP'):
        """Initialise the pipeline component. The shared nlp instance is used
        to initialise the matcher with the shared vocab, get the label ID and
        generate Doc objects as phrase match patterns.
        """
        # Get the MP data
        mpdata = pd.read_csv('members_mar18.csv')
        mpdatadict = mpdata.to_dict(orient='records')
        # Convert the MP data to a dict keyed by MP name
        self.mpdata = {k['list_name']: k for k in mpdatadict}

        self.label = nlp.vocab.strings[label]  # get the entity label ID

        # Set up the PhraseMatcher with Doc patterns for each MP name
        patterns = [nlp(c) for c in self.mpdata.keys()]
        self.matcher = PhraseMatcher(nlp.vocab)
        self.matcher.add('MPS', None, *patterns)

        # Register attributes on the Token. We'll be overwriting these based
        # on the matches, so we're only setting default values, not getters.
        Token.set_extension('is_mp', default=False)
        Token.set_extension('mnis_id', default=None)
        Token.set_extension('constituency', default=None)
        Token.set_extension('party', default=None)

        # Register attributes on Doc and Span via a getter that checks if one
        # of the contained tokens has is_mp set to True
        Doc.set_extension('is_mp', getter=self.is_mp)
        Span.set_extension('is_mp', getter=self.is_mp)

    def __call__(self, doc):
        """Apply the pipeline component on a Doc object and modify it if
        matches are found. Return the Doc, so it can be processed by the next
        component in the pipeline, if available.
        """
        matches = self.matcher(doc)
        spans = []  # keep the spans for later so we can merge them afterwards
        for _, start, end in matches:
            # Generate a Span representing the entity & set its label
            entity = Span(doc, start, end, label=self.label)
            spans.append(entity)
            # Set custom attributes on each token of the entity;
            # can be extended with other data associated with the MP
            for token in entity:
                token._.set('is_mp', True)
                token._.set('mnis_id', self.mpdata[entity.text]['member_id'])
                token._.set('constituency', self.mpdata[entity.text]['constituency'])
                token._.set('party', self.mpdata[entity.text]['party'])
            # Overwrite doc.ents and add the entity - be careful not to replace!
            doc.ents = list(doc.ents) + [entity]
        for span in spans:
            # Iterate over all spans and merge each one into a single token.
            # This is done after setting the entities - otherwise, it would
            # cause mismatched indices!
            span.merge()
        return doc  # don't forget to return the Doc!

    def is_mp(self, tokens):
        """Getter for Doc and Span attributes. Returns True if one of the
        tokens is an MP."""
        return any([t._.get('is_mp') for t in tokens])

nlp = spacy.load('en')
rest_mp = RESTMPComponent(nlp)  # initialise the component
nlp.add_pipe(rest_mp)  # add it to the pipeline

doc = nlp(u"Some text about MPs Abbott, Ms Diane and Afriyie, Adam")

print('Pipeline', nlp.pipe_names)  # the pipeline contains the component name
print('Doc has MPs', doc._.is_mp)  # the Doc contains MPs
for token in doc:
    if token._.is_mp:
        print(token.text, '::', token._.constituency, '::', token._.party, '::', token._.mnis_id)  # MP data
print('Entities', [(e.text, e.label_) for e in doc.ents])  # entities
notebooks/Text Scraping - Notes.ipynb
psychemedia/parlihacks
mit
Simple Plot
In order to plot observed stellar abundances, you just need to enter the wanted ratios with the xaxis and yaxis parameters. Stellab has been coded in a way that any abundance ratio can be plotted (see Appendix A below), as long as the considered data sets contain the elements. In this example, we consider the Milky Way.
import stellab
import matplotlib.pyplot as plt

# Create an instance of Stellab
s = stellab.stellab()

# Plot observational data (you can try all the ratios you want)
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Eu/Fe]')
plt.xlim(-4.5, 0.75)
plt.ylim(-1.6, 1.6)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Solar Normalization
By default, the solar normalization $\log(n_A/n_B)_\odot$ is taken from the reference paper that provides the data set. But every data point can be re-normalized to any other solar values (see Appendix B), using the norm parameter. This is highly recommended, since the original data sets may not share the same solar normalization.
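As a sketch of the arithmetic behind the norm parameter (my reading of the re-normalization, not necessarily Stellab's exact implementation): since $[A/B] = \log(n_A/n_B)_\star - \log(n_A/n_B)_\odot$, switching a data point from its original solar reference to a new one amounts to

$$[A/B]_{\mathrm{new}} = [A/B]_{\mathrm{orig}} + \log(n_A/n_B)_{\odot,\mathrm{orig}} - \log(n_A/n_B)_{\odot,\mathrm{new}}$$

so the stellar measurement itself is untouched and only the solar zero point shifts.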
# First, you can see the list of the available solar abundances
s.list_solar_norm()
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Here is an example of how the observational data can be re-normalized.
# Plot using the default solar normalization of each data set
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]')
plt.xlim(-4.5, 0.75)
plt.ylim(-1.4, 1.6)

# Plot using the same solar normalization for all data sets
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]', norm='Asplund_et_al_2009')
plt.xlim(-4.5, 0.75)
plt.ylim(-1.4, 1.6)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Important Note
In some papers, I had a hard time finding the solar normalization used by the authors. This means I cannot apply the re-normalization for their data set. When that happens, I print a warning below the plot and add two asterisks after the reference paper in the legend.

Personal Selection
You can select a subset of the observational data implemented in Stellab.
# First, you can see the list of the available reference papers
s.list_ref_papers()

# Create a list of reference papers
obs = ['stellab_data/milky_way_data/Jacobson_et_al_2015_stellab',
       'stellab_data/milky_way_data/Venn_et_al_2004_stellab',
       'stellab_data/milky_way_data/Yong_et_al_2013_stellab',
       'stellab_data/milky_way_data/Bensby_et_al_2014_stellab']

# Plot data using your selection of data points
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ca/Fe]', norm='Asplund_et_al_2009', obs=obs)
plt.xlim(-4.5, 0.7)
plt.ylim(-1.4, 1.6)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Galaxy Selection
The Milky Way (milky_way) is the default galaxy. But you can select another galaxy among Sculptor, Fornax, and Carina (use lower-case letters).
# Plot data using a specific galaxy
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Si/Fe]', norm='Asplund_et_al_2009', galaxy='fornax')
plt.xlim(-4.5, 0.75)
plt.ylim(-1.4, 1.4)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Plot Error Bars
It is possible to plot error bars with the show_err parameter, and print the mean errors with the show_mean_err parameter.
# Plot error bars for a specific galaxy
s.plot_spectro(xaxis='[Fe/H]', yaxis='[Ti/Fe]',
               norm='Asplund_et_al_2009', galaxy='sculptor',
               show_err=True, show_mean_err=True)
plt.xlim(-4.5, 0.75)
plt.ylim(-1.4, 1.4)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Appendix A - Abundance Ratios
Let's consider that a data set provides stellar abundances in the form of [X/Y], where Y is the reference element (often H or Fe) and X represents any element. It is possible to change the reference element by using simple subtractions and additions.

Subtraction
Let's say we want [Ca/Mg] from [Ca/Fe] and [Mg/Fe].
$$[\mathrm{Ca}/\mathrm{Mg}]=\log(n_\mathrm{Ca}/n_\mathrm{Mg})-\log(n_\mathrm{Ca}/n_\mathrm{Mg})_\odot$$
$$=\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Ca}/n_\mathrm{Fe}}{n_\mathrm{Mg}/n_\mathrm{Fe}}\right)_\odot$$
$$=\log(n_\mathrm{Ca}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{Ca}/n_\mathrm{Fe})_\odot+\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot$$
$$=[\mathrm{Ca}/\mathrm{Fe}]-[\mathrm{Mg}/\mathrm{Fe}]$$

Addition
Let's say we want [Mg/H] from [Fe/H] and [Mg/Fe].
$$[\mathrm{Mg}/\mathrm{H}]=\log(n_\mathrm{Mg}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{H})_\odot$$
$$=\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)-\log\left(\frac{n_\mathrm{Mg}/n_\mathrm{Fe}}{n_\mathrm{H}/n_\mathrm{Fe}}\right)_\odot$$
$$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})-\log(n_\mathrm{H}/n_\mathrm{Fe})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot+\log(n_\mathrm{H}/n_\mathrm{Fe})_\odot$$
$$=\log(n_\mathrm{Mg}/n_\mathrm{Fe})+\log(n_\mathrm{Fe}/n_\mathrm{H})-\log(n_\mathrm{Mg}/n_\mathrm{Fe})_\odot-\log(n_\mathrm{Fe}/n_\mathrm{H})_\odot$$
$$=[\mathrm{Mg}/\mathrm{Fe}]+[\mathrm{Fe}/\mathrm{H}]$$

Test
# Everything should be on a horizontal line s.plot_spectro(xaxis='[Mg/H]', yaxis='[Ti/Ti]') plt.xlim(-1,1) plt.ylim(-1,1) # Everything should be on a vertical line s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Mg]') plt.xlim(-1,1) plt.ylim(-1,1) # Everything should be at zero s.plot_spectro(xaxis='[Mg/Mg]', yaxis='[Ti/Ti]') plt.xlim(-1,1) plt.ylim(-1,1)
regression_tests/Stellab_tests.ipynb
NuGrid/NuPyCEE
bsd-3-clause
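The two identities above can be sanity-checked numerically. A minimal sketch with made-up number densities (the values below are illustrative only, not real stellar or solar abundances):

```python
import math

# Hypothetical number densities (arbitrary values, for illustration only)
n = {'Ca': 2.2e-6, 'Mg': 3.8e-5, 'Fe': 3.2e-5, 'H': 1.0}
n_sun = {'Ca': 2.0e-6, 'Mg': 4.0e-5, 'Fe': 3.0e-5, 'H': 1.0}

def bracket(x, y):
    """[X/Y] = log10(n_X/n_Y) - log10(n_X/n_Y)_sun"""
    return math.log10(n[x] / n[y]) - math.log10(n_sun[x] / n_sun[y])

# Subtraction identity: [Ca/Mg] = [Ca/Fe] - [Mg/Fe]
assert math.isclose(bracket('Ca', 'Mg'), bracket('Ca', 'Fe') - bracket('Mg', 'Fe'))

# Addition identity: [Mg/H] = [Mg/Fe] + [Fe/H]
assert math.isclose(bracket('Mg', 'H'), bracket('Mg', 'Fe') + bracket('Fe', 'H'))
```

Because both sides reduce to the same logarithms, the identities hold exactly for any positive densities, not just these sample values.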
Define Variables constants: $m_b$: Mass of the ROV body. $T_1$: Distance from COM to back thruster axis. $T_2$: Distance from COM to upper thruster axis. $W_x$: distance from center axis to back thruster axis. $W_h$: distance from center axis to upper thruster. $B_h$: the z measure of the COB (center of buoyancy) distance from the COG (center of gravity). $B_w$: the x measure of the COB distance from the COG.
# Inertial Reference Frame N = me.ReferenceFrame('N') # Define a world coordinate origin O = me.Point('O') O.set_vel(N, 0) rot = list(me.dynamicsymbols('r0:3')) #rot = list(symbols('r0:3')) drot = list(me.dynamicsymbols('dr0:3')) x = list(me.dynamicsymbols('v0:3')) # Coordinates of robot in World Frame dx = list(me.dynamicsymbols('dv0:3')) kin_diff=Matrix(x+rot).diff()-Matrix(dx+drot) kin_diff #xxx = me.dynamicsymbols('xxx') #from sympy import * #eval(sm.srepr(kin_diff)) #import pickle #pickle.dump(xxx,open('/tmp/forcing_vector.pkl','wb')) # Constants for the Robot Body Wx = symbols('W_x') # 2*w is the width between thrusters Wh = symbols('W_h') T1 = symbols('T_1') # Distance between thruster base and center of mass T2 = symbols('T_2') Bh = symbols('B_h') Bw = symbols('B_w') m_b = symbols('m_b') # Mass of the body v_b = symbols('v_b') # Volume of the body mu = symbols('\mu') # drag mu_r = symbols('\mu_r') # rotational drag g = symbols('g') I = list(symbols('Ixx, Iyy, Izz')) # Moments of inertia of body # Robot Reference Frame Rz=N.orientnew('R_z', 'Axis', (rot[2], N.z)) Rz.set_ang_vel(N,drot[2]*N.z) Rx=Rz.orientnew('R_x', 'Axis', (rot[0], Rz.x)) Rx.set_ang_vel(Rz,drot[0]*Rz.x) R=Rx.orientnew('R', 'Axis', (rot[1], Rx.y)) R.set_ang_vel(Rx,drot[1]*Rx.y) #### adding a damping torque for each rotation T_z=(R,-drot[2]*N.z*mu_r) # rotational damping torque T_x=(R,-drot[0]*Rz.x*mu_r) # rotational damping torque T_y=(R,-drot[1]*Rx.y*mu_r) # rotational damping torque # Center of mass of body COM = O.locatenew('COM', x[0]*N.x + x[1]*N.y + x[2]*N.z) # Set the velocity of COM COM.set_vel(N, dx[0]*N.x + dx[1]*N.y + dx[2]*N.z) # Center of buoyancy COB = COM.locatenew('COB', R.x*Bw+R.z*Bh) COB.v2pt_theory(COM, N, R); R.ang_vel_in(N) # Calculate inertia of body Ib = me.inertia(R, *I) # Create a rigid body object for body Body = me.RigidBody('Body', COM, R, m_b, (Ib, COM)) # Points of thrusters L1 = COM.locatenew('L_1', -R.x*T1-Wx*R.y) L2 = COM.locatenew('L_2', -R.x*T1+Wx*R.y) L3 = COM.locatenew('L_3', -R.x*T2-Wh*R.z) # Set the velocity of points L1.v2pt_theory(COM, N, R) L2.v2pt_theory(COM, N, R) L3.v2pt_theory(COM, N, R);
demos/openrov/notebooks/rov_friction-3d-full.ipynb
orig74/DroneSimLab
mit
Calculating the hydrodynamic drag under a sphere assumption and ignoring inertia forces: $F_{D}\,=\,{\tfrac {1}{2}}\,\rho \,v^{2}\,C_{D}\,A$ https://en.wikipedia.org/wiki/Drag_(physics) We define $\mu$ as: $\mu=\,{\tfrac {1}{2}}\,\rho \,C_{D}\,A$ then: $F_{D}\,=\mu \,v^{2}$
#dCw=Cw.diff() v=N.x*dx[0]+N.y*dx[1]+N.z*dx[2] Fd=-v.normalize()*v.magnitude()**2*mu Fd #rotational drags #Fr=-R.ang_vel_in(R_static)*mu #Fr=(N,-drot[2]**2*N.z*mu),-drot[0]**2*Rz.x-drot[1]**2*Rx.y #Fr=Fr*mu #Fr #thrust forces symbols F1, F2, F3 = symbols('f_1, f_2, f_3') Fg = -N.z *m_b * g Fb = N.z * v_b * 1e3 *g #whight of 1m^3 water in kg (MKS units) kane = me.KanesMethod(N, q_ind=x+rot, u_ind=dx+drot, kd_eqs=kin_diff) bodies = (Body,) loads = ( (L1, F1 * R.x), (L2, F2 * R.x), (L3, F3 * R.z), (COM, Fg ), (COB, Fb ), (COM, Fd ), T_x, T_y, T_z ) fr, frstar = kane.kanes_equations(loads=loads, bodies=bodies) mu_r mass_matrix = trigsimp(kane.mass_matrix_full) mass_matrix forcing_vector = trigsimp(kane.forcing_full) #open('/tmp/mass_matrix.srepr','wb').write(mass_matrix) coordinates = tuple(x+rot) coordinates speeds = tuple(dx+drot) speeds specified = (F1, F2, F3) constants = [Wx,Wh,T1,T2,Bh,Bw,m_b,v_b,mu,mu_r,g]+I if 1: open('./forcing_vector.srepr','wb').write(srepr((\ forcing_vector, coordinates, mass_matrix, speeds, constants, specified, )).encode()) if 1: (forcing_vector,\ coordinates, mass_matrix, speeds, constants, specified)=eval(open('./forcing_vector.srepr','rb').read()) right_hand_side = generate_ode_function(forcing_vector, coordinates, speeds, constants, mass_matrix=mass_matrix, specifieds=specified) help(right_hand_side) x0 = np.zeros(12) #MKS units #constants = [Wx,Wh,T1,T2,Bh,Bw,m_b,v_b,mu,g]+I numerical_constants = np.array([ 0.1, # Wx [m] 0.15, # Wh [m] 0.1, # T1 [m] 0.05, # T2 [m] 0.08, # Bh [m] 0.01, # Bw [m] 1.0 *3, # m_b [kg] 0.001 *3 , # v_b [M^3] 5.9, # mu 0.2, # mu_r 9.8, # g MKS 0.5, # Ixx [kg*m^2] 0.5, # Iyy [kg*m^2] 0.5, # Izz [kg*m^2] ] ) #args = {'constants': numerical_constants, numerical_specified=[0.8,0.5,0] frames_per_sec = 60.0 final_time = 60.0 t = np.linspace(0.0, final_time, int(final_time * frames_per_sec)) right_hand_side(x0, 0.0, numerical_specified, numerical_constants) def controller(x, t): if t<20 or t>35: #return [0.8,0.5,0] 
return [0.0,0.0,0] else: return [-0.55*30,-0.5*30,0] #def controller(x, t): # return [0.0,0.0,0] y = odeint(right_hand_side, x0, t, args=(controller, numerical_constants)) #y = odeint(right_hand_side, x0, t, args=(numerical_specified, numerical_constants)) y.shape def plot(): plt.figure() #plt.plot(sys.times, np.rad2deg(x[:, :3])) plt.subplot(2,3,1) #plt.plot(t, np.rad2deg(y[:, 0])) plt.plot(t, y[:, :3]) plt.legend([latex(s, mode='inline') for s in coordinates[:3]]) plt.subplot(2,3,2) plt.plot(t, np.rad2deg(y[:, 3:6])) plt.legend([latex(s, mode='inline') for s in coordinates[3:6]]) plt.subplot(2,3,3) plt.title('XY') plt.plot(y[:,0],y[:,1]) plt.axis('equal') plt.subplot(2,3,6) plt.title('Z') plt.plot(y[:,2]) plt.axis('equal') plt.subplot(2,3,4) plt.plot(t, np.rad2deg(y[:, 9:12])) plt.legend([latex(s, mode='inline') for s in coordinates[9:12]]) plot()
demos/openrov/notebooks/rov_friction-3d-full.ipynb
orig74/DroneSimLab
mit
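Since $\mu$ lumps $\rho$, $C_D$ and $A$ into a single constant, the drag law is easy to evaluate numerically. A quick sketch with assumed values (water density, a sphere's drag coefficient, and an arbitrary radius and speed — not the actual ROV parameters):

```python
import math

rho = 1000.0          # water density [kg/m^3] (assumed)
Cd = 0.47             # drag coefficient of a sphere (assumed)
r = 0.1               # radius [m] (assumed)
A = math.pi * r**2    # cross-sectional area [m^2]

mu = 0.5 * rho * Cd * A   # lumped drag constant [kg/m], mu ≈ 7.38
v = 1.5                   # speed [m/s] (assumed)
F_drag = mu * v**2        # drag force magnitude [N], ≈ 16.61 N
print(mu, F_drag)
```

In the simulation above this role is played by the numerical constant `mu` (set to 5.9), so the exact value depends on the vehicle's real geometry.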
Another method: integrate the same dynamics with a fixed-step (explicit Euler) loop instead of odeint.
x0 = np.zeros(12) xx=x0 y=[] for ct in t: x_dot=right_hand_side(xx, ct, controller, numerical_constants) y.append(xx) xx=xx+x_dot*1/frames_per_sec y=np.array(y) y.shape plot()
demos/openrov/notebooks/rov_friction-3d-full.ipynb
orig74/DroneSimLab
mit
Selecting Asset Data Check out the QuantConnect docs to learn how to select asset data.
spy = qb.AddEquity("SPY") eur = qb.AddForex("EURUSD")
Jupyter/BasicQuantBookTemplate.ipynb
Mendelone/forex_trading
apache-2.0
Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link.
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution h1 = qb.History(360, Resolution.Daily) # Plot closing prices from "SPY" h1.loc["SPY"]["close"].plot() # Gets historical data from the subscribed assets, from the last 30 days with daily resolution h2 = qb.History(timedelta(30), Resolution.Daily) # Plot high prices from "EURUSD" h2.loc["EURUSD"]["high"].plot() # Gets historical data from the subscribed assets, between two dates with daily resolution h3 = qb.History(spy.Symbol, datetime(2014,1,1), datetime.now(), Resolution.Daily) # Only fetches historical data from a desired symbol h4 = qb.History(spy.Symbol, 360, Resolution.Daily) # or qb.History("SPY", 360, Resolution.Daily) # Only fetches historical data from a desired symbol # When we are not dealing with equity, we must use the generic method h5 = qb.History[QuoteBar](eur.Symbol, timedelta(30), Resolution.Daily) # or qb.History[QuoteBar]("EURUSD", timedelta(30), Resolution.Daily)
Jupyter/BasicQuantBookTemplate.ipynb
Mendelone/forex_trading
apache-2.0
Historical Options Data Requests Select the option data. Set the filter; otherwise the default, SetFilter(-1, 1, timedelta(0), timedelta(35)), will be used. Get the OptionHistory, an object that has information about the historical options data.
goog = qb.AddOption("GOOG") goog.SetFilter(-2, 2, timedelta(0), timedelta(180)) option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4)) print(option_history.GetStrikes()) print(option_history.GetExpiryDates()) h6 = option_history.GetAllData()
Jupyter/BasicQuantBookTemplate.ipynb
Mendelone/forex_trading
apache-2.0
Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please checkout QuantConnect Indicators Reference Table
# Example with BB, it is a datapoint indicator # Define the indicator bb = BollingerBands(30, 2) # Gets historical data of indicator bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily) # drop undesired fields bbdf = bbdf.drop('standarddeviation', 1) # Plot bbdf.plot() # For EURUSD bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily) bbdf = bbdf.drop('standarddeviation', 1) bbdf.plot() # Example with ADX, it is a bar indicator adx = AverageDirectionalIndex("adx", 14) adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily) adxdf.plot() # For EURUSD adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily) adxdf.plot() # Example with ADO, it is a tradebar indicator (requires volume in its calculation) ado = AccumulationDistributionOscillator("ado", 5, 30) adodf = qb.Indicator(ado, "SPY", 360, Resolution.Daily) adodf.plot() # For EURUSD. # Uncomment to check that this SHOULD fail, since the Forex data type is not TradeBar. # adodf = qb.Indicator(ado, "EURUSD", 360, Resolution.Daily) # adodf.plot() # SMA cross: symbol = "EURUSD" # Get History hist = qb.History[QuoteBar](symbol, 500, Resolution.Daily) # Get the fast moving average fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily) # Get the slow moving average slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily) # Remove undesired columns and rename others fast = fast.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'fast'}) slow = slow.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'slow'}) # Concatenate the information and plot df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0) df.plot() # Get indicator defining a lookback period in terms of timedelta ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily) # Get indicator defining a start and end date ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily) ema = 
pd.concat([ema1, ema2], axis=1) ema.plot() rsi = RelativeStrengthIndex(14) # Selects which field we want to use in our indicator (default is Field.Close) rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High) rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low) rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'}) rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'}) rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1) rsi.plot()
Jupyter/BasicQuantBookTemplate.ipynb
Mendelone/forex_trading
apache-2.0
In the previous chapters we saw how to define functions with def, loops with while and for, and tests with if, along with a few examples of each notion taken independently of the others. Very often in programming, we need to use all of these tools at once. It is their combined use that makes it possible to solve very diverse problems and to express them in a few lines of code. In this chapter, we will look at a few examples that use functions, loops and conditions in the same program. The Syracuse conjecture The Syracuse sequence is a sequence of natural numbers defined as follows. We start from an integer greater than zero; if it is even, we divide it by 2; if it is odd, we multiply it by 3 and add 1. By repeating the operation, we obtain a sequence of positive integers, each of which depends only on its predecessor. For example, the Syracuse sequence of the number 23 is: 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1, ... Once the number 1 has been reached, the sequence of values (1, 4, 2, 1, 4, 2, ...) repeats indefinitely in a cycle of length 3, called the trivial cycle. The Syracuse conjecture is the hypothesis that the Syracuse sequence of any strictly positive integer reaches 1. Despite the simplicity of its statement, this conjecture has defied mathematicians for many years. Paul Erdos said about the Syracuse conjecture: "Mathematics is not yet ready for such problems."
def syracuse(n): while n != 1: print(n, end=' ') if n % 2 == 0: n = n//2 else: n = 3*n+1 syracuse(23) syracuse(245) syracuse(245154)
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
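The conjecture can be checked by brute force for small starting values. A minimal sketch (the step cap is an arbitrary safety bound, not part of the definition):

```python
def reaches_one(n, max_steps=10_000):
    """Return True if the Syracuse sequence of n reaches 1 within max_steps."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > max_steps:
            return False
    return True

# Every starting value up to 10000 reaches the trivial cycle
print(all(reaches_one(n) for n in range(1, 10_001)))  # True
```

Such a check can only ever confirm the conjecture up to a finite bound; it cannot prove it.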
Can you find a number n such that the Syracuse sequence does not reach the 4-2-1 cycle? Enumerating the divisors of an integer A function that returns the list of divisors of an integer can be written as follows, using a for loop and an if test:
def diviseurs(n): L = [] for i in range(1, n+1): if n % i == 0: L.append(i) return L
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
We check that the function works as expected:
diviseurs(12) diviseurs(13) diviseurs(15) diviseurs(24)
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
Testing whether a number is prime One function can use another. For example, using the diviseurs function defined above, we can test whether a number is prime:
def est_premier_1(n): L = diviseurs(n) return len(L) == 2 est_premier_1(12) est_premier_1(13) [n for n in range(20) if est_premier_1(n)]
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
We could be more efficient, since it suffices to check that there are no divisors smaller than the square root of n.
from math import sqrt def est_premier(n): sq = int(sqrt(n)) for i in range(2, sq): if n % i == 0: return False return True
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
Using this function, we find that the list of prime numbers below 20 is:
[n for n in range(20) if est_premier(n)]
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
The result is wrong! Why? The call est_premier(8) currently returns True, because the square root of 8 is 2.828, so sq=int(2.828) equals 2 and the loop never tests the value i=2, since range(2,2) returns an empty list. We can fix this by adding a +1 in the right place:
from math import sqrt def est_premier(n): sq = int(sqrt(n)) for i in range(2, sq+1): if n % i == 0: return False return True
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
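A quick check illustrates the off-by-one behaviour described above:

```python
from math import sqrt

sq = int(sqrt(8))
print(sq)                       # 2
print(list(range(2, sq)))       # [] -- the divisor 2 is never tested
print(list(range(2, sq + 1)))   # [2] -- the +1 restores it
```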
We check that the function now correctly reports that 4 and 8 are not prime numbers:
[n for n in range(20) if est_premier(n)]
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
But there is still an error, since 0 and 1 should not appear in the list. One solution is to handle these two base cases separately:
from math import sqrt def est_premier(n): if n == 0 or n == 1: return False sq = int(sqrt(n)) for i in range(2, sq+1): if n % i == 0: return False return True
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
We check that everything now works correctly:
[n for n in range(50) if est_premier(n)]
NotesDeCours/17-exemples.ipynb
seblabbe/MATH2010-Logiciels-mathematiques
gpl-3.0
Create the core, menus and pipeline tree The core object carries all the system information and is operated on by the other classes
new_core = Core() project_menu = ProjectMenu() module_menu = ModuleMenu() theme_menu = ThemeMenu() pipe_tree = Tree()
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Create a new project
project_title = "DTOcean" new_project = project_menu.new_project(new_core, project_title)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Set the device type
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection") variable_id = "device.system_type" my_var = options_branch.get_input_variable(new_core, new_project, variable_id) my_var.set_raw_interface(new_core, "Wave Floating") my_var.read(new_core, new_project)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Initiate the pipeline This step will become important when the database is incorporated into the system, as it will affect the operation of the pipeline.
project_menu.initiate_pipeline(new_core, new_project)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Discover available modules
names = module_menu.get_available(new_core, new_project) message = html_list(names) HTML(message)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Activate a module Note that the order of activation is important and that we can't deactivate yet!
module_name = 'Installation' module_menu.activate(new_core, new_project, module_name)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Activate the Economics themes
names = theme_menu.get_available(new_core, new_project) message = html_list(names) HTML(message) theme_menu.activate(new_core, new_project, "Economics")
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Check the status of the module inputs
installation_branch = pipe_tree.get_branch(new_core, new_project, 'Installation') input_status = installation_branch.get_input_status(new_core, new_project) message = html_dict(input_status) HTML(message)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Initiate the dataflow This indicates that the filtering and module / theme selections are complete
project_menu.initiate_dataflow(new_core, new_project)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Load test data Prepare the test data for loading. The test_data directory of the source code should be copied to the directory where the notebook is running. When the Python file is run, a pickle file is generated containing a dictionary of inputs.
%run test_data/inputs_wp5.py installation_branch.read_test_data(new_core, new_project, "test_data/inputs_wp5.pkl")
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Prepare the Economics Theme
theme_name = "Economics" eco_branch = pipe_tree.get_branch(new_core, new_project, "Economics") input_status = eco_branch.get_input_status(new_core, new_project) message = html_dict(input_status) HTML(message)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Set the discount rate, cost estimates and project lifetime
discount_rate = 0.05 device_cost = 1000000 project_lifetime = 20 new_var = eco_branch.get_input_variable(new_core, new_project, "project.discount_rate") new_var.set_raw_interface(new_core, discount_rate) new_var.read(new_core, new_project) new_var = eco_branch.get_input_variable(new_core, new_project, "device.system_cost") new_var.set_raw_interface(new_core, device_cost) new_var.read(new_core, new_project) new_var = eco_branch.get_input_variable(new_core, new_project, "project.lifetime") new_var.set_raw_interface(new_core, project_lifetime) new_var.read(new_core, new_project)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Check if the module can be executed
can_execute = module_menu.is_executable(new_core, new_project, module_name) display(can_execute) input_status = installation_branch.get_input_status(new_core, new_project) message = html_dict(input_status) HTML(message)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Execute the current module The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once. Note, any data supplied by the module will be automatically copied into the active data state.
module_menu.execute_current(new_core, new_project)
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Examine the results
output_status = installation_branch.get_output_status(new_core, new_project) message = html_dict(output_status) HTML(message) economics_data = new_core.get_data_value(new_project, "project.device_phase_installation_costs") economics_data economics_data_breakdown = new_core.get_data_value(new_project, "project.device_phase_installation_cost_breakdown") economics_data_breakdown port = new_core.get_data_value(new_project, "project.port") port comp_cost = new_core.get_data_value(new_project, "project.electrical_phase_installation_costs") comp_cost comp_time = new_core.get_data_value(new_project, "project.electrical_phase_installation_times") comp_time economics_data_breakdown = new_core.get_data_value(new_project, "project.electrical_phase_installation_time_breakdown") economics_data_breakdown comp_cost = new_core.get_data_value(new_project, "project.mooring_phase_installation_costs") comp_cost comp_time = new_core.get_data_value(new_project, "project.mooring_phase_installation_times") comp_time economics_data_breakdown = new_core.get_data_value(new_project, "project.mooring_phase_installation_time_breakdown") economics_data_breakdown device_cost_breakdown = new_core.get_data_value(new_project, "project.device_phase_cost_class_breakdown") electrical_cost_breakdown = new_core.get_data_value(new_project, "project.electrical_phase_installation_cost_breakdown") mooring_cost_breakdown = new_core.get_data_value(new_project, "project.mooring_phase_installation_cost_breakdown") economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_phase_cost_breakdown") economics_data_breakdown economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_cost_class_breakdown") economics_data_breakdown device_time_breakdown = new_core.get_data_value(new_project, "project.device_phase_time_class_breakdown") device_time_breakdown economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_phase_time_breakdown") 
economics_data_breakdown economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_time_class_breakdown") economics_data_breakdown economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_economics_data") economics_data_breakdown output_status = eco_branch.get_output_status(new_core, new_project) message = html_dict(output_status) HTML(message) economics_data_breakdown = new_core.get_data_value(new_project, "project.capex_breakdown") economics_data_breakdown economics_data_breakdown = new_core.get_data_value(new_project, "project.capex_total") economics_data_breakdown
notebooks/DTOcean Installation Module Example.ipynb
DTOcean/dtocean-core
gpl-3.0
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 2 sample_id = 7 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalized data """ return x / 255.0 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
one_hot_map = np.eye(10) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function return one_hot_map[x] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. 
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size.
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ x = tf.placeholder(tf.float32, (None,) + image_shape, name="x") return x def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ y = tf.placeholder(tf.float32, [None, n_classes], name="y") return y def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ keep_prob = tf.placeholder(tf.float32, name="keep_prob") return keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # find # of input channels and create weight tensor channels = x_tensor.get_shape().as_list()[3] weight_dimension = conv_ksize + (channels,) + (conv_num_outputs,) weight = tf.Variable( tf.truncated_normal( weight_dimension, mean=0.0, stddev=0.1 ) ) # conv layer bias = tf.Variable(tf.zeros(conv_num_outputs)) conv_layer = tf.nn.conv2d(x_tensor, weight, (1,) + conv_strides + (1,), padding='SAME') conv_layer = tf.nn.bias_add(conv_layer, bias) conv_layer = tf.nn.relu(conv_layer) # max pooling conv_layer = tf.nn.max_pool( conv_layer, (1,) + pool_ksize + (1,), (1,) + pool_strides + (1,), padding='SAME') return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function return tf.contrib.layers.flatten(x_tensor) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return tf.contrib.layers.fully_connected(
        inputs=x_tensor,
        num_outputs=num_outputs,
        activation_fn=tf.nn.relu,
        biases_initializer=tf.zeros_initializer(),
        weights_initializer=lambda size, dtype, partition_info:
            tf.truncated_normal(shape=size, dtype=dtype, mean=0.0, stddev=0.1)
    )


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # No activation here: fully_connected defaults to ReLU, so activation_fn
    # must be set to None for raw logits, as the instructions require.
    return tf.contrib.layers.fully_connected(
        inputs=x_tensor,
        num_outputs=num_outputs,
        activation_fn=None,
        weights_initializer=lambda size, dtype, partition_info:
            tf.truncated_normal(shape=size, dtype=dtype, mean=0.0, stddev=0.1)
    )


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) x = conv2d_maxpool(x, 16, (4,4), (1,1), (2,2), (1,1)) x = conv2d_maxpool(x, 32, (4,4), (1,1), (2,2), (1,1)) x = conv2d_maxpool(x, 64, (4,4), (1,1), (2,2), (1,1)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x = flatten(x) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) x = fully_conn(x, 512) x = tf.nn.dropout(x, keep_prob) x = fully_conn(x, 256) x = tf.nn.dropout(x, keep_prob) x = fully_conn(x, 64) x = tf.nn.dropout(x, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) logits = output(x,10) # TODO: return output return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
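It can help to trace the tensor shapes through conv_net by hand. Every convolution and pooling stride above is 1 and the padding is 'SAME', so the spatial dimensions never shrink and only the channel count changes. A sketch of that arithmetic (the layer parameters are copied from the model above):

```python
import math

def same_out(size, stride):
    # TensorFlow's 'SAME' padding rule: out = ceil(in / stride)
    return math.ceil(size / stride)

h = w = 32      # CIFAR-10 spatial dimensions
channels = 3
# (conv_stride, pool_stride, conv_num_outputs) for each conv2d_maxpool call
for conv_stride, pool_stride, conv_channels in [(1, 1, 16), (1, 1, 32), (1, 1, 64)]:
    h = same_out(same_out(h, conv_stride), pool_stride)
    w = same_out(same_out(w, conv_stride), pool_stride)
    channels = conv_channels

flat = h * w * channels
print((h, w, channels), flat)  # (32, 32, 64) 65536
```

Note that max pooling with pool_strides of (1, 1) does not downsample at all, so the flatten layer sees a rather wide 65536-element vector; raising the pool strides to (2, 2) would shrink it considerably.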
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(
        cost,
        feed_dict={
            x: feature_batch,
            y: label_batch,
            keep_prob: 1.0
        })
    valid_acc = session.run(
        accuracy,
        feed_dict={
            x: valid_features,
            y: valid_labels,
            keep_prob: 1.0
        })
    print("Loss: {:.4f}, Validation Accuracy: {:.4f}".format(loss, valid_acc))
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
Hyperparameters

Tune the following parameters:

* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: 64, 128, 256, ...
* Set keep_probability to the probability of keeping a node using dropout
# TODO: Tune Parameters epochs = 20 batch_size = 128 keep_probability = 0.5
image-classification/dlnd_image_classification.ipynb
arthurtsang/deep-learning
mit
QAOA example problems <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/example_problems"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> The shallowest depth version of the Quantum Approximate Optimization Algorithm (QAOA) consists of the application of two unitary operators: the problem unitary and the driver unitary. The first of these depends on the parameter $\gamma$ and applies a phase to pairs of bits according to the problem-specific cost operator $C$: $$ U_C\!\left(\gamma\right) = e^{-i \gamma C} = \prod_{j < k} e^{-i \gamma w_{jk} Z_j Z_k} $$ whereas the driver unitary depends on the parameter $\beta$, is problem-independent, and serves to drive transitions between bitstrings within the superposition state: $$ \newcommand{\gammavector}{\boldsymbol{\gamma}} \newcommand{\betavector}{\boldsymbol{\beta}} U_B\!\left(\beta\right) = e^{-i \beta B} = \prod_j e^{- i \beta X_j}, \quad \qquad B = \sum_j X_j $$ where $X_j$ is the Pauli $X$ operator on qubit $j$. 
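The product form of the problem unitary can be checked numerically on a toy instance: every $Z_j Z_k$ term is diagonal in the computational basis, so the per-edge exponentials commute and multiply out to $e^{-i \gamma C}$. A small NumPy sketch (the triangle graph, weights, and $\gamma$ below are an arbitrary illustration, not one of the problem instances studied later):

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])
I = np.eye(2)

def zz(j, k, n):
    """Z_j Z_k acting on n qubits (qubit 0 is the leftmost kron factor)."""
    ops = [Z if q in (j, k) else I for q in range(n)]
    return reduce(np.kron, ops)

# Toy triangle graph on 3 qubits with weights w_jk in {+1, -1}
n, gamma = 3, 0.37
edges = {(0, 1): 1, (0, 2): -1, (1, 2): 1}

C = sum(w * zz(j, k, n) for (j, k), w in edges.items())
# C is diagonal, so exp(-i*gamma*C) is just an elementwise exponential
U_full = np.diag(np.exp(-1j * gamma * np.diag(C)))
# Product of the per-edge two-qubit phase unitaries
U_prod = reduce(np.matmul, (np.diag(np.exp(-1j * gamma * w * np.diag(zz(j, k, n))))
                            for (j, k), w in edges.items()))

print(np.allclose(U_full, U_prod))  # True: the ZZ terms commute
```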
These operators can be implemented by sequentially evolving under each term of the product; specifically, the problem unitary is applied with a sequence of two-body interactions while the driver unitary is a single-qubit rotation on each qubit. For higher-depth versions of the algorithm the two unitaries are sequentially re-applied, each with its own $\beta$ or $\gamma$. The number of applications of the pair of unitaries is represented by the hyperparameter $p$, with parameters $\gammavector = (\gamma_1, \dots, \gamma_p)$ and $\betavector = (\beta_1, \dots, \beta_p)$. For $n$ qubits, we prepare the parameterized state $$ \newcommand{\bra}[1]{\langle #1|} \newcommand{\ket}[1]{|#1\rangle} | \gammavector , \betavector \rangle = U_B(\beta_p) U_C(\gamma_p ) \cdots U_B(\beta_1) U_C(\gamma_1 ) \ket{+}^{\otimes n}, $$ where $\ket{+}^{\otimes n}$ is the symmetric superposition of computational basis states. <img src="./images/qaoa_circuit.png" alt="QAOA circuit"/> The optimization problems we study in this work are defined through a cost function with a corresponding quantum operator $C$ given by $$ C = \sum_{j < k} w_{jk} Z_j Z_k $$ where $Z_j$ denotes the Pauli $Z$ operator on qubit $j$, and the $w_{jk}$ correspond to scalar weights with values $\{0, \pm 1\}$. Because these clauses act on at most two qubits, we can associate a graph with each problem instance, with weighted edges given by the $w_{jk}$ adjacency matrix. Setup Install the ReCirq package:
try: import recirq except ImportError: !pip install git+https://github.com/quantumlib/ReCirq
docs/qaoa/example_problems.ipynb
quantumlib/ReCirq
apache-2.0
Now import Cirq, ReCirq and the module dependencies:
import networkx as nx import numpy as np import scipy.optimize import cirq import recirq %matplotlib inline from matplotlib import pyplot as plt # theme colors QBLUE = '#1967d2' QRED = '#ea4335ff' QGOLD = '#fbbc05ff'
docs/qaoa/example_problems.ipynb
quantumlib/ReCirq
apache-2.0
Hardware grid First, we study problem graphs which match the connectivity of our hardware, which we term "Hardware Grid problems". Despite results showing that problems on such graphs are efficient to solve on average, we study these problems as they do not require routing. This family of problems is composed of random instances generated by sampling $w_{ij}$ to be $\pm 1$ for edges in the device topology or a subgraph thereof.
from recirq.qaoa.problems import get_all_hardware_grid_problems import cirq.contrib.routing as ccr hg_problems = get_all_hardware_grid_problems( device_graph=ccr.gridqubits_to_graph_device(recirq.get_device_obj_by_name('Sycamore23').qubits), central_qubit=cirq.GridQubit(6,3), n_instances=10, rs=np.random.RandomState(5) ) instance_i = 0 n_qubits = 23 problem = hg_problems[n_qubits, instance_i] fig, ax = plt.subplots(figsize=(6,5)) pos = {i: coord for i, coord in enumerate(problem.coordinates)} nx.draw_networkx(problem.graph, pos=pos, with_labels=False, node_color=QBLUE) if True: # toggle edge labels edge_labels = {(i1, i2): f"{weight:+d}" for i1, i2, weight in problem.graph.edges.data('weight')} nx.draw_networkx_edge_labels(problem.graph, pos=pos, edge_labels=edge_labels) ax.axis('off') fig.tight_layout()
docs/qaoa/example_problems.ipynb
quantumlib/ReCirq
apache-2.0
Sherrington-Kirkpatrick model Next, we study instances of the Sherrington-Kirkpatrick (SK) model, defined on the complete graph with $w_{ij}$ randomly chosen to be $\pm 1$. This is a canonical example of a frustrated spin glass and is most penalized by routing, which can be performed optimally using linear swap networks at the cost of a linear increase in circuit depth.
from recirq.qaoa.problems import get_all_sk_problems n_qubits = 17 all_sk_problems = get_all_sk_problems(max_n_qubits=17, n_instances=10, rs=np.random.RandomState(5)) sk_problem = all_sk_problems[n_qubits, instance_i] fig, ax = plt.subplots(figsize=(6,5)) pos = nx.circular_layout(sk_problem.graph) nx.draw_networkx(sk_problem.graph, pos=pos, with_labels=False, node_color=QRED) if False: # toggle edge labels edge_labels = {(i1, i2): f"{weight:+d}" for i1, i2, weight in sk_problem.graph.edges.data('weight')} nx.draw_networkx_edge_labels(sk_problem.graph, pos=pos, edge_labels=edge_labels) ax.axis('off') fig.tight_layout()
docs/qaoa/example_problems.ipynb
quantumlib/ReCirq
apache-2.0
3-regular MaxCut Finally, we study instances of the MaxCut problem on 3-regular graphs. This is a prototypical discrete optimization problem with a low, fixed node degree but a high dimension which cannot be trivially mapped to a planar architecture. It more closely matches problems of industrial interest. For these problems, we use an automated routing algorithm to heuristically insert SWAP operations.
from recirq.qaoa.problems import get_all_3_regular_problems n_qubits = 22 instance_i = 0 threereg_problems = get_all_3_regular_problems(max_n_qubits=22, n_instances=10, rs=np.random.RandomState(5)) threereg_problem = threereg_problems[n_qubits, instance_i] fig, ax = plt.subplots(figsize=(6,5)) pos = nx.spring_layout(threereg_problem.graph, seed=11) nx.draw_networkx(threereg_problem.graph, pos=pos, with_labels=False, node_color=QGOLD) if False: # toggle edge labels edge_labels = {(i1, i2): f"{weight:+d}" for i1, i2, weight in threereg_problem.graph.edges.data('weight')} nx.draw_networkx_edge_labels(threereg_problem.graph, pos=pos, edge_labels=edge_labels) ax.axis('off') fig.tight_layout()
docs/qaoa/example_problems.ipynb
quantumlib/ReCirq
apache-2.0
In this case, we'll just stick with the standard meteorological data. The "realtime" data from NDBC contains approximately 45 days of data from each buoy. We'll retrieve that record for buoy 46042 and then do some cleaning of the data.
df = NDBC.realtime_observations('46042') df.tail()
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
Let's get rid of the columns with all missing data. We could use the drop method and manually name all of the columns, but that would require us to know which are all NaN and that sounds like manual labor - something that programmers hate. Pandas has the dropna method that allows us to drop rows or columns where any or all values are NaN. In this case, let's drop all columns with all NaN values.
df = df.dropna(axis='columns', how='all') df.head()
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
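To see exactly what axis='columns', how='all' keeps and drops, here is a toy DataFrame (the column names are fabricated for illustration): only the column where every value is NaN is removed, while a partially missing column survives.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'wind_speed': [5.1, 4.8, 6.0],
    'dewpoint': [np.nan, np.nan, np.nan],   # entirely missing
    'pressure': [1012.3, np.nan, 1011.8],   # partially missing
})

# how='all' drops only columns where *every* value is NaN
cleaned = toy.dropna(axis='columns', how='all')
print(list(cleaned.columns))  # ['wind_speed', 'pressure']
```

With how='any' instead, 'pressure' would also be dropped, which is usually too aggressive for observational data with scattered gaps.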
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Use the realtime_observations method to retrieve supplemental data for buoy 41002. <b>Note:</b> assign the data to something other than df or you'll have to rerun the data download cell above. We suggest using the name supl_obs.</li> </ul> </div>
# Your code goes here # supl_obs =
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
Solution
# %load solutions/get_obs.py
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
Finally, we need to trim down the data. The file contains 45 days worth of observations. Let's look at the last week's worth of data.
import pandas as pd idx = df.time >= (pd.Timestamp.utcnow() - pd.Timedelta(days=7)) df = df[idx] df.head()
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
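The same boolean-mask pattern can be seen on a tiny frame with fabricated timestamps: only rows from the last seven days survive the comparison.

```python
import pandas as pd

now = pd.Timestamp.utcnow()
toy = pd.DataFrame({
    'time': [now - pd.Timedelta(days=d) for d in (45, 10, 3, 0)],
    'wind_speed': [4.0, 5.5, 6.1, 5.0],
})

# Keep only observations newer than seven days ago
recent = toy[toy.time >= now - pd.Timedelta(days=7)]
print(len(recent))  # 2 -- only the 3-day-old and current rows remain
```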
We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with.
df.reset_index(drop=True, inplace=True) df.head()
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
<a href="#top">Top</a> <hr style="height:2px;"> <a name="basictimeseries"></a> Basic Timeseries Plotting Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We're going to learn the basics of creating timeseries plots with matplotlib by plotting buoy wind, gust, temperature, and pressure data.
# Convention for import of the pyplot interface import matplotlib.pyplot as plt # Set-up to have matplotlib use its support for notebook inline plots %matplotlib inline
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit
We'll start by plotting the windspeed observations from the buoy.
plt.rc('font', size=12)
fig, ax = plt.subplots(figsize=(10, 6))

# Specify how our lines should look
ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')

# Label the axes and add a title, grid, and legend
ax.set_xlabel('Time')
ax.set_ylabel('Speed (m/s)')
ax.set_title('Buoy Wind Data')
ax.grid(True)
ax.legend(loc='upper left');
notebooks/Time_Series/Basic Time Series Plotting.ipynb
Unidata/unidata-python-workshop
mit