| text (string, 12-1.05M chars) | repo_name (string, 5-86 chars) | path (string, 4-191 chars) | language (string, 1 class) | license (string, 15 classes) | size (int32, 12-1.05M) | keyword (list, 1-23 items) | text_hash (string, 64 chars) |
|---|---|---|---|---|---|---|---|
# -*- coding: utf-8 -*-
# Docker, starting from 0.7.x, generates names from notable scientists and hackers.
# Please, for any amazing man that you add to the list, consider adding an equally amazing woman to it, and vice versa.
RIGHT = [
# Muhammad ibn Jābir al-Ḥarrānī al-Battānī was a founding father of astronomy. https://en.wikipedia.org/wiki/Mu%E1%B8%A5ammad_ibn_J%C4%81bir_al-%E1%B8%A4arr%C4%81n%C4%AB_al-Batt%C4%81n%C4%AB
'albattani',
# Frances E. Allen, became the first female IBM Fellow in 1989. In 2006, she became the first female recipient of the ACM's Turing Award. https://en.wikipedia.org/wiki/Frances_E._Allen
'allen',
# June Almeida - Scottish virologist who took the first pictures of the rubella virus - https://en.wikipedia.org/wiki/June_Almeida
'almeida',
# Maria Gaetana Agnesi - Italian mathematician, philosopher, theologian and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a Mathematics Professor at a University. https://en.wikipedia.org/wiki/Maria_Gaetana_Agnesi
'agnesi',
# Archimedes was a physicist, engineer and mathematician who invented too many things to list them here. https://en.wikipedia.org/wiki/Archimedes
'archimedes',
# Maria Ardinghelli - Italian translator, mathematician and physicist - https://en.wikipedia.org/wiki/Maria_Ardinghelli
'ardinghelli',
# Aryabhata - Ancient Indian mathematician-astronomer during 476-550 CE https://en.wikipedia.org/wiki/Aryabhata
'aryabhata',
# Wanda Austin - Wanda Austin is the President and CEO of The Aerospace Corporation, a leading architect for the US security space programs. https://en.wikipedia.org/wiki/Wanda_Austin
'austin',
# Charles Babbage invented the concept of a programmable computer. https://en.wikipedia.org/wiki/Charles_Babbage.
'babbage',
# Stefan Banach - Polish mathematician, was one of the founders of modern functional analysis. https://en.wikipedia.org/wiki/Stefan_Banach
'banach',
# John Bardeen co-invented the transistor - https://en.wikipedia.org/wiki/John_Bardeen
'bardeen',
# Jean Bartik, born Betty Jean Jennings, was one of the original programmers for the ENIAC computer. https://en.wikipedia.org/wiki/Jean_Bartik
'bartik',
# Laura Bassi, the world's first female professor https://en.wikipedia.org/wiki/Laura_Bassi
'bassi',
# Hugh Beaver, British engineer, founder of the Guinness Book of World Records https://en.wikipedia.org/wiki/Hugh_Beaver
'beaver',
# Alexander Graham Bell - an eminent Scottish-born scientist, inventor, engineer and innovator who is credited with inventing the first practical telephone - https://en.wikipedia.org/wiki/Alexander_Graham_Bell
'bell',
# Homi J Bhabha - was an Indian nuclear physicist, founding director, and professor of physics at the Tata Institute of Fundamental Research. Colloquially known as 'father of Indian nuclear programme'- https://en.wikipedia.org/wiki/Homi_J._Bhabha
'bhabha',
# Bhaskara II - Ancient Indian mathematician-astronomer whose work on calculus predates Newton and Leibniz by over half a millennium - https://en.wikipedia.org/wiki/Bh%C4%81skara_II#Calculus
'bhaskara',
# Elizabeth Blackwell - American doctor and first American woman to receive a medical degree - https://en.wikipedia.org/wiki/Elizabeth_Blackwell
'blackwell',
# Niels Bohr is the father of quantum theory. https://en.wikipedia.org/wiki/Niels_Bohr.
'bohr',
# Kathleen Booth, she's credited with writing the first assembly language. https://en.wikipedia.org/wiki/Kathleen_Booth
'booth',
# Anita Borg - Anita Borg was the founding director of the Institute for Women and Technology (IWT). https://en.wikipedia.org/wiki/Anita_Borg
'borg',
# Satyendra Nath Bose - He provided the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. - https://en.wikipedia.org/wiki/Satyendra_Nath_Bose
'bose',
# Evelyn Boyd Granville - She was one of the first African-American women to receive a Ph.D. in mathematics; she earned it in 1949 from Yale University. https://en.wikipedia.org/wiki/Evelyn_Boyd_Granville
'boyd',
# Brahmagupta - Ancient Indian mathematician during 598-670 CE who gave rules to compute with zero - https://en.wikipedia.org/wiki/Brahmagupta#Zero
'brahmagupta',
# Walter Houser Brattain co-invented the transistor - https://en.wikipedia.org/wiki/Walter_Houser_Brattain
'brattain',
# Emmett Brown invented time travel. https://en.wikipedia.org/wiki/Emmett_Brown (thanks Brian Goff)
'brown',
# Rachel Carson - American marine biologist and conservationist, her book Silent Spring and other writings are credited with advancing the global environmental movement. https://en.wikipedia.org/wiki/Rachel_Carson
'carson',
# Subrahmanyan Chandrasekhar - Astrophysicist known for his mathematical theory of the evolution and structure of stars. He won the Nobel Prize in Physics - https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar
'chandrasekhar',
# Claude Shannon - The father of information theory and founder of digital circuit design theory. (https://en.wikipedia.org/wiki/Claude_Shannon)
'shannon',
# Joan Clarke - Bletchley Park code breaker during the Second World War who pioneered techniques that remained top secret for decades. Also an accomplished numismatist https://en.wikipedia.org/wiki/Joan_Clarke
'clarke',
# Jane Colden - American botanist widely considered the first female American botanist - https://en.wikipedia.org/wiki/Jane_Colden
'colden',
# Gerty Theresa Cori - American biochemist who became the third woman—and first American woman—to win a Nobel Prize in science, and the first woman to be awarded the Nobel Prize in Physiology or Medicine. Cori was born in Prague. https://en.wikipedia.org/wiki/Gerty_Cori
'cori',
# Seymour Roger Cray was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades. https://en.wikipedia.org/wiki/Seymour_Cray
'cray',
# This entry reflects a husband and wife team who worked together:
# Joan Curran was a Welsh scientist who developed radar and invented chaff, a radar countermeasure. https://en.wikipedia.org/wiki/Joan_Curran
# Samuel Curran was an Irish physicist who worked alongside his wife during WWII and invented the proximity fuse. https://en.wikipedia.org/wiki/Samuel_Curran
'curran',
# Marie Curie discovered radioactivity. https://en.wikipedia.org/wiki/Marie_Curie.
'curie',
# Charles Darwin established the principles of natural evolution. https://en.wikipedia.org/wiki/Charles_Darwin.
'darwin',
# Leonardo Da Vinci invented too many things to list here. https://en.wikipedia.org/wiki/Leonardo_da_Vinci.
'davinci',
# Edsger Wybe Dijkstra was a Dutch computer scientist and mathematical scientist. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra.
'dijkstra',
# Donna Dubinsky - played an integral role in the development of personal digital assistants (PDAs) serving as CEO of Palm, Inc. and co-founding Handspring. https://en.wikipedia.org/wiki/Donna_Dubinsky
'dubinsky',
# Annie Easley - She was a leading member of the team which developed software for the Centaur rocket stage and one of the first African-Americans in her field. https://en.wikipedia.org/wiki/Annie_Easley
'easley',
# Thomas Alva Edison, prolific inventor https://en.wikipedia.org/wiki/Thomas_Edison
'edison',
# Albert Einstein invented the general theory of relativity. https://en.wikipedia.org/wiki/Albert_Einstein
'einstein',
# Gertrude Elion - American biochemist, pharmacologist and the 1988 recipient of the Nobel Prize in Medicine - https://en.wikipedia.org/wiki/Gertrude_Elion
'elion',
# Douglas Engelbart gave the mother of all demos: https://en.wikipedia.org/wiki/Douglas_Engelbart
'engelbart',
# Euclid invented geometry. https://en.wikipedia.org/wiki/Euclid
'euclid',
# Leonhard Euler invented large parts of modern mathematics. https://de.wikipedia.org/wiki/Leonhard_Euler
'euler',
# Pierre de Fermat pioneered several aspects of modern mathematics. https://en.wikipedia.org/wiki/Pierre_de_Fermat
'fermat',
# Enrico Fermi invented the first nuclear reactor. https://en.wikipedia.org/wiki/Enrico_Fermi.
'fermi',
# Richard Feynman was a key contributor to quantum mechanics and particle physics. https://en.wikipedia.org/wiki/Richard_Feynman
'feynman',
# Benjamin Franklin is famous for his experiments in electricity and the invention of the lightning rod.
'franklin',
# Galileo was a founding father of modern astronomy, and faced politics and obscurantism to establish scientific truth. https://en.wikipedia.org/wiki/Galileo_Galilei
'galileo',
# William Henry 'Bill' Gates III is an American business magnate, philanthropist, investor, computer programmer, and inventor. https://en.wikipedia.org/wiki/Bill_Gates
'gates',
# Adele Goldberg, was one of the designers and developers of the Smalltalk language. https://en.wikipedia.org/wiki/Adele_Goldberg_(computer_scientist)
'goldberg',
# Adele Goldstine, born Adele Katz, wrote the complete technical description for the first electronic digital computer, ENIAC. https://en.wikipedia.org/wiki/Adele_Goldstine
'goldstine',
# Shafi Goldwasser is a computer scientist known for creating theoretical foundations of modern cryptography. Winner of 2012 ACM Turing Award. https://en.wikipedia.org/wiki/Shafi_Goldwasser
'goldwasser',
# James Golick, all around gangster.
'golick',
# Jane Goodall - British primatologist, ethologist, and anthropologist who is considered to be the world's foremost expert on chimpanzees - https://en.wikipedia.org/wiki/Jane_Goodall
'goodall',
# Lois Haibt - American computer scientist, part of the team at IBM that developed FORTRAN - https://en.wikipedia.org/wiki/Lois_Haibt
'haibt',
# Margaret Hamilton - Director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for the Apollo space program. https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist)
'hamilton',
# Stephen Hawking pioneered the field of cosmology by combining general relativity and quantum mechanics. https://en.wikipedia.org/wiki/Stephen_Hawking
'hawking',
# Werner Heisenberg was a founding father of quantum mechanics. https://en.wikipedia.org/wiki/Werner_Heisenberg
'heisenberg',
# Jaroslav Heyrovský was the inventor of the polarographic method, father of the electroanalytical method, and recipient of the Nobel Prize in 1959. His main field of work was polarography. https://en.wikipedia.org/wiki/Jaroslav_Heyrovsk%C3%BD
'heyrovsky',
# Dorothy Hodgkin was a British biochemist, credited with the development of protein crystallography. She was awarded the Nobel Prize in Chemistry in 1964. https://en.wikipedia.org/wiki/Dorothy_Hodgkin
'hodgkin',
# Erna Schneider Hoover revolutionized modern communication by inventing a computerized telephone switching method. https://en.wikipedia.org/wiki/Erna_Schneider_Hoover
'hoover',
# Grace Hopper developed the first compiler for a computer programming language and is credited with popularizing the term 'debugging' for fixing computer glitches. https://en.wikipedia.org/wiki/Grace_Hopper
'hopper',
# Frances Hugle, she was an American scientist, engineer, and inventor who contributed to the understanding of semiconductors, integrated circuitry, and the unique electrical principles of microscopic materials. https://en.wikipedia.org/wiki/Frances_Hugle
'hugle',
# Hypatia - Greek Alexandrine Neoplatonist philosopher in Egypt who was one of the earliest mothers of mathematics - https://en.wikipedia.org/wiki/Hypatia
'hypatia',
# Yeong-Sil Jang was a Korean scientist and astronomer during the Joseon Dynasty; he invented the first metal printing press and water gauge. https://en.wikipedia.org/wiki/Jang_Yeong-sil
'jang',
# Betty Jennings - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Jean_Bartik
'jennings',
# Mary Lou Jepsen, was the founder and chief technology officer of One Laptop Per Child (OLPC), and the founder of Pixel Qi. https://en.wikipedia.org/wiki/Mary_Lou_Jepsen
'jepsen',
# Irène Joliot-Curie - French scientist who was awarded the Nobel Prize for Chemistry in 1935. Daughter of Marie and Pierre Curie. https://en.wikipedia.org/wiki/Ir%C3%A8ne_Joliot-Curie
'joliot',
# Karen Spärck Jones came up with the concept of inverse document frequency, which is used in most search engines today. https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones
'jones',
# A. P. J. Abdul Kalam - is an Indian scientist aka Missile Man of India for his work on the development of ballistic missile and launch vehicle technology - https://en.wikipedia.org/wiki/A._P._J._Abdul_Kalam
'kalam',
# Susan Kare, created the icons and many of the interface elements for the original Apple Macintosh in the 1980s, and was an original employee of NeXT, working as the Creative Director. https://en.wikipedia.org/wiki/Susan_Kare
'kare',
# Mary Kenneth Keller, Sister Mary Kenneth Keller became the first American woman to earn a PhD in Computer Science in 1965. https://en.wikipedia.org/wiki/Mary_Kenneth_Keller
'keller',
# Har Gobind Khorana - Indian-American biochemist who shared the 1968 Nobel Prize for Physiology - https://en.wikipedia.org/wiki/Har_Gobind_Khorana
'khorana',
# Jack Kilby invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Jack_Kilby
'kilby',
# Maria Kirch - German astronomer and first woman to discover a comet - https://en.wikipedia.org/wiki/Maria_Margarethe_Kirch
'kirch',
# Donald Knuth - American computer scientist, author of 'The Art of Computer Programming' and creator of the TeX typesetting system. https://en.wikipedia.org/wiki/Donald_Knuth
'knuth',
# Sophie Kowalevski - Russian mathematician responsible for important original contributions to analysis, differential equations and mechanics - https://en.wikipedia.org/wiki/Sofia_Kovalevskaya
'kowalevski',
# Marie-Jeanne de Lalande - French astronomer, mathematician and cataloguer of stars - https://en.wikipedia.org/wiki/Marie-Jeanne_de_Lalande
'lalande',
# Hedy Lamarr - Actress and inventor. The principles of her work are now incorporated into modern Wi-Fi, CDMA and Bluetooth technology. https://en.wikipedia.org/wiki/Hedy_Lamarr
'lamarr',
# Leslie B. Lamport - American computer scientist. Lamport is best known for his seminal work in distributed systems and was the winner of the 2013 Turing Award. https://en.wikipedia.org/wiki/Leslie_Lamport
'lamport',
# Mary Leakey - British paleoanthropologist who discovered the first fossilized Proconsul skull - https://en.wikipedia.org/wiki/Mary_Leakey
'leakey',
# Henrietta Swan Leavitt - she was an American astronomer who discovered the relation between the luminosity and the period of Cepheid variable stars. https://en.wikipedia.org/wiki/Henrietta_Swan_Leavitt
'leavitt',
# Daniel Lewin - Mathematician, Akamai co-founder, soldier, 9/11 victim-- Developed optimization techniques for routing traffic on the internet. Died attempting to stop the 9-11 hijackers. https://en.wikipedia.org/wiki/Daniel_Lewin
'lewin',
# Ruth Lichterman - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Ruth_Teitelbaum
'lichterman',
# Barbara Liskov - co-developed the Liskov substitution principle. Liskov was also the winner of the Turing Prize in 2008. - https://en.wikipedia.org/wiki/Barbara_Liskov
'liskov',
# Ada Lovelace invented the first algorithm. https://en.wikipedia.org/wiki/Ada_Lovelace (thanks James Turnbull)
'lovelace',
# Auguste and Louis Lumière - the first filmmakers in history - https://en.wikipedia.org/wiki/Auguste_and_Louis_Lumi%C3%A8re
'lumiere',
# Mahavira - Ancient Indian mathematician during 9th century AD who discovered basic algebraic identities - https://en.wikipedia.org/wiki/Mah%C4%81v%C4%ABra_(mathematician)
'mahavira',
# Maria Mayer - American theoretical physicist and Nobel laureate in Physics for proposing the nuclear shell model of the atomic nucleus - https://en.wikipedia.org/wiki/Maria_Mayer
'mayer',
# John McCarthy invented LISP: https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)
'mccarthy',
# Barbara McClintock - a distinguished American cytogeneticist, 1983 Nobel Laureate in Physiology or Medicine for discovering transposons. https://en.wikipedia.org/wiki/Barbara_McClintock
'mcclintock',
# Malcolm McLean invented the modern shipping container: https://en.wikipedia.org/wiki/Malcom_McLean
'mclean',
# Kay McNulty - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli
'mcnulty',
# Lise Meitner - Austrian/Swedish physicist who was involved in the discovery of nuclear fission. The element meitnerium is named after her - https://en.wikipedia.org/wiki/Lise_Meitner
'meitner',
# Carla Meninsky, was the game designer and programmer for Atari 2600 games Dodge 'Em and Warlords. https://en.wikipedia.org/wiki/Carla_Meninsky
'meninsky',
# Johanna Mestorf - German prehistoric archaeologist and first female museum director in Germany - https://en.wikipedia.org/wiki/Johanna_Mestorf
'mestorf',
# Marvin Minsky - Pioneer in Artificial Intelligence, co-founder of the MIT's AI Lab, won the Turing Award in 1969. https://en.wikipedia.org/wiki/Marvin_Minsky
'minsky',
# Maryam Mirzakhani - an Iranian mathematician and the first woman to win the Fields Medal. https://en.wikipedia.org/wiki/Maryam_Mirzakhani
'mirzakhani',
# Samuel Morse - contributed to the invention of a single-wire telegraph system based on European telegraphs and was a co-developer of the Morse code - https://en.wikipedia.org/wiki/Samuel_Morse
'morse',
# Ian Murdock - founder of the Debian project - https://en.wikipedia.org/wiki/Ian_Murdock
'murdock',
# Isaac Newton invented classic mechanics and modern optics. https://en.wikipedia.org/wiki/Isaac_Newton
'newton',
# Florence Nightingale, more prominently known as a nurse, was also the first female member of the Royal Statistical Society and a pioneer in statistical graphics https://en.wikipedia.org/wiki/Florence_Nightingale#Statistics_and_sanitary_reform
'nightingale',
# Alfred Nobel - a Swedish chemist, engineer, innovator, and armaments manufacturer (inventor of dynamite) - https://en.wikipedia.org/wiki/Alfred_Nobel
'nobel',
# Emmy Noether, German mathematician. Noether's Theorem is named after her. https://en.wikipedia.org/wiki/Emmy_Noether
'noether',
# Poppy Northcutt. Poppy Northcutt was the first woman to work as part of NASA’s Mission Control. http://www.businessinsider.com/poppy-northcutt-helped-apollo-astronauts-2014-12?op=1
'northcutt',
# Robert Noyce invented silicon integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Robert_Noyce
'noyce',
# Panini - Ancient Indian linguist and grammarian from the 4th century BCE who worked on the world's first formal system - https://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini#Comparison_with_modern_formal_systems
'panini',
# Ambroise Pare invented modern surgery. https://en.wikipedia.org/wiki/Ambroise_Par%C3%A9
'pare',
# Louis Pasteur discovered vaccination, fermentation and pasteurization. https://en.wikipedia.org/wiki/Louis_Pasteur.
'pasteur',
# Cecilia Payne-Gaposchkin was an astronomer and astrophysicist who, in 1925, proposed in her Ph.D. thesis an explanation for the composition of stars in terms of the relative abundances of hydrogen and helium. https://en.wikipedia.org/wiki/Cecilia_Payne-Gaposchkin
'payne',
# Radia Perlman is a software designer and network engineer and most famous for her invention of the spanning-tree protocol (STP). https://en.wikipedia.org/wiki/Radia_Perlman
'perlman',
# Rob Pike was a key contributor to Unix, Plan 9, the X graphic system, utf-8, and the Go programming language. https://en.wikipedia.org/wiki/Rob_Pike
'pike',
# Henri Poincaré made fundamental contributions in several fields of mathematics. https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9
'poincare',
# Laura Poitras is a director and producer whose work, made possible by open source crypto tools, advances the causes of truth and freedom of information by reporting disclosures by whistleblowers such as Edward Snowden. https://en.wikipedia.org/wiki/Laura_Poitras
'poitras',
# Claudius Ptolemy - a Greco-Egyptian writer of Alexandria, known as a mathematician, astronomer, geographer, astrologer, and poet of a single epigram in the Greek Anthology - https://en.wikipedia.org/wiki/Ptolemy
'ptolemy',
# C. V. Raman - Indian physicist who won the Nobel Prize in 1930 for proposing the Raman effect. - https://en.wikipedia.org/wiki/C._V._Raman
'raman',
# Srinivasa Ramanujan - Indian mathematician and autodidact who made extraordinary contributions to mathematical analysis, number theory, infinite series, and continued fractions. - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
'ramanujan',
# Sally Kristen Ride was an American physicist and astronaut. She was the first American woman in space, and the youngest American astronaut. https://en.wikipedia.org/wiki/Sally_Ride
'ride',
# Rita Levi-Montalcini - Won Nobel Prize in Physiology or Medicine jointly with colleague Stanley Cohen for the discovery of nerve growth factor (https://en.wikipedia.org/wiki/Rita_Levi-Montalcini)
'montalcini',
# Dennis Ritchie - co-creator of UNIX and the C programming language. - https://en.wikipedia.org/wiki/Dennis_Ritchie
'ritchie',
# Wilhelm Conrad Röntgen - German physicist who was awarded the first Nobel Prize in Physics in 1901 for the discovery of X-rays (Röntgen rays). https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen
'roentgen',
# Rosalind Franklin - British biophysicist and X-ray crystallographer whose research was critical to the understanding of DNA - https://en.wikipedia.org/wiki/Rosalind_Franklin
'rosalind',
# Meghnad Saha - Indian astrophysicist best known for his development of the Saha equation, used to describe chemical and physical conditions in stars - https://en.wikipedia.org/wiki/Meghnad_Saha
'saha',
# Jean E. Sammet developed FORMAC, the first widely used computer language for symbolic manipulation of mathematical formulas. https://en.wikipedia.org/wiki/Jean_E._Sammet
'sammet',
# Carol Shaw - Originally an Atari employee, Carol Shaw is said to be the first female video game designer. https://en.wikipedia.org/wiki/Carol_Shaw_(video_game_designer)
'shaw',
# Dame Stephanie 'Steve' Shirley - Founded a software company in 1962 employing women working from home. https://en.wikipedia.org/wiki/Steve_Shirley
'shirley',
# William Shockley co-invented the transistor - https://en.wikipedia.org/wiki/William_Shockley
'shockley',
# Françoise Barré-Sinoussi - French virologist and Nobel Prize Laureate in Physiology or Medicine; her work was fundamental in identifying HIV as the cause of AIDS. https://en.wikipedia.org/wiki/Fran%C3%A7oise_Barr%C3%A9-Sinoussi
'sinoussi',
# Betty Snyder - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Betty_Holberton
'snyder',
# Frances Spence - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Frances_Spence
'spence',
# Richard Matthew Stallman - the founder of the Free Software movement, the GNU project, the Free Software Foundation, and the League for Programming Freedom. He also invented the concept of copyleft to protect the ideals of this movement, and enshrined this concept in the widely-used GPL (General Public License) for software. https://en.wikiquote.org/wiki/Richard_Stallman
'stallman',
# Michael Stonebraker is a database research pioneer and architect of Ingres, Postgres, VoltDB and SciDB. Winner of 2014 ACM Turing Award. https://en.wikipedia.org/wiki/Michael_Stonebraker
'stonebraker',
# Janese Swanson (with others) developed the first of the Carmen Sandiego games. She went on to found Girl Tech. https://en.wikipedia.org/wiki/Janese_Swanson
'swanson',
# Aaron Swartz was influential in creating RSS, Markdown, Creative Commons, Reddit, and much of the internet as we know it today. He was devoted to freedom of information on the web. https://en.wikiquote.org/wiki/Aaron_Swartz
'swartz',
# Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. https://en.wikipedia.org/wiki/Bertha_Swirles
'swirles',
# Nikola Tesla invented the AC electric system and every gadget ever used by a James Bond villain. https://en.wikipedia.org/wiki/Nikola_Tesla
'tesla',
# Ken Thompson - co-creator of UNIX and the C programming language - https://en.wikipedia.org/wiki/Ken_Thompson
'thompson',
# Linus Torvalds invented Linux and Git. https://en.wikipedia.org/wiki/Linus_Torvalds
'torvalds',
# Alan Turing was a founding father of computer science. https://en.wikipedia.org/wiki/Alan_Turing.
'turing',
# Varahamihira - Ancient Indian mathematician who discovered trigonometric formulae during 505-587 CE - https://en.wikipedia.org/wiki/Var%C4%81hamihira#Contributions
'varahamihira',
# Sir Mokshagundam Visvesvaraya - is a notable Indian engineer. He is a recipient of the Indian Republic's highest honour, the Bharat Ratna, in 1955. On his birthday, 15 September is celebrated as Engineer's Day in India in his memory - https://en.wikipedia.org/wiki/Visvesvaraya
'visvesvaraya',
# Christiane Nüsslein-Volhard - German biologist, won Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. https://en.wikipedia.org/wiki/Christiane_N%C3%BCsslein-Volhard
'volhard',
# Marlyn Wescoff - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Marlyn_Meltzer
'wescoff',
# Andrew Wiles - Notable British mathematician who proved the enigmatic Fermat's Last Theorem - https://en.wikipedia.org/wiki/Andrew_Wiles
'wiles',
# Roberta Williams, did pioneering work in graphical adventure games for personal computers, particularly the King's Quest series. https://en.wikipedia.org/wiki/Roberta_Williams
'williams',
# Sophie Wilson designed the first Acorn Micro-Computer and the instruction set for ARM processors. https://en.wikipedia.org/wiki/Sophie_Wilson
'wilson',
# Jeannette Wing - co-developed the Liskov substitution principle. - https://en.wikipedia.org/wiki/Jeannette_Wing
'wing',
# Steve Wozniak invented the Apple I and Apple II. https://en.wikipedia.org/wiki/Steve_Wozniak
'wozniak',
# The Wright brothers, Orville and Wilbur - credited with inventing and building the world's first successful airplane and making the first controlled, powered and sustained heavier-than-air human flight - https://en.wikipedia.org/wiki/Wright_brothers
'wright',
# Rosalyn Sussman Yalow - Rosalyn Sussman Yalow was an American medical physicist, and a co-winner of the 1977 Nobel Prize in Physiology or Medicine for development of the radioimmunoassay technique. https://en.wikipedia.org/wiki/Rosalyn_Sussman_Yalow
'yalow',
# Ada Yonath - an Israeli crystallographer, the first woman from the Middle East to win a Nobel prize in the sciences. https://en.wikipedia.org/wiki/Ada_Yonath
'yonath',
# Max Karl Ernst Ludwig Planck - German Nobel-prize winning physicist. https://en.wikipedia.org/wiki/Max_Planck
'planck',
# Katherine Johnson - physicist, space scientist, and mathematician. https://en.wikipedia.org/wiki/Katherine_Johnson
'johnson'
]
LEFT = [
'admiring',
'adoring',
'affectionate',
'agitated',
'amazing',
'angry',
'awesome',
'backstabbing',
'berserk',
'big',
'boring',
'clever',
'cocky',
'compassionate',
'condescending',
'cranky',
'desperate',
'determined',
'distracted',
'dreamy',
'drunk',
'eager',
'ecstatic',
'elastic',
'elated',
'elegant',
'evil',
'fervent',
'focused',
'furious',
'gigantic',
'gloomy',
'goofy',
'grave',
'happy',
'high',
'hopeful',
'hungry',
'infallible',
'jolly',
'jovial',
'kickass',
'lonely',
'loving',
'mad',
'modest',
'naughty',
'nauseous',
'nostalgic',
'peaceful',
'pedantic',
'pensive',
'prickly',
'reverent',
'romantic',
'sad',
'serene',
'sharp',
'sick',
'silly',
'sleepy',
'small',
'stoic',
'stupefied',
'suspicious',
'tender',
'thirsty',
'tiny',
'trusting',
'zen'
]
| (text above) | kdoichinov/pynamesgen | constants.py | Python | gpl-3.0 | 29,819 | ["Brian"] | a3eca2bb0b140f15e97c26e211aa8ed2a4ad2cbb2ee71bbd95f70616b36b4067 |
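These two lists mirror Docker's names-generator: a container name is a LEFT adjective joined to a RIGHT surname. A minimal sketch of how the lists above might be combined (the function is hypothetical, not part of the file; Docker's Go implementation also rerolls one famous pairing, noted in the comment below):

```python
import random

def random_name(retry=0):
    """Return 'adjective_surname', e.g. 'jolly_curie'."""
    name = '%s_%s' % (random.choice(LEFT), random.choice(RIGHT))
    if name == 'boring_wozniak':
        # Docker's generator regenerates this pair: Steve Wozniak is not boring.
        return random_name(retry)
    if retry > 0:
        # Append a digit to reduce collisions, as Docker does on retries.
        name += str(random.randint(0, 9))
    return name

print(random_name())
```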
import os
import time
import tarfile
import xml.sax
import numpy as np
from gpaw.mpi import broadcast as mpi_broadcast
from gpaw.mpi import world
from gpaw.io import FileReference
intsize = 4
floatsize = np.array([1], float).itemsize
complexsize = np.array([1], complex).itemsize
itemsizes = {'int': intsize, 'float': floatsize, 'complex': complexsize}
class Writer:
def __init__(self, name, comm=world):
self.comm = comm # for possible future use
self.dims = {}
self.files = {}
self.xml1 = ['<gpaw_io version="0.1" endianness="%s">' %
('big', 'little')[int(np.little_endian)]]
self.xml2 = []
if os.path.isfile(name):
os.rename(name, name[:-4] + '.old'+name[-4:])
self.tar = tarfile.open(name, 'w')
self.mtime = int(time.time())
def dimension(self, name, value):
if name in self.dims.keys() and self.dims[name] != value:
raise Warning('Dimension %s changed from %s to %s' % \
(name, self.dims[name], value))
self.dims[name] = value
def __setitem__(self, name, value):
if isinstance(value, float):
value = repr(value)
self.xml1 += [' <parameter %-20s value="%s"/>' %
('name="%s"' % name, value)]
def add(self, name, shape, array=None, dtype=None, units=None,
parallel=False, write=True):
if array is not None:
array = np.asarray(array)
self.dtype, type, itemsize = self.get_data_type(array, dtype)
self.xml2 += [' <array name="%s" type="%s">' % (name, type)]
self.xml2 += [' <dimension length="%s" name="%s"/>' %
(self.dims[dim], dim)
for dim in shape]
self.xml2 += [' </array>']
self.shape = [self.dims[dim] for dim in shape]
size = itemsize * np.product([self.dims[dim] for dim in shape])
self.write_header(name, size)
if array is not None:
self.fill(array)
def get_data_type(self, array=None, dtype=None):
if dtype is None:
dtype = array.dtype
if dtype in [int, bool]:
dtype = np.int32
dtype = np.dtype(dtype)
type = {np.int32: 'int',
np.float64: 'float',
np.complex128: 'complex'}[dtype.type]
return dtype, type, dtype.itemsize
def fill(self, array, *indices, **kwargs):
self.write(np.asarray(array, self.dtype).tostring())
def write_header(self, name, size):
assert name not in self.files.keys()
tarinfo = tarfile.TarInfo(name)
tarinfo.mtime = self.mtime
tarinfo.size = size
self.files[name] = tarinfo
self.size = size
self.n = 0
self.tar.addfile(tarinfo)
def write(self, string):
self.tar.fileobj.write(string)
self.n += len(string)
if self.n == self.size:
blocks, remainder = divmod(self.size, tarfile.BLOCKSIZE)
if remainder > 0:
self.tar.fileobj.write('\0' * (tarfile.BLOCKSIZE - remainder))
blocks += 1
self.tar.offset += blocks * tarfile.BLOCKSIZE
def close(self):
self.xml2 += ['</gpaw_io>\n']
string = '\n'.join(self.xml1 + self.xml2)
self.write_header('info.xml', len(string))
self.write(string)
self.tar.close()
class Reader(xml.sax.handler.ContentHandler):
def __init__(self, name, comm=world):
self.comm = comm # used for broadcasting replicated data
self.master = (self.comm.rank == 0)
self.dims = {}
self.shapes = {}
self.dtypes = {}
self.parameters = {}
xml.sax.handler.ContentHandler.__init__(self)
self.tar = tarfile.open(name, 'r')
f = self.tar.extractfile('info.xml')
xml.sax.parse(f, self)
def startElement(self, tag, attrs):
if tag == 'gpaw_io':
self.byteswap = ((attrs['endianness'] == 'little')
!= np.little_endian)
elif tag == 'array':
name = attrs['name']
self.dtypes[name] = attrs['type']
self.shapes[name] = []
self.name = name
elif tag == 'dimension':
n = int(attrs['length'])
self.shapes[self.name].append(n)
self.dims[attrs['name']] = n
else:
assert tag == 'parameter'
try:
value = eval(attrs['value'], {})
except (SyntaxError, NameError):
value = attrs['value'].encode()
self.parameters[attrs['name']] = value
def dimension(self, name):
return self.dims[name]
def __getitem__(self, name):
return self.parameters[name]
def has_array(self, name):
return name in self.shapes
def get(self, name, *indices, **kwargs):
broadcast = kwargs.pop('broadcast', False)
if self.master or not broadcast:
fileobj, shape, size, dtype = self.get_file_object(name, indices)
array = np.fromstring(fileobj.read(size), dtype)
if self.byteswap:
array = array.byteswap()
if dtype == np.int32:
array = np.asarray(array, int)
array.shape = shape
if shape == ():
array = array.item()
else:
array = None
if broadcast:
array = mpi_broadcast(array, 0, self.comm)
return array
def get_reference(self, name, indices, length=None):
fileobj, shape, size, dtype = self.get_file_object(name, indices)
assert dtype != np.int32
return TarFileReference(fileobj, shape, dtype, self.byteswap, length)
def get_file_object(self, name, indices):
dtype, type, itemsize = self.get_data_type(name)
fileobj = self.tar.extractfile(name)
n = len(indices)
shape = self.shapes[name]
size = itemsize * np.prod(shape[n:], dtype=int)
offset = 0
stride = size
for i in range(n - 1, -1, -1):
offset += indices[i] * stride
stride *= shape[i]
fileobj.seek(offset)
return fileobj, shape[n:], size, dtype
def get_data_type(self, name):
type = self.dtypes[name]
dtype = np.dtype({'int': np.int32,
'float': float,
'complex': complex}[type])
return dtype, type, dtype.itemsize
def get_parameters(self):
return self.parameters
def close(self):
self.tar.close()
class TarFileReference(FileReference):
def __init__(self, fileobj, shape, dtype, byteswap, length):
self.fileobj = fileobj
self.shape = tuple(shape)
self.dtype = dtype
self.itemsize = dtype.itemsize
self.byteswap = byteswap
self.offset = fileobj.tell()
self.length = length
def __len__(self):
return self.shape[0]
def __getitem__(self, indices):
if isinstance(indices, slice):
start, stop, step = indices.indices(len(self))
if start != 0 or step != 1 or stop != len(self):
raise NotImplementedError('You can only slice a TarReference '
'with [:] or [int]')
else:
indices = ()
elif isinstance(indices, int):
indices = (indices,)
else: # Probably tuple or ellipsis
raise NotImplementedError('You can only slice a TarReference '
'with [:] or [int]')
n = len(indices)
size = np.prod(self.shape[n:], dtype=int) * self.itemsize
offset = self.offset
stride = size
for i in range(n - 1, -1, -1):
offset += indices[i] * stride
stride *= self.shape[i]
self.fileobj.seek(offset)
array = np.fromstring(self.fileobj.read(size), self.dtype)
if self.byteswap:
array = array.byteswap()
array.shape = self.shape[n:]
if self.length:
array = array[..., :self.length].copy()
return array
| (text above) | robwarm/gpaw-symm | gpaw/io/tar.py | Python | gpl-3.0 | 8,285 | ["GPAW"] | c833d53ba45eedb5867db6ce554ac22186583b4838f52bc43c083592deb88b80 |
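For orientation, a minimal round-trip sketch for the Writer/Reader pair above (the file name and values are illustrative; like the module itself, this assumes a gpaw installation for gpaw.mpi and the legacy NumPy tostring/fromstring API):

```python
import numpy as np
from gpaw.io.tar import Writer, Reader

w = Writer('demo.gpw')
w.dimension('n', 3)                    # register dimension 'n' = 3
w['version'] = 0.1                     # scalar parameter, stored in info.xml
w.add('data', ('n',), np.array([1.0, 2.0, 3.0]))
w.close()

r = Reader('demo.gpw')
assert r.dimension('n') == 3
assert r['version'] == 0.1             # parameters are eval'ed back from info.xml
print(r.get('data'))                   # -> [ 1.  2.  3.]
r.close()
```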
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# Written (C) 2012-2013 Heiko Strathmann
#
from numpy import *
from pylab import *
from scipy import *
from modshogun import RealFeatures
from modshogun import MeanShiftDataGenerator
from modshogun import GaussianKernel, CombinedKernel
from modshogun import LinearTimeMMD, MMDKernelSelectionOpt
from modshogun import PERMUTATION, MMD1_GAUSSIAN
from modshogun import EuclideanDistance
from modshogun import Statistics, Math
# for nice plotting that fits into our shogun tutorial
import latex_plot_inits
def linear_time_mmd_graphical():
# parameters, change to get different results
m=1000 # set to 10000 for a good test result
dim=2
# setting the difference of the first dimension smaller makes a harder test
difference=1
# number of samples taken from null and alternative distribution
num_null_samples=150
# streaming data generator for mean shift distributions
gen_p=MeanShiftDataGenerator(0, dim)
gen_q=MeanShiftDataGenerator(difference, dim)
# use the median kernel selection
# create combined kernel with Gaussian kernels inside (shogun's Gaussian kernel is parametrized via width=2*sigma^2, see below)
# compute median data distance in order to use for Gaussian kernel width
# 0.5*median_distance normally (factor two in Gaussian kernel)
# However, shoguns kernel width is different to usual parametrization
# Therefore 0.5*2*median_distance^2
# Use a subset of data for that, only 200 elements. Median is stable
sigmas=[2**x for x in range(-3,10)]
widths=[x*x*2 for x in sigmas]
print "kernel widths:", widths
combined=CombinedKernel()
for i in range(len(sigmas)):
combined.append_kernel(GaussianKernel(10, widths[i]))
# mmd instance using streaming features, blocksize of 10000
block_size=1000
mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)
# kernel selection instance (this can easily be replaced by the other methods
# for selecting single kernels)
selection=MMDKernelSelectionOpt(mmd)
# perform kernel selection
kernel=selection.select_kernel()
kernel=GaussianKernel.obtain_from_generic(kernel)
mmd.set_kernel(kernel);
print "selected kernel width:", kernel.get_width()
# sample alternative distribution, stream ensures different samples each run
alt_samples=zeros(num_null_samples)
for i in range(len(alt_samples)):
alt_samples[i]=mmd.compute_statistic()
# sample from null distribution
# bootstrapping, biased statistic
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_null_samples)
null_samples_boot=mmd.sample_null()
# fit normal distribution to null and sample a normal distribution
mmd.set_null_approximation_method(MMD1_GAUSSIAN)
variance=mmd.compute_variance_estimate()
null_samples_gaussian=normal(0,sqrt(variance),num_null_samples)
# to plot data, sample a few examples from stream first
features=gen_p.get_streamed_features(m)
features=features.create_merged_copy(gen_q.get_streamed_features(m))
data=features.get_feature_matrix()
# plot
figure()
# plot data of p and q
subplot(2,3,1)
grid(True)
gca().xaxis.set_major_locator( MaxNLocator(nbins = 4) ) # reduce number of x-ticks
gca().yaxis.set_major_locator( MaxNLocator(nbins = 4) ) # reduce number of x-ticks
plot(data[0][0:m], data[1][0:m], 'ro', label='$x$')
plot(data[0][m:2*m], data[1][m:2*m], 'bo', label='$y$', alpha=0.5)
title('Data, shift in $x_1$='+str(difference)+'\nm='+str(m))
xlabel('$x_1, y_1$')
ylabel('$x_2, y_2$')
# histogram of first data dimension and pdf
subplot(2,3,2)
grid(True)
gca().xaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
gca().yaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
hist(data[0], bins=50, alpha=0.5, facecolor='r', normed=True)
hist(data[1], bins=50, alpha=0.5, facecolor='b', normed=True)
xs=linspace(min(data[0])-1,max(data[0])+1, 50)
plot(xs,normpdf( xs, 0, 1), 'r', linewidth=3)
plot(xs,normpdf( xs, difference, 1), 'b', linewidth=3)
xlabel('$x_1, y_1$')
ylabel('$p(x_1), p(y_1)$')
title('Data PDF in $x_1, y_1$')
# compute threshold for test level
alpha=0.05
null_samples_boot.sort()
null_samples_gaussian.sort()
thresh_boot=null_samples_boot[int(floor(len(null_samples_boot)*(1-alpha)))]
thresh_gaussian=null_samples_gaussian[int(floor(len(null_samples_gaussian)*(1-alpha)))]
type_one_error_boot=sum(null_samples_boot>thresh_boot)/float(num_null_samples)
type_one_error_gaussian=sum(null_samples_gaussian>thresh_gaussian)/float(num_null_samples)
# plot alternative distribution with threshold
subplot(2,3,4)
grid(True)
gca().xaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
gca().yaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
hist(alt_samples, 20, normed=True);
axvline(thresh_boot, 0, 1, linewidth=2, color='red')
type_two_error=sum(alt_samples<thresh_boot)/float(num_null_samples)
title('Alternative Dist.\n' + 'Type II error is ' + str(type_two_error))
# compute range for all null distribution histograms
hist_range=[min([min(null_samples_boot), min(null_samples_gaussian)]), max([max(null_samples_boot), max(null_samples_gaussian)])]
# plot null distribution with threshold
subplot(2,3,3)
grid(True)
gca().xaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
gca().yaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
hist(null_samples_boot, 20, range=hist_range, normed=True);
axvline(thresh_boot, 0, 1, linewidth=2, color='red')
title('Sampled Null Dist.\n' + 'Type I error is ' + str(type_one_error_boot))
# plot null distribution gaussian
subplot(2,3,5)
grid(True)
gca().xaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
gca().yaxis.set_major_locator( MaxNLocator(nbins = 3) ) # reduce number of x-ticks
hist(null_samples_gaussian, 20, range=hist_range, normed=True);
axvline(thresh_gaussian, 0, 1, linewidth=2, color='red')
title('Null Dist. Gaussian\nType I error is ' + str(type_one_error_gaussian))
# pull plots a bit apart
subplots_adjust(hspace=0.5)
subplots_adjust(wspace=0.5)
if __name__=='__main__':
linear_time_mmd_graphical()
show()
| (text above) | curiousguy13/shogun | examples/undocumented/python_modular/graphical/statistics_linear_time_mmd.py | Python | gpl-3.0 | 6,345 | ["Gaussian"] | e889eae7fce9e6fe0c398a2e3aa56ec882ff571584b1fa41edd24d632b4e242a |
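For intuition about what compute_statistic() returns: the linear-time MMD pairs up consecutive stream samples so each kernel value is used exactly once. A plain-NumPy sketch of that estimator (a simplification; sigma here is the standard Gaussian bandwidth, not shogun's width = 2*sigma^2 convention discussed in the comments above):

```python
import numpy as np

def linear_time_mmd(x, y, sigma=1.0):
    # h(z) = k(x, x') + k(y, y') - k(x, y') - k(x', y), averaged over
    # disjoint consecutive pairs: O(n) time, O(1) memory.
    n = (min(len(x), len(y)) // 2) * 2
    x1, x2 = x[0:n:2], x[1:n:2]
    y1, y2 = y[0:n:2], y[1:n:2]
    k = lambda a, b: np.exp(-((a - b) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return (k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1)).mean()

rng = np.random.RandomState(0)
x = rng.randn(1000, 2)
y = rng.randn(1000, 2) + np.array([1.0, 0.0])  # same mean shift as 'difference'
print(linear_time_mmd(x, y))                   # clearly > 0 under the alternative
```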
"""Python bindings for 0MQ."""
#
# Copyright (c) 2010 Brian E. Granger
#
# This file is part of pyzmq.
#
# pyzmq is free software; you can redistribute it and/or modify it under
# the terms of the Lesser GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# pyzmq is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Lesser GNU General Public License for more details.
#
# You should have received a copy of the Lesser GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from zmq.utils import initthreads # initialize threads
initthreads.init_threads()
from zmq import core, devices
from zmq.core import *
def get_includes():
"""Return a list of directories to include for linking against pyzmq with cython."""
from os.path import join, dirname
base = dirname(__file__)
return [ join(base, subdir) for subdir in ('core', 'devices', 'utils')]
__all__ = ['get_includes'] + core.__all__
| (text above) | takluyver/pyzmq | zmq/__init__.py | Python | lgpl-3.0 | 1,389 | ["Brian"] | d1e9734f3a281fef58d30fd16ae73f20eda9ca4e723821a0293cf67ea72abddd |
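A sketch of how get_includes() is typically consumed from a setup.py when compiling a Cython extension against pyzmq's .pxd files (the extension and .pyx names are hypothetical; distutils matches the vintage of this module):

```python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import zmq

setup(
    name='myext',
    cmdclass={'build_ext': build_ext},
    ext_modules=[
        Extension('myext', ['myext.pyx'],
                  include_dirs=zmq.get_includes()),  # core/devices/utils dirs
    ],
)
```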
#! test QC_JSON Schema noncontiguous mol
import numpy as np
import psi4
import json
# Generate JSON data
json_data = {
"schema_name": "qc_schema_input",
"schema_version": 1,
"molecule": {
"geometry": [
0.0, 0.0, -0.1294769411935893,
0.0, -1.494187339479985, 1.0274465079245698,
0.0, 1.494187339479985, 1.0274465079245698
],
"symbols": ["O", "H", "H"],
"fragments": [[2, 0, 1]]
},
"driver": "energy",
"model": {
"method": "SCF",
"basis": "cc-pVDZ"
},
"keywords": {
"scf_type": "df"
}
}
# Check non-contiguous fragment throws
json_ret = psi4.json_wrapper.run_json(json_data)
psi4.compare_integers(False, json_ret["success"], "JSON Failure") #TEST
psi4.compare_integers("non-contiguous frag" in json_ret["error"]['error_message'], True, "Contiguous Fragment Error") #TEST
# Check symbol length errors
del json_data["molecule"]["fragments"]
json_data["molecule"]["symbols"] = ["O", "H"]
json_ret = psi4.json_wrapper.run_json(json_data)
psi4.compare_integers(False, json_ret["success"], "JSON Failure") #TEST
psi4.compare_integers("dropped atoms!" in json_ret["error"]['error_message'], True, "Symbol Error") #TEST
# Check keyword errors
json_data["molecule"]["symbols"] = ["O", "H", "H"]
json_data["model"] = {"method": "SCF", "basis": "sto-3g"}
json_data["keywords"] = {"scf_type": "super_df"}
json_ret = psi4.json_wrapper.run_json(json_data)
psi4.compare_integers(False, json_ret["success"], "JSON Failure") #TEST
psi4.compare_integers("valid choice" in json_ret["error"]['error_message'], True, "Keyword Error") #TEST
| (text above) | CDSherrill/psi4 | tests/json/schema-1-throws/input.py | Python | lgpl-3.0 | 1,737 | ["Psi4"] | 3e1fb9ce4f17191992d9c1cac99cec9fda57ac464b2de31e3e6800c20ab4f6d3 |
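For contrast with the three failure paths exercised above, a well-formed input passes through run_json and comes back with success set to True (a sketch; it assumes a working Psi4 build, and that return_result is the QCSchema output field carrying the requested energy):

```python
import psi4

json_ok = {
    "schema_name": "qc_schema_input",
    "schema_version": 1,
    "molecule": {
        "geometry": [0.0, 0.0, -0.1294769411935893,
                     0.0, -1.494187339479985, 1.0274465079245698,
                     0.0, 1.494187339479985, 1.0274465079245698],
        "symbols": ["O", "H", "H"],
    },
    "driver": "energy",
    "model": {"method": "SCF", "basis": "cc-pVDZ"},
    "keywords": {"scf_type": "df"},
}

ret = psi4.json_wrapper.run_json(json_ok)
assert ret["success"]
print(ret["return_result"])  # SCF/cc-pVDZ energy of water, in hartree
```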
from __future__ import unicode_literals
import base64
import datetime
import hashlib
import json
import netrc
import os
import re
import socket
import sys
import time
import math
from ..compat import (
compat_cookiejar,
compat_cookies,
compat_etree_fromstring,
compat_getpass,
compat_http_client,
compat_os_name,
compat_str,
compat_urllib_error,
compat_urllib_parse_urlencode,
compat_urllib_request,
compat_urlparse,
)
from ..downloader.f4m import remove_encrypted_media
from ..utils import (
NO_DEFAULT,
age_restricted,
bug_reports_message,
clean_html,
compiled_regex_type,
determine_ext,
error_to_compat_str,
ExtractorError,
fix_xml_ampersands,
float_or_none,
int_or_none,
parse_iso8601,
RegexNotFoundError,
sanitize_filename,
sanitized_Request,
unescapeHTML,
unified_strdate,
unified_timestamp,
url_basename,
xpath_element,
xpath_text,
xpath_with_ns,
determine_protocol,
parse_duration,
mimetype2ext,
update_Request,
update_url_query,
parse_m3u8_attributes,
extract_attributes,
parse_codecs,
)
class InfoExtractor(object):
"""Information Extractor class.
Information extractors are the classes that, given a URL, extract
information about the video (or videos) the URL refers to. This
information includes the real video URL, the video title, author and
others. The information is stored in a dictionary which is then
passed to the YoutubeDL. The YoutubeDL processes this
information possibly downloading the video to the file system, among
other possible outcomes.
The type field determines the type of the result.
By far the most common value (and the default if _type is missing) is
"video", which indicates a single video.
For a video, the dictionaries must include the following fields:
id: Video identifier.
title: Video title, unescaped.
Additionally, it must contain either a formats entry or a url one:
formats: A list of dictionaries for each format available, ordered
from worst to best quality.
Potential fields:
* url Mandatory. The URL of the video file
* manifest_url
The URL of the manifest file in case of
fragmented media (DASH, hls, hds)
* ext Will be calculated from URL if missing
* format A human-readable description of the format
("mp4 container with h264/opus").
Calculated from the format_id, width, height,
and format_note fields if missing.
* format_id A short description of the format
("mp4_h264_opus" or "19").
Technically optional, but strongly recommended.
* format_note Additional info about the format
("3D" or "DASH video")
* width Width of the video, if known
* height Height of the video, if known
* resolution Textual description of width and height
* tbr Average bitrate of audio and video in KBit/s
* abr Average audio bitrate in KBit/s
* acodec Name of the audio codec in use
* asr Audio sampling rate in Hertz
* vbr Average video bitrate in KBit/s
* fps Frame rate
* vcodec Name of the video codec in use
* container Name of the container format
* filesize The number of bytes, if known in advance
* filesize_approx An estimate for the number of bytes
* player_url SWF Player URL (used for rtmpdump).
* protocol The protocol that will be used for the actual
download, lower-case.
"http", "https", "rtsp", "rtmp", "rtmpe",
"m3u8", "m3u8_native" or "http_dash_segments".
* fragments A list of fragments of the fragmented media,
with the following entries:
* "url" (mandatory) - fragment's URL
* "duration" (optional, int or float)
* "filesize" (optional, int)
* preference Order number of this format. If this field is
present and not None, the formats get sorted
by this field, regardless of all other values.
-1 for default (order by other properties),
-2 or smaller for less than default.
< -1000 to hide the format (if there is
another one which is strictly better)
* language Language code, e.g. "de" or "en-US".
* language_preference Is this in the language mentioned in
the URL?
10 if it's what the URL is about,
-1 for default (don't know),
-10 otherwise, other values reserved for now.
* quality Order number of the video quality of this
format, irrespective of the file format.
-1 for default (order by other properties),
-2 or smaller for less than default.
* source_preference Order number for this video source
(quality takes higher priority)
-1 for default (order by other properties),
-2 or smaller for less than default.
* http_headers A dictionary of additional HTTP headers
to add to the request.
* stretched_ratio If given and not 1, indicates that the
video's pixels are not square.
width : height ratio as float.
* no_resume The server does not support resuming the
(HTTP or RTMP) download. Boolean.
url: Final video URL.
ext: Video filename extension.
format: The video format, defaults to ext (used for --get-format)
player_url: SWF Player URL (used for rtmpdump).
The following fields are optional:
alt_title: A secondary title of the video.
display_id An alternative identifier for the video, not necessarily
unique, but available before title. Typically, id is
something like "4234987", title "Dancing naked mole rats",
and display_id "dancing-naked-mole-rats"
thumbnails: A list of dictionaries, with the following entries:
* "id" (optional, string) - Thumbnail format ID
* "url"
* "preference" (optional, int) - quality of the image
* "width" (optional, int)
* "height" (optional, int)
* "resolution" (optional, string "{width}x{height"},
deprecated)
* "filesize" (optional, int)
thumbnail: Full URL to a video thumbnail image.
description: Full video description.
uploader: Full name of the video uploader.
license: License name the video is licensed under.
creator: The creator of the video.
release_date: The date (YYYYMMDD) when the video was released.
timestamp: UNIX timestamp of the moment the video became available.
upload_date: Video upload date (YYYYMMDD).
If not explicitly set, calculated from timestamp.
uploader_id: Nickname or id of the video uploader.
uploader_url: Full URL to a personal webpage of the video uploader.
location: Physical location where the video was filmed.
subtitles: The available subtitles as a dictionary in the format
{language: subformats}. "subformats" is a list sorted from
lower to higher preference, each element is a dictionary
with the "ext" entry and one of:
* "data": The subtitles file contents
* "url": A URL pointing to the subtitles file
"ext" will be calculated from URL if missing
automatic_captions: Like 'subtitles', used by the YoutubeIE for
automatically generated captions
duration: Length of the video in seconds, as an integer or float.
view_count: How many users have watched the video on the platform.
like_count: Number of positive ratings of the video
dislike_count: Number of negative ratings of the video
repost_count: Number of reposts of the video
average_rating: Average rating given by users; the scale used depends on the webpage
comment_count: Number of comments on the video
comments: A list of comments, each with one or more of the following
properties (all but one of text or html optional):
* "author" - human-readable name of the comment author
* "author_id" - user ID of the comment author
* "id" - Comment ID
* "html" - Comment as HTML
* "text" - Plain text of the comment
* "timestamp" - UNIX timestamp of comment
* "parent" - ID of the comment this one is replying to.
Set to "root" to indicate that this is a
comment to the original video.
age_limit: Age restriction for the video, as an integer (years)
webpage_url: The URL to the video webpage, if given to youtube-dl it
should allow to get the same result again. (It will be set
by YoutubeDL if it's missing)
categories: A list of categories that the video falls in, for example
["Sports", "Berlin"]
tags: A list of tags assigned to the video, e.g. ["sweden", "pop music"]
is_live: True, False, or None (=unknown). Whether this video is a
live stream that goes on instead of a fixed-length video.
start_time: Time in seconds where the reproduction should start, as
specified in the URL.
end_time: Time in seconds where the reproduction should end, as
specified in the URL.
The following fields should only be used when the video belongs to some logical
chapter or section:
chapter: Name or title of the chapter the video belongs to.
chapter_number: Number of the chapter the video belongs to, as an integer.
chapter_id: Id of the chapter the video belongs to, as a unicode string.
The following fields should only be used when the video is an episode of some
series or programme:
series: Title of the series or programme the video episode belongs to.
season: Title of the season the video episode belongs to.
season_number: Number of the season the video episode belongs to, as an integer.
season_id: Id of the season the video episode belongs to, as a unicode string.
episode: Title of the video episode. Unlike mandatory video title field,
this field should denote the exact title of the video episode
without any kind of decoration.
episode_number: Number of the video episode within a season, as an integer.
episode_id: Id of the video episode, as a unicode string.
The following fields should only be used when the media is a track or a part of
a music album:
track: Title of the track.
track_number: Number of the track within an album or a disc, as an integer.
track_id: Id of the track (useful in case of custom indexing, e.g. 6.iii),
as a unicode string.
artist: Artist(s) of the track.
genre: Genre(s) of the track.
album: Title of the album the track belongs to.
album_type: Type of the album (e.g. "Demo", "Full-length", "Split", "Compilation", etc).
album_artist: List of all artists appeared on the album (e.g.
"Ash Borer / Fell Voices" or "Various Artists", useful for splits
and compilations).
disc_number: Number of the disc or other physical medium the track belongs to,
as an integer.
release_year: Year (YYYY) when the album was released.
Unless mentioned otherwise, the fields should be Unicode strings.
Unless mentioned otherwise, None is equivalent to absence of information.
_type "playlist" indicates multiple videos.
There must be a key "entries", which is a list, an iterable, or a PagedList
object, each element of which is a valid dictionary by this specification.
Additionally, playlists can have "title", "description" and "id" attributes
with the same semantics as videos (see above).
_type "multi_video" indicates that there are multiple videos that
form a single show, for example multiple acts of an opera or TV episode.
It must have an entries key like a playlist and contain all the keys
required for a video at the same time.
_type "url" indicates that the video must be extracted from another
location, possibly by a different extractor. Its only required key is:
"url" - the next URL to extract.
The key "ie_key" can be set to the class name (minus the trailing "IE",
e.g. "Youtube") if the extractor class is known in advance.
Additionally, the dictionary may have any properties of the resolved entity
known in advance, for example "title" if the title of the referred video is
known ahead of time.
_type "url_transparent" entities have the same specification as "url", but
indicate that the given additional information is more precise than the one
associated with the resolved URL.
This is useful when a site employs a video service that hosts the video and
its technical metadata, but that video service does not embed a useful
title, description etc.
Subclasses of this one should re-define the _real_initialize() and
_real_extract() methods and define a _VALID_URL regexp.
Probably, they should also be added to the list of extractors.
Finally, the _WORKING attribute should be set to False for broken IEs
in order to warn the users and skip the tests.
"""
_ready = False
_downloader = None
_WORKING = True
def __init__(self, downloader=None):
"""Constructor. Receives an optional downloader."""
self._ready = False
self.set_downloader(downloader)
@classmethod
def suitable(cls, url):
"""Receives a URL and returns True if suitable for this IE."""
# This does not use has/getattr intentionally - we want to know whether
# we have cached the regexp for *this* class, whereas getattr would also
# match the superclass
if '_VALID_URL_RE' not in cls.__dict__:
cls._VALID_URL_RE = re.compile(cls._VALID_URL)
return cls._VALID_URL_RE.match(url) is not None
@classmethod
def _match_id(cls, url):
if '_VALID_URL_RE' not in cls.__dict__:
cls._VALID_URL_RE = re.compile(cls._VALID_URL)
m = cls._VALID_URL_RE.match(url)
assert m
return m.group('id')
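# Minimal usage sketch (hypothetical extractor and URL scheme):
#
#   class ExampleIE(InfoExtractor):
#       _VALID_URL = r'https?://(?:www\.)?example\.com/watch/(?P<id>[0-9]+)'
#
#   ExampleIE.suitable('https://example.com/watch/42')   # True
#   ExampleIE._match_id('https://example.com/watch/42')  # '42'
#
# Note that _match_id() relies on _VALID_URL containing a named group "id".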
@classmethod
def working(cls):
"""Getter method for _WORKING."""
return cls._WORKING
def initialize(self):
"""Initializes an instance (authentication, etc)."""
if not self._ready:
self._real_initialize()
self._ready = True
def extract(self, url):
"""Extracts URL information and returns it in list of dicts."""
try:
self.initialize()
return self._real_extract(url)
except ExtractorError:
raise
except compat_http_client.IncompleteRead as e:
raise ExtractorError('A network error has occurred.', cause=e, expected=True)
except (KeyError, StopIteration) as e:
raise ExtractorError('An extractor error has occurred.', cause=e)
def set_downloader(self, downloader):
"""Sets the downloader for this IE."""
self._downloader = downloader
def _real_initialize(self):
"""Real initialization process. Redefine in subclasses."""
pass
def _real_extract(self, url):
"""Real extraction process. Redefine in subclasses."""
pass
@classmethod
def ie_key(cls):
"""A string for getting the InfoExtractor with get_info_extractor"""
return compat_str(cls.__name__[:-2])
@property
def IE_NAME(self):
return compat_str(type(self).__name__[:-2])
def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, data=None, headers={}, query={}):
""" Returns the response handle """
if note is None:
self.report_download_webpage(video_id)
elif note is not False:
if video_id is None:
self.to_screen('%s' % (note,))
else:
self.to_screen('%s: %s' % (video_id, note))
if isinstance(url_or_request, compat_urllib_request.Request):
url_or_request = update_Request(
url_or_request, data=data, headers=headers, query=query)
else:
if query:
url_or_request = update_url_query(url_or_request, query)
if data is not None or headers:
url_or_request = sanitized_Request(url_or_request, data, headers)
try:
return self._downloader.urlopen(url_or_request)
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
if errnote is False:
return False
if errnote is None:
errnote = 'Unable to download webpage'
errmsg = '%s: %s' % (errnote, error_to_compat_str(err))
if fatal:
raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)
else:
self._downloader.report_warning(errmsg)
return False
def _download_webpage_handle(self, url_or_request, video_id, note=None, errnote=None, fatal=True, encoding=None, data=None, headers={}, query={}):
""" Returns a tuple (page content as string, URL handle) """
# Strip hashes from the URL (#1038)
if isinstance(url_or_request, (compat_str, str)):
url_or_request = url_or_request.partition('#')[0]
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)
if urlh is False:
assert not fatal
return False
content = self._webpage_read_content(urlh, url_or_request, video_id, note, errnote, fatal, encoding=encoding)
return (content, urlh)
@staticmethod
def _guess_encoding_from_content(content_type, webpage_bytes):
m = re.match(r'[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+\s*;\s*charset=(.+)', content_type)
if m:
encoding = m.group(1)
else:
m = re.search(br'<meta[^>]+charset=[\'"]?([^\'")]+)[ /\'">]',
webpage_bytes[:1024])
if m:
encoding = m.group(1).decode('ascii')
elif webpage_bytes.startswith(b'\xff\xfe'):
encoding = 'utf-16'
else:
encoding = 'utf-8'
return encoding
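# Detection order sketch: a Content-Type charset wins, then an early <meta>
# charset, then a UTF-16 BOM, with UTF-8 as the fallback. For example:
#
#   _guess_encoding_from_content('text/html; charset=iso-8859-1', b'<html>')
#   # -> 'iso-8859-1'
#   _guess_encoding_from_content('text/html', b'<meta charset="utf-8">')
#   # -> 'utf-8'
#   _guess_encoding_from_content('text/html', b'\xff\xfe<...')
#   # -> 'utf-16'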
def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True, prefix=None, encoding=None):
content_type = urlh.headers.get('Content-Type', '')
webpage_bytes = urlh.read()
if prefix is not None:
webpage_bytes = prefix + webpage_bytes
if not encoding:
encoding = self._guess_encoding_from_content(content_type, webpage_bytes)
if self._downloader.params.get('dump_intermediate_pages', False):
try:
url = url_or_request.get_full_url()
except AttributeError:
url = url_or_request
self.to_screen('Dumping request to ' + url)
dump = base64.b64encode(webpage_bytes).decode('ascii')
self._downloader.to_screen(dump)
if self._downloader.params.get('write_pages', False):
try:
url = url_or_request.get_full_url()
except AttributeError:
url = url_or_request
basen = '%s_%s' % (video_id, url)
if len(basen) > 240:
h = '___' + hashlib.md5(basen.encode('utf-8')).hexdigest()
basen = basen[:240 - len(h)] + h
raw_filename = basen + '.dump'
filename = sanitize_filename(raw_filename, restricted=True)
self.to_screen('Saving request to ' + filename)
# Working around MAX_PATH limitation on Windows (see
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx)
if compat_os_name == 'nt':
absfilepath = os.path.abspath(filename)
if len(absfilepath) > 259:
filename = '\\\\?\\' + absfilepath
with open(filename, 'wb') as outf:
outf.write(webpage_bytes)
try:
content = webpage_bytes.decode(encoding, 'replace')
except LookupError:
content = webpage_bytes.decode('utf-8', 'replace')
if ('<title>Access to this site is blocked</title>' in content and
'Websense' in content[:512]):
msg = 'Access to this webpage has been blocked by Websense filtering software in your network.'
blocked_iframe = self._html_search_regex(
r'<iframe src="([^"]+)"', content,
'Websense information URL', default=None)
if blocked_iframe:
msg += ' Visit %s for more details' % blocked_iframe
raise ExtractorError(msg, expected=True)
if '<title>The URL you requested has been blocked</title>' in content[:512]:
msg = (
'Access to this webpage has been blocked by Indian censorship. '
'Use a VPN or proxy server (with --proxy) to route around it.')
block_msg = self._html_search_regex(
r'</h1><p>(.*?)</p>',
content, 'block message', default=None)
if block_msg:
msg += ' (Message: "%s")' % block_msg.replace('\n', ' ')
raise ExtractorError(msg, expected=True)
return content
def _download_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, tries=1, timeout=5, encoding=None, data=None, headers={}, query={}):
""" Returns the data of the page as a string """
success = False
try_count = 0
while success is False:
try:
res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding, data=data, headers=headers, query=query)
success = True
except compat_http_client.IncompleteRead as e:
try_count += 1
if try_count >= tries:
raise e
self._sleep(timeout, video_id)
if res is False:
return res
else:
content, _ = res
return content
def _download_xml(self, url_or_request, video_id,
note='Downloading XML', errnote='Unable to download XML',
transform_source=None, fatal=True, encoding=None, data=None, headers={}, query={}):
"""Return the xml as an xml.etree.ElementTree.Element"""
xml_string = self._download_webpage(
url_or_request, video_id, note, errnote, fatal=fatal, encoding=encoding, data=data, headers=headers, query=query)
if xml_string is False:
return xml_string
if transform_source:
xml_string = transform_source(xml_string)
return compat_etree_fromstring(xml_string.encode('utf-8'))
def _download_json(self, url_or_request, video_id,
note='Downloading JSON metadata',
errnote='Unable to download JSON metadata',
transform_source=None,
fatal=True, encoding=None, data=None, headers={}, query={}):
json_string = self._download_webpage(
url_or_request, video_id, note, errnote, fatal=fatal,
encoding=encoding, data=data, headers=headers, query=query)
if (not fatal) and json_string is False:
return None
return self._parse_json(
json_string, video_id, transform_source=transform_source, fatal=fatal)
def _parse_json(self, json_string, video_id, transform_source=None, fatal=True):
if transform_source:
json_string = transform_source(json_string)
try:
return json.loads(json_string)
except ValueError as ve:
errmsg = '%s: Failed to parse JSON ' % video_id
if fatal:
raise ExtractorError(errmsg, cause=ve)
else:
self.report_warning(errmsg + str(ve))
def report_warning(self, msg, video_id=None):
idstr = '' if video_id is None else '%s: ' % video_id
self._downloader.report_warning(
'[%s] %s%s' % (self.IE_NAME, idstr, msg))
def to_screen(self, msg):
"""Print msg to screen, prefixing it with '[ie_name]'"""
self._downloader.to_screen('[%s] %s' % (self.IE_NAME, msg))
def report_extraction(self, id_or_name):
"""Report information extraction."""
self.to_screen('%s: Extracting information' % id_or_name)
def report_download_webpage(self, video_id):
"""Report webpage download."""
self.to_screen('%s: Downloading webpage' % video_id)
def report_age_confirmation(self):
"""Report attempt to confirm age."""
self.to_screen('Confirming age')
def report_login(self):
"""Report attempt to log in."""
self.to_screen('Logging in')
@staticmethod
def raise_login_required(msg='This video is only available for registered users'):
raise ExtractorError(
'%s. Use --username and --password or --netrc to provide account credentials.' % msg,
expected=True)
@staticmethod
def raise_geo_restricted(msg='This video is not available from your location due to geo restriction'):
raise ExtractorError(
'%s. You might want to use --proxy to workaround.' % msg,
expected=True)
# Methods for following #608
@staticmethod
def url_result(url, ie=None, video_id=None, video_title=None):
"""Returns a URL that points to a page that should be processed"""
# TODO: ie should be the class used for getting the info
video_info = {'_type': 'url',
'url': url,
'ie_key': ie}
if video_id is not None:
video_info['id'] = video_id
if video_title is not None:
video_info['title'] = video_title
return video_info
@staticmethod
def playlist_result(entries, playlist_id=None, playlist_title=None, playlist_description=None):
"""Returns a playlist"""
video_info = {'_type': 'playlist',
'entries': entries}
if playlist_id:
video_info['id'] = playlist_id
if playlist_title:
video_info['title'] = playlist_title
if playlist_description:
video_info['description'] = playlist_description
return video_info
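# Combined usage sketch (hypothetical URLs): defer each entry to another
# extractor and wrap the results into a playlist.
#
#   entries = [
#       self.url_result('https://example.com/watch/%d' % n, ie='Example')
#       for n in (1, 2, 3)
#   ]
#   return self.playlist_result(
#       entries, playlist_id='demo', playlist_title='Demo playlist')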
def _search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
"""
Perform a regex search on the given string, using a single pattern or a
list of patterns, returning the first matching group.
In case of failure return a default value, print a warning or raise a
RegexNotFoundError, depending on fatal, specifying the field name.
"""
if isinstance(pattern, (str, compat_str, compiled_regex_type)):
mobj = re.search(pattern, string, flags)
else:
for p in pattern:
mobj = re.search(p, string, flags)
if mobj:
break
if not self._downloader.params.get('no_color') and compat_os_name != 'nt' and sys.stderr.isatty():
_name = '\033[0;34m%s\033[0m' % name
else:
_name = name
if mobj:
if group is None:
# return the first matching group
return next(g for g in mobj.groups() if g is not None)
else:
return mobj.group(group)
elif default is not NO_DEFAULT:
return default
elif fatal:
raise RegexNotFoundError('Unable to extract %s' % _name)
else:
self._downloader.report_warning('unable to extract %s' % _name + bug_reports_message())
return None
def _html_search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
"""
Like _search_regex, but strips HTML tags and unescapes entities.
"""
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
if res:
return clean_html(res).strip()
else:
return res
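# Usage sketch (made-up page content): _search_regex returns the raw first
# matching group, while _html_search_regex additionally cleans it up.
#
#   webpage = '<h1 class="t">Cats &amp; dogs</h1>'
#   self._search_regex(r'<h1[^>]*>(.+?)</h1>', webpage, 'title')
#   # -> 'Cats &amp; dogs'
#   self._html_search_regex(r'<h1[^>]*>(.+?)</h1>', webpage, 'title')
#   # -> 'Cats & dogs'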
def _get_netrc_login_info(self, netrc_machine=None):
username = None
password = None
netrc_machine = netrc_machine or self._NETRC_MACHINE
if self._downloader.params.get('usenetrc', False):
try:
info = netrc.netrc().authenticators(netrc_machine)
if info is not None:
username = info[0]
password = info[2]
else:
raise netrc.NetrcParseError(
'No authenticators for %s' % netrc_machine)
except (IOError, netrc.NetrcParseError) as err:
self._downloader.report_warning(
'parsing .netrc: %s' % error_to_compat_str(err))
return username, password
def _get_login_info(self, username_option='username', password_option='password', netrc_machine=None):
"""
Get the login info as (username, password)
First look for the manually specified credentials, using username_option
and password_option as keys in the params dictionary. If no such
credentials are available, look in the netrc file using the netrc_machine
or _NETRC_MACHINE value.
If there's no info available, return (None, None)
"""
if self._downloader is None:
return (None, None)
downloader_params = self._downloader.params
# Attempt to use provided username and password or .netrc data
if downloader_params.get(username_option) is not None:
username = downloader_params[username_option]
password = downloader_params[password_option]
else:
username, password = self._get_netrc_login_info(netrc_machine)
return username, password
def _get_tfa_info(self, note='two-factor verification code'):
"""
Get the two-factor authentication info
TODO: asking the user will be required for SMS/phone verification;
currently this just uses the command-line option.
If there's no info available, return None
"""
if self._downloader is None:
return None
downloader_params = self._downloader.params
if downloader_params.get('twofactor') is not None:
return downloader_params['twofactor']
return compat_getpass('Type %s and press [Return]: ' % note)
# Helper functions for extracting OpenGraph info
@staticmethod
def _og_regexes(prop):
content_re = r'content=(?:"([^"]+?)"|\'([^\']+?)\'|\s*([^\s"\'=<>`]+?))'
property_re = (r'(?:name|property)=(?:\'og:%(prop)s\'|"og:%(prop)s"|\s*og:%(prop)s\b)'
% {'prop': re.escape(prop)})
template = r'<meta[^>]+?%s[^>]+?%s'
return [
template % (property_re, content_re),
template % (content_re, property_re),
]
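# The two templates above cover both attribute orders, so e.g. both of
# these (made-up) tags match _og_regexes('title'):
#
#   <meta property="og:title" content="Some title">
#   <meta content="Some title" property="og:title">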
@staticmethod
def _meta_regex(prop):
return r'''(?isx)<meta
(?=[^>]+(?:itemprop|name|property|id|http-equiv)=(["\']?)%s\1)
[^>]+?content=(["\'])(?P<content>.*?)\2''' % re.escape(prop)
def _og_search_property(self, prop, html, name=None, **kargs):
if not isinstance(prop, (list, tuple)):
prop = [prop]
if name is None:
name = 'OpenGraph %s' % prop[0]
og_regexes = []
for p in prop:
og_regexes.extend(self._og_regexes(p))
escaped = self._search_regex(og_regexes, html, name, flags=re.DOTALL, **kargs)
if escaped is None:
return None
return unescapeHTML(escaped)
def _og_search_thumbnail(self, html, **kargs):
return self._og_search_property('image', html, 'thumbnail URL', fatal=False, **kargs)
def _og_search_description(self, html, **kargs):
return self._og_search_property('description', html, fatal=False, **kargs)
def _og_search_title(self, html, **kargs):
return self._og_search_property('title', html, **kargs)
def _og_search_video_url(self, html, name='video url', secure=True, **kargs):
regexes = self._og_regexes('video') + self._og_regexes('video:url')
if secure:
regexes = self._og_regexes('video:secure_url') + regexes
return self._html_search_regex(regexes, html, name, **kargs)
def _og_search_url(self, html, **kargs):
return self._og_search_property('url', html, **kargs)
def _html_search_meta(self, name, html, display_name=None, fatal=False, **kwargs):
if not isinstance(name, (list, tuple)):
name = [name]
if display_name is None:
display_name = name[0]
return self._html_search_regex(
[self._meta_regex(n) for n in name],
html, display_name, fatal=fatal, group='content', **kwargs)
def _dc_search_uploader(self, html):
return self._html_search_meta('dc.creator', html, 'uploader')
def _rta_search(self, html):
# See http://www.rtalabel.org/index.php?content=howtofaq#single
if re.search(r'(?ix)<meta\s+name="rating"\s+'
r' content="RTA-5042-1996-1400-1577-RTA"',
html):
return 18
return 0
def _media_rating_search(self, html):
# See http://www.tjg-designs.com/WP/metadata-code-examples-adding-metadata-to-your-web-pages/
rating = self._html_search_meta('rating', html)
if not rating:
return None
RATING_TABLE = {
'safe for kids': 0,
'general': 8,
'14 years': 14,
'mature': 17,
'restricted': 19,
}
return RATING_TABLE.get(rating.lower())
def _family_friendly_search(self, html):
# See http://schema.org/VideoObject
family_friendly = self._html_search_meta('isFamilyFriendly', html)
if not family_friendly:
return None
RATING_TABLE = {
'1': 0,
'true': 0,
'0': 18,
'false': 18,
}
return RATING_TABLE.get(family_friendly.lower())
def _twitter_search_player(self, html):
return self._html_search_meta('twitter:player', html,
'twitter card player')
def _search_json_ld(self, html, video_id, expected_type=None, **kwargs):
json_ld = self._search_regex(
r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
html, 'JSON-LD', group='json_ld', **kwargs)
default = kwargs.get('default', NO_DEFAULT)
if not json_ld:
return default if default is not NO_DEFAULT else {}
# JSON-LD may be malformed and thus `fatal` should be respected.
# At the same time `default` may be passed that assumes `fatal=False`
# for _search_regex. Let's simulate the same behavior here as well.
fatal = kwargs.get('fatal', True) if default == NO_DEFAULT else False
return self._json_ld(json_ld, video_id, fatal=fatal, expected_type=expected_type)
def _json_ld(self, json_ld, video_id, fatal=True, expected_type=None):
if isinstance(json_ld, compat_str):
json_ld = self._parse_json(json_ld, video_id, fatal=fatal)
if not json_ld:
return {}
info = {}
if not isinstance(json_ld, (list, tuple, dict)):
return info
if isinstance(json_ld, dict):
json_ld = [json_ld]
for e in json_ld:
if e.get('@context') == 'http://schema.org':
item_type = e.get('@type')
if expected_type is not None and expected_type != item_type:
return info
if item_type == 'TVEpisode':
info.update({
'episode': unescapeHTML(e.get('name')),
'episode_number': int_or_none(e.get('episodeNumber')),
'description': unescapeHTML(e.get('description')),
})
part_of_season = e.get('partOfSeason')
if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
part_of_series = e.get('partOfSeries') or e.get('partOfTVSeries')
if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
info['series'] = unescapeHTML(part_of_series.get('name'))
elif item_type == 'Article':
info.update({
'timestamp': parse_iso8601(e.get('datePublished')),
'title': unescapeHTML(e.get('headline')),
'description': unescapeHTML(e.get('articleBody')),
})
elif item_type == 'VideoObject':
info.update({
'url': e.get('contentUrl'),
'title': unescapeHTML(e.get('name')),
'description': unescapeHTML(e.get('description')),
'thumbnail': e.get('thumbnailUrl'),
'duration': parse_duration(e.get('duration')),
'timestamp': unified_timestamp(e.get('uploadDate')),
'filesize': float_or_none(e.get('contentSize')),
'tbr': int_or_none(e.get('bitrate')),
'width': int_or_none(e.get('width')),
'height': int_or_none(e.get('height')),
})
break
return dict((k, v) for k, v in info.items() if v is not None)
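# Input/output sketch for _json_ld() (fabricated metadata):
#
#   self._json_ld(
#       '{"@context": "http://schema.org", "@type": "VideoObject",'
#       ' "name": "Clip", "duration": "PT1M30S"}',
#       video_id='42')
#   # -> {'title': 'Clip', 'duration': 90.0}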
@staticmethod
def _hidden_inputs(html):
html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
hidden_inputs = {}
for input in re.findall(r'(?i)(<input[^>]+>)', html):
attrs = extract_attributes(input)
if not attrs:
continue
if attrs.get('type') not in ('hidden', 'submit'):
continue
name = attrs.get('name') or attrs.get('id')
value = attrs.get('value')
if name and value is not None:
hidden_inputs[name] = value
return hidden_inputs
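# Example (made-up form): only hidden/submit inputs that carry a name (or
# id) and a value survive.
#
#   InfoExtractor._hidden_inputs(
#       '<input type="hidden" name="token" value="abc123">'
#       '<input type="text" name="user" value="ignored">')
#   # -> {'token': 'abc123'}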
def _form_hidden_inputs(self, form_id, html):
form = self._search_regex(
r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
html, '%s form' % form_id, group='form')
return self._hidden_inputs(form)
def _sort_formats(self, formats, field_preference=None):
if not formats:
raise ExtractorError('No video formats found')
for f in formats:
# Automatically determine tbr when missing based on abr and vbr (improves
# formats sorting in some cases)
if 'tbr' not in f and f.get('abr') is not None and f.get('vbr') is not None:
f['tbr'] = f['abr'] + f['vbr']
def _formats_key(f):
# TODO remove the following workaround
from ..utils import determine_ext
if not f.get('ext') and 'url' in f:
f['ext'] = determine_ext(f['url'])
if isinstance(field_preference, (list, tuple)):
return tuple(
f.get(field)
if f.get(field) is not None
else ('' if field == 'format_id' else -1)
for field in field_preference)
preference = f.get('preference')
if preference is None:
preference = 0
if f.get('ext') in ['f4f', 'f4m']: # Not yet supported
preference -= 0.5
protocol = f.get('protocol') or determine_protocol(f)
proto_preference = 0 if protocol in ['http', 'https'] else (-0.5 if protocol == 'rtsp' else -0.1)
if f.get('vcodec') == 'none': # audio only
preference -= 50
if self._downloader.params.get('prefer_free_formats'):
ORDER = ['aac', 'mp3', 'm4a', 'webm', 'ogg', 'opus']
else:
ORDER = ['webm', 'opus', 'ogg', 'mp3', 'aac', 'm4a']
ext_preference = 0
try:
audio_ext_preference = ORDER.index(f['ext'])
except ValueError:
audio_ext_preference = -1
else:
if f.get('acodec') == 'none': # video only
preference -= 40
if self._downloader.params.get('prefer_free_formats'):
ORDER = ['flv', 'mp4', 'webm']
else:
ORDER = ['webm', 'flv', 'mp4']
try:
ext_preference = ORDER.index(f['ext'])
except ValueError:
ext_preference = -1
audio_ext_preference = 0
return (
preference,
f.get('language_preference') if f.get('language_preference') is not None else -1,
f.get('quality') if f.get('quality') is not None else -1,
f.get('tbr') if f.get('tbr') is not None else -1,
f.get('filesize') if f.get('filesize') is not None else -1,
f.get('vbr') if f.get('vbr') is not None else -1,
f.get('height') if f.get('height') is not None else -1,
f.get('width') if f.get('width') is not None else -1,
proto_preference,
ext_preference,
f.get('abr') if f.get('abr') is not None else -1,
audio_ext_preference,
f.get('fps') if f.get('fps') is not None else -1,
f.get('filesize_approx') if f.get('filesize_approx') is not None else -1,
f.get('source_preference') if f.get('source_preference') is not None else -1,
f.get('format_id') if f.get('format_id') is not None else '',
)
formats.sort(key=_formats_key)
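# Ordering sketch (fabricated formats): with field_preference the listed
# fields dominate the sort, ascending, so the best format ends up last.
#
#   formats = [{'height': 360, 'tbr': 1500}, {'height': 720, 'tbr': 800}]
#   self._sort_formats(formats, field_preference=('height', 'tbr'))
#   # -> the 360p entry sorts first, the 720p entry last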
def _check_formats(self, formats, video_id):
if formats:
formats[:] = filter(
lambda f: self._is_valid_url(
f['url'], video_id,
item='%s video format' % f.get('format_id') if f.get('format_id') else 'video'),
formats)
@staticmethod
def _remove_duplicate_formats(formats):
format_urls = set()
unique_formats = []
for f in formats:
if f['url'] not in format_urls:
format_urls.add(f['url'])
unique_formats.append(f)
formats[:] = unique_formats
def _is_valid_url(self, url, video_id, item='video'):
url = self._proto_relative_url(url, scheme='http:')
# For now assume non HTTP(S) URLs always valid
if not (url.startswith('http://') or url.startswith('https://')):
return True
try:
self._request_webpage(url, video_id, 'Checking %s URL' % item)
return True
except ExtractorError as e:
if isinstance(e.cause, compat_urllib_error.URLError):
self.to_screen(
'%s: %s URL is invalid, skipping' % (video_id, item))
return False
raise
def http_scheme(self):
""" Either "http:" or "https:", depending on the user's preferences """
return (
'http:'
if self._downloader.params.get('prefer_insecure', False)
else 'https:')
def _proto_relative_url(self, url, scheme=None):
if url is None:
return url
if url.startswith('//'):
if scheme is None:
scheme = self.http_scheme()
return scheme + url
else:
return url
def _sleep(self, timeout, video_id, msg_template=None):
if msg_template is None:
msg_template = '%(video_id)s: Waiting for %(timeout)s seconds'
msg = msg_template % {'video_id': video_id, 'timeout': timeout}
self.to_screen(msg)
time.sleep(timeout)
def _extract_f4m_formats(self, manifest_url, video_id, preference=None, f4m_id=None,
transform_source=lambda s: fix_xml_ampersands(s).strip(),
fatal=True, m3u8_id=None):
manifest = self._download_xml(
manifest_url, video_id, 'Downloading f4m manifest',
'Unable to download f4m manifest',
# Some manifests may be malformed, e.g. prosiebensat1 generated manifests
# (see https://github.com/rg3/youtube-dl/issues/6215#issuecomment-121704244)
transform_source=transform_source,
fatal=fatal)
if manifest is False:
return []
return self._parse_f4m_formats(
manifest, manifest_url, video_id, preference=preference, f4m_id=f4m_id,
transform_source=transform_source, fatal=fatal, m3u8_id=m3u8_id)
def _parse_f4m_formats(self, manifest, manifest_url, video_id, preference=None, f4m_id=None,
transform_source=lambda s: fix_xml_ampersands(s).strip(),
fatal=True, m3u8_id=None):
# currently youtube-dl cannot decode the playerVerificationChallenge as Akamai uses Adobe Alchemy
akamai_pv = manifest.find('{http://ns.adobe.com/f4m/1.0}pv-2.0')
if akamai_pv is not None and ';' in akamai_pv.text:
playerVerificationChallenge = akamai_pv.text.split(';')[0]
if playerVerificationChallenge.strip() != '':
return []
formats = []
manifest_version = '1.0'
media_nodes = manifest.findall('{http://ns.adobe.com/f4m/1.0}media')
if not media_nodes:
manifest_version = '2.0'
media_nodes = manifest.findall('{http://ns.adobe.com/f4m/2.0}media')
# Remove unsupported DRM protected media from final formats
# rendition (see https://github.com/rg3/youtube-dl/issues/8573).
media_nodes = remove_encrypted_media(media_nodes)
if not media_nodes:
return formats
base_url = xpath_text(
manifest, ['{http://ns.adobe.com/f4m/1.0}baseURL', '{http://ns.adobe.com/f4m/2.0}baseURL'],
'base URL', default=None)
if base_url:
base_url = base_url.strip()
bootstrap_info = xpath_element(
manifest, ['{http://ns.adobe.com/f4m/1.0}bootstrapInfo', '{http://ns.adobe.com/f4m/2.0}bootstrapInfo'],
'bootstrap info', default=None)
for i, media_el in enumerate(media_nodes):
tbr = int_or_none(media_el.attrib.get('bitrate'))
width = int_or_none(media_el.attrib.get('width'))
height = int_or_none(media_el.attrib.get('height'))
format_id = '-'.join(filter(None, [f4m_id, compat_str(i if tbr is None else tbr)]))
# If <bootstrapInfo> is present, the specified f4m is a
# stream-level manifest, and only set-level manifests may refer to
# external resources. See section 11.4 and section 4 of F4M spec
if bootstrap_info is None:
media_url = None
# @href is introduced in 2.0, see section 11.6 of F4M spec
if manifest_version == '2.0':
media_url = media_el.attrib.get('href')
if media_url is None:
media_url = media_el.attrib.get('url')
if not media_url:
continue
manifest_url = (
media_url if media_url.startswith('http://') or media_url.startswith('https://')
else ((base_url or '/'.join(manifest_url.split('/')[:-1])) + '/' + media_url))
# If media_url is itself an f4m manifest, extract it recursively, since
# bitrates in the parent manifest (this one) and in the media_url
# manifest may differ, making it impossible to resolve the format by
# requested bitrate in the f4m downloader.
ext = determine_ext(manifest_url)
if ext == 'f4m':
f4m_formats = self._extract_f4m_formats(
manifest_url, video_id, preference=preference, f4m_id=f4m_id,
transform_source=transform_source, fatal=fatal)
# Sometimes stream-level manifest contains single media entry that
# does not contain any quality metadata (e.g. http://matchtv.ru/#live-player).
# At the same time parent's media entry in set-level manifest may
# contain it. We will copy it from parent in such cases.
if len(f4m_formats) == 1:
f = f4m_formats[0]
f.update({
'tbr': f.get('tbr') or tbr,
'width': f.get('width') or width,
'height': f.get('height') or height,
'format_id': f.get('format_id') if not tbr else format_id,
})
formats.extend(f4m_formats)
continue
elif ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
manifest_url, video_id, 'mp4', preference=preference,
m3u8_id=m3u8_id, fatal=fatal))
continue
formats.append({
'format_id': format_id,
'url': manifest_url,
'manifest_url': manifest_url,
'ext': 'flv' if bootstrap_info is not None else None,
'tbr': tbr,
'width': width,
'height': height,
'preference': preference,
})
return formats
def _m3u8_meta_format(self, m3u8_url, ext=None, preference=None, m3u8_id=None):
return {
'format_id': '-'.join(filter(None, [m3u8_id, 'meta'])),
'url': m3u8_url,
'ext': ext,
'protocol': 'm3u8',
'preference': preference - 100 if preference else -100,
'resolution': 'multiple',
'format_note': 'Quality selection URL',
}
def _extract_m3u8_formats(self, m3u8_url, video_id, ext=None,
entry_protocol='m3u8', preference=None,
m3u8_id=None, note=None, errnote=None,
fatal=True, live=False):
res = self._download_webpage_handle(
m3u8_url, video_id,
note=note or 'Downloading m3u8 information',
errnote=errnote or 'Failed to download m3u8 information',
fatal=fatal)
if res is False:
return []
m3u8_doc, urlh = res
m3u8_url = urlh.geturl()
formats = [self._m3u8_meta_format(m3u8_url, ext, preference, m3u8_id)]
format_url = lambda u: (
u
if re.match(r'^https?://', u)
else compat_urlparse.urljoin(m3u8_url, u))
# We should try extracting formats only from master playlists [1], i.e.
# playlists that describe available qualities. On the other hand media
# playlists [2] should be returned as is since they contain just the media
# without qualities renditions.
# Fortunately, master playlist can be easily distinguished from media
# playlist based on particular tags availability. As of [1, 2] master
playlist tags MUST NOT appear in a media playlist and vice versa.
# As of [3] #EXT-X-TARGETDURATION tag is REQUIRED for every media playlist
# and MUST NOT appear in master playlist thus we can clearly detect media
# playlist with this criterion.
# 1. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.4
# 2. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3
# 3. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.1
if '#EXT-X-TARGETDURATION' in m3u8_doc: # media playlist, return as is
return [{
'url': m3u8_url,
'format_id': m3u8_id,
'ext': ext,
'protocol': entry_protocol,
'preference': preference,
}]
last_info = {}
last_media = {}
for line in m3u8_doc.splitlines():
if line.startswith('#EXT-X-STREAM-INF:'):
last_info = parse_m3u8_attributes(line)
elif line.startswith('#EXT-X-MEDIA:'):
media = parse_m3u8_attributes(line)
media_type = media.get('TYPE')
if media_type in ('VIDEO', 'AUDIO'):
media_url = media.get('URI')
if media_url:
format_id = []
for v in (media.get('GROUP-ID'), media.get('NAME')):
if v:
format_id.append(v)
formats.append({
'format_id': '-'.join(format_id),
'url': format_url(media_url),
'language': media.get('LANGUAGE'),
'vcodec': 'none' if media_type == 'AUDIO' else None,
'ext': ext,
'protocol': entry_protocol,
'preference': preference,
})
else:
# When there is no URI in EXT-X-MEDIA let this tag's
# data be used by regular URI lines below
last_media = media
elif line.startswith('#') or not line.strip():
continue
else:
tbr = int_or_none(last_info.get('AVERAGE-BANDWIDTH') or last_info.get('BANDWIDTH'), scale=1000)
format_id = []
if m3u8_id:
format_id.append(m3u8_id)
# Although the specification does not mention the NAME attribute for
# EXT-X-STREAM-INF, it may still sometimes be present
stream_name = last_info.get('NAME') or last_media.get('NAME')
# Bandwidth of live streams may differ over time thus making
# format_id unpredictable. So it's better to keep provided
# format_id intact.
if not live:
format_id.append(stream_name if stream_name else '%d' % (tbr if tbr else len(formats)))
manifest_url = format_url(line.strip())
f = {
'format_id': '-'.join(format_id),
'url': manifest_url,
'manifest_url': manifest_url,
'tbr': tbr,
'ext': ext,
'fps': float_or_none(last_info.get('FRAME-RATE')),
'protocol': entry_protocol,
'preference': preference,
}
resolution = last_info.get('RESOLUTION')
if resolution:
width_str, height_str = resolution.split('x')
f['width'] = int(width_str)
f['height'] = int(height_str)
# Unified Streaming Platform
mobj = re.search(
r'audio.*?(?:%3D|=)(\d+)(?:-video.*?(?:%3D|=)(\d+))?', f['url'])
if mobj:
abr, vbr = mobj.groups()
abr, vbr = float_or_none(abr, 1000), float_or_none(vbr, 1000)
f.update({
'vbr': vbr,
'abr': abr,
})
f.update(parse_codecs(last_info.get('CODECS')))
formats.append(f)
last_info = {}
last_media = {}
return formats
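# Parsing sketch (fabricated master playlist): each EXT-X-STREAM-INF line
# plus the URI following it becomes one format, next to the "meta" format:
#
#   #EXTM3U
#   #EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
#   low/index.m3u8
#   #EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1280x720
#   hi/index.m3u8
#
# yields formats with tbr 1280 and 2560, widths 640 and 1280, and heights
# 360 and 720, their URLs resolved against the master playlist URL.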
@staticmethod
def _xpath_ns(path, namespace=None):
if not namespace:
return path
out = []
for c in path.split('/'):
if not c or c == '.':
out.append(c)
else:
out.append('{%s}%s' % (namespace, c))
return '/'.join(out)
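# Example: _xpath_ns('./head/meta', 'http://www.w3.org/ns/SMIL') returns
# './{http://www.w3.org/ns/SMIL}head/{http://www.w3.org/ns/SMIL}meta',
# the fully qualified form that xml.etree expects for namespaced documents.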
def _extract_smil_formats(self, smil_url, video_id, fatal=True, f4m_params=None, transform_source=None):
smil = self._download_smil(smil_url, video_id, fatal=fatal, transform_source=transform_source)
if smil is False:
assert not fatal
return []
namespace = self._parse_smil_namespace(smil)
return self._parse_smil_formats(
smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)
def _extract_smil_info(self, smil_url, video_id, fatal=True, f4m_params=None):
smil = self._download_smil(smil_url, video_id, fatal=fatal)
if smil is False:
return {}
return self._parse_smil(smil, smil_url, video_id, f4m_params=f4m_params)
def _download_smil(self, smil_url, video_id, fatal=True, transform_source=None):
return self._download_xml(
smil_url, video_id, 'Downloading SMIL file',
'Unable to download SMIL file', fatal=fatal, transform_source=transform_source)
def _parse_smil(self, smil, smil_url, video_id, f4m_params=None):
namespace = self._parse_smil_namespace(smil)
formats = self._parse_smil_formats(
smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)
subtitles = self._parse_smil_subtitles(smil, namespace=namespace)
video_id = os.path.splitext(url_basename(smil_url))[0]
title = None
description = None
upload_date = None
for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
name = meta.attrib.get('name')
content = meta.attrib.get('content')
if not name or not content:
continue
if not title and name == 'title':
title = content
elif not description and name in ('description', 'abstract'):
description = content
elif not upload_date and name == 'date':
upload_date = unified_strdate(content)
thumbnails = [{
'id': image.get('type'),
'url': image.get('src'),
'width': int_or_none(image.get('width')),
'height': int_or_none(image.get('height')),
} for image in smil.findall(self._xpath_ns('.//image', namespace)) if image.get('src')]
return {
'id': video_id,
'title': title or video_id,
'description': description,
'upload_date': upload_date,
'thumbnails': thumbnails,
'formats': formats,
'subtitles': subtitles,
}
def _parse_smil_namespace(self, smil):
return self._search_regex(
r'(?i)^{([^}]+)?}smil$', smil.tag, 'namespace', default=None)
def _parse_smil_formats(self, smil, smil_url, video_id, namespace=None, f4m_params=None, transform_rtmp_url=None):
base = smil_url
for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
b = meta.get('base') or meta.get('httpBase')
if b:
base = b
break
formats = []
rtmp_count = 0
http_count = 0
m3u8_count = 0
srcs = []
media = smil.findall(self._xpath_ns('.//video', namespace)) + smil.findall(self._xpath_ns('.//audio', namespace))
for medium in media:
src = medium.get('src')
if not src or src in srcs:
continue
srcs.append(src)
bitrate = float_or_none(medium.get('system-bitrate') or medium.get('systemBitrate'), 1000)
filesize = int_or_none(medium.get('size') or medium.get('fileSize'))
width = int_or_none(medium.get('width'))
height = int_or_none(medium.get('height'))
proto = medium.get('proto')
ext = medium.get('ext')
src_ext = determine_ext(src)
streamer = medium.get('streamer') or base
if proto == 'rtmp' or streamer.startswith('rtmp'):
rtmp_count += 1
formats.append({
'url': streamer,
'play_path': src,
'ext': 'flv',
'format_id': 'rtmp-%d' % (rtmp_count if bitrate is None else bitrate),
'tbr': bitrate,
'filesize': filesize,
'width': width,
'height': height,
})
if transform_rtmp_url:
streamer, src = transform_rtmp_url(streamer, src)
formats[-1].update({
'url': streamer,
'play_path': src,
})
continue
src_url = src if src.startswith('http') else compat_urlparse.urljoin(base, src)
src_url = src_url.strip()
if proto == 'm3u8' or src_ext == 'm3u8':
m3u8_formats = self._extract_m3u8_formats(
src_url, video_id, ext or 'mp4', m3u8_id='hls', fatal=False)
if len(m3u8_formats) == 1:
m3u8_count += 1
m3u8_formats[0].update({
'format_id': 'hls-%d' % (m3u8_count if bitrate is None else bitrate),
'tbr': bitrate,
'width': width,
'height': height,
})
formats.extend(m3u8_formats)
continue
if src_ext == 'f4m':
f4m_url = src_url
if not f4m_params:
f4m_params = {
'hdcore': '3.2.0',
'plugin': 'flowplayer-3.2.0.1',
}
f4m_url += '&' if '?' in f4m_url else '?'
f4m_url += compat_urllib_parse_urlencode(f4m_params)
formats.extend(self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False))
continue
if src_url.startswith('http') and self._is_valid_url(src_url, video_id):
http_count += 1
formats.append({
'url': src_url,
'ext': ext or src_ext or 'flv',
'format_id': 'http-%d' % (bitrate or http_count),
'tbr': bitrate,
'filesize': filesize,
'width': width,
'height': height,
})
continue
return formats
def _parse_smil_subtitles(self, smil, namespace=None, subtitles_lang='en'):
urls = []
subtitles = {}
for num, textstream in enumerate(smil.findall(self._xpath_ns('.//textstream', namespace))):
src = textstream.get('src')
if not src or src in urls:
continue
urls.append(src)
ext = textstream.get('ext') or mimetype2ext(textstream.get('type')) or determine_ext(src)
lang = textstream.get('systemLanguage') or textstream.get('systemLanguageName') or textstream.get('lang') or subtitles_lang
subtitles.setdefault(lang, []).append({
'url': src,
'ext': ext,
})
return subtitles
def _extract_xspf_playlist(self, playlist_url, playlist_id, fatal=True):
xspf = self._download_xml(
playlist_url, playlist_id, 'Downloading xspf playlist',
'Unable to download xspf manifest', fatal=fatal)
if xspf is False:
return []
return self._parse_xspf(xspf, playlist_id)
def _parse_xspf(self, playlist, playlist_id):
NS_MAP = {
'xspf': 'http://xspf.org/ns/0/',
's1': 'http://static.streamone.nl/player/ns/0',
}
entries = []
for track in playlist.findall(xpath_with_ns('./xspf:trackList/xspf:track', NS_MAP)):
title = xpath_text(
track, xpath_with_ns('./xspf:title', NS_MAP), 'title', default=playlist_id)
description = xpath_text(
track, xpath_with_ns('./xspf:annotation', NS_MAP), 'description')
thumbnail = xpath_text(
track, xpath_with_ns('./xspf:image', NS_MAP), 'thumbnail')
duration = float_or_none(
xpath_text(track, xpath_with_ns('./xspf:duration', NS_MAP), 'duration'), 1000)
formats = [{
'url': location.text,
'format_id': location.get(xpath_with_ns('s1:label', NS_MAP)),
'width': int_or_none(location.get(xpath_with_ns('s1:width', NS_MAP))),
'height': int_or_none(location.get(xpath_with_ns('s1:height', NS_MAP))),
} for location in track.findall(xpath_with_ns('./xspf:location', NS_MAP))]
self._sort_formats(formats)
entries.append({
'id': playlist_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'duration': duration,
'formats': formats,
})
return entries
def _extract_mpd_formats(self, mpd_url, video_id, mpd_id=None, note=None, errnote=None, fatal=True, formats_dict={}):
res = self._download_webpage_handle(
mpd_url, video_id,
note=note or 'Downloading MPD manifest',
errnote=errnote or 'Failed to download MPD manifest',
fatal=fatal)
if res is False:
return []
mpd, urlh = res
mpd_base_url = re.match(r'https?://.+/', urlh.geturl()).group()
return self._parse_mpd_formats(
compat_etree_fromstring(mpd.encode('utf-8')), mpd_id, mpd_base_url,
formats_dict=formats_dict, mpd_url=mpd_url)
def _parse_mpd_formats(self, mpd_doc, mpd_id=None, mpd_base_url='', formats_dict={}, mpd_url=None):
"""
Parse formats from MPD manifest.
References:
1. MPEG-DASH Standard, ISO/IEC 23009-1:2014(E),
http://standards.iso.org/ittf/PubliclyAvailableStandards/c065274_ISO_IEC_23009-1_2014.zip
2. https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
"""
if mpd_doc.get('type') == 'dynamic':
return []
namespace = self._search_regex(r'(?i)^{([^}]+)?}MPD$', mpd_doc.tag, 'namespace', default=None)
def _add_ns(path):
return self._xpath_ns(path, namespace)
def is_drm_protected(element):
return element.find(_add_ns('ContentProtection')) is not None
def extract_multisegment_info(element, ms_parent_info):
ms_info = ms_parent_info.copy()
# As per [1, 5.3.9.2.2] SegmentList and SegmentTemplate share some
# common attributes and elements. We will only extract what is
# relevant for us.
def extract_common(source):
segment_timeline = source.find(_add_ns('SegmentTimeline'))
if segment_timeline is not None:
s_e = segment_timeline.findall(_add_ns('S'))
if s_e:
ms_info['total_number'] = 0
ms_info['s'] = []
for s in s_e:
r = int(s.get('r', 0))
ms_info['total_number'] += 1 + r
ms_info['s'].append({
't': int(s.get('t', 0)),
# @d is mandatory (see [1, 5.3.9.6.2, Table 17, page 60])
'd': int(s.attrib['d']),
'r': r,
})
start_number = source.get('startNumber')
if start_number:
ms_info['start_number'] = int(start_number)
timescale = source.get('timescale')
if timescale:
ms_info['timescale'] = int(timescale)
segment_duration = source.get('duration')
if segment_duration:
ms_info['segment_duration'] = int(segment_duration)
def extract_Initialization(source):
initialization = source.find(_add_ns('Initialization'))
if initialization is not None:
ms_info['initialization_url'] = initialization.attrib['sourceURL']
segment_list = element.find(_add_ns('SegmentList'))
if segment_list is not None:
extract_common(segment_list)
extract_Initialization(segment_list)
segment_urls_e = segment_list.findall(_add_ns('SegmentURL'))
if segment_urls_e:
ms_info['segment_urls'] = [segment.attrib['media'] for segment in segment_urls_e]
else:
segment_template = element.find(_add_ns('SegmentTemplate'))
if segment_template is not None:
extract_common(segment_template)
media_template = segment_template.get('media')
if media_template:
ms_info['media_template'] = media_template
initialization = segment_template.get('initialization')
if initialization:
ms_info['initialization_url'] = initialization
else:
extract_Initialization(segment_template)
return ms_info
def combine_url(base_url, target_url):
if re.match(r'^https?://', target_url):
return target_url
return '%s%s%s' % (base_url, '' if base_url.endswith('/') else '/', target_url)
mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration'))
formats = []
for period in mpd_doc.findall(_add_ns('Period')):
period_duration = parse_duration(period.get('duration')) or mpd_duration
period_ms_info = extract_multisegment_info(period, {
'start_number': 1,
'timescale': 1,
})
for adaptation_set in period.findall(_add_ns('AdaptationSet')):
if is_drm_protected(adaptation_set):
continue
adaption_set_ms_info = extract_multisegment_info(adaptation_set, period_ms_info)
for representation in adaptation_set.findall(_add_ns('Representation')):
if is_drm_protected(representation):
continue
representation_attrib = adaptation_set.attrib.copy()
representation_attrib.update(representation.attrib)
# According to [1, 5.3.7.2, Table 9, page 41], @mimeType is mandatory
mime_type = representation_attrib['mimeType']
content_type = mime_type.split('/')[0]
if content_type == 'text':
# TODO implement WebVTT downloading
pass
elif content_type == 'video' or content_type == 'audio':
base_url = ''
for element in (representation, adaptation_set, period, mpd_doc):
base_url_e = element.find(_add_ns('BaseURL'))
if base_url_e is not None:
base_url = base_url_e.text + base_url
if re.match(r'^https?://', base_url):
break
if mpd_base_url and not re.match(r'^https?://', base_url):
if not mpd_base_url.endswith('/') and not base_url.startswith('/'):
mpd_base_url += '/'
base_url = mpd_base_url + base_url
representation_id = representation_attrib.get('id')
lang = representation_attrib.get('lang')
url_el = representation.find(_add_ns('BaseURL'))
filesize = int_or_none(url_el.attrib.get('{http://youtube.com/yt/2012/10/10}contentLength') if url_el is not None else None)
f = {
'format_id': '%s-%s' % (mpd_id, representation_id) if mpd_id else representation_id,
'url': base_url,
'manifest_url': mpd_url,
'ext': mimetype2ext(mime_type),
'width': int_or_none(representation_attrib.get('width')),
'height': int_or_none(representation_attrib.get('height')),
'tbr': int_or_none(representation_attrib.get('bandwidth'), 1000),
'asr': int_or_none(representation_attrib.get('audioSamplingRate')),
'fps': int_or_none(representation_attrib.get('frameRate')),
'vcodec': 'none' if content_type == 'audio' else representation_attrib.get('codecs'),
'acodec': 'none' if content_type == 'video' else representation_attrib.get('codecs'),
'language': lang if lang not in ('mul', 'und', 'zxx', 'mis') else None,
'format_note': 'DASH %s' % content_type,
'filesize': filesize,
}
representation_ms_info = extract_multisegment_info(representation, adaption_set_ms_info)
if 'segment_urls' not in representation_ms_info and 'media_template' in representation_ms_info:
media_template = representation_ms_info['media_template']
media_template = media_template.replace('$RepresentationID$', representation_id)
media_template = re.sub(r'\$(Number|Bandwidth|Time)\$', r'%(\1)d', media_template)
media_template = re.sub(r'\$(Number|Bandwidth|Time)%([^$]+)\$', r'%(\1)\2', media_template)
media_template = media_template.replace('$$', '$')
# As per [1, 5.3.9.4.4, Table 16, page 55] $Number$ and $Time$
# can't be used at the same time
if '%(Number' in media_template and 's' not in representation_ms_info:
segment_duration = None
if 'total_number' not in representation_ms_info and 'segment_duration' in representation_ms_info:
segment_duration = float_or_none(representation_ms_info['segment_duration'], representation_ms_info['timescale'])
representation_ms_info['total_number'] = int(math.ceil(float(period_duration) / segment_duration))
representation_ms_info['fragments'] = [{
'url': media_template % {
'Number': segment_number,
'Bandwidth': representation_attrib.get('bandwidth'),
},
'duration': segment_duration,
} for segment_number in range(
representation_ms_info['start_number'],
representation_ms_info['total_number'] + representation_ms_info['start_number'])]
else:
# $Number*$ or $Time$ in media template with S list available
# Example $Number*$: http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg
# Example $Time$: https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411
representation_ms_info['fragments'] = []
segment_time = 0
segment_d = None
segment_number = representation_ms_info['start_number']
def add_segment_url():
segment_url = media_template % {
'Time': segment_time,
'Bandwidth': representation_attrib.get('bandwidth'),
'Number': segment_number,
}
representation_ms_info['fragments'].append({
'url': segment_url,
'duration': float_or_none(segment_d, representation_ms_info['timescale']),
})
for num, s in enumerate(representation_ms_info['s']):
segment_time = s.get('t') or segment_time
segment_d = s['d']
add_segment_url()
segment_number += 1
for r in range(s.get('r', 0)):
segment_time += segment_d
add_segment_url()
segment_number += 1
segment_time += segment_d
elif 'segment_urls' in representation_ms_info and 's' in representation_ms_info:
# No media template
# Example: https://www.youtube.com/watch?v=iXZV5uAYMJI
# or any YouTube dashsegments video
fragments = []
segment_index = 0
timescale = representation_ms_info['timescale']
# Walk the S entries, advancing through segment_urls in step and
# honouring the @r repeat counts.
for s in representation_ms_info['s']:
duration = float_or_none(s['d'], timescale)
for r in range(s.get('r', 0) + 1):
fragments.append({
'url': representation_ms_info['segment_urls'][segment_index],
'duration': duration,
})
segment_index += 1
representation_ms_info['fragments'] = fragments
# NB: MPD manifest may contain direct URLs to unfragmented media.
# No fragments key is present in this case.
if 'fragments' in representation_ms_info:
f.update({
'fragments': [],
'protocol': 'http_dash_segments',
})
if 'initialization_url' in representation_ms_info:
initialization_url = representation_ms_info['initialization_url'].replace('$RepresentationID$', representation_id)
if not f.get('url'):
f['url'] = initialization_url
f['fragments'].append({'url': initialization_url})
f['fragments'].extend(representation_ms_info['fragments'])
for fragment in f['fragments']:
fragment['url'] = combine_url(base_url, fragment['url'])
try:
existing_format = next(
fo for fo in formats
if fo['format_id'] == representation_id)
except StopIteration:
full_info = formats_dict.get(representation_id, {}).copy()
full_info.update(f)
formats.append(full_info)
else:
existing_format.update(f)
else:
self.report_warning('Unknown MIME type %s in DASH manifest' % mime_type)
return formats
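# Template expansion sketch: a (made-up) media template such as
# 'seg-$RepresentationID$-$Number%05d$.m4s' for representation 'v1' becomes
# 'seg-v1-%(Number)05d.m4s' after the substitutions above, so fragment URLs
# come out as 'seg-v1-00001.m4s', 'seg-v1-00002.m4s', and so on.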
def _parse_html5_media_entries(self, base_url, webpage, video_id, m3u8_id=None, m3u8_entry_protocol='m3u8'):
def absolute_url(video_url):
return compat_urlparse.urljoin(base_url, video_url)
def parse_content_type(content_type):
if not content_type:
return {}
ctr = re.search(r'(?P<mimetype>[^/]+/[^;]+)(?:;\s*codecs="?(?P<codecs>[^"]+))?', content_type)
if ctr:
mimetype, codecs = ctr.groups()
f = parse_codecs(codecs)
f['ext'] = mimetype2ext(mimetype)
return f
return {}
def _media_formats(src, cur_media_type):
full_url = absolute_url(src)
if determine_ext(full_url) == 'm3u8':
is_plain_url = False
formats = self._extract_m3u8_formats(
full_url, video_id, ext='mp4',
entry_protocol=m3u8_entry_protocol, m3u8_id=m3u8_id)
else:
is_plain_url = True
formats = [{
'url': full_url,
'vcodec': 'none' if cur_media_type == 'audio' else None,
}]
return is_plain_url, formats
entries = []
for media_tag, media_type, media_content in re.findall(r'(?s)(<(?P<tag>video|audio)[^>]*>)(.*?)</(?P=tag)>', webpage):
media_info = {
'formats': [],
'subtitles': {},
}
media_attributes = extract_attributes(media_tag)
src = media_attributes.get('src')
if src:
_, formats = _media_formats(src, media_type)
media_info['formats'].extend(formats)
media_info['thumbnail'] = media_attributes.get('poster')
if media_content:
for source_tag in re.findall(r'<source[^>]+>', media_content):
source_attributes = extract_attributes(source_tag)
src = source_attributes.get('src')
if not src:
continue
is_plain_url, formats = _media_formats(src, media_type)
if is_plain_url:
f = parse_content_type(source_attributes.get('type'))
f.update(formats[0])
media_info['formats'].append(f)
else:
media_info['formats'].extend(formats)
for track_tag in re.findall(r'<track[^>]+>', media_content):
track_attributes = extract_attributes(track_tag)
kind = track_attributes.get('kind')
if not kind or kind in ('subtitles', 'captions'):
src = track_attributes.get('src')
if not src:
continue
lang = track_attributes.get('srclang') or track_attributes.get('lang') or track_attributes.get('label')
media_info['subtitles'].setdefault(lang, []).append({
'url': absolute_url(src),
})
if media_info['formats'] or media_info['subtitles']:
entries.append(media_info)
return entries
def _extract_akamai_formats(self, manifest_url, video_id):
formats = []
hdcore_sign = 'hdcore=3.7.0'
f4m_url = re.sub(r'(https?://.+?)/i/', r'\1/z/', manifest_url).replace('/master.m3u8', '/manifest.f4m')
if 'hdcore=' not in f4m_url:
f4m_url += ('&' if '?' in f4m_url else '?') + hdcore_sign
f4m_formats = self._extract_f4m_formats(
f4m_url, video_id, f4m_id='hds', fatal=False)
for entry in f4m_formats:
entry.update({'extra_param_to_segment_url': hdcore_sign})
formats.extend(f4m_formats)
m3u8_url = re.sub(r'(https?://.+?)/z/', r'\1/i/', manifest_url).replace('/manifest.f4m', '/master.m3u8')
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
return formats
def _extract_wowza_formats(self, url, video_id, m3u8_entry_protocol='m3u8_native', skip_protocols=[]):
url = re.sub(r'/(?:manifest|playlist|jwplayer)\.(?:m3u8|f4m|mpd|smil)', '', url)
url_base = self._search_regex(r'(?:https?|rtmp|rtsp)(://[^?]+)', url, 'format url')
http_base_url = 'http' + url_base
formats = []
if 'm3u8' not in skip_protocols:
formats.extend(self._extract_m3u8_formats(
http_base_url + '/playlist.m3u8', video_id, 'mp4',
m3u8_entry_protocol, m3u8_id='hls', fatal=False))
if 'f4m' not in skip_protocols:
formats.extend(self._extract_f4m_formats(
http_base_url + '/manifest.f4m',
video_id, f4m_id='hds', fatal=False))
if re.search(r'(?:/smil:|\.smil)', url_base):
if 'dash' not in skip_protocols:
formats.extend(self._extract_mpd_formats(
http_base_url + '/manifest.mpd',
video_id, mpd_id='dash', fatal=False))
if 'smil' not in skip_protocols:
rtmp_formats = self._extract_smil_formats(
http_base_url + '/jwplayer.smil',
video_id, fatal=False)
for rtmp_format in rtmp_formats:
rtsp_format = rtmp_format.copy()
rtsp_format['url'] = '%s/%s' % (rtmp_format['url'], rtmp_format['play_path'])
del rtsp_format['play_path']
del rtsp_format['ext']
rtsp_format.update({
'url': rtsp_format['url'].replace('rtmp://', 'rtsp://'),
'format_id': rtmp_format['format_id'].replace('rtmp', 'rtsp'),
'protocol': 'rtsp',
})
formats.extend([rtmp_format, rtsp_format])
else:
for protocol in ('rtmp', 'rtsp'):
if protocol not in skip_protocols:
formats.append({
'url': protocol + url_base,
'format_id': protocol,
'protocol': protocol,
})
return formats
def _live_title(self, name):
""" Generate the title for a live video """
now = datetime.datetime.now()
now_str = now.strftime('%Y-%m-%d %H:%M')
return name + ' ' + now_str
def _int(self, v, name, fatal=False, **kwargs):
res = int_or_none(v, **kwargs)
if res is None:
msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
if fatal:
raise ExtractorError(msg)
else:
self._downloader.report_warning(msg)
return res
def _float(self, v, name, fatal=False, **kwargs):
res = float_or_none(v, **kwargs)
if res is None:
msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
if fatal:
raise ExtractorError(msg)
else:
self._downloader.report_warning(msg)
return res
def _set_cookie(self, domain, name, value, expire_time=None):
cookie = compat_cookiejar.Cookie(
0, name, value, None, None, domain, None,
None, '/', True, False, expire_time, '', None, None, None)
self._downloader.cookiejar.set_cookie(cookie)
def _get_cookies(self, url):
""" Return a compat_cookies.SimpleCookie with the cookies for the url """
req = sanitized_Request(url)
self._downloader.cookiejar.add_cookie_header(req)
return compat_cookies.SimpleCookie(req.get_header('Cookie'))
def get_testcases(self, include_onlymatching=False):
t = getattr(self, '_TEST', None)
if t:
assert not hasattr(self, '_TESTS'), \
'%s has _TEST and _TESTS' % type(self).__name__
tests = [t]
else:
tests = getattr(self, '_TESTS', [])
for t in tests:
if not include_onlymatching and t.get('only_matching', False):
continue
t['name'] = type(self).__name__[:-len('IE')]
yield t
def is_suitable(self, age_limit):
""" Test whether the extractor is generally suitable for the given
age limit (i.e. pornographic sites are not, all others usually are) """
any_restricted = False
for tc in self.get_testcases(include_onlymatching=False):
if tc.get('playlist', []):
tc = tc['playlist'][0]
is_restricted = age_restricted(
tc.get('info_dict', {}).get('age_limit'), age_limit)
if not is_restricted:
return True
any_restricted = any_restricted or is_restricted
return not any_restricted
def extract_subtitles(self, *args, **kwargs):
if (self._downloader.params.get('writesubtitles', False) or
self._downloader.params.get('listsubtitles')):
return self._get_subtitles(*args, **kwargs)
return {}
def _get_subtitles(self, *args, **kwargs):
raise NotImplementedError('This method must be implemented by subclasses')
@staticmethod
def _merge_subtitle_items(subtitle_list1, subtitle_list2):
""" Merge subtitle items for one language. Items with duplicated URLs
will be dropped. """
list1_urls = set([item['url'] for item in subtitle_list1])
ret = list(subtitle_list1)
ret.extend([item for item in subtitle_list2 if item['url'] not in list1_urls])
return ret
@classmethod
def _merge_subtitles(cls, subtitle_dict1, subtitle_dict2):
""" Merge two subtitle dictionaries, language by language. """
ret = dict(subtitle_dict1)
for lang in subtitle_dict2:
ret[lang] = cls._merge_subtitle_items(subtitle_dict1.get(lang, []), subtitle_dict2[lang])
return ret
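# Merge sketch (fabricated URLs): items are combined per language and
# entries from the second dict with duplicate URLs are dropped.
#
#   d1 = {'en': [{'url': 'http://a/en.vtt'}]}
#   d2 = {'en': [{'url': 'http://a/en.vtt'}, {'url': 'http://b/en.srt'}],
#         'de': [{'url': 'http://a/de.vtt'}]}
#   InfoExtractor._merge_subtitles(d1, d2)
#   # -> {'en': [{'url': 'http://a/en.vtt'}, {'url': 'http://b/en.srt'}],
#   #     'de': [{'url': 'http://a/de.vtt'}]}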
def extract_automatic_captions(self, *args, **kwargs):
if (self._downloader.params.get('writeautomaticsub', False) or
self._downloader.params.get('listsubtitles')):
return self._get_automatic_captions(*args, **kwargs)
return {}
def _get_automatic_captions(self, *args, **kwargs):
raise NotImplementedError('This method must be implemented by subclasses')
def mark_watched(self, *args, **kwargs):
if (self._downloader.params.get('mark_watched', False) and
(self._get_login_info()[0] is not None or
self._downloader.params.get('cookiefile') is not None)):
self._mark_watched(*args, **kwargs)
def _mark_watched(self, *args, **kwargs):
raise NotImplementedError('This method must be implemented by subclasses')
def geo_verification_headers(self):
headers = {}
geo_verification_proxy = self._downloader.params.get('geo_verification_proxy')
if geo_verification_proxy:
headers['Ytdl-request-proxy'] = geo_verification_proxy
return headers
class SearchInfoExtractor(InfoExtractor):
"""
Base class for paged search queries extractors.
They accept URLs in the format _SEARCH_KEY(|all|[0-9]):{query}
Instances should define _SEARCH_KEY and _MAX_RESULTS.
"""
@classmethod
def _make_valid_url(cls):
return r'%s(?P<prefix>|[1-9][0-9]*|all):(?P<query>[\s\S]+)' % cls._SEARCH_KEY
@classmethod
def suitable(cls, url):
return re.match(cls._make_valid_url(), url) is not None
def _real_extract(self, query):
mobj = re.match(self._make_valid_url(), query)
if mobj is None:
raise ExtractorError('Invalid search query "%s"' % query)
prefix = mobj.group('prefix')
query = mobj.group('query')
if prefix == '':
return self._get_n_results(query, 1)
elif prefix == 'all':
return self._get_n_results(query, self._MAX_RESULTS)
else:
n = int(prefix)
if n <= 0:
raise ExtractorError('invalid download number %s for query "%s"' % (n, query))
elif n > self._MAX_RESULTS:
self._downloader.report_warning('%s returns max %i results (you requested %i)' % (self._SEARCH_KEY, self._MAX_RESULTS, n))
n = self._MAX_RESULTS
return self._get_n_results(query, n)
def _get_n_results(self, query, n):
"""Get a specified number of results for a query"""
raise NotImplementedError('This method must be implemented by subclasses')
@property
def SEARCH_KEY(self):
return self._SEARCH_KEY
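### sketch (not part of the original source): a minimal subclass, with a
### hypothetical name and URL scheme, showing the two attributes and one
### method a search extractor must define.
# class ExampleSearchIE(SearchInfoExtractor):
#     _SEARCH_KEY = 'examplesearch'
#     _MAX_RESULTS = 50
#
#     def _get_n_results(self, query, n):
#         # a real extractor would query the site and page through results
#         entries = [self.url_result('http://example.com/video/%d' % i)
#                    for i in range(n)]
#         return self.playlist_result(entries, playlist_title=query)
# # 'examplesearch5:kittens' would then extract the first 5 results,
# # 'examplesearchall:kittens' up to _MAX_RESULTS.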
| pim89/youtube-dl | youtube_dl/extractor/common.py | Python | unlicense | 96,165 | ["VisIt"] | a5497e905b54d25e126c0473d290f45a51e71ac895598a4bb236767791710d41 |
# Standard Python packages
import math, cmath
import re
import itertools
import numbers
import random
# Special dependencies
import numpy, numpy.random # sudo apt-get install python-numpy
# import minuit # no package
# Augustus dependencies
from augustus.kernel.unitable import UniTable
# Cassius interdependencies
import mathtools
import utilities
import color
import containers
class ContainerException(Exception):
"""Run-time errors in container objects."""
pass
class AutoType:
def __repr__(self):
if self is Auto:
return "Auto"
else:
raise ContainerException, "There must only be one instance of Auto"
def __copy__(self):
return self
def __deepcopy__(self, memo):
return self
#: Symbol indicating that a frame argument should be
#: automatically-generated, if possible. Similar to `None` in that
#: there is only one instance (checked with `is`), but with a different
#: meaning.
#:
#: Example:
#: `xticks = None` means that no x-ticks are drawn
#:
#: `xticks = Auto` means that x-ticks are automatically generated
#:
#: `Auto` is the only instance of `AutoType`.
Auto = AutoType()
######################################################### Layout of the page, coordinate frames, overlays
# for arranging a grid of plots
class Layout:
"""Represents a regular grid of plots.
Signatures::
Layout(nrows, ncols, plot1[, plot2[, ...]])
Layout(plot1[, plot2[, ...]], nrows=value, ncols=value)
Arguments:
nrows (number): number of rows
ncols (number): number of columns
plots (list of `Frame` or other `Layout` objects): plots to
draw, organized in normal reading order (left to right within a
row, then top to bottom)
Public Members:
`nrows`, `ncols`, `plots`
Behavior:
It is possible to create an empty Layout (no plots).
For a Layout object named `layout`, `layout[i,j]` accesses a
plot in row `i` and column `j`, while `layout.plots[k]`
accesses a plot by a serial index (`layout.plots` is a normal
list).
Spaces containing `None` will be blank.
Layouts can be nested: e.g. `Layout(1, 2, top, Layout(2, 1,
bottomleft, bottomright))`.
"""
def __init__(self, *args, **kwds):
if "nrows" in kwds and "ncols" in kwds:
self.nrows, self.ncols = kwds["nrows"], kwds["ncols"]
self.plots = list(args)
if set(kwds.keys()) != set(["nrows", "ncols"]):
raise TypeError, "Unrecognized keyword argument"
elif len(args) >= 2 and isinstance(args[0], (numbers.Number, numpy.number)) and isinstance(args[1], (numbers.Number, numpy.number)):
self.nrows, self.ncols = args[0:2]
self.plots = list(args[2:])
if set(kwds.keys()) != set([]):
raise TypeError, "Unrecognized keyword argument"
else:
raise TypeError, "Missing nrows or ncols argument"
def index(self, i, j):
"""Convert a grid index (i,j) into a serial index."""
if i < 0 or j < 0 or i >= self.nrows or j >= self.ncols:
raise ContainerException, "Index (%d,%d) is beyond the %dx%d grid of plots" % (i, j, self.nrows, self.ncols)
return self.ncols*i + j
def __getitem__(self, ij):
i, j = ij
index = self.index(i, j)
if index < len(self.plots):
return self.plots[index]
else:
return None
def __setitem__(self, ij, value):
i, j = ij
index = self.index(i, j)
if index < len(self.plots):
self.plots[index] = value
else:
for k in range(len(self.plots), index):
self.plots.append(None)
self.plots.append(value)
def __delitem__(self, ij):
i, j = ij
if self.index(i, j) < len(self.plots):
self.plots[self.index(i, j)] = None
def __repr__(self):
return "<Layout %dx%d at 0x%x>" % (self.nrows, self.ncols, id(self))
# for representing a coordinate axis
class Frame:
"""Abstract superclass for all plots with drawable coordinate frames.
Frame arguments:
Any frame argument (axis labels, margins, etc.) can be passed
as a keyword in the constructor or later as member data. The
frame arguments are interpreted only by the backend and are
replaced with defaults if not present.
Public Members:
All frame arguments that have been set.
"""
_not_frameargs = []
def __init__(self, **frameargs):
self.__dict__.update(frameargs)
def __repr__(self):
return "<Frame %s at 0x%x>" % (str(self._frameargs()), id(self))
def _frameargs(self):
output = dict(self.__dict__)
for i in self._not_frameargs:
if i in output: del output[i]
for i in output.keys():
if i[0] == "_": del output[i]
return output
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
The abstract class, `Frame`, returns constant intervals (0, 1)
(or (0.1, 1) for log scales).
"""
if xlog:
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
if ylog:
ymin, ymax = 0.1, 1.
else:
ymin, ymax = 0., 1.
return xmin, ymin, xmax, ymax
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=None, ylog=None): pass # get ready to be drawn
# for overlaying different containers' data in a single frame
class Overlay(Frame):
"""Represents an overlay of several plots in the same coordinate axis.
Signatures::
Overlay(frame, plot1[, plot2[, ...]], [framearg=value[, ...]])
Overlay(plot1[, plot2[, ...]], [frame=value[, framearg=value[, ...]]])
Arguments:
plots (`Frame` instances): plots to be overlaid
frame (index or `None`): which, if any, plot to use to set the
coordinate frame. If `frame=None`, then `frameargs` will be
taken from the `Overlay` instance and a data-space bounding box
will be derived from the union of all contents.
Public Members:
`plots`, `frame`
Behavior:
It is *not* possible to create an empty Overlay (no plots).
"""
_not_frameargs = ["plots", "frame"]
def __init__(self, first, *others, **frameargs):
if isinstance(first, (int, long)):
self.frame = first
self.plots = list(others)
else:
self.plots = [first] + list(others)
Frame.__init__(self, **frameargs)
def append(self, plot):
"""Append a plot to the end of `plots` (drawn last), keeping the `frame` pointer up-to-date."""
self.plots.append(plot)
if getattr(self, "frame", None) is not None and self.frame < 0:
self.frame -= 1
def prepend(self, plot):
"""Prepend a plot at the beginning of `plots` (drawn first), keeping the `frame` pointer up-to-date."""
self.plots.insert(0, plot)
if getattr(self, "frame", None) is not None and self.frame >= 0:
self.frame += 1
def __repr__(self):
if getattr(self, "frame", None) is not None:
return "<Overlay %d items (frame=%d) at 0x%x>" % (len(self.plots), self.frame, id(self))
else:
return "<Overlay %d items at 0x%x>" % (len(self.plots), id(self))
def _frameargs(self):
if getattr(self, "frame", None) is not None:
if self.frame >= len(self.plots):
raise ContainerException, "Overlay.frame points to a non-existent plot (%d <= %d)" % (self.frame, len(self.plots))
output = dict(self.plots[self.frame].__dict__)
output.update(self.__dict__)
else:
output = dict(self.__dict__)
for i in self._not_frameargs:
if i in output: del output[i]
for i in output.keys():
if i[0] == "_": del output[i]
return output
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box of all contents as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
if getattr(self, "frame", None) is not None:
if self.frame >= len(self.plots):
raise ContainerException, "Overlay.frame points to a non-existent plot (%d <= %d)" % (self.frame, len(self.plots))
return self.plots[self.frame].ranges(xlog, ylog)
xmins, ymins, xmaxs, ymaxs = [], [], [], []
for plot in self.plots:
xmin, ymin, xmax, ymax = plot.ranges(xlog, ylog)
xmins.append(xmin)
ymins.append(ymin)
xmaxs.append(xmax)
ymaxs.append(ymax)
return min(xmins), min(ymins), max(xmaxs), max(ymaxs)
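### illustration (not part of the original source): overlaying two plots,
### where `histogram` and `scatter` stand for existing Frame instances;
### frame=0 takes the coordinate frame from the first plot.
# overlay = Overlay(0, histogram, scatter)
# overlay.append(another_scatter)   # drawn last; frame pointer unchanged
# overlay.ranges()                  # delegates to histogram.ranges()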
######################################################### Histograms, bar charts, pie charts
class Stack(Frame):
"""Represents a stack of histograms.
Signature::
Stack(plot1[, plot2[, ...]] [linewidths=list,] [linestyles=list,] [linecolors=list,] [**frameargs])
Arguments:
plots (list of `HistogramAbstract` instances): histograms to be stacked
linewidths (list): list of linewidths with the same length as
the number of histograms
linestyles (list): list of styles
linecolors (list): list of colors
fillcolors (list): list of fill colors (most commonly used to
distinguish between stacked histograms)
Public members:
`plots`, `linewidths`, `linestyles`, `linecolors`, `fillcolors`
Behavior:
It is *not* possible to create an empty Stack (no plots).
If `linewidths`, `linestyles`, `linecolors`, or `fillcolors`
are not specified, the input histograms' own styles will be
used.
"""
_not_frameargs = ["plots", "linewidths", "linestyles", "linecolors", "fillcolors"]
def __init__(self, first, *others, **frameargs):
self.plots = [first] + list(others)
Frame.__init__(self, **frameargs)
def __repr__(self):
return "<Stack %d at 0x%x>" % (len(self.plots), id(self))
def bins(self):
"""Returns a list of histogram (low, high) bin edges.
Exceptions:
Raises `ContainerException` if any of the histogram bins
differ (ignoring small numerical errors).
"""
bins = None
for hold in self.plots:
if bins is None:
bins = hold.bins[:]
else:
same = (len(hold.bins) == len(bins))
if same:
for oldbin, refbin in zip(hold.bins, bins):
if HistogramAbstract._numeric(hold, oldbin) and HistogramAbstract._numeric(hold, refbin):
xepsilon = mathtools.epsilon * abs(refbin[1] - refbin[0])
if abs(oldbin[0] - refbin[0]) > xepsilon or abs(oldbin[1] - refbin[1]) > xepsilon:
same = False
break
else:
if oldbin != refbin:
same = False
break
if not same:
raise ContainerException, "Bins in stacked histograms must be the same"
return bins
def stack(self):
"""Returns a list of new histograms, obtained by stacking the inputs.
Exceptions:
Raises `ContainerException` if any of the histogram bins
differ (ignoring small numerical errors).
"""
if len(self.plots) == 0:
raise ContainerException, "Stack must contain at least one histogram"
for styles in "linewidths", "linestyles", "linecolors", "fillcolors":
if getattr(self, styles, None) is not None:
if len(getattr(self, styles)) != len(self.plots):
raise ContainerException, "There must be as many %s as plots" % styles
bins = self.bins()
gap = max([i.gap for i in self.plots])
output = []
for i in xrange(len(self.plots)):
if getattr(self, "linewidths", None) is not None:
linewidth = self.linewidths[i]
else:
linewidth = self.plots[i].linewidth
if getattr(self, "linestyles", None) is not None:
linestyle = self.linestyles[i]
else:
linestyle = self.plots[i].linestyle
if getattr(self, "linecolors", None) is not None:
linecolor = self.linecolors[i]
else:
linecolor = self.plots[i].linecolor
if getattr(self, "fillcolors", None) is not None:
fillcolor = self.fillcolors[i]
else:
fillcolor = self.plots[i].fillcolor
if isinstance(self.plots[i], HistogramCategorical):
hnew = HistogramCategorical(bins, None, None, 0, linewidth, linestyle, linecolor, fillcolor, gap)
else:
hnew = HistogramAbstract(bins, 0, linewidth, linestyle, linecolor, fillcolor, gap)
for j in xrange(i+1):
for bin in xrange(len(hnew.values)):
hnew.values[bin] += self.plots[j].values[bin]
output.append(hnew)
return output
def overlay(self):
self._stack = self.stack()
self._stack.reverse()
self._overlay = Overlay(*self._stack, frame=0)
self._overlay.plots[0].__dict__.update(self._frameargs())
return self._overlay
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box of all contents as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
self.overlay()
if ylog:
ymin = min(filter(lambda y: y > 0., self._stack[-1].values))
ymax = max(filter(lambda y: y > 0., self._stack[0].values))
else:
ymin = min(list(self._stack[-1].values) + [0.])
ymax = max(self._stack[0].values)
if ymin == ymax:
if ylog:
ymin, ymax = ymin / 2., ymax * 2.
else:
ymin, ymax = ymin - 0.5, ymax + 0.5
return self.plots[0].low(), ymin, self.plots[0].high(), ymax
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=None, ylog=None):
self.overlay()
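### illustration (not part of the original source): stacking two
### histograms with identical binning; `sample1` and `sample2` stand for
### lists of numbers.
# h1 = Histogram(10, 0., 1., data=sample1)
# h2 = Histogram(10, 0., 1., data=sample2)
# s = Stack(h1, h2, fillcolors=["lightblue", "salmon"])
# s.stack()[1].values   # elementwise h1.values + h2.values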
class HistogramAbstract(Frame):
"""Abstract class for histograms: use concrete classes (Histogram, HistogramNonUniform, and HistogramCategorical) instead."""
_not_frameargs = ["bins", "storelimit", "entries", "linewidth", "linestyle", "linecolor", "fillcolor", "gap", "values", "underflow", "overflow", "inflow"]
def __init__(self, bins, storelimit, linewidth, linestyle, linecolor, fillcolor, gap, **frameargs):
self.bins, self.storelimit = bins, storelimit
self.entries = 0
self.linewidth, self.linestyle, self.linecolor, self.fillcolor, self.gap = linewidth, linestyle, linecolor, fillcolor, gap
self.values = numpy.zeros(len(self.bins), numpy.float)
self._sumx = numpy.zeros(len(self.bins), numpy.float)
self.underflow, self.overflow, self.inflow = 0., 0., 0.
if storelimit is None:
self._store = []
self._weights = []
self._lenstore = None
else:
self._store = numpy.empty(storelimit, numpy.float)
self._weights = numpy.empty(storelimit, numpy.float)
self._lenstore = 0
Frame.__init__(self, **frameargs)
def __repr__(self):
return "<HistogramAbstract at 0x%x>" % id(self)
def _numeric(self, bin):
return len(bin) == 2 and isinstance(bin[0], (numbers.Number, numpy.number)) and isinstance(bin[1], (numbers.Number, numpy.number))
def __str__(self):
output = []
output.append("%-30s %s" % ("bin", "value"))
output.append("="*40)
if self.underflow > 0: output.append("%-30s %g" % ("underflow", self.underflow))
for i in xrange(len(self.bins)):
if self._numeric(self.bins[i]):
category = "[%g, %g)" % self.bins[i]
else:
category = "\"%s\"" % self.bins[i]
output.append("%-30s %g" % (category, self.values[i]))
if self.overflow > 0: output.append("%-30s %g" % ("overflow", self.overflow))
if self.inflow > 0: output.append("%-30s %g" % ("inflow", self.inflow))
return "\n".join(output)
def binedges(self):
"""Return numerical values for the the edges of bins."""
categorical = False
for bin in self.bins:
if not self._numeric(bin):
categorical = True
break
if categorical:
lows = map(lambda x: x - 0.5, xrange(len(self.bins)))
highs = map(lambda x: x + 0.5, xrange(len(self.bins)))
return zip(lows, highs)
else:
return self.bins[:]
def center(self, i):
"""Return the center (x value) of bin `i`."""
if self._numeric(self.bins[i]):
return (self.bins[i][0] + self.bins[i][1])/2.
else:
return self.bins[i]
def centers(self):
"""Return the centers of all bins."""
return [self.center(i) for i in range(len(self.bins))]
def centroid(self, i):
"""Return the centroid (average data x value) of bin `i`."""
if self.values[i] == 0.:
return self.center(i)
else:
return self._sumx[i] / self.values[i]
def centroids(self):
"""Return the centroids of all bins."""
return [self.centroid(i) for i in range(len(self.bins))]
def mean(self, decimals=Auto, sigfigs=Auto, string=False):
"""Calculate the mean of the distribution, using bin contents.
Keyword arguments:
decimals (int or `Auto`): number of digits after the decimal
point to round the result, if not `Auto`
sigfigs (int or `Auto`): number of significant digits to round
the result, if not `Auto`
string (bool): return output as a string (forces number of digits)
"""
numer = 0.
denom = 0.
for bin, value in zip(self.bins, self.values):
if self._numeric(bin):
width = bin[1] - bin[0]
center = (bin[0] + bin[1])/2.
else:
raise ContainerException, "The mean of a categorical histogram is not meaningful"
numer += width * value * center
denom += width * value
output = numer/denom
if decimals is not Auto:
if string:
return mathtools.str_round(output, decimals)
else:
return round(output, decimals)
elif sigfigs is not Auto:
if string:
return mathtools.str_sigfigs(output, sigfigs)
else:
return mathtools.round_sigfigs(output, sigfigs)
else:
if string:
return str(output)
else:
return output
def rms(self, decimals=Auto, sigfigs=Auto, string=False):
"""Calculate the root-mean-square of the distribution, using bin contents.
Keyword arguments:
decimals (int or `Auto`): number of digits after the decimal
point to round the result, if not `Auto`
sigfigs (int or `Auto`): number of significant digits to round
the result, if not `Auto`
string (bool): return output as a string (forces number of digits)
"""
numer = 0.
denom = 0.
for bin, value in zip(self.bins, self.values):
if self._numeric(bin):
width = bin[1] - bin[0]
center = (bin[0] + bin[1])/2.
else:
raise ContainerException, "The RMS of a categorical histogram is not meaningful"
numer += width * value * center**2
denom += width * value
output = math.sqrt(numer/denom)
if decimals is not Auto:
if string:
return mathtools.str_round(output, decimals)
else:
return round(output, decimals)
elif sigfigs is not Auto:
if string:
return mathtools.str_sigfigs(output, sigfigs)
else:
return mathtools.round_sigfigs(output, sigfigs)
else:
if string:
return str(output)
else:
return output
def stdev(self, unbiased=True, decimals=Auto, sigfigs=Auto, string=False):
"""Calculate the standard deviation of the distribution, using bin contents.
Keyword arguments:
unbiased (bool defaulting to True): return unbiased sample
deviation, sqrt(sum(xi - mean)**2/(N - 1)), rather than the
biased estimator, sqrt(sum(xi - mean)**2/ N )
decimals (int or `Auto`): number of digits after the decimal
point to round the result, if not `Auto`
sigfigs (int or `Auto`): number of significant digits to round
the result, if not `Auto`
string (bool): return output as a string (forces number of digits)
Note:
To use an unbiased estimator, the "entries" member must be
properly defined and greater than 1.
"""
numer1 = 0.
numer2 = 0.
denom = 0.
for bin, value in zip(self.bins, self.values):
if self._numeric(bin):
width = bin[1] - bin[0]
center = (bin[0] + bin[1])/2.
else:
raise ContainerException, "The standard deviation of a categorical histogram is not meaningful"
numer1 += width * value * center
numer2 += width * value * center**2
denom += width * value
output = math.sqrt(numer2/denom - (numer1/denom)**2)
if unbiased:
if self.entries <= 1.:
raise ValueError, "To use an unbiased standard deviation estimator, the \"entries\" member must be properly defined and greater than 1."
output *= math.sqrt(self.entries / (self.entries - 1.))
if decimals is not Auto:
if string:
return mathtools.str_round(output, decimals)
else:
return round(output, decimals)
elif sigfigs is not Auto:
if string:
return mathtools.str_sigfigs(output, sigfigs)
else:
return mathtools.round_sigfigs(output, sigfigs)
else:
if string:
return str(output)
else:
return output
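### illustration (not part of the original source): the three moment
### methods compute weighted statistics from bin centers, e.g.
### mean = sum(width*value*center) / sum(width*value).
# h = Histogram(4, 0., 4., data=[0.5, 1.5, 1.5, 2.5])
# h.mean()                  # 1.5
# h.rms()                   # sqrt(11./4.), the RMS about zero
# h.stdev(unbiased=False)   # sqrt(<x**2> - <x>**2) ~ 0.707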
def store(self):
"""Return a _copy_ of the histogram's stored values (if any)."""
if self._lenstore is None:
return self._store[:]
else:
return self._store[0:self._lenstore]
def weights(self):
"""Return a _copy_ of the histogram's stored weights (if any)."""
if self._lenstore is None:
return self._weights[:]
else:
return self._weights[0:self._lenstore]
def clearbins(self):
"""Clear all bin values, including `underflow`, `overflow`, and `inflow`, and set `entries` to zero."""
self.entries = 0
self.values = numpy.zeros(len(self.bins), self.values.dtype)
self._sumx = numpy.zeros(len(self.bins), self._sumx.dtype)
self.underflow, self.overflow, self.inflow = 0., 0., 0.
def clearstore(self):
"""Clear the histogram's stored values (if any)."""
if self._lenstore is None:
self._store = []
self._weights = []
else:
self._lenstore = 0
def refill(self):
"""Clear and refill all bin values using the stored values (if any)."""
self.clearbins()
self.fill(self._store, self._weights, self._lenstore, fillstore=False)
def support(self):
"""Return the widest interval of bin values with non-zero contents."""
all_numeric = True
for bin in self.bins:
if not self._numeric(bin):
all_numeric = False
break
xmin, xmax = None, None
output = []
for bin, value in zip(self.bins, self.values):
if value > 0.:
if all_numeric:
x1, x2 = bin
if xmin is None or x1 < xmin: xmin = x1
if xmax is None or x2 > xmax: xmax = x2
else:
output.append(bin)
if all_numeric: return xmin, xmax
else: return output
def scatter(self, centroids=False, poisson=False, **frameargs):
"""Return the bins and values of the histogram as a Scatter plot.
Arguments:
centroids (bool): if `False`, use bin centers; if `True`,
use centroids
poisson (bool): if `False`, do not create error bars; if
`True`, create error bars assuming the bin contents to
belong to Poisson distributions
Note:
Asymmetric Poisson tail-probability is used for error bars
on quantities up to 20 (using a pre-calculated table);
for 20 and above, a symmetric square root is used
(approximating Poisson(x) ~ Gaussian(x) for x >> 1).
"""
kwds = {"linewidth": self.linewidth,
"linestyle": self.linestyle,
"linecolor": self.linecolor}
kwds.update(frameargs)
def poisson_errorbars(value):
if value < 20:
return {0: (0, 1.1475924708896912),
1: (-1, 1.3593357241843194),
2: (-2, 1.5187126521158518),
3: (-2.1423687562878797, 1.7239415816257235),
4: (-2.2961052720689565, 1.9815257924746845),
5: (-2.4893042928478337, 2.2102901353154891),
6: (-2.6785495948620621, 2.418184093020642),
7: (-2.8588433484599989, 2.6100604797946687),
8: (-3.0300038654056323, 2.7891396571794473),
9: (-3.1927880092968906, 2.9576883353481378),
10: (-3.348085587280849, 3.1173735938098446),
11: (-3.4967228532132424, 3.2694639669834089),
12: (-3.639421017629985, 3.4149513337692667),
13: (-3.7767979638286704, 3.5546286916146812),
14: (-3.9093811537390764, 3.6891418894420838),
15: (-4.0376219573077776, 3.8190252444691453),
16: (-4.1619085382943979, 3.9447267851063259),
17: (-4.2825766762666433, 4.0666265902382577),
18: (-4.3999186228618044, 4.185050401352413),
19: (-4.5141902851535463, 4.3002799167131514)}[value]
else:
return -math.sqrt(value), math.sqrt(value)
if poisson: values = numpy.empty((len(self.bins), 4), dtype=numpy.float)
else: values = numpy.empty((len(self.bins), 2), dtype=numpy.float)
if centroids: values[:,0] = self.centroids()
else: values[:,0] = self.centers()
values[:,1] = self.values
if poisson:
for i in range(len(self.bins)):
values[i,2:4] = poisson_errorbars(self.values[i])
return Scatter(values=values, sig=("x", "y", "eyl", "ey"), **kwds)
else:
return Scatter(values=values, sig=("x", "y"), **kwds)
### to reproduce the table:
# from scipy.stats import poisson
# from scipy.optimize import bisect
# from math import sqrt
# def calculate_entry(value):
# def down(x):
# if x < 1e-5:
# return down(1e-5) - x
# else:
# if value in (0, 1, 2):
# return poisson.cdf(value, x) - 1. - 2.*0.3413
# else:
# return poisson.cdf(value, x) - poisson.cdf(value, value) - 0.3413
# def up(x):
# if x < 1e-5:
# return up(1e-5) - x
# else:
# if value in (0, 1, 2):
# return poisson.cdf(value, x) - 1. + 2.*0.3413
# else:
# return poisson.cdf(value, x) - poisson.cdf(value, value) + 0.3413
# table[value] = bisect(down, -100., 100.) - value, bisect(up, -100., 100.) - value
# if table[value][0] + value < 0.:
# table[value] = -value, table[value][1]
# table = {}
# for i in range(20):
# calculate_entry(i)
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
xmin, ymin, xmax, ymax = None, None, None, None
all_numeric = True
for bin, value in zip(self.bins, self.values):
if self._numeric(bin):
x1, x2 = bin
if (not xlog or x1 > 0.) and (xmin is None or x1 < xmin): xmin = x1
if (not xlog or x2 > 0.) and (xmax is None or x2 > xmax): xmax = x2
else:
all_numeric = False
if (not ylog or value > 0.) and (ymin is None or value < ymin): ymin = value
if (not ylog or value > 0.) and (ymax is None or value > ymax): ymax = value
if not all_numeric:
xmin, xmax = -0.5, len(self.bins) - 0.5
if xmin is None and xmax is None:
if xlog:
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
if ymin is None and ymax is None:
if ylog:
ymin, ymax = 0.1, 1.
else:
ymin, ymax = 0., 1.
if xmin == xmax:
if xlog:
xmin, xmax = xmin/2., xmax*2.
else:
xmin, xmax = xmin - 0.5, xmax + 0.5
if ymin == ymax:
if ylog:
ymin, ymax = ymin/2., ymax*2.
else:
ymin, ymax = ymin - 0.5, ymax + 0.5
return xmin, ymin, xmax, ymax
class Histogram(HistogramAbstract):
"""Represent a 1-D histogram with uniform bins.
Arguments:
numbins (int): number of bins
low (float): low edge of first bin
high (float): high edge of last bin
storelimit (int or `None`): maximum number of values to store,
so that the histogram bins can be redrawn; `None` means no
limit
linewidth (float): scale factor for the line used to draw the
histogram border
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of the boundary
line of the histogram area; no line if `None`
fillcolor (string, color, or `None`): fill color of the
histogram area; hollow if `None`
gap (float): space drawn between bins, as a fraction of the bin
width
`**frameargs`: keyword arguments for the coordinate frame
Public members:
bins (list of `(low, high)` pairs): bin intervals (x axis)
values (numpy array of floats): contents of each bin (y axis),
has the same length as `bins`
entries (int): unweighted number of entries accumulated so far
underflow (float): number of values encountered that are less
than all bin ranges
overflow (float): number of values encountered that are greater
than all bin ranges
`storelimit`, `linewidth`, `linestyle`, `linecolor`,
`fillcolor`, and frame arguments.
Behavior:
The histogram bins are initially fixed, but can be 'reshaped'
if `entries <= storelimit`.
After construction, do not set the bins directly; use `reshape`
instead.
Setting `linecolor = None` is the only proper way to direct the
graphics backend to not draw a line around the histogram
border.
"""
def __init__(self, numbins, low, high, data=None, weights=None, storelimit=0, linewidth=1., linestyle="solid", linecolor="black", fillcolor=None, gap=0, **frameargs):
self.reshape(numbins, low, high, refill=False, warnings=False)
HistogramAbstract.__init__(self, self.bins, storelimit, linewidth, linestyle, linecolor, fillcolor, gap, **frameargs)
if data is not None:
if weights is not None:
self.fill(data, weights)
else:
self.fill(data)
def low(self):
"""Return the low edge of the lowest bin."""
return self._low
def high(self):
"""Return the high edge of the highest bin."""
return self._high
def __repr__(self):
try:
xlabel = " \"%s\"" % self.xlabel
except AttributeError:
xlabel = ""
return "<Histogram %d %g %g%s at 0x%x>" % (len(self.bins), self.low(), self.high(), xlabel, id(self))
def reshape(self, numbins, low=None, high=None, refill=True, warnings=True):
"""Change the bin structure of the histogram and refill its contents.
Arguments:
numbins (int): new number of bins
low (float or `None`): new low edge, or `None` to keep the
old one
high (float or `None`): new high edge, or `None` to keep
the old one
refill (bool): call `refill` after setting the bins
warnings (bool): raise `ContainerException` if `storelimit
< entries`: that is, if the reshaping cannot be performed
without losing data
"""
if low is None: low = self.low()
if high is None: high = self.high()
if warnings:
if self._lenstore is None:
if len(self._store) < self.entries: raise ContainerException, "Cannot reshape a histogram without a full set of stored data"
else:
if self._lenstore < self.entries: raise ContainerException, "Cannot reshape a histogram without a full set of stored data"
self._low, self._high, self._factor = low, high, numbins/float(high - low)
self._binwidth = (high-low)/float(numbins)
lows = numpy.arange(low, high, self._binwidth)
highs = lows + self._binwidth
self.bins = zip(lows, highs)
if refill: self.refill()
def optimize(self, numbins=utilities.binning, ranges=utilities.calcrange_quartile):
"""Optimize the number of bins and/or range of the histogram.
Arguments:
numbins (function, int, or `None`): function that returns
an optimal number of bins, given a dataset, or a simple
number of bins, or `None` to leave the number of bins as it is
ranges (function, (low, high), or `None`): function that
returns an optimal low, high range, given a dataset, or an
explicit low, high tuple, or `None` to leave the ranges as
they are
"""
if numbins is Auto: numbins = utilities.binning
if ranges is Auto: ranges = utilities.calcrange_quartile
# first do the ranges
if ranges is None:
low, high = self.low(), self.high()
elif isinstance(ranges, (tuple, list)) and len(ranges) == 2 and isinstance(ranges[0], (numbers.Number, numpy.number)) and isinstance(ranges[1], (numbers.Number, numpy.number)):
low, high = ranges
elif callable(ranges):
if self._lenstore is None:
if len(self._store) < self.entries: raise ContainerException, "Cannot optimize a histogram without a full set of stored data"
else:
if self._lenstore < self.entries: raise ContainerException, "Cannot optimize a histogram without a full set of stored data"
low, high = ranges(self._store, self.__dict__.get("xlog", False))
else:
raise ContainerException, "The 'ranges' argument must be a function, (low, high), or `None`."
# then do the binning
if numbins is None:
numbins = len(self.bins)
elif isinstance(numbins, (int, long)):
pass
elif callable(numbins):
if self._lenstore is None:
if len(self._store) < self.entries: raise ContainerException, "Cannot optimize a histogram without a full set of stored data"
storecopy = numpy.array(filter(lambda x: low <= x < high, self._store))
else:
if self._lenstore < self.entries: raise ContainerException, "Cannot optimize a histogram without a full set of stored data"
storecopy = self._store[0:self._lenstore]
numbins = numbins(storecopy, low, high)
else:
raise ContainerException, "The 'numbins' argument must be a function, int, or `None`."
self.reshape(numbins, low, high)
def fill(self, values, weights=None, limit=None, fillstore=True):
"""Put one or many values into the histogram.
Arguments:
values (float or list of floats): value or values to put
into the histogram
weights (float, list of floats, or `None`): weights for
each value; all have equal weight if `weights = None`.
limit (int or `None`): maximum number of values, weights to
put into the histogram
fillstore (bool): also fill the histogram's store (if any)
Behavior:
`itertools.izip` is used to loop over values and weights,
filling the histogram. If values and weights have
different lengths, the filling operation is truncated
to the shorter list.
Histogram weights are usually either 1 or 1/(value uncertainty)**2.
"""
# handle the case of being given only one value
if isinstance(values, (numbers.Number, numpy.number)):
values = [values]
if weights is None:
weights = numpy.ones(len(values), numpy.float)
elif isinstance(weights, (numbers.Number, numpy.number)):
weights = [weights]
for counter, (value, weight) in enumerate(itertools.izip(values, weights)):
if limit is not None and counter >= limit: break
if fillstore:
if self._lenstore is None:
self._store.append(value)
self._weights.append(weight)
elif self._lenstore < self.storelimit:
self._store[self._lenstore] = value
self._weights[self._lenstore] = weight
self._lenstore += 1
index = int(math.floor((value - self._low)*self._factor))
if index < 0:
self.underflow += weight
elif index >= len(self.bins):
self.overflow += weight
else:
self.values[index] += weight
self._sumx[index] += weight * value
self.entries += 1
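### illustration (not part of the original source): filling a uniform
### histogram and reading it back; storelimit=None keeps every value so
### the bins can later be reshaped or optimized.
# h = Histogram(5, 0., 10., storelimit=None)
# h.fill([1., 3., 3., 7., 12.])   # 12. lands in overflow
# h.values                        # array([ 1.,  2.,  0.,  1.,  0.])
# h.overflow                      # 1.0
# h.reshape(10)                   # rebin using the stored data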
def histogramInteger(low, high, **kwds):
"""Create a histogram of all integers between low and high (inclusive).
Keyword arguments are passed to the Histogram class constructor.
"""
return Histogram((high - low + 1), low - 0.5, high + 0.5, **kwds)
class HistogramNonUniform(HistogramAbstract):
"""Represent a 1-D histogram with uniform bins.
Arguments:
bins (list of `(low, high)` pairs): user-defined bin intervals
storelimit (int or `None`): maximum number of values to store,
so that the histogram bins can be redrawn; `None` means no
limit
linewidth (float): scale factor for the line used to draw the
histogram border
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of the boundary
line of the histogram area; no line if `None`
fillcolor (string, color, or `None`): fill color of the
histogram area; hollow if `None`
gap (float): space drawn between bins, as a fraction of the bin
width
`**frameargs`: keyword arguments for the coordinate frame
Public members:
values (numpy array of floats): contents of each bin (y axis),
has the same length as `bins`
entries (int): unweighted number of entries accumulated so far
underflow (float): number of values encountered that are less
than all bin ranges
overflow (float): number of values encountered that are greater
than all bin ranges
inflow (float): number of values encountered that are between
bins (if there are any gaps between user-defined bin intervals)
`bins`, `storelimit`, `linewidth`, `linestyle`, `linecolor`,
`fillcolor`, and frame arguments.
Behavior:
If any bin intervals overlap, values will be entered into the
first of the two overlapping bins.
After construction, do not set the bins directly; use `reshape`
instead.
Setting `linecolor = None` is the only proper way to direct the
graphics backend to not draw a line around the histogram
border.
"""
def __init__(self, bins, data=None, weights=None, storelimit=0, linewidth=1., linestyle="solid", linecolor="black", fillcolor=None, gap=0, **frameargs):
HistogramAbstract.__init__(self, bins, storelimit, linewidth, linestyle, linecolor, fillcolor, gap, **frameargs)
self._low, self._high = None, None
for low, high in self.bins:
if self._low is None or low < self._low:
self._low = low
if self._high is None or high > self._high:
self._high = high
if data is not None:
if weights is not None:
self.fill(data, weights)
else:
self.fill(data)
def low(self):
"""Return the low edge of the lowest bin."""
return self._low
def high(self):
"""Return the high edge of the highest bin."""
return self._high
def __repr__(self):
try:
xlabel = " \"%s\"" % self.xlabel
except AttributeError:
xlabel = ""
return "<HistogramNonUniform %d%s at 0x%x>" % (len(self.bins), xlabel, id(self))
def reshape(self, bins, refill=True, warnings=True):
"""Change the bin structure of the histogram and refill its contents.
Arguments:
bins (list of `(low, high)` pairs): user-defined bin intervals
refill (bool): call `refill` after setting the bins
warnings (bool): raise `ContainerException` if `storelimit
< entries`: that is, if the reshaping cannot be performed
without losing data
"""
if warnings:
if self._lenstore is None:
if len(self._store) < self.entries: raise ContainerException, "Cannot reshape a histogram without a full set of stored data"
else:
if self._lenstore < self.entries: raise ContainerException, "Cannot reshape a histogram without a full set of stored data"
self.bins = bins
if refill: self.refill()
def fill(self, values, weights=None, limit=None, fillstore=True):
"""Put one or many values into the histogram.
Arguments:
values (float or list of floats): value or values to put
into the histogram
weights (float, list of floats, or `None`): weights for
each value; all have equal weight if `weights = None`.
limit (int or `None`): maximum number of values, weights to
put into the histogram
fillstore (bool): also fill the histogram's store (if any)
Behavior:
`itertools.izip` is used to loop over values and weights,
filling the histogram. If values and weights have
different lengths, the filling operation is truncated
to the shorter list.
Histogram weights are usually either 1 or 1/(value uncertainty)**2.
"""
# handle the case of being given only one value
if isinstance(values, (numbers.Number, numpy.number)):
values = [values]
if weights is None:
weights = numpy.ones(len(values), numpy.float)
elif isinstance(weights, (numbers.Number, numpy.number)):
weights = [weights]
for counter, (value, weight) in enumerate(itertools.izip(values, weights)):
if limit is not None and counter >= limit: break
if fillstore:
if self._lenstore is None:
self._store.append(value)
self._weights.append(weight)
elif self._lenstore < self.storelimit:
self._store[self._lenstore] = value
self._weights[self._lenstore] = weight
self._lenstore += 1
filled = False
less_than_all = True
greater_than_all = True
for i, (low, high) in enumerate(self.bins):
if low <= value < high:
self.values[i] += weight
self._sumx[i] += weight * value
filled = True
break
elif not (value < low): less_than_all = False
elif not (value >= high): greater_than_all = False
if not filled:
if less_than_all: self.underflow += weight
elif greater_than_all: self.overflow += weight
else: self.inflow += weight
self.entries += 1
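### illustration (not part of the original source): non-uniform binning;
### values falling between the user-defined intervals go to `inflow`.
# h = HistogramNonUniform([(0., 1.), (2., 5.), (5., 10.)])
# h.fill([0.5, 1.5, 3., 7.])
# h.values   # array([ 1.,  1.,  1.])
# h.inflow   # 1.0 (the value 1.5 fell in the gap between bins)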
class HistogramCategorical(HistogramAbstract):
"""Represent a 1-D histogram with categorical bins (a bar chart).
Arguments:
bins (list of strings): names of the categories
storelimit (int or `None`): maximum number of values to store,
so that the histogram bins can be redrawn; `None` means no
limit
linewidth (float): scale factor for the line used to draw the
histogram border
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of the boundary
line of the histogram area; no line if `None`
fillcolor (string, color, or `None`): fill color of the
histogram area; hollow if `None`
gap (float): space drawn between bins, as a fraction of the bin
width
`**frameargs`: keyword arguments for the coordinate frame
Public members:
values (numpy array of floats): contents of each bin (y axis),
has the same length as `bins`
entries (int): unweighted number of entries accumulated so far
inflow (float): number of values encountered that do not belong
to any bins
`bins`, `storelimit`, `linewidth`, `linestyle`, `linecolor`,
`fillcolor`, and frame arguments.
Behavior:
After construction, never change the bins.
Setting `linecolor = None` is the only proper way to direct the
graphics backend to not draw a line around the histogram
border.
"""
def __init__(self, bins, data=None, weights=None, storelimit=0, linewidth=1., linestyle="solid", linecolor="black", fillcolor=None, gap=0.1, **frameargs):
self._catalog = dict(map(lambda (x, y): (y, x), enumerate(bins)))
HistogramAbstract.__init__(self, bins, storelimit, linewidth, linestyle, linecolor, fillcolor, gap, **frameargs)
if data is not None:
if weights is not None:
self.fill(data, weights)
else:
self.fill(data)
def __repr__(self):
try:
xlabel = " \"%s\"" % self.xlabel
except AttributeError:
xlabel = ""
return "<HistogramCategorical %d%s at 0x%x>" % (len(self.bins), xlabel, id(self))
def low(self):
"""Return the effective low edge, with all categories treated as integers (-0.5)."""
return -0.5
def high(self):
"""Return the effective low edge, with all categories treated as integers (numbins - 0.5)."""
return len(self.bins) - 0.5
def top(self, N):
"""Return a simplified histogram containing only the top N values (sorted)."""
pairs = zip(self.bins, self.values)
pairs.sort(lambda a, b: cmp(b[1], a[1]))
othervalue = sum([values for bins, values in pairs[N:]])
bins, values = zip(*pairs[:N])
h = HistogramCategorical(list(bins) + ["other"])
h.values = numpy.array(list(values) + [othervalue])
for name, value in self.__dict__.items():
if name not in ("bins", "values"):
h.__dict__[name] = value
return h
def binorder(self, *neworder):
"""Specify a new order for the bins with a list of string arguments (updating bin values).
All arguments must be the names of existing bins.
If a bin name is missing, it will be deleted!
"""
reverse = dict(map(lambda (x, y): (y, x), enumerate(self.bins)))
indices = []
for name in neworder:
if name not in self.bins:
raise ContainerException, "Not a recognized bin name: \"%s\"." % name
indices.append(reverse[name])
newinflow = 0.
for i, name in enumerate(self.bins):
if name not in neworder:
newinflow += self.values[i]
self.bins = [self.bins[i] for i in indices]
indices = numpy.array(indices)
self.values = self.values[indices]
self._sumx = self._sumx[indices]
self.inflow += newinflow
def fill(self, values, weights=None, limit=None, fillstore=True):
"""Put one or many values into the histogram.
Arguments:
values (float or list of floats): value or values to put
into the histogram
weights (float, list of floats, or `None`): weights for
each value; all have equal weight if `weights = None`.
limit (int or `None`): maximum number of values, weights to
put into the histogram
fillstore (bool): also fill the histogram's store (if any)
Behavior:
`itertools.izip` is used to loop over values and weights,
filling the histogram. If values and weights have
different lengths, the filling operation is truncated
to the shorter list.
Histogram weights are usually either 1 or 1/(value uncertainty)**2.
"""
# handle the case of being given only one value
if isinstance(values, basestring):
values = [values]
if weights is None:
weights = numpy.ones(len(values), numpy.float)
elif isinstance(weights, (numbers.Number, numpy.number)):
weights = [weights]
for counter, (value, weight) in enumerate(itertools.izip(values, weights)):
if limit is not None and counter >= limit: break
try:
value = self._catalog[value]
self.values[value] += weight
self._sumx[value] += weight * value
except KeyError:
value = -1
self.inflow += weight
self.entries += 1
if fillstore:
if self._lenstore is None:
self._store.append(value)
self._weights.append(weight)
elif self._lenstore < self.storelimit:
self._store[self._lenstore] = value
self._weights[self._lenstore] = weight
self._lenstore += 1
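### illustration (not part of the original source): a categorical
### histogram (bar chart); values outside the known categories are
### counted in `inflow`.
# h = HistogramCategorical(["red", "green", "blue"])
# h.fill(["red", "red", "blue", "purple"])
# h.values   # array([ 2.,  0.,  1.])
# h.inflow   # 1.0
# h.top(2)   # keeps the two largest bins plus a combined "other" bin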
######################################################### Scatter plots, with and without error bars, and timeseries
class Scatter(Frame):
"""Represents a scatter of X-Y points, a line graph, and error bars.
Signatures::
Scatter(values, sig, ...)
Scatter(x, y, [ex,] [ey,] [exl,] [eyl,] ...)
Arguments for signature 1:
values (numpy array of N-dimensional points): X-Y points to
draw (with possible error bars)
sig (list of strings): how to interpret each N-dimensional
point, e.g. `('x', 'y', 'ey')` for triplets of x, y, and y
error bars
Arguments for signature 2:
x (list of floats): x values
y (list of floats): y values
ex (list of floats or `None`): symmetric or upper errors in x;
`None` for no x error bars
ey (list of floats or `None`): symmetric or upper errors in y
exl (list of floats or `None`): asymmetric lower errors in x
eyl (list of floats or `None`): asymmetric lower errors in y
Arguments for both signatures:
limit (int or `None`): maximum number of points to draw
(randomly selected if less than total number of points)
calcrange (function): a function that chooses a reasonable range
to plot, based on the data (overruled by `xmin`, `xmax`, etc.)
connector (`None`, "unsorted", "xsort", "ysort"): toggles
whether a line is drawn through all of the visible points, and
whether those points are sorted before drawing the line
marker (string or `None`): symbol to draw at each point; `None`
for no markers (e.g. just lines)
markersize (float): scale factor to resize marker points
markercolor (string, color, or `None`): color of the marker
points; hollow markers if `None`
markeroutline (string, color, or `None`): color of the outline
of each marker; no outline if `None`
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of a line
connecting all points; no line if `None`
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`values`, `sig`, `limit`, `calcrange`, `connector`, `marker`,
`markersize`, `markercolor`, `markeroutline`, `lines`,
`linewidth`, `linestyle`, `linecolor`, and frame arguments.
Behavior:
Points are stored internally as an N-dimensional numpy array of
`values`, with meanings specified by `sig`.
Input points are _copied_, not set by reference, with both
input methods. The set-by-signature method is likely to be
faster for large datasets.
Setting `limit` to a value other than `None` restricts the
number of points to draw in the graphical backend, something
that may be necessary if the number of points is very large. A
random subset is selected when the scatter plot is drawn.
The numerical `limit` refers to the number of points drawn
*within a coordinate frame,* so zooming in will reveal more
points.
Since the input set of points is not guaranteed to be
monotonically increasing in x, a line connecting all points
might cross itself.
Setting `marker = None` is the only proper way to direct the
graphics backend to not draw a marker at each visible point.
Setting `linecolor = None` is the only proper way to direct the
graphics backend to not draw a line connecting all visible points.
Exceptions:
At least `x` and `y` are required.
"""
_not_frameargs = ["sig", "values", "limit", "calcrange", "connector", "marker", "markersize", "markercolor", "markeroutline", "linewidth", "linestyle", "linecolor"]
def __init__(self, values=[], sig=None, x=None, y=None, ex=None, ey=None, exl=None, eyl=None, limit=None, calcrange=utilities.calcrange, connector=None, marker="circle", markersize=1., markercolor="black", markeroutline=None, linewidth=1., linestyle="solid", linecolor="black", **frameargs):
self.limit, self.calcrange = limit, calcrange
self.connector, self.marker, self.markersize, self.markercolor, self.markeroutline, self.linewidth, self.linestyle, self.linecolor = connector, marker, markersize, markercolor, markeroutline, linewidth, linestyle, linecolor
if sig is None:
self.setvalues(x, y, ex, ey, exl, eyl)
else:
self.setbysig(values, sig)
Frame.__init__(self, **frameargs)
def __repr__(self):
if self.limit is None:
return "<Scatter %d (draw all) at 0x%x>" % (len(self.values), id(self))
else:
return "<Scatter %d (draw %d) at 0x%x>" % (len(self.values), self.limit, id(self))
def index(self):
"""Returns a dictionary of sig values ("x", "y", etc.) to `values` index.
Example usage::
scatter.values[0:1000,scatter.index()["ex"]]
returns the first thousand x error bars.
"""
return dict(zip(self.sig, range(len(self.sig))))
def sort(self, key="x"):
"""Sorts the data in values by one of the fields (does not affect graphical output)."""
self.values = self.values[self.values[:,self.index()[key]].argsort(),]
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=None, ylog=None):
if len(self.values) == 0:
self._xlimited_values = numpy.array([], dtype=numpy.float)
self._ylimited_values = numpy.array([], dtype=numpy.float)
self._limited_values = numpy.array([], dtype=numpy.float)
return
index = self.index()
# select elements within the given ranges
mask = numpy.ones(len(self.values), dtype="bool")
x = self.values[:,index["x"]]
y = self.values[:,index["y"]]
def limitx(mask):
if "ex" in index:
numpy.logical_and(mask, (x + abs(self.values[:,index["ex"]]) > xmin), mask)
else:
numpy.logical_and(mask, (x > xmin), mask)
if "exl" in index:
numpy.logical_and(mask, (x - abs(self.values[:,index["exl"]]) < xmax), mask)
elif "ex" in index:
numpy.logical_and(mask, (x - abs(self.values[:,index["ex"]]) < xmax), mask)
else:
numpy.logical_and(mask, (x < xmax), mask)
return mask
def limity(mask):
if "ey" in index:
numpy.logical_and(mask, (y + abs(self.values[:,index["ey"]]) > ymin), mask)
else:
numpy.logical_and(mask, (y > ymin), mask)
if "eyl" in index:
numpy.logical_and(mask, (y - abs(self.values[:,index["eyl"]]) < ymax), mask)
elif "ey" in index:
numpy.logical_and(mask, (y - abs(self.values[:,index["ey"]]) < ymax), mask)
else:
numpy.logical_and(mask, (y < ymax), mask)
return mask
if self.connector == "xsort":
mask = limitx(mask)
self._xlimited_values = (self.values[mask])[:,(index["x"],index["y"])]
self._ylimited_values = numpy.array([], dtype=numpy.float)
mask = limity(mask)
elif self.connector == "ysort":
mask = limity(mask)
self._xlimited_values = numpy.array([], dtype=numpy.float)
self._ylimited_values = (self.values[mask])[:,(index["x"],index["y"])]
mask = limitx(mask)
elif self.connector == "unsorted":
self._xlimited_values = self.values[:,(index["x"],index["y"])]
self._ylimited_values = numpy.array([], dtype=numpy.float)
mask = limitx(mask)
mask = limity(mask)
else:
self._xlimited_values = numpy.array([], dtype=numpy.float)
self._ylimited_values = numpy.array([], dtype=numpy.float)
mask = limitx(mask)
mask = limity(mask)
inrange = self.values[mask]
# select an unbiased subset
if self.limit is not None and self.limit < len(inrange):
self._limited_values = inrange[random.sample(xrange(len(inrange)), self.limit)]
else:
self._limited_values = inrange
if self.limit is not None and self.limit < len(self._xlimited_values):
self._xlimited_values = self._xlimited_values[random.sample(xrange(len(self._xlimited_values)), self.limit)]
if self.limit is not None and self.limit < len(self._ylimited_values):
self._ylimited_values = self._ylimited_values[random.sample(xrange(len(self._ylimited_values)), self.limit)]
# sort the xlimited and ylimited data
if self.connector == "xsort" and len(self._xlimited_values) > 0:
self._xlimited_values = self._xlimited_values[numpy.argsort(self._xlimited_values[:,0])]
if self.connector == "ysort" and len(self._ylimited_values) > 0:
self._ylimited_values = self._ylimited_values[numpy.argsort(self._ylimited_values[:,1])]
def setbysig(self, values, sig=("x", "y")):
"""Sets the values using a signature.
Arguments:
values (numpy array of N-dimensional points): X-Y points to
draw (with possible error bars)
sig (list of strings): how to interpret each N-dimensional
point, e.g. `('x', 'y', 'ey')` for triplets of x, y, and y
error bars
Exceptions:
At least `x` and `y` are required.
"""
if "x" not in sig or "y" not in sig:
raise ContainerException, "Signature must contain \"x\" and \"y\""
self.sig = sig
self.values = numpy.array(values, dtype=numpy.float)
def setvalues(self, x=None, y=None, ex=None, ey=None, exl=None, eyl=None):
"""Sets the values with separate lists.
Arguments:
x (list of floats or strings): x values
y (list of floats or strings): y values
ex (list of floats or `None`): symmetric or upper errors in x;
`None` for no x error bars
ey (list of floats or `None`): symmetric or upper errors in y
exl (list of floats or `None`): asymmetric lower errors in x
eyl (list of floats or `None`): asymmetric lower errors in y
Exceptions:
At least `x` and `y` are required.
"""
if x is None or y is None:
raise ContainerException, "Signature must contain \"x\" and \"y\""
longdim = 0
shortdim = 0
if x is not None:
longdim = max(longdim, len(x))
shortdim += 1
if y is not None:
longdim = max(longdim, len(y))
shortdim += 1
if ex is not None:
longdim = max(longdim, len(ex))
shortdim += 1
if ey is not None:
longdim = max(longdim, len(ey))
shortdim += 1
if exl is not None:
longdim = max(longdim, len(exl))
shortdim += 1
if eyl is not None:
longdim = max(longdim, len(eyl))
shortdim += 1
self.sig = []
self.values = numpy.empty((longdim, shortdim), dtype=numpy.float)
if x is not None:
x = numpy.array(x)
if x.dtype.char == "?":
x = numpy.array(x, dtype=numpy.string_)
if x.dtype.char in numpy.typecodes["Character"] + "Sa":
if len(x) > 0:
unique = numpy.unique(x)
self._xticks = dict(map(lambda (i, val): (float(i+1), val), enumerate(unique)))
strtoval = dict(map(lambda (i, val): (val, float(i+1)), enumerate(unique)))
x = numpy.apply_along_axis(numpy.vectorize(lambda s: strtoval[s]), 0, x)
else:
x = numpy.array([], dtype=numpy.float)
self.values[:,len(self.sig)] = x
self.sig.append("x")
if y is not None:
y = numpy.array(y)
if y.dtype.char == "?":
y = numpy.array(y, dtype=numpy.string_)
if y.dtype.char in numpy.typecodes["Character"] + "Sa":
if len(y) > 0:
unique = numpy.unique(y)
self._yticks = dict(map(lambda (i, val): (float(i+1), val), enumerate(unique)))
strtoval = dict(map(lambda (i, val): (val, float(i+1)), enumerate(unique)))
y = numpy.apply_along_axis(numpy.vectorize(lambda s: strtoval[s]), 0, y)
else:
y = numpy.array([], dtype=numpy.float)
self.values[:,len(self.sig)] = y
self.sig.append("y")
if ex is not None:
self.values[:,len(self.sig)] = ex
self.sig.append("ex")
if ey is not None:
self.values[:,len(self.sig)] = ey
self.sig.append("ey")
if exl is not None:
self.values[:,len(self.sig)] = exl
self.sig.append("exl")
if eyl is not None:
self.values[:,len(self.sig)] = eyl
self.sig.append("eyl")
def append(self, x, y, ex=None, ey=None, exl=None, eyl=None):
"""Append one point to the dataset.
Arguments:
x (float): x value
y (float): y value
ex (float or `None`): symmetric or upper error in x
ey (list of floats or `None`): symmetric or upper error in y
exl (list of floats or `None`): asymmetric lower error in x
eyl (list of floats or `None`): asymmetric lower error in y
Exceptions:
Input arguments must match the signature of the dataset
(`sig`).
Considerations:
This method is provided for convenience; it is more
efficient to input all points at once during
construction.
"""
index = self.index()
oldlen = self.values.shape[0]
oldwidth = self.values.shape[1]
for i in self.sig:
if eval(i) is None:
raise ContainerException, "This %s instance requires %s" % (self.__class__.__name__, i)
newvalues = [0.]*oldwidth
if x is not None: newvalues[index["x"]] = x
if y is not None: newvalues[index["y"]] = y
if ex is not None: newvalues[index["ex"]] = ex
if ey is not None: newvalues[index["ey"]] = ey
if exl is not None: newvalues[index["exl"]] = exl
if eyl is not None: newvalues[index["eyl"]] = eyl
self.values.resize((oldlen+1, oldwidth), refcheck=False)
self.values[oldlen,:] = newvalues
def _strip(self, which, limited=False):
try:
index = self.index()[which]
except KeyError:
raise ContainerException, "The signature doesn't have any \"%s\" variable" % which
if limited: return self._limited_values[:,index]
else: return self.values[:,index]
def x(self, limited=False):
"""Return a 1-D numpy array of x values.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("x", limited)
def y(self, limited=False):
"""Return a 1-D numpy array of y values.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("y", limited)
def ex(self, limited=False):
"""Return a 1-D numpy array of x error bars.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("ex", limited)
def ey(self, limited=False):
"""Return a 1-D numpy array of y error bars.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("ey", limited)
def exl(self, limited=False):
"""Return a 1-D numpy array of x lower error bars.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("exl", limited)
def eyl(self, limited=False):
"""Return a 1-D numpy array of y lower error bars.
Arguments:
limited (bool): if True, only return randomly selected
values (must be called after `_prepare()`)
"""
return self._strip("eyl", limited)
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
x = self.x()
y = self.y()
# if we're plotting logarithmically, only the positive values are relevant for ranges
if xlog or ylog:
mask = numpy.ones(len(self.values), dtype="bool")
if xlog:
numpy.logical_and(mask, (x > 0.), mask)
if ylog:
numpy.logical_and(mask, (y > 0.), mask)
x = x[mask]
y = y[mask]
if len(x) < 2:
if xlog:
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
if ylog:
ymin, ymax = 0.1, 1.
else:
ymin, ymax = 0., 1.
elif callable(self.calcrange):
xmin, xmax = self.calcrange(x, xlog)
ymin, ymax = self.calcrange(y, ylog)
else:
raise ContainerException, "Scatter.calcrange must be a function."
if xmin == xmax:
if xlog:
xmin, xmax = xmin/2., xmax*2.
else:
xmin, xmax = xmin - 0.5, xmax + 0.5
if ymin == ymax:
if ylog:
ymin, ymax = ymin/2., ymax*2.
else:
ymin, ymax = ymin - 0.5, ymax + 0.5
return xmin, ymin, xmax, ymax
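# Example (sketch): a Scatter built with the same x/y keywords that
# TimeSeries passes through below; `view` is the drawing entry point used
# elsewhere in this module's docstrings, and the numbers are made up.
#   >>> s = Scatter(x=[1., 2., 3.], y=[1., 4., 9.])
#   >>> s.append(4., 16.)     # one point at a time (construction is faster)
#   >>> s.ranges()            # data-space bounding box chosen by calcrange
#   >>> view(s)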
class TimeSeries(Scatter):
"""A scatter-plot in which the x axis is interpreted as time strings.
Arguments:
informat (string or `None`): time formatting string for
interpreting x data (see `time documentation
<http://docs.python.org/library/time.html#time.strftime>`_)
outformat (string): time formatting string for plotting
subseconds (bool): if True, interpret ".xxx" at the end of
the string as fractions of a second
t0 (number or time-string): the time from which to start
counting; zero is equivalent to Jan 1, 1970
x (list of strings): time strings for the x axis
y (list of floats): y values
ex (list of floats or `None`): symmetric or upper errors in x
(in seconds); `None` for no x error bars
ey (list of floats or `None`): symmetric or upper errors in y
exl (list of floats or `None`): asymmetric lower errors in x
eyl (list of floats or `None`): asymmetric lower errors in y
limit (int or `None`): maximum number of points to draw
(randomly selected if less than total number of points)
calcrange (function): a function that chooses a reasonable range
to plot, based on the data (overruled by `xmin`, `xmax`, etc.)
connector (`None`, "unsorted", "xsort", "ysort"): toggles
whether a line is drawn through all of the visible points, and
whether those points are sorted before drawing the line
marker (string or `None`): symbol to draw at each point; `None`
for no markers (e.g. just lines)
markersize (float): scale factor to resize marker points
markercolor (string, color, or `None`): color of the marker
points; hollow markers if `None`
markeroutline (string, color, or `None`): color of the outline
of each marker; no outline if `None`
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of a line
connecting all points; no line if `None`
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`informat`, `outformat`, `values`, `sig`, `limit`, `calcrange`,
`connector`, `marker`, `markersize`, `markercolor`,
`markeroutline`, `lines`, `linewidth`, `linestyle`,
`linecolor`, and frame arguments.
Behavior:
Points are stored internally as an N-dimensional numpy array of
`values`, with meanings specified by `sig`.
Input points are _copied_, not set by reference, with both
input methods. The set-by-signature method is likely to be
faster for large datasets.
Setting `limit` to a value other than `None` restricts the
number of points to draw in the graphical backend, something
that may be necessary if the number of points is very large. A
random subset is selected when the scatter plot is drawn.
The numerical `limit` refers to the number of points drawn
*within a coordinate frame,* so zooming in will reveal more
points.
Since the input set of points is not guaranteed to be
        monotonically increasing in x, a line connecting all points
might cross itself.
        Setting `marker = None` is the only proper way to direct the
        graphics backend to not draw a marker at each visible point.
        Setting `linecolor = None` is the only proper way to direct the
        graphics backend to not draw a line connecting all visible points.
Exceptions:
At least `x` and `y` are required.
"""
_not_frameargs = Scatter._not_frameargs + ["informat", "outformat"]
def __init__(self, informat="%Y-%m-%d %H:%M:%S", outformat="%Y-%m-%d %H:%M:%S", subseconds=False, t0=0., x=None, y=None, ex=None, ey=None, exl=None, eyl=None, limit=None, calcrange=utilities.calcrange, connector="xsort", marker=None, markersize=1., markercolor="black", markeroutline=None, linewidth=1., linestyle="solid", linecolor="black", **frameargs):
self.__dict__["informat"] = informat
self.__dict__["outformat"] = outformat
self._subseconds, self._t0 = subseconds, t0
Scatter.__init__(self, x=utilities.fromtimestring(x, informat, subseconds, t0), y=y, ex=ex, ey=ey, exl=exl, eyl=eyl, limit=limit, calcrange=calcrange, connector=connector, marker=marker, markersize=markersize, markercolor=markercolor, markeroutline=markeroutline, linewidth=linewidth, linestyle=linestyle, linecolor=linecolor, **frameargs)
def __repr__(self):
if self.limit is None:
return "<TimeSeries %d (draw all) at 0x%x>" % (len(self.values), id(self))
else:
return "<TimeSeries %d (draw %d) at 0x%x>" % (len(self.values), self.limit, id(self))
def append(self, x, y, ex=None, ey=None, exl=None, eyl=None):
"""Append one point to the dataset.
Arguments:
x (string): x value (a time-string)
y (float): y value
ex (float or `None`): symmetric or upper error in x
            ey (float or `None`): symmetric or upper error in y
            exl (float or `None`): asymmetric lower error in x
            eyl (float or `None`): asymmetric lower error in y
Exceptions:
Input arguments must match the signature of the dataset
(`sig`).
Considerations:
This method is provided for convenience; it is more
efficient to input all points at once during
construction.
"""
Scatter.append(self, utilities.fromtimestring(x, self.informat, self._subseconds, self._t0), y, ex, ey, exl, eyl)
def totimestring(self, timenumbers):
"""Convert a number of seconds or a list of numbers into time string(s).
Arguments:
timenumbers (number or list of numbers): time(s) to be
converted
Behavior:
            If a single number is passed, the return value is a
            single string; if a list of numbers is passed, the return
            value is a list of strings.
Uses this timeseries's `outformat` and `t0` for the conversion.
"""
return utilities.totimestring(timenumbers, self.outformat, self._subseconds, self._t0)
def fromtimestring(self, timestrings):
"""Convert a time string or many time strings into a number(s) of seconds.
Arguments:
            timestrings (string or list of strings): time string(s) to be
converted
Behavior:
            If a single time string is passed, the return value is a
            single number; if a list of strings is passed, the return
            value is a list of numbers.
Uses this timeseries's `informat` and `t0` for the
conversion.
"""
return utilities.fromtimestring(timestrings, self.informat, self._subseconds, self._t0)
def timeticks(self, major, minor, start=None):
"""Set x tick-marks to temporally meaningful values.
Arguments:
major (number): number of seconds interval (may use combinations
of SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, or YEAR constants)
for major ticks (ticks with labels)
minor (number): same for minor ticks (shorter ticks without labels)
start (number, string, or `None`): a time to set the offset
of the tick-marks (use `t0` if `None`)
Behavior:
A "month" is taken to be exactly 31 days and a "year" is
taken to be exactly 365 days. Week markers will only line
up with month markers at `start`.
"""
        if isinstance(start, basestring): start = self.fromtimestring(start)
return utilities.timeticks(major, minor, self.outformat, self._subseconds, self._t0, start)
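# Example (sketch): a TimeSeries from time strings parsed with the default
# `informat`; DAY and HOUR are assumed to be the module-level constants
# mentioned in the timeticks docstring.
#   >>> ts = TimeSeries(x=["2010-01-01 00:00:00", "2010-01-02 12:00:00"],
#   ...                 y=[1., 2.])
#   >>> ts.append("2010-01-04 06:00:00", 3.)
#   >>> ts.timeticks(DAY, 6*HOUR)     # tick positions/labels for the backend
#   >>> view(ts)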
######################################################### ColorField
class ColorField(Frame):
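    """A 2-D grid of values, drawn by mapping each bin's value to a color.
    Arguments:
        xbins (int): number of grid bins in x
        xmin, xmax (numbers): data-space edges of the grid in x
        ybins (int): number of grid bins in y
        ymin, ymax (numbers): data-space edges of the grid in y
        zmin, zmax (numbers or `Auto`): range of values mapped onto the
            color gradient; `Auto` derives the range from the contents
        zlog (bool): if True, map values to colors logarithmically
        components (int): number of values stored per bin (1 for a
            scalar field)
        tocolor (function): maps normalized values to colors; defaults
            to the "rainbow" gradient
        smooth (bool): if True, smooth the field when drawing
        `**frameargs`: keyword arguments for the coordinate frame
    Public members:
        `values`, `xmin`, `xmax`, `ymin`, `ymax`, `zmin`, `zmax`,
        `zlog`, `tocolor`, `smooth`, and frame arguments.
    """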
_not_frameargs = ["values", "zmin", "zmax", "zlog", "components", "tocolor", "smooth"]
def __init__(self, xbins, xmin, xmax, ybins, ymin, ymax, zmin=Auto, zmax=Auto, zlog=False, components=1, tocolor=color.gradients["rainbow"], smooth=False, **frameargs):
        self.xmin, self.xmax, self.ymin, self.ymax, self.zmin, self.zmax, self.zlog, self.tocolor, self.smooth = xmin, xmax, ymin, ymax, zmin, zmax, zlog, tocolor, smooth
if components == 1:
self.values = numpy.zeros((xbins, ybins), numpy.float)
else:
self.values = numpy.zeros((xbins, ybins, components), numpy.float)
Frame.__init__(self, **frameargs)
def __repr__(self):
if self.components() == 1:
return "<ColorField [%d][%d] x=(%g, %g) y=(%g, %g) at 0x%x>" % (self.xbins(), self.ybins(), self.xmin, self.xmax, self.ymin, self.ymax, id(self))
else:
return "<ColorField [%d][%d][%d] x=(%g, %g) y=(%g, %g) at 0x%x>" % (self.xbins(), self.ybins(), self.components(), self.xmin, self.xmax, self.ymin, self.ymax, id(self))
def xbins(self):
return self.values.shape[0]
def ybins(self):
return self.values.shape[1]
def components(self):
if len(self.values.shape) > 2:
return self.values.shape[2]
else:
return 1
def index(self, x, y):
xindex = int(math.floor((x - self.xmin)*self.values.shape[0]/(self.xmax - self.xmin)))
if not (0 <= xindex < self.values.shape[0]):
raise ContainerException, "The value %g is not between xmin=%g and xmax=%g." % (x, self.xmin, self.xmax)
yindex = int(math.floor((y - self.ymin)*self.values.shape[1]/(self.ymax - self.ymin)))
if not (0 <= yindex < self.values.shape[1]):
raise ContainerException, "The value %g is not between ymin=%g and ymax=%g." % (y, self.ymin, self.ymax)
return xindex, yindex
def center(self, i, j):
x = (i + 0.5)*(self.xmax - self.xmin)/float(self.values.shape[0]) + self.xmin
if not (self.xmin <= x <= self.xmax):
raise ContainerException, "The index %d is not between 0 and xbins=%d" % (i, self.values.shape[0])
y = (j + 0.5)*(self.ymax - self.ymin)/float(self.values.shape[1]) + self.ymin
if not (self.ymin <= y <= self.ymax):
raise ContainerException, "The index %d is not between 0 and ybins=%d" % (j, self.values.shape[1])
return x, y
def map(self, func):
ybins = self.ybins()
for i in xrange(self.xbins()):
for j in xrange(ybins):
self.values[i,j] = func(*self.center(i, j))
def remap(self, func):
ybins = self.ybins()
for i in xrange(self.xbins()):
for j in xrange(ybins):
self.values[i,j] = func(*self.center(i, j), old=self.values[i,j])
def zranges(self):
ybins = self.ybins()
components = self.components()
if components == 1:
zmin, zmax = None, None
else:
zmin, zmax = [None]*self.components(), [None]*self.components()
for i in xrange(self.xbins()):
for j in xrange(ybins):
if components == 1:
if zmin is None or self.values[i,j] < zmin: zmin = self.values[i,j]
if zmax is None or self.values[i,j] > zmax: zmax = self.values[i,j]
else:
for k in xrange(components):
if zmin[k] is None or self.values[i,j,k] < zmin[k]: zmin[k] = self.values[i,j,k]
if zmax[k] is None or self.values[i,j,k] > zmax[k]: zmax[k] = self.values[i,j,k]
return zmin, zmax
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
return self.xmin, self.ymin, self.xmax, self.ymax
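# Example (sketch): filling a ColorField by evaluating a function at bin
# centers; the field and the function are made up for illustration.
#   >>> field = ColorField(100, -1., 1., 100, -1., 1.)
#   >>> field.map(lambda x, y: x**2 + y**2)    # radial bullseye
#   >>> field.zranges()                        # min/max of the contents
#   >>> view(field)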
######################################################### Subregions of the plane
class Region(Frame):
"""Represents an enclosed region of the plane.
Signature::
Region([command1[, command2[, command3[, ...]]]], [linewidth=width,] [linestyle=style,] [linecolor=color,] [fillcolor=color,] [**frameargs])
Arguments:
commands (list of RegionCommands): a list of `MoveTo`, `EdgeTo`,
or `ClosePolygon` commands; commands have the same structure as
SVG path data, but may have infinite arguments (to enclose an
unbounded region of the plane)
fillcolor (string or color): fill color of the enclosed region
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`commands`, `fillcolor`, and frame arguments.
"""
_not_frameargs = ["commands", "fillcolor"]
def __init__(self, *commands, **kwds):
self.commands = list(commands)
params = {"fillcolor": "lightblue"}
params.update(kwds)
Frame.__init__(self, **params)
def __repr__(self):
return "<Region (%s commands) at 0x%x>" % (len(self.commands), id(self))
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
xmin, ymin, xmax, ymax = None, None, None, None
for command in self.commands:
if not isinstance(command, RegionCommand):
raise ContainerException, "Commands passed to Region must all be RegionCommands (MoveTo, EdgeTo, ClosePolygon)"
for x, y in command.points():
                if not isinstance(x, mathtools.InfiniteType) and (not xlog or x > 0.):
                    if xmin is None or x < xmin: xmin = x
                    if xmax is None or x > xmax: xmax = x
                if not isinstance(y, mathtools.InfiniteType) and (not ylog or y > 0.):
                    if ymin is None or y < ymin: ymin = y
                    if ymax is None or y > ymax: ymax = y
if xmin is None:
if xlog:
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
if ymin is None:
if ylog:
ymin, ymax = 0.1, 1.
else:
ymin, ymax = 0., 1.
if xmin == xmax:
if xlog:
xmin, xmax = xmin/2., xmax*2.
else:
xmin, xmax = xmin - 0.5, xmax + 0.5
if ymin == ymax:
if ylog:
ymin, ymax = ymin/2., ymax*2.
else:
ymin, ymax = ymin - 0.5, ymax + 0.5
return xmin, ymin, xmax, ymax
class RegionCommand:
def points(self): return []
class MoveTo(RegionCommand):
"""Represents a directive to move the pen to a specified point."""
def __init__(self, x, y):
self.x, self.y = x, y
def __repr__(self):
if isinstance(self.x, (numbers.Number, numpy.number)): x = "%g" % self.x
else: x = repr(self.x)
if isinstance(self.y, (numbers.Number, numpy.number)): y = "%g" % self.y
else: y = repr(self.y)
return "MoveTo(%s, %s)" % (x, y)
def points(self): return [(self.x, self.y)]
class EdgeTo(RegionCommand):
"""Represents a directive to draw an edge to a specified point."""
def __init__(self, x, y):
self.x, self.y = x, y
def __repr__(self):
if isinstance(self.x, (numbers.Number, numpy.number)): x = "%g" % self.x
else: x = repr(self.x)
if isinstance(self.y, (numbers.Number, numpy.number)): y = "%g" % self.y
else: y = repr(self.y)
return "EdgeTo(%s, %s)" % (x, y)
def points(self): return [(self.x, self.y)]
class ClosePolygon(RegionCommand):
"""Represents a directive to close the current polygon."""
def __repr__(self):
return "ClosePolygon()"
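# Example (sketch): a Region covering 0 <= x <= 1 above the line y = 0;
# `Infinity` is assumed to be the mathtools infinite constant that region
# commands may carry.
#   >>> r = Region(MoveTo(0., 0.), EdgeTo(1., 0.), EdgeTo(1., Infinity),
#   ...            EdgeTo(0., Infinity), ClosePolygon(), fillcolor="lightblue")
#   >>> r.ranges()    # infinite coordinates are ignored in the bounding box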
class RegionMap(Frame):
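    """Colors rectangular regions of the plane according to categories.
    Arguments:
        xbins (int): number of grid bins in x
        xmin, xmax (numbers): data-space edges of the grid in x
        ybins (int): number of grid bins in y
        ymin, ymax (numbers): data-space edges of the grid in y
        categories (list): the allowed category labels
        categorizer (function, string, or numpy array): maps a bin to a
            category: a function of the bin center (x, y), a string
            expression in x and y, or an array of indices into
            `categories`
        colors (list or `Auto`): one color per category; `Auto` chooses
            an automatic light-color series
        bordercolor (string, color, or `None`): if not `None`, draw
            boundaries between unlike categories in this color
        `**frameargs`: keyword arguments for the coordinate frame
    Public members:
        `xbins`, `ybins`, `categories`, `categorizer`, `colors`,
        `bordercolor`, and frame arguments.
    """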
_not_frameargs = ["xbins", "ybins", "categories", "categorizer", "colors", "bordercolor"]
def __init__(self, xbins, xmin, xmax, ybins, ymin, ymax, categories, categorizer, colors=Auto, bordercolor=None, **frameargs):
self.xbins, self.xmin, self.xmax, self.ybins, self.ymin, self.ymax, self.categories, self.categorizer, self.colors, self.bordercolor = xbins, xmin, xmax, ybins, ymin, ymax, categories, categorizer, colors, bordercolor
Frame.__init__(self, **frameargs)
def __repr__(self):
return "<RegionMap [%d][%d] x=(%g, %g) y=(%g, %g) at 0x%x>" % (self.xbins, self.ybins, self.xmin, self.xmax, self.ymin, self.ymax, id(self))
def index(self, x, y):
xindex = int(math.floor((x - self.xmin)*self.xbins/(self.xmax - self.xmin)))
if not (0 <= xindex < self.xbins):
raise ContainerException, "The value %g is not between xmin=%g and xmax=%g." % (x, self.xmin, self.xmax)
yindex = int(math.floor((y - self.ymin)*self.ybins/(self.ymax - self.ymin)))
if not (0 <= yindex < self.ybins):
raise ContainerException, "The value %g is not between ymin=%g and ymax=%g." % (y, self.ymin, self.ymax)
return xindex, yindex
def center(self, i, j):
x = (i + 0.5)*(self.xmax - self.xmin)/float(self.xbins) + self.xmin
if not (self.xmin <= x <= self.xmax):
raise ContainerException, "The index %d is not between 0 and xbins=%d" % (i, self.xbins)
y = (j + 0.5)*(self.ymax - self.ymin)/float(self.ybins) + self.ymin
if not (self.ymin <= y <= self.ymax):
raise ContainerException, "The index %d is not between 0 and ybins=%d" % (j, self.ybins)
return x, y
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
return self.xmin, self.ymin, self.xmax, self.ymax
def _compile(self):
if isinstance(self.categorizer, numpy.ndarray) or callable(self.categorizer):
self._categorizer = self.categorizer
else:
self._categorizer = eval("lambda x, y: (%s)" % self.categorizer)
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=False, ylog=False):
self._compile()
if self.colors is Auto:
cols = color.lightseries(len(self.categories), alternating=False)
else:
cols = self.colors
self._colors = {}
ints = {}
counter = 0
for category, col in zip(self.categories, cols):
self._colors[category] = color.RGB(col).ints()
ints[category] = counter
counter += 1
if self.bordercolor is not None:
asarray = numpy.zeros((self.xbins, self.ybins), dtype=numpy.int)
self._values = []
for i in xrange(self.xbins):
row = []
for j in xrange(self.ybins):
if isinstance(self._categorizer, numpy.ndarray):
category = self.categories[self._categorizer[i,j]]
else:
category = self._categorizer(*self.center(i, j))
row.append(self._colors[category])
if self.bordercolor is not None:
asarray[i,j] = ints[category]
self._values.append(row)
if self.bordercolor is not None:
roll1 = numpy.roll(asarray, 1, 0)
roll2 = numpy.roll(asarray, -1, 0)
roll3 = numpy.roll(asarray, 1, 1)
roll4 = numpy.roll(asarray, -1, 1)
mask = numpy.equal(asarray, roll1)
numpy.logical_and(mask, numpy.equal(asarray, roll2), mask)
numpy.logical_and(mask, numpy.equal(asarray, roll3), mask)
numpy.logical_and(mask, numpy.equal(asarray, roll4), mask)
thecolor = color.RGB(self.bordercolor).ints()
for i in xrange(self.xbins):
for j in xrange(self.ybins):
if not mask[i,j]:
self._values[i][j] = thecolor
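# Example (sketch): a two-category RegionMap whose categorizer is a string
# expression evaluated at bin centers (see _compile above); names are made up.
#   >>> rm = RegionMap(100, -1., 1., 100, -1., 1.,
#   ...                categories=["inside", "outside"],
#   ...                categorizer="'inside' if x**2 + y**2 < 1. else 'outside'",
#   ...                bordercolor="black")
#   >>> view(rm)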
######################################################### Curves and functions
class Curve(Frame):
"""Represents a parameterized function.
Arguments:
func (function or string): the function to plot; if callable,
it should take one argument and accept parameters as keywords;
if a string, it should be valid Python code, accepting a
variable name specified by `var`, parameter names to be passed
through `parameters`, and any function in the `math` library
(`cmath` if complex).
varmin, varmax (numbers or `Auto`): nominal range of function input
parameters (dict): parameter name, value pairs to be passed
before plotting
var (string): name of the input variable (string `func` only)
namespace (module, dict, or `None`): names to be used by the
function; for example::
import scipy.special # (sudo apt-get install python-scipy)
curve = Curve("jn(4, x)", namespace=scipy.special)
view(curve, xmin=-20., xmax=20.)
form (built-in constant): if Curve.FUNCTION, `func` is expected
to input x and output y; if Curve.PARAMETRIC, `func` is expected
to input t and output the tuple (x, y); if Curve.COMPLEX, `func`
is expected to output a 2-D point as a complex number
samples (number or `Auto`): number of sample points or `Auto`
for dynamic sampling (_not yet copied over from SVGFig!_)
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color specification for
the curve
`**frameargs`: keyword arguments for the coordinate frame
Public members:
        `func`, `varmin`, `varmax`, `parameters`, `var`, `namespace`,
        `form`, `samples`, `linewidth`, `linestyle`, `linecolor`, and
        frame arguments.
Examples::
>>> c = Curve("sin(x + delta)", 0, 6.28)
>>> c
<Curve x -> sin(x + delta) from 0 to 6.28>
>>> c(0., delta=0.1)
0.099833416646828155
>>> c.parameters = {"delta": 0.1}
>>> view(c)
>>> def f(x, delta=0.):
... return math.sin(x + delta)
...
>>> c = Curve(f, 0, 6.28)
>>> c
<Curve f from 0 to 6.28>
>>> c(0., delta=0.1)
0.099833416646828155
"""
_not_frameargs = ["func", "varmin", "varmax", "parameters", "var", "namespace", "form", "samples", "linewidth", "linestyle", "linecolor", "FUNCTION", "PARAMETRIC", "COMPLEX"]
class CurveType:
def __init__(self, name): self.name = "Curve." + name
def __repr__(self): return self.name
FUNCTION = CurveType("FUNCTION")
PARAMETRIC = CurveType("PARAMETRIC")
COMPLEX = CurveType("COMPLEX")
    def __init__(self, func, varmin=Auto, varmax=Auto, parameters=None, var="x", namespace=None, form=FUNCTION, samples=1000, linewidth=1., linestyle="solid", linecolor="black", **frameargs):
        if parameters is None: parameters = {}   # avoid sharing one mutable default dict across instances
        self.func, self.varmin, self.varmax, self.parameters, self.var, self.namespace, self.form, self.samples, self.linewidth, self.linestyle, self.linecolor = func, varmin, varmax, parameters, var, namespace, form, samples, linewidth, linestyle, linecolor
Frame.__init__(self, **frameargs)
def _compile(self, parameters):
if callable(self.func):
self._func = lambda t: self.func(t, **parameters)
try:
self._func.func_name = self.func.func_name
except AttributeError:
self._func.func_name = "built-in"
else:
if self.form is self.COMPLEX: g = dict(cmath.__dict__)
else: g = dict(math.__dict__)
            # math and cmath lack these important functions; supply them from mathtools
g["erf"] = mathtools.erf
g["erfc"] = mathtools.erfc
if self.namespace is not None:
if isinstance(self.namespace, dict):
g.update(self.namespace)
else:
g.update(self.namespace.__dict__)
g.update(parameters)
self._func = eval("lambda (%s): (%s)" % (self.var, self.func), g)
self._func.func_name = "%s -> %s" % (self.var, self.func)
def __repr__(self):
if callable(self.func):
try:
func_name = self.func.func_name
except AttributeError:
func_name = "built-in"
else:
func_name = "%s -> %s" % (self.var, self.func)
return "<Curve %s>" % func_name
def __call__(self, values, **parameters):
"""Call the function for a set of values and parameters.
Arguments:
values (number or list of numbers): input(s) to the function
parameters (keyword arguments): parameter values for this
set of evaluations
"""
self._compile(parameters)
if isinstance(values, (numbers.Number, numpy.number)):
singleton = True
values = [values]
else:
singleton = False
if self.form is self.FUNCTION:
output = numpy.empty(len(values), dtype=numpy.float)
elif self.form is self.PARAMETRIC:
output = numpy.empty((len(values), 2), dtype=numpy.float)
elif self.form is self.COMPLEX:
output = numpy.empty(len(values), dtype=numpy.complex)
else:
raise ContainerException, "Curve.form must be one of Curve.FUNCTION, Curve.PARAMETRIC, or Curve.COMPLEX."
try:
for i, value in enumerate(values):
output[i] = self._func(value)
except NameError, err:
raise NameError, "%s: are the Curve's parameters missing (or namespace not set)?" % err
if singleton: output = output[0]
return output
def derivative(self, values, epsilon=mathtools.epsilon, **parameters):
"""Numerically calculate derivative for a set of values and parameters.
Arguments:
values (number or list of numbers): input(s) to the function
parameters (keyword arguments): parameter values for this
set of evaluations
"""
self._compile(parameters)
if isinstance(values, (numbers.Number, numpy.number)):
singleton = True
values = [values]
else:
singleton = False
if self.form is self.FUNCTION:
output = numpy.empty(len(values), dtype=numpy.float)
elif self.form is self.PARAMETRIC:
output = numpy.empty((len(values), 2), dtype=numpy.float)
elif self.form is self.COMPLEX:
raise ContainerException, "Curve.derivative not implemented for COMPLEX functions."
else:
raise ContainerException, "Curve.form must be one of Curve.FUNCTION, Curve.PARAMETRIC, or Curve.COMPLEX."
for i, value in enumerate(values):
            up = self._func(value + epsilon)
            down = self._func(value - epsilon)
            output[i] = (up - down)/(2. * epsilon)
if singleton: output = output[0]
return output
def scatter(self, low, high, samples=Auto, xlog=False, **parameters):
"""Create a `Scatter` object from the evaluated function.
Arguments:
            low, high (numbers): domain to sample
            samples (number or `Auto`): number of sample points
xlog (bool): if `form` == `FUNCTION`, distribute the sample
points logarithmically
parameters (keyword arguments): parameter values for this
set of evaluations
"""
        tmp = dict(self.parameters)   # copy, so the stored parameters are not mutated
tmp.update(parameters)
parameters = tmp
if samples is Auto: samples = self.samples
if self.form is self.FUNCTION:
points = numpy.empty((samples, 2), dtype=numpy.float)
if xlog:
step = (math.log(high) - math.log(low))/(samples - 1.)
points[:,0] = numpy.exp(numpy.arange(math.log(low), math.log(high) + 0.5*step, step))
else:
step = (high - low)/(samples - 1.)
points[:,0] = numpy.arange(low, high + 0.5*step, step)
points[:,1] = self(points[:,0], **parameters)
elif self.form is self.PARAMETRIC:
step = (high - low)/(samples - 1.)
points = self(numpy.arange(low, high + 0.5*step, step), **parameters)
elif self.form is self.COMPLEX:
step = (high - low)/(samples - 1.)
tmp = self(numpy.arange(low, high + 0.5*step, step), **parameters)
points = numpy.empty((samples, 2), dtype=numpy.float)
for i, value in enumerate(tmp):
points[i] = value.real, value.imag
else: raise ContainerException, "Curve.form must be one of Curve.FUNCTION, Curve.PARAMETRIC, or Curve.COMPLEX."
return Scatter(points, ("x", "y"), limit=None, calcrange=utilities.calcrange, connector="unsorted", marker=None, lines=True, linewidth=self.linewidth, linestyle=self.linestyle, linecolor=self.linecolor, **self._frameargs())
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=False, ylog=False):
if xmin in (None, Auto) and xmax in (None, Auto):
if xlog:
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
elif xmin is None:
if xlog:
xmin = xmax / 2.
else:
xmin = xmax - 1.
elif xmax is None:
if xlog:
xmax = xmin * 2.
else:
xmax = xmin + 1.
if self.form is self.FUNCTION:
if self.varmin is not Auto: xmin = self.varmin
if self.varmax is not Auto: xmax = self.varmax
self._scatter = self.scatter(xmin, xmax, self.samples, xlog, **self.parameters)
else:
varmin = self.varmin
varmax = self.varmax
if varmin is Auto: varmin = 0.
if varmax is Auto: varmax = 1.
self._scatter = self.scatter(varmin, varmax, self.samples, xlog, **self.parameters)
def ranges(self, xlog=False, ylog=False):
"""Return a data-space bounding box as `xmin, ymin, xmax, ymax`.
Arguments:
xlog (bool): requesting a logarithmic x axis (negative
and zero-valued contents are ignored)
ylog (bool): requesting a logarithmic y axis
"""
if getattr(self, "_scatter", None) is not None:
return self._scatter.ranges(xlog=xlog, ylog=ylog)
else:
self._prepare(xlog=xlog)
output = self._scatter.ranges(xlog=xlog, ylog=ylog)
self._scatter = None
return output
def objective(self, data, parnames, method=Auto, exclude=Auto, centroids=False):
"""Return an objective function whose minimum represents a
best fit to a given dataset.
Arguments:
data (`Histogram` or `Scatter`): the data to fit
parnames (list of strings): names of the parameters
method (function or `Auto`): a function that will be called
for each data point to calculate the final value of the
objective function; examples:
`lambda f, x, y: (f - y)**2` chi^2 for data without uncertainties
`lambda f, x, y, ey: (f - y)**2/ey**2` chi^2 with uncertainties
If `method` is `Auto`, an appropriate chi^2 function will
be used.
exclude (function, `Auto`, or `None`): a function that will
be called for each data point to determine whether to
exclude the point; `Auto` excludes only zero values and
`None` excludes nothing
centroids (bool): use centroids of histogram, rather than
centers
"""
if isinstance(data, Histogram):
if isinstance(data, HistogramCategorical):
raise ContainerException, "A fit to a categorical histogram is not meaningful."
            if exclude is Auto:
                if method is Auto:
                    exclude = lambda x, y: y == 0.
                else:
                    exclude = lambda x, y: False
            elif exclude is None:
                exclude = lambda x, y: False
self._exclude = exclude
if method is Auto:
method = lambda f, x, y: (f - y)**2/abs(y)
values = numpy.empty((len(data.bins), 2), dtype=numpy.float)
if centroids: values[:,0] = data.centroids()
else: values[:,0] = data.centers()
values[:,1] = data.values
return eval("lambda %s: sum([method(f, x, y) for f, (x, y) in itertools.izip(curve(values[:,0], **{%s}), values) if not exclude(x, y)])" % (", ".join(parnames), ", ".join(["\"%s\": %s" % (x, x) for x in parnames])), {"method": method, "itertools": itertools, "curve": self, "values": values, "exclude": exclude})
elif isinstance(data, Scatter):
if "ey" in data.sig and "eyl" in data.sig:
if method is Auto:
method = lambda f, x, y, ey, eyl: ((f - y)**2/eyl**2 if f < y else (f - y)**2/ey**2)
if exclude is Auto:
exclude = lambda x, y, ey, eyl: eyl == 0. or ey == 0.
elif exclude is None:
exclude = lambda x, y, ey, eyl: False
elif "ey" in data.sig:
if method is Auto:
method = lambda f, x, y, ey: (f - y)**2/ey**2
if exclude is Auto:
exclude = lambda x, y, ey: ey == 0.
elif exclude is None:
exclude = lambda x, y, ey: False
else:
if method is Auto:
method = lambda f, x, y: (f - y)**2
if exclude is Auto or exclude is None:
exclude = lambda x, y: False
self._exclude = exclude
index = data.index()
if "ey" in data.sig and "eyl" in data.sig:
values = numpy.empty((len(data.values), 4))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
values[:,2] = data.values[:,index["ey"]]
values[:,3] = data.values[:,index["eyl"]]
return eval("lambda %s: sum([method(f, x, y, ey, eyl) for f, (x, y, ey, eyl) in itertools.izip(curve(values[:,0], **{%s}), values) if not exclude(x, y, ey, eyl)])" % (", ".join(parnames), ", ".join(["\"%s\": %s" % (x, x) for x in parnames])), {"method": method, "itertools": itertools, "curve": self, "values": values, "exclude": exclude})
elif "ey" in data.sig:
values = numpy.empty((len(data.values), 3))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
values[:,2] = data.values[:,index["ey"]]
return eval("lambda %s: sum([method(f, x, y, ey) for f, (x, y, ey) in itertools.izip(curve(values[:,0], **{%s}), values) if not exclude(x, y, ey)])" % (", ".join(parnames), ", ".join(["\"%s\": %s" % (x, x) for x in parnames])), {"method": method, "itertools": itertools, "curve": self, "values": values, "exclude": exclude})
else:
values = numpy.empty((len(data.values), 2))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
return eval("lambda %s: sum([method(f, x, y) for f, (x, y) in itertools.izip(curve(values[:,0], **{%s}), values) if not exclude(x, y)])" % (", ".join(parnames), ", ".join(["\"%s\": %s" % (x, x) for x in parnames])), {"method": method, "itertools": itertools, "curve": self, "values": values, "exclude": exclude})
else:
raise ContainerException, "Data for Curve.objective must be a Histogram or a Scatter plot."
def fit(self, data, parameters=Auto, sequence=[("migrad",)], method=Auto, exclude=Auto, centroids=False, **fitter_arguments):
"""Fit this curve to a given dataset, updating its `parameters` and creating a `minimizer` member.
Arguments:
data (`Histogram` or `Scatter`): the data to fit
parameters (dict of strings -> values): the initial
parameters for the fit
sequence (list of (string, arg, arg)): sequence of Minuit
commands to call, with optional arguments
method (function or `Auto`): a function that will be called
for each data point to calculate the final value of the
objective function; examples:
`lambda f, x, y: (f - y)**2` chi^2 for data without uncertainties
`lambda f, x, y, ey: (f - y)**2/ey**2` chi^2 with uncertainties
If `method` is `Auto`, an appropriate chi^2 function will
be used.
exclude (function, `Auto`, or `None`): a function that will
be called for each data point to determine whether to
exclude the point; `Auto` excludes only zero values and
`None` excludes nothing
centroids (bool): use centroids of histogram, rather than
centers
Keyword arguments:
Keyword arguments will be passed to the Minuit object as member data.
"""
if parameters is Auto: parameters = self.parameters
self.minimizer = minuit.Minuit(self.objective(data, parameters.keys(), method=method, exclude=exclude, centroids=centroids))
for name, value in fitter_arguments.items():
            setattr(self.minimizer, name, value)
self.minimizer.values = parameters
# this block is just to set ndf (with all exclusions applied)
ndf = 0
if isinstance(data, Histogram):
if isinstance(data, HistogramCategorical):
raise ContainerException, "A fit to a categorical histogram is not meaningful."
values = numpy.empty((len(data.bins), 2), dtype=numpy.float)
if centroids: values[:,0] = data.centroids()
else: values[:,0] = data.centers()
values[:,1] = data.values
for x, y in values:
if not self._exclude(x, y):
ndf += 1
elif isinstance(data, Scatter):
index = data.index()
if "ey" in data.sig and "eyl" in data.sig:
values = numpy.empty((len(data.values), 4))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
values[:,2] = data.values[:,index["ey"]]
values[:,3] = data.values[:,index["eyl"]]
for x, y, ey, eyl in values:
if not self._exclude(x, y, ey, eyl):
ndf += 1
elif "ey" in data.sig:
values = numpy.empty((len(data.values), 3))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
values[:,2] = data.values[:,index["ey"]]
for x, y, ey in values:
if not self._exclude(x, y, ey):
ndf += 1
else:
values = numpy.empty((len(data.values), 2))
values[:,0] = data.values[:,index["x"]]
values[:,1] = data.values[:,index["y"]]
for x, y in values:
if not self._exclude(x, y):
ndf += 1
else:
raise ContainerException, "Data for Curve.objective must be a Histogram or a Scatter plot."
ndf -= len(parameters)
# end block to set ndf
try:
            for command in sequence:
                name = command[0]
                args = command[1:]
                # call the Minuit command directly rather than round-tripping arguments through strings
                getattr(self.minimizer, name)(*args)
except Exception as tmp:
self.parameters = self.minimizer.values
self.chi2 = self.minimizer.fval
self.ndf = ndf
self.normalizedChi2 = (self.minimizer.fval / float(self.ndf) if self.ndf > 0 else -1.)
            raise
self.parameters = self.minimizer.values
self.chi2 = self.minimizer.fval
self.ndf = ndf
self.normalizedChi2 = (self.minimizer.fval / float(self.ndf) if self.ndf > 0 else -1.)
# reporting results after fitting
def round_errpair(self, parname, n=2):
"""Round a parameter and its uncertainty to n significant figures in
the uncertainty (default is two)."""
if getattr(self, "minimizer", None) is None:
raise ContainerException, "Curve.round_errpair can only be called after fitting."
return mathtools.round_errpair(self.minimizer.values[parname], self.minimizer.errors[parname], n=n)
def str_errpair(self, parname, n=2):
"""Round a number and its uncertainty to n significant figures in the
uncertainty (default is two) and return the result as a string."""
if getattr(self, "minimizer", None) is None:
raise ContainerException, "Curve.str_errpair can only be called after fitting."
return mathtools.str_errpair(self.minimizer.values[parname], self.minimizer.errors[parname], n=n)
def unicode_errpair(self, parname, n=2):
"""Round a number and its uncertainty to n significant figures in the
uncertainty (default is two) and return the result joined by a unicode
plus-minus sign."""
if getattr(self, "minimizer", None) is None:
raise ContainerException, "Curve.unicode_errpair can only be called after fitting."
return mathtools.unicode_errpair(self.minimizer.values[parname], self.minimizer.errors[parname], n=n)
def expr(self, varrepl=None, sigfigs=2):
if callable(self.func):
raise ContainerException, "Curve.expr only works for string-based functions."
if getattr(self, "minimizer", None) is None:
raise ContainerException, "Curve.expr can only be called after fitting."
output = self.func[:]
for name, value in self.minimizer.values.items():
if sigfigs is None:
value = ("%g" % value)
else:
value = mathtools.str_sigfigs(value, sigfigs)
output = re.sub(r"\b%s\b" % name, value, output)
if varrepl is not None:
output = re.sub(r"\b%s\b" % self.var, varrepl, output)
return output
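# Example (sketch): fitting a string-based Curve to a histogram; `fit`
# drives the minuit minimizer and fills `parameters`, `chi2`, `ndf`, and
# `normalizedChi2`. The histogram contents and starting values are made up.
#   >>> h = Histogram(50, -5., 5., data)    # `data` is a hypothetical array
#   >>> c = Curve("N*exp(-(x - mu)**2/(2.*sigma**2))")
#   >>> c.fit(h, parameters={"N": 100., "mu": 0., "sigma": 1.})
#   >>> c.expr()                     # fitted formula with numbers substituted
#   >>> c.unicode_errpair("mu")      # value +- error, rounded consistently
#   >>> view(Overlay(h, c, frame=0))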
######################################################### Grids, horiz/vert lines, annotations
class Line(Frame):
"""Represents a line drawn between two points (one of which may be at infinity).
Arguments:
x1, y1 (numbers): a point; either coordinate can be Infinity or
multiples of Infinity
x2, y2 (numbers): another point; either coordinate can be
Infinity or multiples of Infinity
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string or color): color specification for grid line(s)
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`x1`, `y1`, `x2`, `y2`, `linewidth`, `linestyle`, `linecolor`,
and frame arguments.
"""
_not_frameargs = ["x1", "y1", "x2", "y2", "linewidth", "linestyle", "linecolor"]
def __init__(self, x1, y1, x2, y2, linewidth=1., linestyle="solid", linecolor="black", **frameargs):
self.x1, self.y1, self.x2, self.y2, self.linewidth, self.linestyle, self.linecolor = x1, y1, x2, y2, linewidth, linestyle, linecolor
Frame.__init__(self, **frameargs)
def __repr__(self):
if isinstance(self.x1, mathtools.InfiniteType): x1 = repr(self.x1)
else: x1 = "%g" % self.x1
if isinstance(self.y1, mathtools.InfiniteType): y1 = repr(self.y1)
else: y1 = "%g" % self.y1
if isinstance(self.x2, mathtools.InfiniteType): x2 = repr(self.x2)
else: x2 = "%g" % self.x2
if isinstance(self.y2, mathtools.InfiniteType): y2 = repr(self.y2)
else: y2 = "%g" % self.y2
return "<Line %s %s %s %s at 0x%x>" % (x1, y1, x2, y2, id(self))
def ranges(self, xlog=False, ylog=False):
if (isinstance(self.x1, mathtools.InfiniteType) or isinstance(self.y1, mathtools.InfiniteType) or (getattr(self, "xlog", False) and self.x1 <= 0.) or (getattr(self, "ylog", False) and self.y1 <= 0.)) and \
(isinstance(self.x2, mathtools.InfiniteType) or isinstance(self.y2, mathtools.InfiniteType) or (getattr(self, "xlog", False) and self.x2 <= 0.) or (getattr(self, "ylog", False) and self.y2 <= 0.)):
if getattr(self, "xlog", False):
xmin, xmax = 0.1, 1.
else:
xmin, xmax = 0., 1.
if getattr(self, "ylog", False):
ymin, ymax = 0.1, 1.
else:
ymin, ymax = 0., 1.
return xmin, ymin, xmax, ymax
elif isinstance(self.x1, mathtools.InfiniteType) or isinstance(self.y1, mathtools.InfiniteType) or (getattr(self, "xlog", False) and self.x1 <= 0.) or (getattr(self, "ylog", False) and self.y1 <= 0.):
singlepoint = (self.x2, self.y2)
elif isinstance(self.x2, mathtools.InfiniteType) or isinstance(self.y2, mathtools.InfiniteType) or (getattr(self, "xlog", False) and self.x2 <= 0.) or (getattr(self, "ylog", False) and self.y2 <= 0.):
singlepoint = (self.x1, self.y1)
else:
return min(self.x1, self.x2), min(self.y1, self.y2), max(self.x1, self.x2), max(self.y1, self.y2)
# handle singlepoint
if getattr(self, "xlog", False):
xmin, xmax = singlepoint[0]/2., singlepoint[0]*2.
else:
xmin, xmax = singlepoint[0] - 1., singlepoint[0] + 1.
if getattr(self, "ylog", False):
ymin, ymax = singlepoint[1]/2., singlepoint[1]*2.
else:
ymin, ymax = singlepoint[1] - 1., singlepoint[1] + 1.
return xmin, ymin, xmax, ymax
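# Example (sketch): a dashed horizontal line across the whole frame;
# `Infinity` is assumed to be the mathtools infinite constant, and `plot`
# is a hypothetical drawable to overlay on.
#   >>> zero = Line(-Infinity, 0., Infinity, 0., linestyle="dashed")
#   >>> view(Overlay(plot, zero, frame=0))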
class Grid(Frame):
"""Represents one or more horizontal/vertical lines or a whole grid.
Arguments:
horiz (list of numbers, function, or `None`): a list of values
at which to draw horizontal lines, a function `f(a, b)` taking
an interval and providing such a list, or `None` for no
horizontal lines.
vert (list of numbers, function, or `None`): same for vertical
lines
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string or color): color specification for grid line(s)
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`horiz`, `vert`, `linewidth`, `linestyle`, `linecolor`, and
frame arguments.
Considerations:
The `regular` utility provides functions suitable for `horiz`
and `vert`.
"""
_not_frameargs = ["horiz", "vert", "linewidth", "linestyle", "linecolor"]
def __init__(self, horiz=None, vert=None, linewidth=1., linestyle="dotted", linecolor="grey", **frameargs):
self.horiz, self.vert, self.linewidth, self.linestyle, self.linecolor = horiz, vert, linewidth, linestyle, linecolor
Frame.__init__(self, **frameargs)
def __repr__(self):
return "<Grid %s %s at 0x%x>" % (repr(self.horiz), repr(self.vert), id(self))
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=None, ylog=None):
try:
self._horiz = []
for i in self.horiz:
self._horiz.append(i)
except TypeError:
if callable(self.horiz):
try:
self._horiz = self.horiz(ymin, ymax)
except TypeError:
raise ContainerException, "If Grid.horiz is a function, it must take two endpoints and return a list of values"
elif self.horiz is None:
self._horiz = []
else:
raise ContainerException, "Grid.horiz must be None, a list of values, or a function returning a list of values (given endpoints)"
try:
self._vert = []
for i in self.vert:
self._vert.append(i)
except TypeError:
if callable(self.vert):
try:
self._vert = self.vert(xmin, xmax)
except TypeError:
raise ContainerException, "If Grid.vert is a function, it must take two endpoints and return a list of values"
elif self.vert is None:
self._vert = []
else:
raise ContainerException, "Grid.vert must be None, a list of values, or a function returning a list of values (given endpoints)"
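# Example (sketch): explicit horizontal grid lines plus a rule for the
# vertical ones; `regular` is assumed to be the utilities helper mentioned
# in the docstring, returning a function of the interval endpoints.
#   >>> g = Grid(horiz=[0., 0.5, 1.], vert=utilities.regular(0.25))
#   >>> view(Overlay(plot, g, frame=0))    # `plot` is hypothetical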
######################################################### User-defined plot legend
class Legend(Frame):
"""Represents a table of information to overlay on a plot.
Arguments:
fields (list of lists): table data; may include text, numbers,
and objects with line, fill, or marker styles
colwid (list of numbers): column widths as fractions of the
whole width (minus padding); e.g. [0.5, 0.25, 0.25]
justify (list of "l", "m", "r"): column justification: "l" for
left, "m" or "c" for middle, and "r" for right
x, y (numbers): position of the legend box (use with
`textanchor`) in units of frame width; e.g. (1, 1) is the
top-right corner, (0, 0) is the bottom-left corner
width (number): width of the legend box in units of frame width
height (number or `Auto`): height of the legend box in units of
frame width or `Auto` to calculate from the number of rows,
`baselineskip`, and `padding`
anchor (2-character string): placement of the legend box
relative to `x`, `y`; first character is "t" for top, "m" or
"c" for middle, and "b" for bottom, second character is
"l" for left, "m" or "c" for middle, and "r" for right
textscale (number): scale factor for text (1 is normal)
padding (number): extra space between the legend box and its
contents, as a fraction of the whole SVG document
baselineskip (number): space to skip between rows of the table,
as a fraction of the whole SVG document
linewidth (float): scale factor to resize legend box line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): color of the boundary
around the legend box; no line if `None`
fillcolor (string, color, or `None`): fill color of the legend
box; hollow if `None`
`**frameargs`: keyword arguments for the coordinate frame
Public members:
`fields`, `colwid`, `justify`, `x`, `y`, `width`, `height`,
`anchor`, `textscale`, `padding`, `baselineskip`, `linewidth`,
`linestyle`, `linecolor`, `fillcolor`, and frame arguments.
Considerations:
`Legend` is a drawable data container on its own, not attached
to any histogram or scatter plot. To overlay a `Legend` on
another plot, use the `Overlay` command, and be sure to point
`Overlay.frame` to the desired plot::
Overlay(plot, legend, frame=0)
Legends will always be drawn _above_ the frame (and therefore
also above all other plots in an overlay).
"""
_not_frameargs = ["colwid", "justify", "x", "y", "width", "height", "anchor", "textscale", "padding", "baselineskip", "linewidth", "linestyle", "linecolor", "fillcolor"]
def __init__(self, fields, colwid=Auto, justify="l", x=1., y=1., width=0.4, height=Auto, anchor="tr", textscale=1., padding=0.01, baselineskip=0.035, linewidth=1., linestyle="solid", linecolor="black", fillcolor="white"):
self.fields, self.colwid, self.justify, self.x, self.y, self.width, self.height, self.anchor, self.textscale, self.padding, self.baselineskip, self.linewidth, self.linestyle, self.linecolor, self.fillcolor = fields, colwid, justify, x, y, width, height, anchor, textscale, padding, baselineskip, linewidth, linestyle, linecolor, fillcolor
def __repr__(self):
return "<Legend %dx%d>" % self.dimensions()
def dimensions(self):
"""Determine the number of rows and columns in `fields`."""
rows = 1
columns = 1
if not isinstance(self.fields, basestring):
iterable = False
try:
iter(self.fields)
iterable = True
except TypeError: pass
if iterable:
rows -= 1
for line in self.fields:
if not isinstance(line, basestring):
length = 0
try:
for cell in line:
length += 1
except TypeError: pass
if length > columns: columns = length
rows += 1
return rows, columns
def _prepare(self, xmin=None, ymin=None, xmax=None, ymax=None, xlog=None, ylog=None):
self._rows, self._columns = self.dimensions()
# make _fields a rectangular array with None in missing fields
self._fields = [[None for j in range(self._columns)] for i in range(self._rows)]
if isinstance(self.fields, basestring):
self._fields[0][0] = self.fields
else:
iterable = False
try:
iter(self.fields)
iterable = True
except TypeError: pass
if not iterable:
self._fields[0][0] = self.fields
else:
for i, line in enumerate(self.fields):
if isinstance(line, basestring):
self._fields[i][0] = line
else:
lineiterable = False
try:
iter(line)
lineiterable = True
except TypeError: pass
if not lineiterable:
self._fields[i][0] = line
else:
for j, cell in enumerate(line):
self._fields[i][j] = cell
# take user input if available, fill in what's remaining by evenly splitting the difference
if self.colwid is Auto:
self._colwid = [1./self._columns]*self._columns
else:
self._colwid = list(self.colwid[:self._columns])
if len(self._colwid) < self._columns:
if sum(self._colwid) < 1.:
width = (1. - sum(self._colwid)) / (self._columns - len(self._colwid))
self._colwid.extend([width]*(self._columns - len(self._colwid)))
else:
# or put in typical values if we have to normalize anyway
average = float(sum(self._colwid))/len(self._colwid)
self._colwid.extend([average]*(self._columns - len(self._colwid)))
# normalize: sum of colwid = 1
total = 1.*sum(self._colwid)
for i in range(len(self._colwid)):
self._colwid[i] /= total
# if we only get one directive, repeat for all self._columns
if self.justify is Auto or self.justify == "l":
self._justify = ["l"]*self._columns
elif self.justify == "m" or self.justify == "c":
self._justify = ["m"]*self._columns
elif self.justify == "r":
self._justify = ["r"]*self._columns
else:
# take all user input and fill in whatever's missing with "l"
self._justify = list(self.justify[:self._columns])
if len(self._justify) < self._columns:
self._justify.extend(["l"]*(self._columns - len(self._justify)))
self._anchor = [None, None]
if len(self.anchor) == 2:
if self.anchor[0] == "t": self._anchor[0] = "t"
if self.anchor[0] in ("m", "c"): self._anchor[0] = "m"
if self.anchor[0] == "b": self._anchor[0] = "b"
if self.anchor[1] == "l": self._anchor[1] = "l"
if self.anchor[1] in ("m", "c"): self._anchor[1] = "m"
if self.anchor[1] == "r": self._anchor[1] = "r"
# try the letters backward
if self._anchor[0] is None or self._anchor[1] is None:
self._anchor = [None, None]
if self.anchor[1] == "t": self._anchor[0] = "t"
if self.anchor[1] in ("m", "c"): self._anchor[0] = "m"
if self.anchor[1] == "b": self._anchor[0] = "b"
if self.anchor[0] == "l": self._anchor[1] = "l"
if self.anchor[0] in ("m", "c"): self._anchor[1] = "m"
if self.anchor[0] == "r": self._anchor[1] = "r"
if self._anchor[0] is None or self._anchor[1] is None:
raise ContainerException, "Legend.anchor not recognized: \"%s\"" % self.anchor
class Style:
"""Represents a line, fill, and marker style, but is not drawable.
Arguments:
linewidth (float): scale factor to resize line width
linestyle (tuple or string): "solid", "dashed", "dotted", or a
tuple of numbers representing a dash-pattern
linecolor (string, color, or `None`): stroke color
fillcolor (string, color, or `None`): fill color
marker (string or `None`): symbol at each point
markersize (float): scale factor to resize marker points
markercolor (string, color, or `None`): fill color for markers
markeroutline (string, color, or `None`): stroke color for markers
Public members:
`linewidth`, `linestyle`, `linecolor`, `fillcolor`, `marker`,
`markersize`, `markercolor`, and `markeroutline`.
Purpose:
Can be used in place of a real Histogram/Scatter/etc. in Legend.
"""
def __init__(self, linewidth=1., linestyle="solid", linecolor=None, fillcolor=None, marker=None, markersize=1., markercolor="black", markeroutline=None):
self.linewidth, self.linestyle, self.linecolor, self.fillcolor, self.marker, self.markersize, self.markercolor, self.markeroutline = linewidth, linestyle, linecolor, fillcolor, marker, markersize, markercolor, markeroutline
def __repr__(self):
attributes = [""]
if self.linecolor is not None:
attributes.append("linewidth=%g" % self.linewidth)
attributes.append("linestyle=%s" % str(self.linestyle))
attributes.append("linecolor=%s" % str(self.linecolor))
if self.fillcolor is not None:
attributes.append("fillcolor=%s" % str(self.fillcolor))
if self.marker is not None:
attributes.append("marker=%s" % str(self.marker))
attributes.append("markersize=%g" % self.markersize)
attributes.append("markercolor=%s" % str(self.markercolor))
return "<Style%s>" % " ".join(attributes)
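# Example (sketch): a two-column Legend overlaid on a plot, with Style
# standing in for real drawables to show marker and line samples; `plot`
# and the marker name are hypothetical.
#   >>> legend = Legend([["data", Style(marker="circle")],
#   ...                  ["fit", Style(linecolor="red")]],
#   ...                 colwid=[0.7, 0.3], justify=["l", "m"])
#   >>> view(Overlay(plot, legend, frame=0))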
######################################################### Interactive table for a PAW-style analysis
class InspectTable(UniTable):
"""Load, manipulate, and plot data quickly and interactively.
Class members:
cache_limit (int or `None`): a maximum number of preselected
subtables to cache
"""
cache_limit = 10
    _comma = re.compile(r"\s*,\s*")
def __repr__(self):
return "<InspectTable %d keys %d rows>" % (len(self.keys()), len(self))
def _setup_cache(self):
if getattr(self, "_cache_subtables", None) is None:
self._cache_subtables = {}
self._cache_order = []
def __call__(self, expr, cuts=None, use_cache=True):
"""Select and return a subtable based on the expression and cuts.
Arguments:
expr (string): expression to evaluate in the namespace of
the table and plot
cuts (string): expression for filtering out unwanted data
use_cache (bool): if True, keep track of all preselected
subtables (it is likely that the user will want them again)
"""
if cuts is None or cuts == "":
subtable = self
else:
if use_cache:
self._setup_cache()
if cuts in self._cache_subtables and set(self.keys()) == set(self._cache_subtables[cuts].keys()):
subtable = self._cache_subtables[cuts]
self._cache_order = [cuts] + filter(lambda x: x != cuts, self._cache_order)
else:
subtable = self.compress(self.eval(cuts))
self._cache_subtables[cuts] = subtable
self._cache_order = [cuts] + filter(lambda x: x != cuts, self._cache_order)
if self.cache_limit is not None:
while len(self._cache_order) > self.cache_limit:
del self._cache_subtables[self._cache_order.pop()]
else:
subtable = self.compress(self.eval(cuts))
return subtable.eval(expr)
def unique(self, expr=None, cuts=None, use_cache=True):
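        """Return the set of unique values (or tuples of values) selected by the expression and cuts.
        Arguments:
            expr (string or `None`): comma-separated expressions to
                evaluate (if `None`, use all fields)
            cuts (string): expression for filtering out unwanted data
            use_cache (bool): if True, keep track of all preselected
                subtables (it is likely that the user will want them again)
        """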
if expr is None:
keys = self.keys()
expr = ",".join(keys)
subtable = self(expr, cuts, use_cache)
if isinstance(subtable, tuple):
# can't use numpy because the output may be heterogeneous
output = set()
for event in zip(*subtable):
output.add(event)
return output
else:
return set(numpy.unique(subtable))
def scan(self, expr=None, cuts=None, subset=slice(0, 10), use_cache=True, width=12):
"""Print a table or subtable of values on the screen.
Arguments:
expr (string): comma-separated set of expressions to print
(if `None`, print all fields)
cuts (string): expression for filtering out unwanted data
subset (slice): slice applied to all fields, so that the
output is manageable
            use_cache (bool): if True, keep track of all preselected
                subtables (it is likely that the user will want them again)
            width (int): character width of each printed column
        """
if expr is None:
keys = self.keys()
expr = ",".join(keys)
subtable = self(expr, cuts, use_cache)
fields = re.split(self._comma, expr)
format_fields = []
separator = []
format_line = []
typechar = []
for field, array in zip(fields, subtable):
format_fields.append("%%%d.%ds" % (width, width))
separator.append("=" * width)
if array.dtype.char in numpy.typecodes["Float"]:
format_line.append("%%%dg" % width)
typechar.append("f")
elif array.dtype.char in numpy.typecodes["AllInteger"]:
format_line.append("%%%dd" % width)
typechar.append("i")
elif array.dtype.char == "?":
format_line.append("%%%ds" % width)
typechar.append("?")
elif array.dtype.char in numpy.typecodes["Complex"]:
format_line.append("%%%dg+%%%dgj" % ((width-2)//2, (width-2)//2))
typechar.append("F")
elif array.dtype.char in numpy.typecodes["Character"] + "Sa":
format_line.append("%%%d.%ds" % (width, width))
typechar.append("S")
format_fields = " ".join(format_fields)
separator = "=".join(separator)
print format_fields % tuple(fields)
print separator
if isinstance(subtable, tuple):
for records in zip(*[i[subset] for i in subtable]):
for r, f, c in zip(records, format_line, typechar):
if c == "F":
print f % (r.real, r.imag),
elif c == "?":
if r: print f % "True",
else: print f % "False",
elif c == "S":
print f % ("'%s'" % r),
else:
print f % r,
print
else:
for record in subtable[subset]:
if typechar[0] == "F":
print format_line[0] % (record.real, record.imag)
elif typechar[0] == "?":
if record: print format_line[0] % "True"
else: print format_line[0] % "False"
elif typechar[0] == "S":
print format_line[0] % ("'%s'" % record)
else:
print format_line[0] % record
def histogram(self, expr, cuts=None, weights=None, numbins=utilities.binning, lowhigh=utilities.calcrange_quartile, use_cache=True, **kwds):
"""Draw and return a histogram based on the expression and cuts.
Arguments:
expr (string): expression to evaluate in the namespace of
the table and plot
cuts (string): expression for filtering out unwanted data
weights (string): optional expression for the weight of
each data entry
numbins (int or function): number of bins or a function
that returns an optimized number of bins, given data, low,
and high
lowhigh ((low, high) or function): range of the histogram or
a function that returns an optimized range given the data
use_cache (bool): if True, keep track of all preselected
subtables (it is likely that the user will want them again)
`**kwds`: any other arguments are passed to the Histogram
constructor
"""
if numbins is Auto: numbins = utilities.binning
if lowhigh is Auto: lowhigh = utilities.calcrange_quartile
data = self(expr, cuts)
if isinstance(data, tuple):
raise ContainerException, "The expr must return one-dimensional data (no commas!)"
if weights is not None:
dataweight = self(weights, cuts)
            if isinstance(dataweight, tuple):
raise ContainerException, "The weights must return one-dimensional data (no commas!)"
else:
dataweight = numpy.ones(len(data), numpy.float)
if len(data) > 0 and data.dtype.char in numpy.typecodes["Character"] + "SU":
bins = numpy.unique(data)
bins.sort()
kwds2 = {"xlabel": expr}
kwds2.update(kwds)
output = HistogramCategorical(bins, data, dataweight, **kwds2)
elif len(data) == 0 or data.dtype.char in numpy.typecodes["Float"] + numpy.typecodes["AllInteger"]:
if isinstance(lowhigh, (tuple, list)) and len(lowhigh) == 2 and isinstance(lowhigh[0], (numbers.Number, numpy.number)) and isinstance(lowhigh[1], (numbers.Number, numpy.number)):
low, high = lowhigh
elif callable(lowhigh):
low, high = lowhigh(data, kwds.get("xlog", False))
else:
raise ContainerException, "The 'lowhigh' argument must be a function or (low, high) tuple."
if isinstance(numbins, (int, long)):
pass
elif callable(numbins):
numbins = numbins(data, low, high)
else:
raise ContainerException, "The 'numbins' argument must be a function or an int."
if numbins < 1: numbins = 1
if low >= high: low, high = 0., 1.
kwds2 = {"xlabel": expr}
kwds2.update(kwds)
output = Histogram(numbins, low, high, data, dataweight, **kwds2)
else:
raise ContainerException, "Unrecognized data type: %s (%s)" % (data.dtype.name, data.dtype.char)
return output
def timeseries(self, expr, cuts=None, ex=None, ey=None, exl=None, eyl=None, limit=1000, use_cache=True, **kwds):
"""Draw and return a scatter-plot based on the expression and cuts.
Arguments:
expr (string): expression to evaluate in the namespace of
the table and plot
cuts (string): expression for filtering out unwanted data
ex (string): optional expression for x error bars (in seconds)
ey (string): optional expression for y error bars
exl (string): optional expression for x lower error bars (in seconds)
eyl (string): optional expression for y lower error bars
limit (int or `None`): set an upper limit on the number of
points that will be drawn
use_cache (bool): if True, keep track of all preselected
subtables (it is likely that the user will want them again)
`**kwds`: any other arguments are passed to the TimeSeries
constructor
"""
return self.scatter(expr, cuts, ex, ey, exl, eyl, limit=limit, timeseries=True, use_cache=use_cache, **kwds)
def scatter(self, expr, cuts=None, ex=None, ey=None, exl=None, eyl=None, limit=1000, timeseries=False, use_cache=True, **kwds):
"""Draw and return a scatter-plot based on the expression and cuts.
Arguments:
expr (string): expression to evaluate in the namespace of
the table and plot
cuts (string): expression for filtering out unwanted data
ex (string): optional expression for x error bars
ey (string): optional expression for y error bars
exl (string): optional expression for x lower error bars
eyl (string): optional expression for y lower error bars
limit (int or `None`): set an upper limit on the number of
points that will be drawn
timeseries (bool): if True, produce a TimeSeries, rather
than a Scatter
use_cache (bool): if True, keep track of all preselected
subtables (it is likely that the user will want them again)
`**kwds`: any other arguments are passed to the Scatter
constructor
"""
fields = re.split(self._comma, expr)
data = self(expr, cuts)
# convert one-dimensional complex data into two-dimensional real data
if not isinstance(data, tuple) and data.dtype.char in numpy.typecodes["Complex"]:
data = numpy.real(data), numpy.imag(data)
if not isinstance(data, tuple) or len(data) != 2:
raise ContainerException, "The expr must return two-dimensional data (include a comma!)"
xdata, ydata = data
if ex is not None:
ex = self(ex, cuts)
if isinstance(ex, tuple):
raise ContainerException, "The ex must return one-dimensional data"
if ey is not None:
ey = self(ey, cuts)
if isinstance(ey, tuple):
raise ContainerException, "The ey must return one-dimensional data"
if exl is not None:
exl = self(exl, cuts)
if isinstance(exl, tuple):
raise ContainerException, "The exl must return one-dimensional data"
if eyl is not None:
eyl = self(eyl, cuts)
if isinstance(eyl, tuple):
raise ContainerException, "The eyl must return one-dimensional data"
if timeseries:
if xdata.dtype.char in numpy.typecodes["Float"] + numpy.typecodes["AllInteger"]:
kwds2 = {"xlabel": fields[0], "ylabel": fields[1]}
kwds2.update(kwds)
output = TimeSeries(x=xdata, y=ydata, ex=ex, ey=ey, exl=exl, eyl=eyl, informat=None, limit=limit, connector="xsort", **kwds2)
elif xdata.dtype.char in numpy.typecodes["Character"] + "Sa":
kwds2 = {"xlabel": fields[0], "ylabel": fields[1]}
kwds2.update(kwds)
output = TimeSeries(x=xdata, y=ydata, ex=ex, ey=ey, exl=exl, eyl=eyl, limit=limit, connector="xsort", **kwds2)
else:
raise ContainerException, "Unsupported data type for x of TimeSeries: %s" % xdata.dtype.name
else:
kwds2 = {"xlabel": fields[0], "ylabel": fields[1]}
kwds2.update(kwds)
output = Scatter(x=xdata, y=ydata, ex=ex, ey=ey, exl=exl, eyl=eyl, limit=limit, **kwds2)
return output
def inspect(*files, **kwds):
"""Load an InspectTable from a file or a collection of files recognized by UniTable.
If a single fileName is provided, `InspectTable.load` is called to
read it into memory.
If multiple fileNames are provided, an `InspectTable` is built
from the concatenation of all files (using the `extend` method).
Keyword arguments are passed to each `load` method.
"""
output = InspectTable()
first = True
for f in files:
if first:
output.load(f, **kwds)
first = False
else:
output.extend(InspectTable().load(f, **kwds))
return output
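# A minimal usage sketch (hypothetical file names; assumes the files are in a
# format UniTable recognizes, such as CSV):
#
#     table = inspect("run1.csv", "run2.csv")
#     h = table.histogram("energy", cuts="energy > 0.")
#     s = table.scatter("x, y", limit=500)   # expr needs a comma for 2-d data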
| opendatagroup/cassius | tags/cassius_0_1_0_1/cassius/containers.py | Python | apache-2.0 | 145,496 | ["Gaussian"] | a2d7b9854171d63b5436deb340ba739763f45cbdaac366df23202692ac153191 |
# revset.py - revision set queries for mercurial
#
# Copyright 2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
import re
import parser, util, error, discovery, hbisect, phases
import node
import match as matchmod
from i18n import _
import encoding
import obsolete as obsmod
import repoview
def _revancestors(repo, revs, followfirst):
"""Like revlog.ancestors(), but supports followfirst."""
cut = followfirst and 1 or None
cl = repo.changelog
visit = util.deque(revs)
seen = set([node.nullrev])
while visit:
for parent in cl.parentrevs(visit.popleft())[:cut]:
if parent not in seen:
visit.append(parent)
seen.add(parent)
yield parent
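# Note: this is a breadth-first walk from revs toward the root; with
# followfirst, parentrevs(...)[:1] visits only first parents. Ancestors are
# yielded in discovery order, not sorted order.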
def _revdescendants(repo, revs, followfirst):
"""Like revlog.descendants() but supports followfirst."""
cut = followfirst and 1 or None
cl = repo.changelog
first = min(revs)
nullrev = node.nullrev
if first == nullrev:
# Are there nodes with a null first parent and a non-null
# second one? Maybe. Do we care? Probably not.
for i in cl:
yield i
return
seen = set(revs)
for i in cl.revs(first + 1):
for x in cl.parentrevs(i)[:cut]:
if x != nullrev and x in seen:
seen.add(i)
yield i
break
def _revsbetween(repo, roots, heads):
"""Return all paths between roots and heads, inclusive of both endpoint
sets."""
if not roots:
return []
parentrevs = repo.changelog.parentrevs
visit = heads[:]
reachable = set()
seen = {}
minroot = min(roots)
roots = set(roots)
# open-code the post-order traversal due to the tiny size of
# sys.getrecursionlimit()
while visit:
rev = visit.pop()
if rev in roots:
reachable.add(rev)
parents = parentrevs(rev)
seen[rev] = parents
for parent in parents:
if parent >= minroot and parent not in seen:
visit.append(parent)
if not reachable:
return []
for rev in sorted(seen):
for parent in seen[rev]:
if parent in reachable:
reachable.add(rev)
return sorted(reachable)
elements = {
"(": (20, ("group", 1, ")"), ("func", 1, ")")),
"~": (18, None, ("ancestor", 18)),
"^": (18, None, ("parent", 18), ("parentpost", 18)),
"-": (5, ("negate", 19), ("minus", 5)),
"::": (17, ("dagrangepre", 17), ("dagrange", 17),
("dagrangepost", 17)),
"..": (17, ("dagrangepre", 17), ("dagrange", 17),
("dagrangepost", 17)),
":": (15, ("rangepre", 15), ("range", 15), ("rangepost", 15)),
"not": (10, ("not", 10)),
"!": (10, ("not", 10)),
"and": (5, None, ("and", 5)),
"&": (5, None, ("and", 5)),
"or": (4, None, ("or", 4)),
"|": (4, None, ("or", 4)),
"+": (4, None, ("or", 4)),
",": (2, None, ("list", 2)),
")": (0, None, None),
"symbol": (0, ("symbol",), None),
"string": (0, ("string",), None),
"end": (0, None, None),
}
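# Note: each entry maps a token to (binding strength, prefix action(s), infix
# action(s)); higher numbers bind tighter. For example, "::" (17) binds
# tighter than "and"/"&" (5), which bind tighter than "or"/"|"/"+" (4), so
# "a::b and c | d" parses as ((a::b) and c) | d.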
keywords = set(['and', 'or', 'not'])
def tokenize(program):
'''
Parse a revset statement into a stream of tokens
Check that @ is a valid unquoted token character (issue3686):
>>> list(tokenize("@::"))
[('symbol', '@', 0), ('::', None, 1), ('end', None, 3)]
'''
pos, l = 0, len(program)
while pos < l:
c = program[pos]
if c.isspace(): # skip inter-token whitespace
pass
elif c == ':' and program[pos:pos + 2] == '::': # look ahead carefully
yield ('::', None, pos)
pos += 1 # skip ahead
elif c == '.' and program[pos:pos + 2] == '..': # look ahead carefully
yield ('..', None, pos)
pos += 1 # skip ahead
elif c in "():,-|&+!~^": # handle simple operators
yield (c, None, pos)
elif (c in '"\'' or c == 'r' and
program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
if c == 'r':
pos += 1
c = program[pos]
decode = lambda x: x
else:
decode = lambda x: x.decode('string-escape')
pos += 1
s = pos
while pos < l: # find closing quote
d = program[pos]
if d == '\\': # skip over escaped characters
pos += 2
continue
if d == c:
yield ('string', decode(program[s:pos]), s)
break
pos += 1
else:
raise error.ParseError(_("unterminated string"), s)
# gather up a symbol/keyword
elif c.isalnum() or c in '._@' or ord(c) > 127:
s = pos
pos += 1
while pos < l: # find end of symbol
d = program[pos]
if not (d.isalnum() or d in "._/@" or ord(d) > 127):
break
if d == '.' and program[pos - 1] == '.': # special case for ..
pos -= 1
break
pos += 1
sym = program[s:pos]
if sym in keywords: # operator keywords
yield (sym, None, s)
else:
yield ('symbol', sym, s)
pos -= 1
else:
raise error.ParseError(_("syntax error"), pos)
pos += 1
yield ('end', None, pos)
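# A sketch of the token stream for a function call (hypothetical doctest, in
# the style of the one above; the string token's position is that of its
# first character after the opening quote):
#
#     >>> list(tokenize("keyword('bug')"))
#     [('symbol', 'keyword', 0), ('(', None, 7), ('string', 'bug', 9),
#      (')', None, 13), ('end', None, 14)]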
# helpers
def getstring(x, err):
if x and (x[0] == 'string' or x[0] == 'symbol'):
return x[1]
raise error.ParseError(err)
def getlist(x):
if not x:
return []
if x[0] == 'list':
return getlist(x[1]) + [x[2]]
return [x]
def getargs(x, min, max, err):
l = getlist(x)
if len(l) < min or (max >= 0 and len(l) > max):
raise error.ParseError(err)
return l
def getset(repo, subset, x):
if not x:
raise error.ParseError(_("missing argument"))
return methods[x[0]](repo, subset, *x[1:])
def _getrevsource(repo, r):
extra = repo[r].extra()
for label in ('source', 'transplant_source', 'rebase_source'):
if label in extra:
try:
return repo[extra[label]].rev()
except error.RepoLookupError:
pass
return None
# operator methods
def stringset(repo, subset, x):
x = repo[x].rev()
if x == -1 and len(subset) == len(repo):
return [-1]
if len(subset) == len(repo) or x in subset:
return [x]
return []
def symbolset(repo, subset, x):
if x in symbols:
raise error.ParseError(_("can't use %s here") % x)
return stringset(repo, subset, x)
def rangeset(repo, subset, x, y):
cl = repo.changelog
m = getset(repo, cl, x)
n = getset(repo, cl, y)
if not m or not n:
return []
m, n = m[0], n[-1]
if m < n:
r = range(m, n + 1)
else:
r = range(m, n - 1, -1)
s = set(subset)
return [x for x in r if x in s]
def dagrange(repo, subset, x, y):
r = list(repo)
xs = _revsbetween(repo, getset(repo, r, x), getset(repo, r, y))
s = set(subset)
return [r for r in xs if r in s]
def andset(repo, subset, x, y):
return getset(repo, getset(repo, subset, x), y)
def orset(repo, subset, x, y):
xl = getset(repo, subset, x)
s = set(xl)
yl = getset(repo, [r for r in subset if r not in s], y)
return xl + yl
def notset(repo, subset, x):
s = set(getset(repo, subset, x))
return [r for r in subset if r not in s]
def listset(repo, subset, a, b):
raise error.ParseError(_("can't use a list in this context"))
def func(repo, subset, a, b):
if a[0] == 'symbol' and a[1] in symbols:
return symbols[a[1]](repo, subset, b)
raise error.ParseError(_("not a function: %s") % a[1])
# functions
def adds(repo, subset, x):
"""``adds(pattern)``
Changesets that add a file matching pattern.
"""
# i18n: "adds" is a keyword
pat = getstring(x, _("adds requires a pattern"))
return checkstatus(repo, subset, pat, 1)
def ancestor(repo, subset, x):
"""``ancestor(*changeset)``
Greatest common ancestor of the changesets.
Accepts 0 or more changesets.
    Will return an empty list when passed no args.
Greatest common ancestor of a single changeset is that changeset.
"""
# i18n: "ancestor" is a keyword
l = getlist(x)
rl = list(repo)
anc = None
# (getset(repo, rl, i) for i in l) generates a list of lists
rev = repo.changelog.rev
ancestor = repo.changelog.ancestor
node = repo.changelog.node
for revs in (getset(repo, rl, i) for i in l):
for r in revs:
if anc is None:
anc = r
else:
anc = rev(ancestor(node(anc), node(r)))
if anc is not None and anc in subset:
return [anc]
return []
def _ancestors(repo, subset, x, followfirst=False):
args = getset(repo, list(repo), x)
if not args:
return []
s = set(_revancestors(repo, args, followfirst)) | set(args)
return [r for r in subset if r in s]
def ancestors(repo, subset, x):
"""``ancestors(set)``
Changesets that are ancestors of a changeset in set.
"""
return _ancestors(repo, subset, x)
def _firstancestors(repo, subset, x):
# ``_firstancestors(set)``
# Like ``ancestors(set)`` but follows only the first parents.
return _ancestors(repo, subset, x, followfirst=True)
def ancestorspec(repo, subset, x, n):
"""``set~n``
Changesets that are the Nth ancestor (first parents only) of a changeset
in set.
"""
try:
n = int(n[1])
except (TypeError, ValueError):
raise error.ParseError(_("~ expects a number"))
ps = set()
cl = repo.changelog
for r in getset(repo, cl, x):
for i in range(n):
r = cl.parentrevs(r)[0]
ps.add(r)
return [r for r in subset if r in ps]
def author(repo, subset, x):
"""``author(string)``
Alias for ``user(string)``.
"""
# i18n: "author" is a keyword
n = encoding.lower(getstring(x, _("author requires a string")))
kind, pattern, matcher = _substringmatcher(n)
return [r for r in subset if matcher(encoding.lower(repo[r].user()))]
def bisect(repo, subset, x):
"""``bisect(string)``
Changesets marked in the specified bisect status:
- ``good``, ``bad``, ``skip``: csets explicitly marked as good/bad/skip
- ``goods``, ``bads`` : csets topologically good/bad
- ``range`` : csets taking part in the bisection
- ``pruned`` : csets that are goods, bads or skipped
- ``untested`` : csets whose fate is yet unknown
- ``ignored`` : csets ignored due to DAG topology
- ``current`` : the cset currently being bisected
"""
# i18n: "bisect" is a keyword
status = getstring(x, _("bisect requires a string")).lower()
state = set(hbisect.get(repo, status))
return [r for r in subset if r in state]
# Backward-compatibility
# - no help entry so that we do not advertise it any more
def bisected(repo, subset, x):
return bisect(repo, subset, x)
def bookmark(repo, subset, x):
"""``bookmark([name])``
The named bookmark or all bookmarks.
If `name` starts with `re:`, the remainder of the name is treated as
a regular expression. To match a bookmark that actually starts with `re:`,
use the prefix `literal:`.
"""
# i18n: "bookmark" is a keyword
args = getargs(x, 0, 1, _('bookmark takes one or no arguments'))
if args:
bm = getstring(args[0],
# i18n: "bookmark" is a keyword
_('the argument to bookmark must be a string'))
kind, pattern, matcher = _stringmatcher(bm)
if kind == 'literal':
bmrev = repo._bookmarks.get(bm, None)
if not bmrev:
raise util.Abort(_("bookmark '%s' does not exist") % bm)
bmrev = repo[bmrev].rev()
return [r for r in subset if r == bmrev]
else:
matchrevs = set()
for name, bmrev in repo._bookmarks.iteritems():
if matcher(name):
matchrevs.add(bmrev)
if not matchrevs:
raise util.Abort(_("no bookmarks exist that match '%s'")
% pattern)
bmrevs = set()
for bmrev in matchrevs:
bmrevs.add(repo[bmrev].rev())
return [r for r in subset if r in bmrevs]
bms = set([repo[r].rev()
for r in repo._bookmarks.values()])
return [r for r in subset if r in bms]
def branch(repo, subset, x):
"""``branch(string or set)``
All changesets belonging to the given branch or the branches of the given
changesets.
If `string` starts with `re:`, the remainder of the name is treated as
a regular expression. To match a branch that actually starts with `re:`,
use the prefix `literal:`.
"""
try:
b = getstring(x, '')
except error.ParseError:
# not a string, but another revspec, e.g. tip()
pass
else:
kind, pattern, matcher = _stringmatcher(b)
if kind == 'literal':
# note: falls through to the revspec case if no branch with
# this name exists
if pattern in repo.branchmap():
return [r for r in subset if matcher(repo[r].branch())]
else:
return [r for r in subset if matcher(repo[r].branch())]
s = getset(repo, list(repo), x)
b = set()
for r in s:
b.add(repo[r].branch())
s = set(s)
return [r for r in subset if r in s or repo[r].branch() in b]
def bumped(repo, subset, x):
"""``bumped()``
Mutable changesets marked as successors of public changesets.
Only non-public and non-obsolete changesets can be `bumped`.
"""
# i18n: "bumped" is a keyword
getargs(x, 0, 0, _("bumped takes no arguments"))
bumped = obsmod.getrevs(repo, 'bumped')
return [r for r in subset if r in bumped]
def bundle(repo, subset, x):
"""``bundle()``
Changesets in the bundle.
Bundle must be specified by the -R option."""
try:
bundlerevs = repo.changelog.bundlerevs
except AttributeError:
raise util.Abort(_("no bundle provided - specify with -R"))
return [r for r in subset if r in bundlerevs]
def checkstatus(repo, subset, pat, field):
m = None
s = []
hasset = matchmod.patkind(pat) == 'set'
fname = None
for r in subset:
c = repo[r]
if not m or hasset:
m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
if not m.anypats() and len(m.files()) == 1:
fname = m.files()[0]
if fname is not None:
if fname not in c.files():
continue
else:
for f in c.files():
if m(f):
break
else:
continue
files = repo.status(c.p1().node(), c.node())[field]
if fname is not None:
if fname in files:
s.append(r)
else:
for f in files:
if m(f):
s.append(r)
break
return s
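# Note: `field` indexes the status tuple returned by repo.status()
# (modified, added, removed, ...); modifies() passes 0, adds() passes 1 and
# removes() passes 2.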
def _children(repo, narrow, parentset):
cs = set()
if not parentset:
return cs
pr = repo.changelog.parentrevs
minrev = min(parentset)
for r in narrow:
if r <= minrev:
continue
for p in pr(r):
if p in parentset:
cs.add(r)
return cs
def children(repo, subset, x):
"""``children(set)``
Child changesets of changesets in set.
"""
s = set(getset(repo, list(repo), x))
cs = _children(repo, subset, s)
return [r for r in subset if r in cs]
def closed(repo, subset, x):
"""``closed()``
Changeset is closed.
"""
# i18n: "closed" is a keyword
getargs(x, 0, 0, _("closed takes no arguments"))
return [r for r in subset if repo[r].closesbranch()]
def contains(repo, subset, x):
"""``contains(pattern)``
Revision contains a file matching pattern. See :hg:`help patterns`
for information about file patterns.
"""
# i18n: "contains" is a keyword
pat = getstring(x, _("contains requires a pattern"))
m = None
s = []
if not matchmod.patkind(pat):
for r in subset:
if pat in repo[r]:
s.append(r)
else:
for r in subset:
c = repo[r]
if not m or matchmod.patkind(pat) == 'set':
m = matchmod.match(repo.root, repo.getcwd(), [pat], ctx=c)
for f in c.manifest():
if m(f):
s.append(r)
break
return s
def converted(repo, subset, x):
"""``converted([id])``
Changesets converted from the given identifier in the old repository if
present, or all converted changesets if no identifier is specified.
"""
# There is exactly no chance of resolving the revision, so do a simple
# string compare and hope for the best
rev = None
# i18n: "converted" is a keyword
l = getargs(x, 0, 1, _('converted takes one or no arguments'))
if l:
# i18n: "converted" is a keyword
rev = getstring(l[0], _('converted requires a revision'))
def _matchvalue(r):
source = repo[r].extra().get('convert_revision', None)
return source is not None and (rev is None or source.startswith(rev))
return [r for r in subset if _matchvalue(r)]
def date(repo, subset, x):
"""``date(interval)``
Changesets within the interval, see :hg:`help dates`.
"""
# i18n: "date" is a keyword
ds = getstring(x, _("date requires a string"))
dm = util.matchdate(ds)
return [r for r in subset if dm(repo[r].date()[0])]
def desc(repo, subset, x):
"""``desc(string)``
Search commit message for string. The match is case-insensitive.
"""
# i18n: "desc" is a keyword
ds = encoding.lower(getstring(x, _("desc requires a string")))
l = []
for r in subset:
c = repo[r]
if ds in encoding.lower(c.description()):
l.append(r)
return l
def _descendants(repo, subset, x, followfirst=False):
args = getset(repo, list(repo), x)
if not args:
return []
s = set(_revdescendants(repo, args, followfirst)) | set(args)
return [r for r in subset if r in s]
def descendants(repo, subset, x):
"""``descendants(set)``
Changesets which are descendants of changesets in set.
"""
return _descendants(repo, subset, x)
def _firstdescendants(repo, subset, x):
# ``_firstdescendants(set)``
# Like ``descendants(set)`` but follows only the first parents.
return _descendants(repo, subset, x, followfirst=True)
def destination(repo, subset, x):
"""``destination([set])``
Changesets that were created by a graft, transplant or rebase operation,
with the given revisions specified as the source. Omitting the optional set
is the same as passing all().
"""
if x is not None:
args = set(getset(repo, list(repo), x))
else:
args = set(getall(repo, list(repo), x))
dests = set()
# subset contains all of the possible destinations that can be returned, so
# iterate over them and see if their source(s) were provided in the args.
# Even if the immediate src of r is not in the args, src's source (or
# further back) may be. Scanning back further than the immediate src allows
# transitive transplants and rebases to yield the same results as transitive
# grafts.
for r in subset:
src = _getrevsource(repo, r)
lineage = None
while src is not None:
if lineage is None:
lineage = list()
lineage.append(r)
# The visited lineage is a match if the current source is in the arg
# set. Since every candidate dest is visited by way of iterating
# subset, any dests further back in the lineage will be tested by a
# different iteration over subset. Likewise, if the src was already
# selected, the current lineage can be selected without going back
# further.
if src in args or src in dests:
dests.update(lineage)
break
r = src
src = _getrevsource(repo, r)
return [r for r in subset if r in dests]
def divergent(repo, subset, x):
"""``divergent()``
Final successors of changesets with an alternative set of final successors.
"""
# i18n: "divergent" is a keyword
getargs(x, 0, 0, _("divergent takes no arguments"))
divergent = obsmod.getrevs(repo, 'divergent')
return [r for r in subset if r in divergent]
def draft(repo, subset, x):
"""``draft()``
Changeset in draft phase."""
# i18n: "draft" is a keyword
getargs(x, 0, 0, _("draft takes no arguments"))
pc = repo._phasecache
return [r for r in subset if pc.phase(repo, r) == phases.draft]
def extinct(repo, subset, x):
"""``extinct()``
Obsolete changesets with obsolete descendants only.
"""
# i18n: "extinct" is a keyword
getargs(x, 0, 0, _("extinct takes no arguments"))
extincts = obsmod.getrevs(repo, 'extinct')
return [r for r in subset if r in extincts]
def extra(repo, subset, x):
"""``extra(label, [value])``
Changesets with the given label in the extra metadata, with the given
optional value.
If `value` starts with `re:`, the remainder of the value is treated as
a regular expression. To match a value that actually starts with `re:`,
use the prefix `literal:`.
"""
# i18n: "extra" is a keyword
l = getargs(x, 1, 2, _('extra takes at least 1 and at most 2 arguments'))
# i18n: "extra" is a keyword
label = getstring(l[0], _('first argument to extra must be a string'))
value = None
if len(l) > 1:
# i18n: "extra" is a keyword
value = getstring(l[1], _('second argument to extra must be a string'))
kind, value, matcher = _stringmatcher(value)
def _matchvalue(r):
extra = repo[r].extra()
return label in extra and (value is None or matcher(extra[label]))
return [r for r in subset if _matchvalue(r)]
def filelog(repo, subset, x):
"""``filelog(pattern)``
Changesets connected to the specified filelog.
For performance reasons, ``filelog()`` does not show every changeset
that affects the requested file(s). See :hg:`help log` for details. For
a slower, more accurate result, use ``file()``.
"""
# i18n: "filelog" is a keyword
pat = getstring(x, _("filelog requires a pattern"))
m = matchmod.match(repo.root, repo.getcwd(), [pat], default='relpath',
ctx=repo[None])
s = set()
if not matchmod.patkind(pat):
for f in m.files():
fl = repo.file(f)
for fr in fl:
s.add(fl.linkrev(fr))
else:
for f in repo[None]:
if m(f):
fl = repo.file(f)
for fr in fl:
s.add(fl.linkrev(fr))
return [r for r in subset if r in s]
def first(repo, subset, x):
"""``first(set, [n])``
An alias for limit().
"""
return limit(repo, subset, x)
def _follow(repo, subset, x, name, followfirst=False):
l = getargs(x, 0, 1, _("%s takes no arguments or a filename") % name)
c = repo['.']
if l:
x = getstring(l[0], _("%s expected a filename") % name)
if x in c:
cx = c[x]
s = set(ctx.rev() for ctx in cx.ancestors(followfirst=followfirst))
# include the revision responsible for the most recent version
s.add(cx.linkrev())
else:
return []
else:
s = set(_revancestors(repo, [c.rev()], followfirst)) | set([c.rev()])
return [r for r in subset if r in s]
def follow(repo, subset, x):
"""``follow([file])``
An alias for ``::.`` (ancestors of the working copy's first parent).
If a filename is specified, the history of the given file is followed,
including copies.
"""
return _follow(repo, subset, x, 'follow')
def _followfirst(repo, subset, x):
# ``followfirst([file])``
# Like ``follow([file])`` but follows only the first parent of
# every revision or file revision.
return _follow(repo, subset, x, '_followfirst', followfirst=True)
def getall(repo, subset, x):
"""``all()``
All changesets, the same as ``0:tip``.
"""
# i18n: "all" is a keyword
getargs(x, 0, 0, _("all takes no arguments"))
return subset
def grep(repo, subset, x):
"""``grep(regex)``
Like ``keyword(string)`` but accepts a regex. Use ``grep(r'...')``
to ensure special escape characters are handled correctly. Unlike
``keyword(string)``, the match is case-sensitive.
"""
try:
# i18n: "grep" is a keyword
gr = re.compile(getstring(x, _("grep requires a string")))
except re.error, e:
raise error.ParseError(_('invalid match pattern: %s') % e)
l = []
for r in subset:
c = repo[r]
for e in c.files() + [c.user(), c.description()]:
if gr.search(e):
l.append(r)
break
return l
def _matchfiles(repo, subset, x):
# _matchfiles takes a revset list of prefixed arguments:
#
# [p:foo, i:bar, x:baz]
#
# builds a match object from them and filters subset. Allowed
# prefixes are 'p:' for regular patterns, 'i:' for include
# patterns and 'x:' for exclude patterns. Use 'r:' prefix to pass
# a revision identifier, or the empty string to reference the
# working directory, from which the match object is
# initialized. Use 'd:' to set the default matching mode, default
# to 'glob'. At most one 'r:' and 'd:' argument can be passed.
# i18n: "_matchfiles" is a keyword
l = getargs(x, 1, -1, _("_matchfiles requires at least one argument"))
pats, inc, exc = [], [], []
hasset = False
rev, default = None, None
for arg in l:
# i18n: "_matchfiles" is a keyword
s = getstring(arg, _("_matchfiles requires string arguments"))
prefix, value = s[:2], s[2:]
if prefix == 'p:':
pats.append(value)
elif prefix == 'i:':
inc.append(value)
elif prefix == 'x:':
exc.append(value)
elif prefix == 'r:':
if rev is not None:
# i18n: "_matchfiles" is a keyword
raise error.ParseError(_('_matchfiles expected at most one '
'revision'))
rev = value
elif prefix == 'd:':
if default is not None:
# i18n: "_matchfiles" is a keyword
raise error.ParseError(_('_matchfiles expected at most one '
'default mode'))
default = value
else:
# i18n: "_matchfiles" is a keyword
raise error.ParseError(_('invalid _matchfiles prefix: %s') % prefix)
if not hasset and matchmod.patkind(value) == 'set':
hasset = True
if not default:
default = 'glob'
m = None
s = []
for r in subset:
c = repo[r]
if not m or (hasset and rev is None):
ctx = c
if rev is not None:
ctx = repo[rev or None]
m = matchmod.match(repo.root, repo.getcwd(), pats, include=inc,
exclude=exc, ctx=ctx, default=default)
for f in c.files():
if m(f):
s.append(r)
break
return s
def hasfile(repo, subset, x):
"""``file(pattern)``
Changesets affecting files matched by pattern.
For a faster but less accurate result, consider using ``filelog()``
instead.
"""
# i18n: "file" is a keyword
pat = getstring(x, _("file requires a pattern"))
return _matchfiles(repo, subset, ('string', 'p:' + pat))
def head(repo, subset, x):
"""``head()``
Changeset is a named branch head.
"""
# i18n: "head" is a keyword
getargs(x, 0, 0, _("head takes no arguments"))
hs = set()
for b, ls in repo.branchmap().iteritems():
hs.update(repo[h].rev() for h in ls)
return [r for r in subset if r in hs]
def heads(repo, subset, x):
"""``heads(set)``
Members of set with no children in set.
"""
s = getset(repo, subset, x)
ps = set(parents(repo, subset, x))
return [r for r in s if r not in ps]
def hidden(repo, subset, x):
"""``hidden()``
Hidden changesets.
"""
# i18n: "hidden" is a keyword
getargs(x, 0, 0, _("hidden takes no arguments"))
hiddenrevs = repoview.filterrevs(repo, 'visible')
return [r for r in subset if r in hiddenrevs]
def keyword(repo, subset, x):
"""``keyword(string)``
Search commit message, user name, and names of changed files for
string. The match is case-insensitive.
"""
# i18n: "keyword" is a keyword
kw = encoding.lower(getstring(x, _("keyword requires a string")))
l = []
for r in subset:
c = repo[r]
if util.any(kw in encoding.lower(t)
for t in c.files() + [c.user(), c.description()]):
l.append(r)
return l
def limit(repo, subset, x):
"""``limit(set, [n])``
First n members of set, defaulting to 1.
"""
# i18n: "limit" is a keyword
l = getargs(x, 1, 2, _("limit requires one or two arguments"))
try:
lim = 1
if len(l) == 2:
# i18n: "limit" is a keyword
lim = int(getstring(l[1], _("limit requires a number")))
except (TypeError, ValueError):
# i18n: "limit" is a keyword
raise error.ParseError(_("limit expects a number"))
ss = set(subset)
os = getset(repo, list(repo), l[0])[:lim]
return [r for r in os if r in ss]
def last(repo, subset, x):
"""``last(set, [n])``
Last n members of set, defaulting to 1.
"""
# i18n: "last" is a keyword
l = getargs(x, 1, 2, _("last requires one or two arguments"))
try:
lim = 1
if len(l) == 2:
# i18n: "last" is a keyword
lim = int(getstring(l[1], _("last requires a number")))
except (TypeError, ValueError):
# i18n: "last" is a keyword
raise error.ParseError(_("last expects a number"))
ss = set(subset)
os = getset(repo, list(repo), l[0])[-lim:]
return [r for r in os if r in ss]
def maxrev(repo, subset, x):
"""``max(set)``
Changeset with highest revision number in set.
"""
os = getset(repo, list(repo), x)
if os:
m = max(os)
if m in subset:
return [m]
return []
def merge(repo, subset, x):
"""``merge()``
Changeset is a merge changeset.
"""
# i18n: "merge" is a keyword
getargs(x, 0, 0, _("merge takes no arguments"))
cl = repo.changelog
return [r for r in subset if cl.parentrevs(r)[1] != -1]
def branchpoint(repo, subset, x):
"""``branchpoint()``
Changesets with more than one child.
"""
# i18n: "branchpoint" is a keyword
getargs(x, 0, 0, _("branchpoint takes no arguments"))
cl = repo.changelog
if not subset:
return []
baserev = min(subset)
parentscount = [0]*(len(repo) - baserev)
for r in cl.revs(start=baserev + 1):
for p in cl.parentrevs(r):
if p >= baserev:
parentscount[p - baserev] += 1
return [r for r in subset if (parentscount[r - baserev] > 1)]
def minrev(repo, subset, x):
"""``min(set)``
Changeset with lowest revision number in set.
"""
os = getset(repo, list(repo), x)
if os:
m = min(os)
if m in subset:
return [m]
return []
def modifies(repo, subset, x):
"""``modifies(pattern)``
Changesets modifying files matched by pattern.
"""
# i18n: "modifies" is a keyword
pat = getstring(x, _("modifies requires a pattern"))
return checkstatus(repo, subset, pat, 0)
def node_(repo, subset, x):
"""``id(string)``
Revision non-ambiguously specified by the given hex string prefix.
"""
# i18n: "id" is a keyword
l = getargs(x, 1, 1, _("id requires one argument"))
# i18n: "id" is a keyword
n = getstring(l[0], _("id requires a string"))
if len(n) == 40:
rn = repo[n].rev()
else:
rn = None
pm = repo.changelog._partialmatch(n)
if pm is not None:
rn = repo.changelog.rev(pm)
return [r for r in subset if r == rn]
def obsolete(repo, subset, x):
"""``obsolete()``
Mutable changeset with a newer version."""
# i18n: "obsolete" is a keyword
getargs(x, 0, 0, _("obsolete takes no arguments"))
obsoletes = obsmod.getrevs(repo, 'obsolete')
return [r for r in subset if r in obsoletes]
def origin(repo, subset, x):
"""``origin([set])``
Changesets that were specified as a source for the grafts, transplants or
rebases that created the given revisions. Omitting the optional set is the
same as passing all(). If a changeset created by these operations is itself
specified as a source for one of these operations, only the source changeset
for the first operation is selected.
"""
if x is not None:
args = set(getset(repo, list(repo), x))
else:
args = set(getall(repo, list(repo), x))
def _firstsrc(rev):
src = _getrevsource(repo, rev)
if src is None:
return None
while True:
prev = _getrevsource(repo, src)
if prev is None:
return src
src = prev
o = set([_firstsrc(r) for r in args])
return [r for r in subset if r in o]
def outgoing(repo, subset, x):
"""``outgoing([path])``
Changesets not found in the specified destination repository, or the
default push location.
"""
import hg # avoid start-up nasties
# i18n: "outgoing" is a keyword
l = getargs(x, 0, 1, _("outgoing takes one or no arguments"))
# i18n: "outgoing" is a keyword
dest = l and getstring(l[0], _("outgoing requires a repository path")) or ''
dest = repo.ui.expandpath(dest or 'default-push', dest or 'default')
dest, branches = hg.parseurl(dest)
revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
if revs:
revs = [repo.lookup(rev) for rev in revs]
other = hg.peer(repo, {}, dest)
repo.ui.pushbuffer()
outgoing = discovery.findcommonoutgoing(repo, other, onlyheads=revs)
repo.ui.popbuffer()
cl = repo.changelog
o = set([cl.rev(r) for r in outgoing.missing])
return [r for r in subset if r in o]
def p1(repo, subset, x):
"""``p1([set])``
First parent of changesets in set, or the working directory.
"""
if x is None:
p = repo[x].p1().rev()
return [r for r in subset if r == p]
ps = set()
cl = repo.changelog
for r in getset(repo, list(repo), x):
ps.add(cl.parentrevs(r)[0])
return [r for r in subset if r in ps]
def p2(repo, subset, x):
"""``p2([set])``
Second parent of changesets in set, or the working directory.
"""
if x is None:
ps = repo[x].parents()
try:
p = ps[1].rev()
return [r for r in subset if r == p]
except IndexError:
return []
ps = set()
cl = repo.changelog
for r in getset(repo, list(repo), x):
ps.add(cl.parentrevs(r)[1])
return [r for r in subset if r in ps]
def parents(repo, subset, x):
"""``parents([set])``
The set of all parents for all changesets in set, or the working directory.
"""
if x is None:
ps = tuple(p.rev() for p in repo[x].parents())
return [r for r in subset if r in ps]
ps = set()
cl = repo.changelog
for r in getset(repo, list(repo), x):
ps.update(cl.parentrevs(r))
return [r for r in subset if r in ps]
def parentspec(repo, subset, x, n):
"""``set^0``
The set.
``set^1`` (or ``set^``), ``set^2``
First or second parent, respectively, of all changesets in set.
"""
try:
n = int(n[1])
if n not in (0, 1, 2):
raise ValueError
except (TypeError, ValueError):
raise error.ParseError(_("^ expects a number 0, 1, or 2"))
ps = set()
cl = repo.changelog
for r in getset(repo, cl, x):
if n == 0:
ps.add(r)
elif n == 1:
ps.add(cl.parentrevs(r)[0])
elif n == 2:
parents = cl.parentrevs(r)
if len(parents) > 1:
ps.add(parents[1])
return [r for r in subset if r in ps]
def present(repo, subset, x):
"""``present(set)``
An empty set, if any revision in set isn't found; otherwise,
all revisions in set.
    If any of the specified revisions is not present in the local repository,
the query is normally aborted. But this predicate allows the query
to continue even in such cases.
"""
try:
return getset(repo, subset, x)
except error.RepoLookupError:
return []
def public(repo, subset, x):
"""``public()``
Changeset in public phase."""
# i18n: "public" is a keyword
getargs(x, 0, 0, _("public takes no arguments"))
pc = repo._phasecache
return [r for r in subset if pc.phase(repo, r) == phases.public]
def remote(repo, subset, x):
"""``remote([id [,path]])``
Local revision that corresponds to the given identifier in a
remote repository, if present. Here, the '.' identifier is a
synonym for the current local branch.
"""
import hg # avoid start-up nasties
# i18n: "remote" is a keyword
l = getargs(x, 0, 2, _("remote takes one, two or no arguments"))
q = '.'
if len(l) > 0:
# i18n: "remote" is a keyword
q = getstring(l[0], _("remote requires a string id"))
if q == '.':
q = repo['.'].branch()
dest = ''
if len(l) > 1:
# i18n: "remote" is a keyword
dest = getstring(l[1], _("remote requires a repository path"))
dest = repo.ui.expandpath(dest or 'default')
dest, branches = hg.parseurl(dest)
revs, checkout = hg.addbranchrevs(repo, repo, branches, [])
if revs:
revs = [repo.lookup(rev) for rev in revs]
other = hg.peer(repo, {}, dest)
n = other.lookup(q)
if n in repo:
r = repo[n].rev()
if r in subset:
return [r]
return []
def removes(repo, subset, x):
"""``removes(pattern)``
Changesets which remove files matching pattern.
"""
# i18n: "removes" is a keyword
pat = getstring(x, _("removes requires a pattern"))
return checkstatus(repo, subset, pat, 2)
def rev(repo, subset, x):
"""``rev(number)``
Revision with the given numeric identifier.
"""
# i18n: "rev" is a keyword
l = getargs(x, 1, 1, _("rev requires one argument"))
try:
# i18n: "rev" is a keyword
l = int(getstring(l[0], _("rev requires a number")))
except (TypeError, ValueError):
# i18n: "rev" is a keyword
raise error.ParseError(_("rev expects a number"))
return [r for r in subset if r == l]
def matching(repo, subset, x):
"""``matching(revision [, field])``
Changesets in which a given set of fields match the set of fields in the
selected revision or set.
To match more than one field pass the list of fields to match separated
by spaces (e.g. ``author description``).
Valid fields are most regular revision fields and some special fields.
Regular revision fields are ``description``, ``author``, ``branch``,
``date``, ``files``, ``phase``, ``parents``, ``substate``, ``user``
and ``diff``.
Note that ``author`` and ``user`` are synonyms. ``diff`` refers to the
contents of the revision. Two revisions matching their ``diff`` will
also match their ``files``.
Special fields are ``summary`` and ``metadata``:
``summary`` matches the first line of the description.
``metadata`` is equivalent to matching ``description user date``
(i.e. it matches the main metadata fields).
``metadata`` is the default field which is used when no fields are
specified. You can match more than one field at a time.
"""
# i18n: "matching" is a keyword
l = getargs(x, 1, 2, _("matching takes 1 or 2 arguments"))
revs = getset(repo, repo.changelog, l[0])
fieldlist = ['metadata']
if len(l) > 1:
fieldlist = getstring(l[1],
# i18n: "matching" is a keyword
_("matching requires a string "
"as its second argument")).split()
# Make sure that there are no repeated fields,
# expand the 'special' 'metadata' field type
# and check the 'files' whenever we check the 'diff'
fields = []
for field in fieldlist:
if field == 'metadata':
fields += ['user', 'description', 'date']
elif field == 'diff':
# a revision matching the diff must also match the files
# since matching the diff is very costly, make sure to
# also match the files first
fields += ['files', 'diff']
else:
if field == 'author':
field = 'user'
fields.append(field)
fields = set(fields)
if 'summary' in fields and 'description' in fields:
# If a revision matches its description it also matches its summary
fields.discard('summary')
# We may want to match more than one field
# Not all fields take the same amount of time to be matched
# Sort the selected fields in order of increasing matching cost
fieldorder = ['phase', 'parents', 'user', 'date', 'branch', 'summary',
'files', 'description', 'substate', 'diff']
def fieldkeyfunc(f):
try:
return fieldorder.index(f)
except ValueError:
# assume an unknown field is very costly
return len(fieldorder)
fields = list(fields)
fields.sort(key=fieldkeyfunc)
# Each field will be matched with its own "getfield" function
# which will be added to the getfieldfuncs array of functions
getfieldfuncs = []
_funcs = {
'user': lambda r: repo[r].user(),
'branch': lambda r: repo[r].branch(),
'date': lambda r: repo[r].date(),
'description': lambda r: repo[r].description(),
'files': lambda r: repo[r].files(),
'parents': lambda r: repo[r].parents(),
'phase': lambda r: repo[r].phase(),
'substate': lambda r: repo[r].substate,
'summary': lambda r: repo[r].description().splitlines()[0],
        'diff': lambda r: list(repo[r].diff(git=True)),
}
for info in fields:
getfield = _funcs.get(info, None)
if getfield is None:
raise error.ParseError(
# i18n: "matching" is a keyword
_("unexpected field name passed to matching: %s") % info)
getfieldfuncs.append(getfield)
# convert the getfield array of functions into a "getinfo" function
# which returns an array of field values (or a single value if there
# is only one field to match)
getinfo = lambda r: [f(r) for f in getfieldfuncs]
matches = set()
for rev in revs:
target = getinfo(rev)
for r in subset:
match = True
for n, f in enumerate(getfieldfuncs):
if target[n] != f(r):
match = False
break
if match:
matches.add(r)
return [r for r in subset if r in matches]
def reverse(repo, subset, x):
"""``reverse(set)``
Reverse order of set.
"""
l = getset(repo, subset, x)
if not isinstance(l, list):
l = list(l)
l.reverse()
return l
def roots(repo, subset, x):
"""``roots(set)``
Changesets in set with no parent changeset in set.
"""
s = set(getset(repo, repo.changelog, x))
subset = [r for r in subset if r in s]
cs = _children(repo, subset, s)
return [r for r in subset if r not in cs]
def secret(repo, subset, x):
"""``secret()``
Changeset in secret phase."""
# i18n: "secret" is a keyword
getargs(x, 0, 0, _("secret takes no arguments"))
pc = repo._phasecache
return [r for r in subset if pc.phase(repo, r) == phases.secret]
def sort(repo, subset, x):
"""``sort(set[, [-]key...])``
Sort set by keys. The default sort order is ascending, specify a key
as ``-key`` to sort in descending order.
The keys can be:
- ``rev`` for the revision number,
- ``branch`` for the branch name,
- ``desc`` for the commit message (description),
- ``user`` for user name (``author`` can be used as an alias),
- ``date`` for the commit date
"""
# i18n: "sort" is a keyword
l = getargs(x, 1, 2, _("sort requires one or two arguments"))
keys = "rev"
if len(l) == 2:
# i18n: "sort" is a keyword
keys = getstring(l[1], _("sort spec must be a string"))
s = l[0]
keys = keys.split()
l = []
def invert(s):
return "".join(chr(255 - ord(c)) for c in s)
for r in getset(repo, subset, s):
c = repo[r]
e = []
for k in keys:
if k == 'rev':
e.append(r)
elif k == '-rev':
e.append(-r)
elif k == 'branch':
e.append(c.branch())
elif k == '-branch':
e.append(invert(c.branch()))
elif k == 'desc':
e.append(c.description())
elif k == '-desc':
e.append(invert(c.description()))
elif k in 'user author':
e.append(c.user())
elif k in '-user -author':
e.append(invert(c.user()))
elif k == 'date':
e.append(c.date()[0])
elif k == '-date':
e.append(-c.date()[0])
else:
raise error.ParseError(_("unknown sort key %r") % k)
e.append(r)
l.append(e)
l.sort()
return [e[-1] for e in l]
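# Note: invert() maps each byte c to chr(255 - ord(c)), so an ascending sort
# on the inverted key is a descending sort on the original string; the
# revision number appended as the last key element breaks ties and is what
# the final list comprehension recovers.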
def _stringmatcher(pattern):
"""
accepts a string, possibly starting with 're:' or 'literal:' prefix.
returns the matcher name, pattern, and matcher function.
missing or unknown prefixes are treated as literal matches.
helper for tests:
>>> def test(pattern, *tests):
... kind, pattern, matcher = _stringmatcher(pattern)
... return (kind, pattern, [bool(matcher(t)) for t in tests])
exact matching (no prefix):
>>> test('abcdefg', 'abc', 'def', 'abcdefg')
('literal', 'abcdefg', [False, False, True])
regex matching ('re:' prefix)
>>> test('re:a.+b', 'nomatch', 'fooadef', 'fooadefbar')
('re', 'a.+b', [False, False, True])
force exact matches ('literal:' prefix)
>>> test('literal:re:foobar', 'foobar', 're:foobar')
('literal', 're:foobar', [False, True])
unknown prefixes are ignored and treated as literals
>>> test('foo:bar', 'foo', 'bar', 'foo:bar')
('literal', 'foo:bar', [False, False, True])
"""
if pattern.startswith('re:'):
pattern = pattern[3:]
try:
regex = re.compile(pattern)
except re.error, e:
raise error.ParseError(_('invalid regular expression: %s')
% e)
return 're', pattern, regex.search
elif pattern.startswith('literal:'):
pattern = pattern[8:]
return 'literal', pattern, pattern.__eq__
def _substringmatcher(pattern):
kind, pattern, matcher = _stringmatcher(pattern)
if kind == 'literal':
matcher = lambda s: pattern in s
return kind, pattern, matcher
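# A sketch in the style of the _stringmatcher doctests (hypothetical, showing
# that a literal pattern becomes a substring test here):
#
#     >>> kind, pattern, matcher = _substringmatcher('oba')
#     >>> kind, pattern, matcher('foobar')
#     ('literal', 'oba', True)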
def tag(repo, subset, x):
"""``tag([name])``
The specified tag by name, or all tagged revisions if no name is given.
"""
# i18n: "tag" is a keyword
args = getargs(x, 0, 1, _("tag takes one or no arguments"))
cl = repo.changelog
if args:
pattern = getstring(args[0],
# i18n: "tag" is a keyword
_('the argument to tag must be a string'))
kind, pattern, matcher = _stringmatcher(pattern)
if kind == 'literal':
# avoid resolving all tags
tn = repo._tagscache.tags.get(pattern, None)
if tn is None:
raise util.Abort(_("tag '%s' does not exist") % pattern)
s = set([repo[tn].rev()])
else:
s = set([cl.rev(n) for t, n in repo.tagslist() if matcher(t)])
else:
s = set([cl.rev(n) for t, n in repo.tagslist() if t != 'tip'])
return [r for r in subset if r in s]
def tagged(repo, subset, x):
return tag(repo, subset, x)
def unstable(repo, subset, x):
"""``unstable()``
Non-obsolete changesets with obsolete ancestors.
"""
# i18n: "unstable" is a keyword
getargs(x, 0, 0, _("unstable takes no arguments"))
unstables = obsmod.getrevs(repo, 'unstable')
return [r for r in subset if r in unstables]
def user(repo, subset, x):
"""``user(string)``
User name contains string. The match is case-insensitive.
If `string` starts with `re:`, the remainder of the string is treated as
a regular expression. To match a user that actually contains `re:`, use
the prefix `literal:`.
"""
return author(repo, subset, x)
# for internal use
def _list(repo, subset, x):
s = getstring(x, "internal error")
if not s:
return []
if not isinstance(subset, set):
subset = set(subset)
ls = [repo[r].rev() for r in s.split('\0')]
return [r for r in ls if r in subset]
symbols = {
"adds": adds,
"all": getall,
"ancestor": ancestor,
"ancestors": ancestors,
"_firstancestors": _firstancestors,
"author": author,
"bisect": bisect,
"bisected": bisected,
"bookmark": bookmark,
"branch": branch,
"branchpoint": branchpoint,
"bumped": bumped,
"bundle": bundle,
"children": children,
"closed": closed,
"contains": contains,
"converted": converted,
"date": date,
"desc": desc,
"descendants": descendants,
"_firstdescendants": _firstdescendants,
"destination": destination,
"divergent": divergent,
"draft": draft,
"extinct": extinct,
"extra": extra,
"file": hasfile,
"filelog": filelog,
"first": first,
"follow": follow,
"_followfirst": _followfirst,
"grep": grep,
"head": head,
"heads": heads,
"hidden": hidden,
"id": node_,
"keyword": keyword,
"last": last,
"limit": limit,
"_matchfiles": _matchfiles,
"max": maxrev,
"merge": merge,
"min": minrev,
"modifies": modifies,
"obsolete": obsolete,
"origin": origin,
"outgoing": outgoing,
"p1": p1,
"p2": p2,
"parents": parents,
"present": present,
"public": public,
"remote": remote,
"removes": removes,
"rev": rev,
"reverse": reverse,
"roots": roots,
"sort": sort,
"secret": secret,
"matching": matching,
"tag": tag,
"tagged": tagged,
"user": user,
"unstable": unstable,
"_list": _list,
}
# symbols which can't be used for a DoS attack for any given input
# (e.g. those which accept regexes as plain strings shouldn't be included)
# functions that just return a lot of changesets (like all) don't count here
safesymbols = set([
"adds",
"all",
"ancestor",
"ancestors",
"_firstancestors",
"author",
"bisect",
"bisected",
"bookmark",
"branch",
"branchpoint",
"bumped",
"bundle",
"children",
"closed",
"converted",
"date",
"desc",
"descendants",
"_firstdescendants",
"destination",
"divergent",
"draft",
"extinct",
"extra",
"file",
"filelog",
"first",
"follow",
"_followfirst",
"head",
"heads",
"hidden",
"id",
"keyword",
"last",
"limit",
"_matchfiles",
"max",
"merge",
"min",
"modifies",
"obsolete",
"origin",
"outgoing",
"p1",
"p2",
"parents",
"present",
"public",
"remote",
"removes",
"rev",
"reverse",
"roots",
"sort",
"secret",
"matching",
"tag",
"tagged",
"user",
"unstable",
"_list",
])
methods = {
"range": rangeset,
"dagrange": dagrange,
"string": stringset,
"symbol": symbolset,
"and": andset,
"or": orset,
"not": notset,
"list": listset,
"func": func,
"ancestor": ancestorspec,
"parent": parentspec,
"parentpost": p1,
}
def optimize(x, small):
if x is None:
return 0, x
smallbonus = 1
if small:
smallbonus = .5
op = x[0]
if op == 'minus':
return optimize(('and', x[1], ('not', x[2])), small)
elif op == 'dagrangepre':
return optimize(('func', ('symbol', 'ancestors'), x[1]), small)
elif op == 'dagrangepost':
return optimize(('func', ('symbol', 'descendants'), x[1]), small)
elif op == 'rangepre':
return optimize(('range', ('string', '0'), x[1]), small)
elif op == 'rangepost':
return optimize(('range', x[1], ('string', 'tip')), small)
elif op == 'negate':
return optimize(('string',
'-' + getstring(x[1], _("can't negate that"))), small)
elif op in 'string symbol negate':
return smallbonus, x # single revisions are small
elif op == 'and':
wa, ta = optimize(x[1], True)
wb, tb = optimize(x[2], True)
w = min(wa, wb)
if wa > wb:
return w, (op, tb, ta)
return w, (op, ta, tb)
elif op == 'or':
wa, ta = optimize(x[1], False)
wb, tb = optimize(x[2], False)
if wb < wa:
wb, wa = wa, wb
return max(wa, wb), (op, ta, tb)
elif op == 'not':
o = optimize(x[1], not small)
return o[0], (op, o[1])
elif op == 'parentpost':
o = optimize(x[1], small)
return o[0], (op, o[1])
elif op == 'group':
return optimize(x[1], small)
elif op in 'dagrange range list parent ancestorspec':
if op == 'parent':
# x^:y means (x^) : y, not x ^ (:y)
post = ('parentpost', x[1])
if x[2][0] == 'dagrangepre':
return optimize(('dagrange', post, x[2][1]), small)
elif x[2][0] == 'rangepre':
return optimize(('range', post, x[2][1]), small)
wa, ta = optimize(x[1], small)
wb, tb = optimize(x[2], small)
return wa + wb, (op, ta, tb)
elif op == 'func':
f = getstring(x[1], _("not a symbol"))
wa, ta = optimize(x[2], small)
if f in ("author branch closed date desc file grep keyword "
"outgoing user"):
w = 10 # slow
elif f in "modifies adds removes":
w = 30 # slower
elif f == "contains":
w = 100 # very slow
elif f == "ancestor":
w = 1 * smallbonus
elif f in "reverse limit first":
w = 0
elif f in "sort":
w = 10 # assume most sorts look at changelog
else:
w = 1
return w + wa, (op, x[1], ta)
return 1, x
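# Note: the weights are heuristics -- contains() ~ 100, modifies/adds/removes
# ~ 30, author/grep and friends ~ 10, most others ~ 1 -- and the 'and' case
# swaps its operands so the cheaper side narrows the subset before the more
# expensive side runs.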
_aliasarg = ('func', ('symbol', '_aliasarg'))
def _getaliasarg(tree):
"""If tree matches ('func', ('symbol', '_aliasarg'), ('string', X))
return X, None otherwise.
"""
if (len(tree) == 3 and tree[:2] == _aliasarg
and tree[2][0] == 'string'):
return tree[2][1]
return None
def _checkaliasarg(tree, known=None):
"""Check tree contains no _aliasarg construct or only ones which
value is in known. Used to avoid alias placeholders injection.
"""
if isinstance(tree, tuple):
arg = _getaliasarg(tree)
if arg is not None and (not known or arg not in known):
raise error.ParseError(_("not a function: %s") % '_aliasarg')
for t in tree:
_checkaliasarg(t, known)
class revsetalias(object):
funcre = re.compile('^([^(]+)\(([^)]+)\)$')
args = None
def __init__(self, name, value):
'''Aliases like:
h = heads(default)
b($1) = ancestors($1) - ancestors(default)
'''
m = self.funcre.search(name)
if m:
self.name = m.group(1)
self.tree = ('func', ('symbol', m.group(1)))
self.args = [x.strip() for x in m.group(2).split(',')]
for arg in self.args:
                # _aliasarg() is an unknown symbol only used to separate
                # alias argument placeholders from regular strings.
value = value.replace(arg, '_aliasarg(%r)' % (arg,))
else:
self.name = name
self.tree = ('symbol', name)
self.replacement, pos = parse(value)
if pos != len(value):
raise error.ParseError(_('invalid token'), pos)
# Check for placeholder injection
_checkaliasarg(self.replacement, self.args)
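# A sketch of how aliases reach this class (hypothetical hgrc entries; the
# [revsetalias] section is read by findaliases() below):
#
#     [revsetalias]
#     h = heads(default)
#     issue($1) = grep(r'\bissue $1\b')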
def _getalias(aliases, tree):
"""If tree looks like an unexpanded alias, return it. Return None
otherwise.
"""
if isinstance(tree, tuple) and tree:
if tree[0] == 'symbol' and len(tree) == 2:
name = tree[1]
alias = aliases.get(name)
if alias and alias.args is None and alias.tree == tree:
return alias
if tree[0] == 'func' and len(tree) > 1:
if tree[1][0] == 'symbol' and len(tree[1]) == 2:
name = tree[1][1]
alias = aliases.get(name)
if alias and alias.args is not None and alias.tree == tree[:2]:
return alias
return None
def _expandargs(tree, args):
"""Replace _aliasarg instances with the substitution value of the
same name in args, recursively.
"""
if not tree or not isinstance(tree, tuple):
return tree
arg = _getaliasarg(tree)
if arg is not None:
return args[arg]
return tuple(_expandargs(t, args) for t in tree)
def _expandaliases(aliases, tree, expanding, cache):
"""Expand aliases in tree, recursively.
'aliases' is a dictionary mapping user defined aliases to
revsetalias objects.
"""
if not isinstance(tree, tuple):
# Do not expand raw strings
return tree
alias = _getalias(aliases, tree)
if alias is not None:
if alias in expanding:
raise error.ParseError(_('infinite expansion of revset alias "%s" '
'detected') % alias.name)
expanding.append(alias)
if alias.name not in cache:
cache[alias.name] = _expandaliases(aliases, alias.replacement,
expanding, cache)
result = cache[alias.name]
expanding.pop()
if alias.args is not None:
l = getlist(tree[2])
if len(l) != len(alias.args):
raise error.ParseError(
_('invalid number of arguments: %s') % len(l))
l = [_expandaliases(aliases, a, [], cache) for a in l]
result = _expandargs(result, dict(zip(alias.args, l)))
else:
result = tuple(_expandaliases(aliases, t, expanding, cache)
for t in tree)
return result
def findaliases(ui, tree):
_checkaliasarg(tree)
aliases = {}
for k, v in ui.configitems('revsetalias'):
alias = revsetalias(k, v)
aliases[alias.name] = alias
return _expandaliases(aliases, tree, [], {})
parse = parser.parser(tokenize, elements).parse
def match(ui, spec):
if not spec:
raise error.ParseError(_("empty query"))
tree, pos = parse(spec)
if (pos != len(spec)):
raise error.ParseError(_("invalid token"), pos)
if ui:
tree = findaliases(ui, tree)
weight, tree = optimize(tree, True)
def mfunc(repo, subset):
return getset(repo, subset, tree)
return mfunc
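# A sketch of a caller (hypothetical): match() compiles a revset once, and
# the returned function can be applied to any subset of revisions, e.g.
#
#     mfunc = match(repo.ui, 'heads(default) and not merge()')
#     revs = mfunc(repo, range(len(repo)))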
def formatspec(expr, *args):
'''
This is a convenience function for using revsets internally, and
escapes arguments appropriately. Aliases are intentionally ignored
so that intended expression behavior isn't accidentally subverted.
Supported arguments:
%r = revset expression, parenthesized
%d = int(arg), no quoting
%s = string(arg), escaped and single-quoted
%b = arg.branch(), escaped and single-quoted
%n = hex(arg), single-quoted
%% = a literal '%'
Prefixing the type with 'l' specifies a parenthesized list of that type.
>>> formatspec('%r:: and %lr', '10 or 11', ("this()", "that()"))
'(10 or 11):: and ((this()) or (that()))'
>>> formatspec('%d:: and not %d::', 10, 20)
'10:: and not 20::'
>>> formatspec('%ld or %ld', [], [1])
"_list('') or 1"
>>> formatspec('keyword(%s)', 'foo\\xe9')
"keyword('foo\\\\xe9')"
>>> b = lambda: 'default'
>>> b.branch = b
>>> formatspec('branch(%b)', b)
"branch('default')"
>>> formatspec('root(%ls)', ['a', 'b', 'c', 'd'])
"root(_list('a\\x00b\\x00c\\x00d'))"
'''
def quote(s):
return repr(str(s))
def argtype(c, arg):
if c == 'd':
return str(int(arg))
elif c == 's':
return quote(arg)
elif c == 'r':
parse(arg) # make sure syntax errors are confined
return '(%s)' % arg
elif c == 'n':
return quote(node.hex(arg))
elif c == 'b':
return quote(arg.branch())
def listexp(s, t):
l = len(s)
if l == 0:
return "_list('')"
elif l == 1:
return argtype(t, s[0])
elif t == 'd':
return "_list('%s')" % "\0".join(str(int(a)) for a in s)
elif t == 's':
return "_list('%s')" % "\0".join(s)
elif t == 'n':
return "_list('%s')" % "\0".join(node.hex(a) for a in s)
elif t == 'b':
return "_list('%s')" % "\0".join(a.branch() for a in s)
m = l // 2
return '(%s or %s)' % (listexp(s[:m], t), listexp(s[m:], t))
ret = ''
pos = 0
arg = 0
while pos < len(expr):
c = expr[pos]
if c == '%':
pos += 1
d = expr[pos]
if d == '%':
ret += d
elif d in 'dsnbr':
ret += argtype(d, args[arg])
arg += 1
elif d == 'l':
# a list of some type
pos += 1
d = expr[pos]
ret += listexp(list(args[arg]), d)
arg += 1
else:
raise util.Abort('unexpected revspec format character %s' % d)
else:
ret += c
pos += 1
return ret
def prettyformat(tree):
def _prettyformat(tree, level, lines):
if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
lines.append((level, str(tree)))
else:
lines.append((level, '(%s' % tree[0]))
for s in tree[1:]:
_prettyformat(s, level + 1, lines)
lines[-1:] = [(lines[-1][0], lines[-1][1] + ')')]
lines = []
_prettyformat(tree, 0, lines)
output = '\n'.join((' '*l + s) for l, s in lines)
return output
def depth(tree):
if isinstance(tree, tuple):
return max(map(depth, tree)) + 1
else:
return 0
def funcsused(tree):
if not isinstance(tree, tuple) or tree[0] in ('string', 'symbol'):
return set()
else:
funcs = set()
for s in tree[1:]:
funcs |= funcsused(s)
if tree[0] == 'func':
funcs.add(tree[1][1])
return funcs
# tell hggettext to extract docstrings from these functions:
i18nfunctions = symbols.values()
| jordigh/mercurial-crew | mercurial/revset.py | Python | gpl-2.0 | 64,224 | ["VisIt"] | db2927eac050628eedab4a0fdaf06c60e5623084e54dc4f1d30a9ba3d7b2a951 |
from __future__ import absolute_import
import numpy as np
import tensorflow as tf
from . import likelihood
class Gaussian(likelihood.Likelihood):
def __init__(self, std_dev=1.0):
# Save the raw standard deviation. Note that this value can be negative.
self.raw_std_dev = tf.Variable(std_dev)
def log_cond_prob(self, outputs, latent):
var = self.raw_std_dev ** 2
return -0.5 * tf.log(2.0 * np.pi * var) - ((outputs - latent) ** 2) / (2.0 * var)
def get_params(self):
return [self.raw_std_dev]
def predict(self, latent_means, latent_vars):
return latent_means, latent_vars + self.raw_std_dev ** 2
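# Note: log_cond_prob above is the Gaussian log-density
#     log N(y; f, sigma^2) = -0.5*log(2*pi*sigma^2) - (y - f)^2 / (2*sigma^2)
# with sigma = raw_std_dev; sigma is squared wherever it is used, which is
# why the raw value is allowed to be negative. A minimal sketch (hypothetical
# values, graph-mode TF1 style as used above):
#
#     lik = Gaussian(std_dev=0.5)
#     lp = lik.log_cond_prob(tf.constant([1.0]), tf.constant([0.8]))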
| danmackinlay/AutoGP | autogp/likelihoods/gaussian.py | Python | apache-2.0 | 666 | ["Gaussian"] | 7b77b8babb59a3d31a6e9a46b803048427f8ff92e842336b0c5cf8685b05aef1 |
#
# tsne.py
#
# Implementation of t-SNE in Python. The implementation was tested on Python 2.5.1, and it requires a working
# installation of NumPy. The implementation comes with an example on the MNIST dataset. In order to plot the
# results of this example, a working installation of matplotlib is required.
# The example can be run by executing: ipython tsne.py -pylab
#
#
# Created by Laurens van der Maaten on 20-12-08.
# Copyright (c) 2008 Tilburg University. All rights reserved.
import numpy as Math
import pylab as Plot
def Hbeta(D = Math.array([]), beta = 1.0):
"""Compute the perplexity and the P-row for a specific value of the precision of a Gaussian distribution."""
# Compute P-row and corresponding perplexity
P = Math.exp(-D.copy() * beta);
sumP = sum(P);
H = Math.log(sumP) + beta * Math.sum(D * P) / sumP;
P = P / sumP;
return H, P;
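# Note: H above is the Shannon entropy (in nats) of the row distribution P,
# so the corresponding perplexity is exp(H); x2p() below binary-searches for
# the precision beta whose entropy matches log(perplexity).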
def x2p(X = Math.array([]), tol = 1e-5, perplexity = 30.0):
"""Performs a binary search to get P-values in such a way that each conditional Gaussian has the same perplexity."""
# Initialize some variables
print "Computing pairwise distances..."
(n, d) = X.shape;
sum_X = Math.sum(Math.square(X), 1);
D = Math.add(Math.add(-2 * Math.dot(X, X.T), sum_X).T, sum_X);
P = Math.zeros((n, n));
beta = Math.ones((n, 1));
logU = Math.log(perplexity);
# Loop over all datapoints
for i in range(n):
# Print progress
if i % 500 == 0:
print "Computing P-values for point ", i, " of ", n, "..."
# Compute the Gaussian kernel and entropy for the current precision
betamin = -Math.inf;
betamax = Math.inf;
Di = D[i, Math.concatenate((Math.r_[0:i], Math.r_[i+1:n]))];
(H, thisP) = Hbeta(Di, beta[i]);
# Evaluate whether the perplexity is within tolerance
Hdiff = H - logU;
tries = 0;
while Math.abs(Hdiff) > tol and tries < 50:
# If not, increase or decrease precision
if Hdiff > 0:
betamin = beta[i].copy();
if betamax == Math.inf or betamax == -Math.inf:
beta[i] = beta[i] * 2;
else:
beta[i] = (beta[i] + betamax) / 2;
else:
betamax = beta[i].copy();
if betamin == Math.inf or betamin == -Math.inf:
beta[i] = beta[i] / 2;
else:
beta[i] = (beta[i] + betamin) / 2;
# Recompute the values
(H, thisP) = Hbeta(Di, beta[i]);
Hdiff = H - logU;
tries = tries + 1;
# Set the final row of P
P[i, Math.concatenate((Math.r_[0:i], Math.r_[i+1:n]))] = thisP;
# Return final P-matrix
print "Mean value of sigma: ", Math.mean(Math.sqrt(1 / beta))
return P;
def pca(X = Math.array([]), no_dims = 50):
"""Runs PCA on the NxD array X in order to reduce its dimensionality to no_dims dimensions."""
print "Preprocessing the data using PCA..."
(n, d) = X.shape;
X = X - Math.tile(Math.mean(X, 0), (n, 1));
(l, M) = Math.linalg.eig(Math.dot(X.T, X));
# eig returns eigenpairs in no particular order; sort by descending eigenvalue
# so that the first no_dims columns really are the top principal components
idx = Math.argsort(-l.real);
M = M[:, idx];
Y = Math.dot(X, M[:,0:no_dims]);
return Y;
def tsne(X = Math.array([]), no_dims = 2, initial_dims = 50, perplexity = 30.0):
"""Runs t-SNE on the dataset in the NxD array X to reduce its dimensionality to no_dims dimensions.
The syntax of the function is Y = tsne.tsne(X, no_dims, perplexity), where X is an NxD NumPy array."""
# Check inputs
if X.dtype != "float64":
print "Error: array X should have type float64.";
return -1;
if not isinstance(no_dims, int):	# fixed: compare types directly instead of class-to-string
	print "Error: number of dimensions should be an integer.";
	return -1;
# Initialize variables
X = pca(X, initial_dims).real;
(n, d) = X.shape;
max_iter = 200;
initial_momentum = 0.5;
final_momentum = 0.8;
eta = 500;
min_gain = 0.01;
Y = Math.random.randn(n, no_dims);
dY = Math.zeros((n, no_dims));
iY = Math.zeros((n, no_dims));
gains = Math.ones((n, no_dims));
# Compute P-values
P = x2p(X, 1e-5, perplexity);
P = P + Math.transpose(P);
P = P / Math.sum(P);
P = P * 4; # early exaggeration
P = Math.maximum(P, 1e-12);
# Run iterations
for iter in range(max_iter):
# Compute pairwise affinities
sum_Y = Math.sum(Math.square(Y), 1);
num = 1 / (1 + Math.add(Math.add(-2 * Math.dot(Y, Y.T), sum_Y).T, sum_Y));
num[range(n), range(n)] = 0;
Q = num / Math.sum(num);
Q = Math.maximum(Q, 1e-12);
# Compute gradient
PQ = P - Q;
for i in range(n):
dY[i,:] = Math.sum(Math.tile(PQ[:,i] * num[:,i], (no_dims, 1)).T * (Y[i,:] - Y), 0);
# Perform the update
if iter < 20:
momentum = initial_momentum
else:
momentum = final_momentum
gains = (gains + 0.2) * ((dY > 0) != (iY > 0)) + (gains * 0.8) * ((dY > 0) == (iY > 0));
gains[gains < min_gain] = min_gain;
iY = momentum * iY - eta * (gains * dY);
Y = Y + iY;
Y = Y - Math.tile(Math.mean(Y, 0), (n, 1));
# Compute current value of cost function
if (iter + 1) % 10 == 0:
C = Math.sum(P * Math.log(P / Q));
print "Iteration ", (iter + 1), ": error is ", C
# Stop lying about P-values
if iter == 100:
P = P / 4;
# Return solution
return Y;
if __name__ == "__main__":
print "Run Y = tsne.tsne(X, no_dims, perplexity) to perform t-SNE on your dataset."
print "Running example on 2,500 MNIST digits..."
X = Math.loadtxt("d500.txt");
#labels = Math.loadtxt("labels.txt");
text_file = open("l500.txt", "r")
labels = text_file.readlines()
text_file.close()
Y = tsne(X, 2, 50, 20.0);
#Plot.scatter(Y[:,0], Y[:,1], 20, labels)
Plot.scatter(
Y[:, 0], Y[:, 1], marker = 'o', c = Y[:, 1],
cmap = Plot.get_cmap('Spectral'))
'''
for label, x, y in zip(labels, Y[:, 0], Y[:, 1]):
Plot.annotate(label, xy = (x, y), xytext = (-20, 20),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
'''
Plot.show()
|
masoodking/LinkPrediction
|
tsne_python/tsne.py
|
Python
|
bsd-3-clause
| 5,760
|
[
"Gaussian"
] |
0b328747446cc00c86f5cf1b83492f0cdd7774932cccce7bc01081545f9b0dfd
|
../../../../../../../share/pyshared/orca/scripts/apps/Instantbird/chat.py
|
Alberto-Beralix/Beralix
|
i386-squashfs-root/usr/lib/python2.7/dist-packages/orca/scripts/apps/Instantbird/chat.py
|
Python
|
gpl-3.0
| 73
|
[
"ORCA"
] |
4dae61a35462c8e994f3065477f5bf9e4208546d75f7540b783e8aa08228561f
|
from warnings import warn
import numpy as np
from scipy.stats import norm as ndist
from ..constraints.affine import constraints
from .debiased_lasso_utils import solve_wide_
def debiasing_matrix(X,
rows,
bound=None,
linesearch=True, # do a linesearch?
scaling_factor=1.5, # multiplicative factor for linesearch
max_active=None, # how big can active set get?
max_try=10, # how many steps in linesearch?
warn_kkt=False, # warn if KKT does not seem to be satisfied?
max_iter=50, # how many iterations for each optimization problem
kkt_stop=True, # stop based on KKT conditions?
parameter_stop=True, # stop based on relative convergence of parameter?
objective_stop=True, # stop based on relative decrease in objective?
kkt_tol=1.e-4, # tolerance for the KKT conditions
parameter_tol=1.e-4, # tolerance for relative convergence of parameter
objective_tol=1.e-4 # tolerance for relative decrease in objective
):
"""
Find a row of debiasing matrix using line search of
Javanmard and Montanari.
"""
n, p = X.shape
if bound is None:
orig_bound = (1. / np.sqrt(n)) * ndist.ppf(1. - (0.1 / (p ** 2)))
else:
orig_bound = bound
if max_active is None:
max_active = max(50, 0.3 * n)
rows = np.atleast_1d(rows)
M = np.zeros((len(rows), p))
nndef_diag = (X ** 2).sum(0) / n
for idx, row in enumerate(rows):
bound = orig_bound
soln = np.zeros(p)
soln_old = np.zeros(p)
ever_active = np.zeros(p, np.intp)
ever_active[0] = row + 1 # C code is 1-based
nactive = np.array([1], np.intp)
linear_func = np.zeros(p)
linear_func[row] = -1
gradient = linear_func.copy()
counter_idx = 1
incr = 0
last_output = None
Xsoln = np.zeros(n) # X\hat{\beta}
ridge_term = 0
need_update = np.zeros(p, np.intp)
while (counter_idx < max_try):
bound_vec = np.ones(p) * bound
result = solve_wide_(X,
Xsoln,
linear_func,
nndef_diag,
gradient,
need_update,
ever_active,
nactive,
bound_vec,
ridge_term,
soln,
soln_old,
max_iter,
kkt_tol,
objective_tol,
parameter_tol,
max_active,
kkt_stop,
objective_stop,
parameter_stop)
niter = result['iter']
# Logic for whether we should continue the line search
if not linesearch: break
if counter_idx == 1:
if niter == (max_iter + 1):
incr = 1 # was the original problem feasible? 1 if not
else:
incr = 0 # original problem was feasible
if incr == 1: # trying to find a feasible point
if niter < (max_iter + 1) and counter_idx > 1:
break
bound = bound * scaling_factor
elif niter == (max_iter + 1) and counter_idx > 1:
result = last_output # problem seems infeasible because we didn't solve it
break # so we revert to previously found solution
bound = bound / scaling_factor
counter_idx += 1
last_output = {'soln': result['soln'],
'kkt_check': result['kkt_check']}
# If the active set has grown to a certain size
# then we stop, presuming problem has become
# infeasible.
# We revert to the previous solution
if result['max_active_check']:
result = last_output
break
# Check feasibility
if warn_kkt and not result['kkt_check']:
warn("Solution for row of M does not seem to be feasible")
M[idx] = result['soln'] * 1.
return np.squeeze(M)
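# Minimal usage sketch (hypothetical data): rows of the approximate inverse
# of X.T.dot(X) / n for the requested coordinates.
#
#     X = np.random.standard_normal((100, 200))
#     M = debiasing_matrix(X, rows=[0, 3])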
def _find_row_approx_inverse_X(X,
j,
delta,
maxiter=50,
kkt_tol=1.e-4,
objective_tol=1.e-4,
parameter_tol=1.e-4,
kkt_stop=True,
objective_stop=True,
parameter_stop=True,
max_active=None,
):
n, p = X.shape
theta = np.zeros(p)
theta_old = np.zeros(p)
X_theta = np.zeros(n)
linear_func = np.zeros(p)
linear_func[j] = -1
gradient = linear_func.copy()
ever_active = np.zeros(p, np.intp)
ever_active[0] = j + 1 # C code has ever_active as 1-based
nactive = np.array([1], np.intp)
bound = np.ones(p) * delta
ridge_term = 0
nndef_diag = (X ** 2).sum(0) / n
need_update = np.zeros(p, np.intp)
if max_active is None:
max_active = max(50, 0.3 * n)
solve_wide_(X,
X_theta,
linear_func,
nndef_diag,
gradient,
need_update,
ever_active,
nactive,
bound,
ridge_term,
theta,
theta_old,
maxiter,
kkt_tol,
objective_tol,
parameter_tol,
max_active,
kkt_stop,
objective_stop,
parameter_stop)
return theta
def pseudoinverse_debiasing_matrix(X,
rows,
tol=1.e-9  # tolerance for rank computation
):
"""
Find a row of debiasing matrix using algorithm of
Boot and Niedderling from https://arxiv.org/pdf/1703.03282.pdf
"""
n, p = X.shape
nactive = len(rows)
if n < p:
U, D, V = np.linalg.svd(X, full_matrices=0)
rank = np.sum(D > max(D) * tol)
inv_D = 1. / D
inv_D[rank:] = 0.
inv_D2 = inv_D**2
inv = (U * inv_D2[None, :]).dot(U.T)
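# With numpy's svd convention (V here is V^T): `inv` is the pseudoinverse
# of X X^T, and `pseudo_XTX` below extracts the requested rows of the
# pseudoinverse of X^T X.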
scaling = np.zeros(nactive)
pseudo_XTX = (V.T[rows] * inv_D2[None, :]).dot(V)
for i in range(nactive):
var = rows[i]
scaling[i] = 1. / (X[:,var] * inv.dot(X[:,var]).T).sum()
else:
pseudo_XTX = np.linalg.inv(X.T.dot(X))[rows]
scaling = np.ones(nactive)
M_active = scaling[:, None] * pseudo_XTX
return M_active
def debiased_lasso_inference(lasso_obj, variables, delta):
"""
Debiased estimate is
.. math::
\hat{\beta}^d = \hat{\beta} - \hat{\theta} \nabla \ell(\hat{\beta})
where $\ell$ is the Gaussian loss and $\hat{\theta}$ is an approximation of the
inverse Hessian at $\hat{\beta}$.
The term on the right is expressible in terms of the inactive gradient
as well as the fixed active subgradient. The left hand term is expressible in
terms of $\bar{\beta}$ the "relaxed" solution and the fixed active subgradient.
We need a covariance for $(\bar{\beta}_M, G_{-M})$.
Parameters
----------
lasso_obj : `selection.algorithms.lasso.lasso`
A lasso object after calling fit() method.
variables : seq
Which variables should we produce p-values / intervals for?
delta : float
Feasibility parameter for estimating row of inverse of Sigma.
"""
if not lasso_obj.ignore_inactive_constraints:
raise ValueError(
'debiased lasso should be fit ignoring inactive constraints as implied covariance between active and inactive score is 0')
# should we check that loglike is gaussian
lasso_soln = lasso_obj.lasso_solution
lasso_active = lasso_soln[lasso_obj.active]
active_list = list(lasso_obj.active)
G = lasso_obj.loglike.smooth_objective(lasso_soln, 'grad')
G_I = G[lasso_obj.inactive]
# this is the fixed part of subgradient
subgrad_term = -G[lasso_obj.active]
# we make new constraints for the Gaussian vector \hat{\beta}_M --
# same covariance as those for \bar{\beta}_M, but the constraints are just on signs,
# not signs after translation
if lasso_obj.active_penalized.sum():
_constraints = constraints(-np.diag(lasso_obj.active_signs)[lasso_obj.active_penalized],
np.zeros(lasso_obj.active_penalized.sum()),
covariance=lasso_obj._constraints.covariance)
_inactive_constraints = lasso_obj._inactive_constraints
# now make a product of the two constraints
# assuming independence -- which is true under
# selected model
_full_linear_part = np.zeros(((_constraints.linear_part.shape[0] +
_inactive_constraints.linear_part.shape[0]),
(_constraints.linear_part.shape[1] +
_inactive_constraints.linear_part.shape[1])))
_full_linear_part[:_constraints.linear_part.shape[0]][:,
:_constraints.linear_part.shape[1]] = _constraints.linear_part
_full_linear_part[_constraints.linear_part.shape[0]:][:,
_constraints.linear_part.shape[1]:] = _inactive_constraints.linear_part
_full_offset = np.zeros(_full_linear_part.shape[0])
_full_offset[:_constraints.linear_part.shape[0]] = _constraints.offset
_full_offset[_constraints.linear_part.shape[0]:] = _inactive_constraints.offset
_full_cov = np.zeros((_full_linear_part.shape[1],
_full_linear_part.shape[1]))
_full_cov[:_constraints.linear_part.shape[1]][:, :_constraints.linear_part.shape[1]] = _constraints.covariance
_full_cov[_constraints.linear_part.shape[1]:][:,
_constraints.linear_part.shape[1]:] = _inactive_constraints.covariance
_full_constraints = constraints(_full_linear_part,
_full_offset,
covariance=_full_cov)
_full_data = np.hstack([lasso_active, G_I])
if not _full_constraints(_full_data):
raise ValueError('constraints not satisfied')
H = lasso_obj.loglike.hessian(lasso_obj.lasso_solution)
H_AA = H[lasso_obj.active][:, lasso_obj.active]
bias_AA = np.linalg.inv(H_AA).dot(subgrad_term)
intervals = []
pvalues = []
approx_inverse = debiasing_matrix(H, variables, delta)
for Midx, var in enumerate(variables):
theta_var = approx_inverse[Midx]
# express target in pair (\hat{\beta}_A, G_I)
eta = np.zeros_like(theta_var)
# XXX should be better way to do this
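# (note: this assumes every requested variable is active; for an
# inactive `var`, `idx` below would be stale or undefined)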
if var in active_list:
idx = active_list.index(var)
eta[idx] = 1.
# inactive coordinates
eta[lasso_active.shape[0]:] = theta_var[lasso_obj.inactive]
theta_active = theta_var[active_list]
# offset term
offset = -bias_AA[idx] + theta_active.dot(subgrad_term)
intervals.append(_full_constraints.interval(eta,
_full_data) + offset)
pvalues.append(_full_constraints.pivot(eta,
_full_data,
null_value=-offset,
alternative='twosided'))
return [(j, p) + tuple(i) for j, p, i in zip(active_list, pvalues, intervals)]
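# Minimal end-to-end sketch (hypothetical setup; `lasso` would come from
# selectinf.algorithms.lasso, which this module does not import):
#
#     L = lasso.gaussian(X, y, weights)
#     L.ignore_inactive_constraints = True
#     L.fit()
#     results = debiased_lasso_inference(L, list(L.active), delta=0.2)
#     # each entry: (variable, p-value, lower, upper)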
|
selective-inference/selective-inference
|
selectinf/algorithms/debiased_lasso.py
|
Python
|
bsd-3-clause
| 12,209
|
[
"Gaussian"
] |
22ed089069ecd4b5f8266ee2790d1b7b51057f09a6cba07743acf59dfbf6da1c
|
#
# Copyright © 2012 - 2021 Michal Čihař <michal@cihar.com>
#
# This file is part of Weblate <https://weblate.org/>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
"""License data definitions.
This is an automatically generated file, see scripts/generate-license-data
"""
LICENSES = (
(
"Glide",
"3dfx Glide License",
"http://www.users.on.net/~triforce/glidexp/COPYING.txt",
False,
),
(
"Abstyles",
"Abstyles License",
"https://fedoraproject.org/wiki/Licensing/Abstyles",
False,
),
(
"AFL-1.1",
"Academic Free License v1.1",
"http://opensource.linux-mirror.org/licenses/afl-1.1.txt",
True,
),
(
"AFL-1.2",
"Academic Free License v1.2",
"http://opensource.linux-mirror.org/licenses/afl-1.2.txt",
True,
),
(
"AFL-2.0",
"Academic Free License v2.0",
"http://wayback.archive.org/web/20060924134533/http://www.opensource.org/licenses/afl-2.0.txt",
True,
),
(
"AFL-2.1",
"Academic Free License v2.1",
"http://opensource.linux-mirror.org/licenses/afl-2.1.txt",
True,
),
(
"AFL-3.0",
"Academic Free License v3.0",
"http://www.rosenlaw.com/AFL3.0.htm",
True,
),
(
"AMPAS",
"Academy of Motion Picture Arts and Sciences BSD",
"https://fedoraproject.org/wiki/Licensing/BSD#AMPASBSD",
False,
),
(
"APL-1.0",
"Adaptive Public License 1.0",
"https://opensource.org/licenses/APL-1.0",
True,
),
(
"Adobe-Glyph",
"Adobe Glyph List License",
"https://fedoraproject.org/wiki/Licensing/MIT#AdobeGlyph",
False,
),
(
"APAFML",
"Adobe Postscript AFM License",
"https://fedoraproject.org/wiki/Licensing/AdobePostscriptAFM",
False,
),
(
"Adobe-2006",
"Adobe Systems Incorporated Source Code License Agreement",
"https://fedoraproject.org/wiki/Licensing/AdobeLicense",
False,
),
(
"AGPL-1.0-only",
"Affero General Public License v1.0 only",
"http://www.affero.org/oagpl.html",
False,
),
(
"AGPL-1.0-or-later",
"Affero General Public License v1.0 or later",
"http://www.affero.org/oagpl.html",
False,
),
(
"Afmparse",
"Afmparse License",
"https://fedoraproject.org/wiki/Licensing/Afmparse",
False,
),
(
"Aladdin",
"Aladdin Free Public License",
"http://pages.cs.wisc.edu/~ghost/doc/AFPL/6.01/Public.htm",
False,
),
(
"ADSL",
"Amazon Digital Services License",
"https://fedoraproject.org/wiki/Licensing/AmazonDigitalServicesLicense",
False,
),
(
"AMDPLPA",
"AMD's plpa_map.c License",
"https://fedoraproject.org/wiki/Licensing/AMD_plpa_map_License",
False,
),
(
"ANTLR-PD",
"ANTLR Software Rights Notice",
"http://www.antlr2.org/license.html",
False,
),
(
"ANTLR-PD-fallback",
"ANTLR Software Rights Notice with license fallback",
"http://www.antlr2.org/license.html",
False,
),
(
"Apache-1.0",
"Apache License 1.0",
"http://www.apache.org/licenses/LICENSE-1.0",
True,
),
(
"Apache-1.1",
"Apache License 1.1",
"http://apache.org/licenses/LICENSE-1.1",
True,
),
(
"Apache-2.0",
"Apache License 2.0",
"http://www.apache.org/licenses/LICENSE-2.0",
True,
),
(
"AML",
"Apple MIT License",
"https://fedoraproject.org/wiki/Licensing/Apple_MIT_License",
False,
),
(
"APSL-1.0",
"Apple Public Source License 1.0",
"https://fedoraproject.org/wiki/Licensing/Apple_Public_Source_License_1.0",
True,
),
(
"APSL-1.1",
"Apple Public Source License 1.1",
"http://www.opensource.apple.com/source/IOSerialFamily/IOSerialFamily-7/APPLE_LICENSE",
True,
),
(
"APSL-1.2",
"Apple Public Source License 1.2",
"http://www.samurajdata.se/opensource/mirror/licenses/apsl.php",
True,
),
(
"APSL-2.0",
"Apple Public Source License 2.0",
"http://www.opensource.apple.com/license/apsl/",
True,
),
(
"Artistic-1.0",
"Artistic License 1.0",
"https://opensource.org/licenses/Artistic-1.0",
True,
),
(
"Artistic-1.0-Perl",
"Artistic License 1.0 (Perl)",
"http://dev.perl.org/licenses/artistic.html",
True,
),
(
"Artistic-1.0-cl8",
"Artistic License 1.0 w/clause 8",
"https://opensource.org/licenses/Artistic-1.0",
True,
),
(
"Artistic-2.0",
"Artistic License 2.0",
"http://www.perlfoundation.org/artistic_license_2_0",
True,
),
(
"AAL",
"Attribution Assurance License",
"https://opensource.org/licenses/attribution",
True,
),
(
"Bahyph",
"Bahyph License",
"https://fedoraproject.org/wiki/Licensing/Bahyph",
False,
),
("Barr", "Barr License", "https://fedoraproject.org/wiki/Licensing/Barr", False),
(
"Beerware",
"Beerware License",
"https://fedoraproject.org/wiki/Licensing/Beerware",
True,
),
(
"BitTorrent-1.0",
"BitTorrent Open Source License v1.0",
"http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/licenses/BitTorrent?r1=1.1&r2=1.1.1.1&diff_format=s",
False,
),
(
"BitTorrent-1.1",
"BitTorrent Open Source License v1.1",
"http://directory.fsf.org/wiki/License:BitTorrentOSL1.1",
True,
),
(
"BlueOak-1.0.0",
"Blue Oak Model License 1.0.0",
"https://blueoakcouncil.org/license/1.0.0",
False,
),
(
"BSL-1.0",
"Boost Software License 1.0",
"http://www.boost.org/LICENSE_1_0.txt",
True,
),
(
"Borceux",
"Borceux license",
"https://fedoraproject.org/wiki/Licensing/Borceux",
False,
),
(
"BSD-1-Clause",
"BSD 1-Clause License",
"https://svnweb.freebsd.org/base/head/include/ifaddrs.h?revision=326823",
True,
),
(
"BSD-2-Clause",
'BSD 2-Clause "Simplified" License',
"https://opensource.org/licenses/BSD-2-Clause",
True,
),
(
"BSD-2-Clause-Views",
"BSD 2-Clause with views sentence",
"http://www.freebsd.org/copyright/freebsd-license.html",
False,
),
(
"BSD-3-Clause",
'BSD 3-Clause "New" or "Revised" License',
"https://opensource.org/licenses/BSD-3-Clause",
True,
),
(
"BSD-3-Clause-Clear",
"BSD 3-Clause Clear License",
"http://labs.metacarta.com/license-explanation.html#license",
True,
),
(
"BSD-3-Clause-Modification",
"BSD 3-Clause Modification",
"https://fedoraproject.org/wiki/Licensing:BSD#Modification_Variant",
False,
),
(
"BSD-3-Clause-No-Military-License",
"BSD 3-Clause No Military License",
"https://gitlab.syncad.com/hive/dhive/-/blob/master/LICENSE",
False,
),
(
"BSD-3-Clause-No-Nuclear-License",
"BSD 3-Clause No Nuclear License",
"http://download.oracle.com/otn-pub/java/licenses/bsd.txt?AuthParam=1467140197_43d516ce1776bd08a58235a7785be1cc",
False,
),
(
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD 3-Clause No Nuclear License 2014",
"https://java.net/projects/javaeetutorial/pages/BerkeleyLicense",
False,
),
(
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD 3-Clause No Nuclear Warranty",
"https://jogamp.org/git/?p=gluegen.git;a=blob_plain;f=LICENSE.txt",
False,
),
(
"BSD-3-Clause-Open-MPI",
"BSD 3-Clause Open MPI variant",
"https://www.open-mpi.org/community/license.php",
False,
),
(
"BSD-4-Clause-Shortened",
"BSD 4 Clause Shortened",
"https://metadata.ftp-master.debian.org/changelogs//main/a/arpwatch/arpwatch_2.1a15-7_copyright",
False,
),
(
"BSD-4-Clause",
'BSD 4-Clause "Original" or "Old" License',
"http://directory.fsf.org/wiki/License:BSD_4Clause",
True,
),
(
"BSD-Protection",
"BSD Protection License",
"https://fedoraproject.org/wiki/Licensing/BSD_Protection_License",
False,
),
(
"BSD-Source-Code",
"BSD Source Code Attribution",
"https://github.com/robbiehanson/CocoaHTTPServer/blob/master/LICENSE.txt",
False,
),
(
"BSD-3-Clause-Attribution",
"BSD with attribution",
"https://fedoraproject.org/wiki/Licensing/BSD_with_Attribution",
False,
),
("0BSD", "BSD Zero Clause License", "http://landley.net/toybox/license.html", True),
(
"BSD-2-Clause-Patent",
"BSD-2-Clause Plus Patent License",
"https://opensource.org/licenses/BSDplusPatent",
True,
),
(
"BSD-4-Clause-UC",
"BSD-4-Clause (University of California-Specific)",
"http://www.freebsd.org/copyright/license.html",
False,
),
("BUSL-1.1", "Business Source License 1.1", "https://mariadb.com/bsl11/", False),
(
"bzip2-1.0.5",
"bzip2 and libbzip2 License v1.0.5",
"https://sourceware.org/bzip2/1.0.5/bzip2-manual-1.0.5.html",
False,
),
(
"bzip2-1.0.6",
"bzip2 and libbzip2 License v1.0.6",
"https://sourceware.org/git/?p=bzip2.git;a=blob;f=LICENSE;hb=bzip2-1.0.6",
False,
),
(
"Caldera",
"Caldera License",
"http://www.lemis.com/grog/UNIX/ancient-source-all.pdf",
False,
),
(
"CECILL-1.0",
"CeCILL Free Software License Agreement v1.0",
"http://www.cecill.info/licences/Licence_CeCILL_V1-fr.html",
False,
),
(
"CECILL-1.1",
"CeCILL Free Software License Agreement v1.1",
"http://www.cecill.info/licences/Licence_CeCILL_V1.1-US.html",
False,
),
(
"CECILL-2.0",
"CeCILL Free Software License Agreement v2.0",
"http://www.cecill.info/licences/Licence_CeCILL_V2-en.html",
True,
),
(
"CECILL-2.1",
"CeCILL Free Software License Agreement v2.1",
"http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.html",
True,
),
(
"CECILL-B",
"CeCILL-B Free Software License Agreement",
"http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.html",
True,
),
(
"CECILL-C",
"CeCILL-C Free Software License Agreement",
"http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html",
True,
),
(
"CERN-OHL-1.1",
"CERN Open Hardware Licence v1.1",
"https://www.ohwr.org/project/licenses/wikis/cern-ohl-v1.1",
False,
),
(
"CERN-OHL-1.2",
"CERN Open Hardware Licence v1.2",
"https://www.ohwr.org/project/licenses/wikis/cern-ohl-v1.2",
False,
),
(
"CERN-OHL-P-2.0",
"CERN Open Hardware Licence Version 2 - Permissive",
"https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2",
True,
),
(
"CERN-OHL-S-2.0",
"CERN Open Hardware Licence Version 2 - Strongly Reciprocal",
"https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2",
True,
),
(
"CERN-OHL-W-2.0",
"CERN Open Hardware Licence Version 2 - Weakly Reciprocal",
"https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2",
True,
),
(
"ClArtistic",
"Clarified Artistic License",
"http://gianluca.dellavedova.org/2011/01/03/clarified-artistic-license/",
True,
),
(
"MIT-CMU",
"CMU License",
"https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT#CMU_Style",
False,
),
("CNRI-Jython", "CNRI Jython License", "http://www.jython.org/license.html", False),
(
"CNRI-Python",
"CNRI Python License",
"https://opensource.org/licenses/CNRI-Python",
True,
),
(
"CNRI-Python-GPL-Compatible",
"CNRI Python Open Source GPL Compatible License Agreement",
"http://www.python.org/download/releases/1.6.1/download_win/",
False,
),
(
"CPOL-1.02",
"Code Project Open License 1.02",
"http://www.codeproject.com/info/cpol10.aspx",
False,
),
(
"CDDL-1.0",
"Common Development and Distribution License 1.0",
"https://opensource.org/licenses/cddl1",
True,
),
(
"CDDL-1.1",
"Common Development and Distribution License 1.1",
"http://glassfish.java.net/public/CDDL+GPL_1_1.html",
False,
),
(
"CDL-1.0",
"Common Documentation License 1.0",
"http://www.opensource.apple.com/cdl/",
False,
),
(
"CPAL-1.0",
"Common Public Attribution License 1.0",
"https://opensource.org/licenses/CPAL-1.0",
True,
),
(
"CPL-1.0",
"Common Public License 1.0",
"https://opensource.org/licenses/CPL-1.0",
True,
),
(
"CDLA-Permissive-1.0",
"Community Data License Agreement Permissive 1.0",
"https://cdla.io/permissive-1-0",
False,
),
(
"CDLA-Sharing-1.0",
"Community Data License Agreement Sharing 1.0",
"https://cdla.io/sharing-1-0",
False,
),
(
"C-UDA-1.0",
"Computational Use of Data Agreement v1.0",
"https://github.com/microsoft/Computational-Use-of-Data-Agreement/blob/master/C-UDA-1.0.md",
False,
),
(
"CATOSL-1.1",
"Computer Associates Trusted Open Source License 1.1",
"https://opensource.org/licenses/CATOSL-1.1",
True,
),
(
"Condor-1.1",
"Condor Public License v1.1",
"http://research.cs.wisc.edu/condor/license.html#condor",
True,
),
(
"copyleft-next-0.3.0",
"copyleft-next 0.3.0",
"https://github.com/copyleft-next/copyleft-next/blob/master/Releases/copyleft-next-0.3.0",
False,
),
(
"copyleft-next-0.3.1",
"copyleft-next 0.3.1",
"https://github.com/copyleft-next/copyleft-next/blob/master/Releases/copyleft-next-0.3.1",
False,
),
(
"CC-BY-1.0",
"Creative Commons Attribution 1.0 Generic",
"https://creativecommons.org/licenses/by/1.0/legalcode",
False,
),
(
"CC-BY-2.0",
"Creative Commons Attribution 2.0 Generic",
"https://creativecommons.org/licenses/by/2.0/legalcode",
False,
),
(
"CC-BY-2.5-AU",
"Creative Commons Attribution 2.5 Australia",
"https://creativecommons.org/licenses/by/2.5/au/legalcode",
False,
),
(
"CC-BY-2.5",
"Creative Commons Attribution 2.5 Generic",
"https://creativecommons.org/licenses/by/2.5/legalcode",
False,
),
(
"CC-BY-3.0-AT",
"Creative Commons Attribution 3.0 Austria",
"https://creativecommons.org/licenses/by/3.0/at/legalcode",
False,
),
(
"CC-BY-3.0-US",
"Creative Commons Attribution 3.0 United States",
"https://creativecommons.org/licenses/by/3.0/us/legalcode",
False,
),
(
"CC-BY-3.0",
"Creative Commons Attribution 3.0 Unported",
"https://creativecommons.org/licenses/by/3.0/legalcode",
True,
),
(
"CC-BY-4.0",
"Creative Commons Attribution 4.0 International",
"https://creativecommons.org/licenses/by/4.0/legalcode",
True,
),
(
"CC-BY-ND-1.0",
"Creative Commons Attribution No Derivatives 1.0 Generic",
"https://creativecommons.org/licenses/by-nd/1.0/legalcode",
False,
),
(
"CC-BY-ND-2.0",
"Creative Commons Attribution No Derivatives 2.0 Generic",
"https://creativecommons.org/licenses/by-nd/2.0/legalcode",
False,
),
(
"CC-BY-ND-2.5",
"Creative Commons Attribution No Derivatives 2.5 Generic",
"https://creativecommons.org/licenses/by-nd/2.5/legalcode",
False,
),
(
"CC-BY-ND-3.0",
"Creative Commons Attribution No Derivatives 3.0 Unported",
"https://creativecommons.org/licenses/by-nd/3.0/legalcode",
False,
),
(
"CC-BY-ND-4.0",
"Creative Commons Attribution No Derivatives 4.0 International",
"https://creativecommons.org/licenses/by-nd/4.0/legalcode",
False,
),
(
"CC-BY-NC-1.0",
"Creative Commons Attribution Non Commercial 1.0 Generic",
"https://creativecommons.org/licenses/by-nc/1.0/legalcode",
False,
),
(
"CC-BY-NC-2.0",
"Creative Commons Attribution Non Commercial 2.0 Generic",
"https://creativecommons.org/licenses/by-nc/2.0/legalcode",
False,
),
(
"CC-BY-NC-2.5",
"Creative Commons Attribution Non Commercial 2.5 Generic",
"https://creativecommons.org/licenses/by-nc/2.5/legalcode",
False,
),
(
"CC-BY-NC-3.0",
"Creative Commons Attribution Non Commercial 3.0 Unported",
"https://creativecommons.org/licenses/by-nc/3.0/legalcode",
False,
),
(
"CC-BY-NC-4.0",
"Creative Commons Attribution Non Commercial 4.0 International",
"https://creativecommons.org/licenses/by-nc/4.0/legalcode",
False,
),
(
"CC-BY-NC-ND-1.0",
"Creative Commons Attribution Non Commercial No Derivatives 1.0 Generic",
"https://creativecommons.org/licenses/by-nd-nc/1.0/legalcode",
False,
),
(
"CC-BY-NC-ND-2.0",
"Creative Commons Attribution Non Commercial No Derivatives 2.0 Generic",
"https://creativecommons.org/licenses/by-nc-nd/2.0/legalcode",
False,
),
(
"CC-BY-NC-ND-2.5",
"Creative Commons Attribution Non Commercial No Derivatives 2.5 Generic",
"https://creativecommons.org/licenses/by-nc-nd/2.5/legalcode",
False,
),
(
"CC-BY-NC-ND-3.0-IGO",
"Creative Commons Attribution Non Commercial No Derivatives 3.0 IGO",
"https://creativecommons.org/licenses/by-nc-nd/3.0/igo/legalcode",
False,
),
(
"CC-BY-NC-ND-3.0",
"Creative Commons Attribution Non Commercial No Derivatives 3.0 Unported",
"https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode",
False,
),
(
"CC-BY-NC-ND-4.0",
"Creative Commons Attribution Non Commercial No Derivatives 4.0 International",
"https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode",
False,
),
(
"CC-BY-NC-SA-1.0",
"Creative Commons Attribution Non Commercial Share Alike 1.0 Generic",
"https://creativecommons.org/licenses/by-nc-sa/1.0/legalcode",
False,
),
(
"CC-BY-NC-SA-2.0",
"Creative Commons Attribution Non Commercial Share Alike 2.0 Generic",
"https://creativecommons.org/licenses/by-nc-sa/2.0/legalcode",
False,
),
(
"CC-BY-NC-SA-2.5",
"Creative Commons Attribution Non Commercial Share Alike 2.5 Generic",
"https://creativecommons.org/licenses/by-nc-sa/2.5/legalcode",
False,
),
(
"CC-BY-NC-SA-3.0",
"Creative Commons Attribution Non Commercial Share Alike 3.0 Unported",
"https://creativecommons.org/licenses/by-nc-sa/3.0/legalcode",
False,
),
(
"CC-BY-NC-SA-4.0",
"Creative Commons Attribution Non Commercial Share Alike 4.0 International",
"https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode",
False,
),
(
"CC-BY-SA-1.0",
"Creative Commons Attribution Share Alike 1.0 Generic",
"https://creativecommons.org/licenses/by-sa/1.0/legalcode",
False,
),
(
"CC-BY-SA-2.0-UK",
"Creative Commons Attribution Share Alike 2.0 England and Wales",
"https://creativecommons.org/licenses/by-sa/2.0/uk/legalcode",
False,
),
(
"CC-BY-SA-2.0",
"Creative Commons Attribution Share Alike 2.0 Generic",
"https://creativecommons.org/licenses/by-sa/2.0/legalcode",
False,
),
(
"CC-BY-SA-2.1-JP",
"Creative Commons Attribution Share Alike 2.1 Japan",
"https://creativecommons.org/licenses/by-sa/2.1/jp/legalcode",
False,
),
(
"CC-BY-SA-2.5",
"Creative Commons Attribution Share Alike 2.5 Generic",
"https://creativecommons.org/licenses/by-sa/2.5/legalcode",
False,
),
(
"CC-BY-SA-3.0-AT",
"Creative Commons Attribution Share Alike 3.0 Austria",
"https://creativecommons.org/licenses/by-sa/3.0/at/legalcode",
False,
),
(
"CC-BY-SA-3.0",
"Creative Commons Attribution Share Alike 3.0 Unported",
"https://creativecommons.org/licenses/by-sa/3.0/legalcode",
True,
),
(
"CC-BY-SA-4.0",
"Creative Commons Attribution Share Alike 4.0 International",
"https://creativecommons.org/licenses/by-sa/4.0/legalcode",
True,
),
(
"CC-PDDC",
"Creative Commons Public Domain Dedication and Certification",
"https://creativecommons.org/licenses/publicdomain/",
False,
),
(
"CC0-1.0",
"Creative Commons Zero v1.0 Universal",
"https://creativecommons.org/publicdomain/zero/1.0/legalcode",
True,
),
(
"Crossword",
"Crossword License",
"https://fedoraproject.org/wiki/Licensing/Crossword",
False,
),
(
"CAL-1.0",
"Cryptographic Autonomy License 1.0",
"http://cryptographicautonomylicense.com/license-text.html",
True,
),
(
"CAL-1.0-Combined-Work-Exception",
"Cryptographic Autonomy License 1.0 (Combined Work Exception)",
"http://cryptographicautonomylicense.com/license-text.html",
True,
),
(
"CrystalStacker",
"CrystalStacker License",
"https://fedoraproject.org/wiki/Licensing:CrystalStacker?rd=Licensing/CrystalStacker",
False,
),
(
"CUA-OPL-1.0",
"CUA Office Public License v1.0",
"https://opensource.org/licenses/CUA-OPL-1.0",
True,
),
("Cube", "Cube License", "https://fedoraproject.org/wiki/Licensing/Cube", False),
(
"curl",
"curl License",
"https://github.com/bagder/curl/blob/master/COPYING",
False,
),
(
"DRL-1.0",
"Detection Rule License 1.0",
"https://github.com/Neo23x0/sigma/blob/master/LICENSE.Detection.Rules.md",
False,
),
(
"D-FSL-1.0",
"Deutsche Freie Software Lizenz",
"http://www.dipp.nrw.de/d-fsl/lizenzen/",
False,
),
(
"diffmark",
"diffmark license",
"https://fedoraproject.org/wiki/Licensing/diffmark",
False,
),
(
"WTFPL",
"Do What The F*ck You Want To Public License",
"http://www.wtfpl.net/about/",
True,
),
("DOC", "DOC License", "http://www.cs.wustl.edu/~schmidt/ACE-copying.html", False),
(
"Dotseqn",
"Dotseqn License",
"https://fedoraproject.org/wiki/Licensing/Dotseqn",
False,
),
("DSDP", "DSDP License", "https://fedoraproject.org/wiki/Licensing/DSDP", False),
(
"dvipdfm",
"dvipdfm License",
"https://fedoraproject.org/wiki/Licensing/dvipdfm",
False,
),
(
"EPL-1.0",
"Eclipse Public License 1.0",
"http://www.eclipse.org/legal/epl-v10.html",
True,
),
(
"EPL-2.0",
"Eclipse Public License 2.0",
"https://www.eclipse.org/legal/epl-2.0",
True,
),
(
"ECL-1.0",
"Educational Community License v1.0",
"https://opensource.org/licenses/ECL-1.0",
True,
),
(
"ECL-2.0",
"Educational Community License v2.0",
"https://opensource.org/licenses/ECL-2.0",
True,
),
(
"eGenix",
"eGenix.com Public License 1.1.0",
"http://www.egenix.com/products/eGenix.com-Public-License-1.1.0.pdf",
False,
),
(
"EFL-1.0",
"Eiffel Forum License v1.0",
"http://www.eiffel-nice.org/license/forum.txt",
True,
),
(
"EFL-2.0",
"Eiffel Forum License v2.0",
"http://www.eiffel-nice.org/license/eiffel-forum-license-2.html",
True,
),
(
"MIT-advertising",
"Enlightenment License (e16)",
"https://fedoraproject.org/wiki/Licensing/MIT_With_Advertising",
False,
),
(
"MIT-enna",
"enna License",
"https://fedoraproject.org/wiki/Licensing/MIT#enna",
False,
),
(
"Entessa",
"Entessa Public License v1.0",
"https://opensource.org/licenses/Entessa",
True,
),
("EPICS", "EPICS Open License", "https://epics.anl.gov/license/open.php", False),
(
"ErlPL-1.1",
"Erlang Public License v1.1",
"http://www.erlang.org/EPLICENSE",
False,
),
(
"etalab-2.0",
"Etalab Open License 2.0",
"https://github.com/DISIC/politique-de-contribution-open-source/blob/master/LICENSE.pdf",
False,
),
(
"EUDatagrid",
"EU DataGrid Software License",
"http://eu-datagrid.web.cern.ch/eu-datagrid/license.html",
True,
),
(
"EUPL-1.0",
"European Union Public License 1.0",
"http://ec.europa.eu/idabc/en/document/7330.html",
False,
),
(
"EUPL-1.1",
"European Union Public License 1.1",
"https://joinup.ec.europa.eu/software/page/eupl/licence-eupl",
True,
),
(
"EUPL-1.2",
"European Union Public License 1.2",
"https://joinup.ec.europa.eu/page/eupl-text-11-12",
True,
),
(
"Eurosym",
"Eurosym License",
"https://fedoraproject.org/wiki/Licensing/Eurosym",
False,
),
("Fair", "Fair License", "http://fairlicense.org/", True),
(
"MIT-feh",
"feh License",
"https://fedoraproject.org/wiki/Licensing/MIT#feh",
False,
),
(
"Frameworx-1.0",
"Frameworx Open License 1.0",
"https://opensource.org/licenses/Frameworx-1.0",
True,
),
(
"FreeBSD-DOC",
"FreeBSD Documentation License",
"https://www.freebsd.org/copyright/freebsd-doc-license/",
False,
),
(
"FreeImage",
"FreeImage Public License v1.0",
"http://freeimage.sourceforge.net/freeimage-license.txt",
False,
),
(
"FTL",
"Freetype Project License",
"http://freetype.fis.uniroma2.it/FTL.TXT",
True,
),
(
"FSFAP",
"FSF All Permissive License",
"https://www.gnu.org/prep/maintain/html_node/License-Notices-for-Other-Files.html",
True,
),
(
"FSFUL",
"FSF Unlimited License",
"https://fedoraproject.org/wiki/Licensing/FSF_Unlimited_License",
False,
),
(
"FSFULLR",
"FSF Unlimited License (with License Retention)",
"https://fedoraproject.org/wiki/Licensing/FSF_Unlimited_License#License_Retention_Variant",
False,
),
(
"GD",
"GD License",
"https://libgd.github.io/manuals/2.3.0/files/license-txt.html",
False,
),
(
"Giftware",
"Giftware License",
"http://liballeg.org/license.html#allegro-4-the-giftware-license",
False,
),
("GL2PS", "GL2PS License", "http://www.geuz.org/gl2ps/COPYING.GL2PS", False),
(
"Glulxe",
"Glulxe License",
"https://fedoraproject.org/wiki/Licensing/Glulxe",
False,
),
(
"AGPL-3.0-only",
"GNU Affero General Public License v3.0 only",
"https://www.gnu.org/licenses/agpl.txt",
True,
),
(
"AGPL-3.0-or-later",
"GNU Affero General Public License v3.0 or later",
"https://www.gnu.org/licenses/agpl.txt",
True,
),
(
"GFDL-1.1-only",
"GNU Free Documentation License v1.1 only",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
True,
),
(
"GFDL-1.1-invariants-only",
"GNU Free Documentation License v1.1 only - invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
False,
),
(
"GFDL-1.1-no-invariants-only",
"GNU Free Documentation License v1.1 only - no invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
False,
),
(
"GFDL-1.1-or-later",
"GNU Free Documentation License v1.1 or later",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
True,
),
(
"GFDL-1.1-invariants-or-later",
"GNU Free Documentation License v1.1 or later - invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
False,
),
(
"GFDL-1.1-no-invariants-or-later",
"GNU Free Documentation License v1.1 or later - no invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt",
False,
),
(
"GFDL-1.2-only",
"GNU Free Documentation License v1.2 only",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
True,
),
(
"GFDL-1.2-invariants-only",
"GNU Free Documentation License v1.2 only - invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
False,
),
(
"GFDL-1.2-no-invariants-only",
"GNU Free Documentation License v1.2 only - no invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
False,
),
(
"GFDL-1.2-or-later",
"GNU Free Documentation License v1.2 or later",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
True,
),
(
"GFDL-1.2-invariants-or-later",
"GNU Free Documentation License v1.2 or later - invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
False,
),
(
"GFDL-1.2-no-invariants-or-later",
"GNU Free Documentation License v1.2 or later - no invariants",
"https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt",
False,
),
(
"GFDL-1.3-only",
"GNU Free Documentation License v1.3 only",
"https://www.gnu.org/licenses/fdl-1.3.txt",
True,
),
(
"GFDL-1.3-invariants-only",
"GNU Free Documentation License v1.3 only - invariants",
"https://www.gnu.org/licenses/fdl-1.3.txt",
False,
),
(
"GFDL-1.3-no-invariants-only",
"GNU Free Documentation License v1.3 only - no invariants",
"https://www.gnu.org/licenses/fdl-1.3.txt",
False,
),
(
"GFDL-1.3-or-later",
"GNU Free Documentation License v1.3 or later",
"https://www.gnu.org/licenses/fdl-1.3.txt",
True,
),
(
"GFDL-1.3-invariants-or-later",
"GNU Free Documentation License v1.3 or later - invariants",
"https://www.gnu.org/licenses/fdl-1.3.txt",
False,
),
(
"GFDL-1.3-no-invariants-or-later",
"GNU Free Documentation License v1.3 or later - no invariants",
"https://www.gnu.org/licenses/fdl-1.3.txt",
False,
),
(
"GPL-1.0-only",
"GNU General Public License v1.0 only",
"https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html",
False,
),
(
"GPL-1.0-or-later",
"GNU General Public License v1.0 or later",
"https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html",
False,
),
(
"GPL-2.0-only",
"GNU General Public License v2.0 only",
"https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
True,
),
(
"GPL-2.0-or-later",
"GNU General Public License v2.0 or later",
"https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
True,
),
(
"GPL-3.0-only",
"GNU General Public License v3.0 only",
"https://www.gnu.org/licenses/gpl-3.0-standalone.html",
True,
),
(
"GPL-3.0-or-later",
"GNU General Public License v3.0 or later",
"https://www.gnu.org/licenses/gpl-3.0-standalone.html",
True,
),
(
"LGPL-2.1-only",
"GNU Lesser General Public License v2.1 only",
"https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
True,
),
(
"LGPL-2.1-or-later",
"GNU Lesser General Public License v2.1 or later",
"https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
True,
),
(
"LGPL-3.0-only",
"GNU Lesser General Public License v3.0 only",
"https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
True,
),
(
"LGPL-3.0-or-later",
"GNU Lesser General Public License v3.0 or later",
"https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
True,
),
(
"LGPL-2.0-only",
"GNU Library General Public License v2 only",
"https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html",
True,
),
(
"LGPL-2.0-or-later",
"GNU Library General Public License v2 or later",
"https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html",
True,
),
(
"gnuplot",
"gnuplot License",
"https://fedoraproject.org/wiki/Licensing/Gnuplot",
True,
),
(
"GLWTPL",
"Good Luck With That Public License",
"https://github.com/me-shaon/GLWTPL/commit/da5f6bc734095efbacb442c0b31e33a65b9d6e85",
False,
),
(
"gSOAP-1.3b",
"gSOAP Public License v1.3b",
"http://www.cs.fsu.edu/~engelen/license.html",
False,
),
(
"HaskellReport",
"Haskell Language Report License",
"https://fedoraproject.org/wiki/Licensing/Haskell_Language_Report_License",
False,
),
(
"Hippocratic-2.1",
"Hippocratic License 2.1",
"https://firstdonoharm.dev/version/2/1/license.html",
False,
),
(
"HPND",
"Historical Permission Notice and Disclaimer",
"https://opensource.org/licenses/HPND",
True,
),
(
"HPND-sell-variant",
"Historical Permission Notice and Disclaimer - sell variant",
"https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/sunrpc/auth_gss/gss_generic_token.c?h=v4.19",
False,
),
(
"HTMLTIDY",
"HTML Tidy License",
"https://github.com/htacg/tidy-html5/blob/next/README/LICENSE.md",
False,
),
(
"IBM-pibs",
"IBM PowerPC Initialization and Boot Software",
"http://git.denx.de/?p=u-boot.git;a=blob;f=arch/powerpc/cpu/ppc4xx/miiphy.c;h=297155fdafa064b955e53e9832de93bfb0cfb85b;hb=9fab4bf4cc077c21e43941866f3f2c196f28670d",
False,
),
(
"IPL-1.0",
"IBM Public License v1.0",
"https://opensource.org/licenses/IPL-1.0",
True,
),
(
"ICU",
"ICU License",
"http://source.icu-project.org/repos/icu/icu/trunk/license.html",
False,
),
(
"ImageMagick",
"ImageMagick License",
"http://www.imagemagick.org/script/license.php",
False,
),
(
"iMatix",
"iMatix Standard Function Library Agreement",
"http://legacy.imatix.com/html/sfl/sfl4.htm#license",
True,
),
(
"Imlib2",
"Imlib2 License",
"http://trac.enlightenment.org/e/browser/trunk/imlib2/COPYING",
True,
),
(
"IJG",
"Independent JPEG Group License",
"http://dev.w3.org/cvsweb/Amaya/libjpeg/Attic/README?rev=1.2",
True,
),
("Info-ZIP", "Info-ZIP License", "http://www.info-zip.org/license.html", False),
(
"Intel-ACPI",
"Intel ACPI Software License Agreement",
"https://fedoraproject.org/wiki/Licensing/Intel_ACPI_Software_License_Agreement",
False,
),
(
"Intel",
"Intel Open Source License",
"https://opensource.org/licenses/Intel",
True,
),
(
"Interbase-1.0",
"Interbase Public License v1.0",
"https://web.archive.org/web/20060319014854/http://info.borland.com/devsupport/interbase/opensource/IPL.html",
False,
),
("IPA", "IPA Font License", "https://opensource.org/licenses/IPA", True),
(
"ISC",
"ISC License",
"https://www.isc.org/downloads/software-support-policy/isc-license/",
True,
),
(
"JPNIC",
"Japan Network Information Center License",
"https://gitlab.isc.org/isc-projects/bind9/blob/master/COPYRIGHT#L366",
False,
),
(
"JasPer-2.0",
"JasPer License",
"http://www.ece.uvic.ca/~mdadams/jasper/LICENSE",
False,
),
("JSON", "JSON License", "http://www.json.org/license.html", False),
(
"LPPL-1.0",
"LaTeX Project Public License v1.0",
"http://www.latex-project.org/lppl/lppl-1-0.txt",
False,
),
(
"LPPL-1.1",
"LaTeX Project Public License v1.1",
"http://www.latex-project.org/lppl/lppl-1-1.txt",
False,
),
(
"LPPL-1.2",
"LaTeX Project Public License v1.2",
"http://www.latex-project.org/lppl/lppl-1-2.txt",
True,
),
(
"LPPL-1.3a",
"LaTeX Project Public License v1.3a",
"http://www.latex-project.org/lppl/lppl-1-3a.txt",
True,
),
(
"LPPL-1.3c",
"LaTeX Project Public License v1.3c",
"http://www.latex-project.org/lppl/lppl-1-3c.txt",
True,
),
(
"Latex2e",
"Latex2e License",
"https://fedoraproject.org/wiki/Licensing/Latex2e",
False,
),
(
"BSD-3-Clause-LBNL",
"Lawrence Berkeley National Labs BSD variant license",
"https://fedoraproject.org/wiki/Licensing/LBNLBSD",
True,
),
(
"Leptonica",
"Leptonica License",
"https://fedoraproject.org/wiki/Licensing/Leptonica",
False,
),
(
"LGPLLR",
"Lesser General Public License For Linguistic Resources",
"http://www-igm.univ-mlv.fr/~unitex/lgpllr.html",
False,
),
(
"Libpng",
"libpng License",
"http://www.libpng.org/pub/png/src/libpng-LICENSE.txt",
False,
),
(
"libselinux-1.0",
"libselinux public domain notice",
"https://github.com/SELinuxProject/selinux/blob/master/libselinux/LICENSE",
False,
),
(
"libtiff",
"libtiff License",
"https://fedoraproject.org/wiki/Licensing/libtiff",
False,
),
(
"LAL-1.2",
"Licence Art Libre 1.2",
"http://artlibre.org/licence/lal/licence-art-libre-12/",
False,
),
("LAL-1.3", "Licence Art Libre 1.3", "https://artlibre.org/", False),
(
"LiLiQ-P-1.1",
"Licence Libre du Québec – Permissive version 1.1",
"https://forge.gouv.qc.ca/licence/fr/liliq-v1-1/",
True,
),
(
"LiLiQ-Rplus-1.1",
"Licence Libre du Québec – Réciprocité forte version 1.1",
"https://www.forge.gouv.qc.ca/participez/licence-logicielle/licence-libre-du-quebec-liliq-en-francais/licence-libre-du-quebec-reciprocite-forte-liliq-r-v1-1/",
True,
),
(
"LiLiQ-R-1.1",
"Licence Libre du Québec – Réciprocité version 1.1",
"https://www.forge.gouv.qc.ca/participez/licence-logicielle/licence-libre-du-quebec-liliq-en-francais/licence-libre-du-quebec-reciprocite-liliq-r-v1-1/",
True,
),
(
"Linux-OpenIB",
"Linux Kernel Variant of OpenIB.org license",
"https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/infiniband/core/sa.h",
False,
),
(
"LPL-1.02",
"Lucent Public License v1.02",
"http://plan9.bell-labs.com/plan9/license.html",
True,
),
(
"LPL-1.0",
"Lucent Public License Version 1.0",
"https://opensource.org/licenses/LPL-1.0",
True,
),
(
"MakeIndex",
"MakeIndex License",
"https://fedoraproject.org/wiki/Licensing/MakeIndex",
False,
),
(
"MTLL",
"Matrix Template Library License",
"https://fedoraproject.org/wiki/Licensing/Matrix_Template_Library_License",
False,
),
(
"MS-PL",
"Microsoft Public License",
"http://www.microsoft.com/opensource/licenses.mspx",
True,
),
(
"MS-RL",
"Microsoft Reciprocal License",
"http://www.microsoft.com/opensource/licenses.mspx",
True,
),
(
"MITNFA",
"MIT +no-false-attribs license",
"https://fedoraproject.org/wiki/Licensing/MITNFA",
False,
),
("MIT", "MIT License", "https://opensource.org/licenses/MIT", True),
(
"MIT-Modern-Variant",
"MIT License Modern Variant",
"https://fedoraproject.org/wiki/Licensing:MIT#Modern_Variants",
True,
),
("MIT-0", "MIT No Attribution", "https://github.com/aws/mit-0", True),
(
"MIT-open-group",
"MIT Open Group variant",
"https://gitlab.freedesktop.org/xorg/app/iceauth/-/blob/master/COPYING",
False,
),
("Motosoto", "Motosoto License", "https://opensource.org/licenses/Motosoto", True),
(
"MPL-1.0",
"Mozilla Public License 1.0",
"http://www.mozilla.org/MPL/MPL-1.0.html",
True,
),
(
"MPL-1.1",
"Mozilla Public License 1.1",
"http://www.mozilla.org/MPL/MPL-1.1.html",
True,
),
("MPL-2.0", "Mozilla Public License 2.0", "http://www.mozilla.org/MPL/2.0/", True),
(
"MPL-2.0-no-copyleft-exception",
"Mozilla Public License 2.0 (no copyleft exception)",
"http://www.mozilla.org/MPL/2.0/",
True,
),
("mpich2", "mpich2 License", "https://fedoraproject.org/wiki/Licensing/MIT", False),
(
"MulanPSL-1.0",
"Mulan Permissive Software License, Version 1",
"https://license.coscl.org.cn/MulanPSL/",
False,
),
(
"MulanPSL-2.0",
"Mulan Permissive Software License, Version 2",
"https://license.coscl.org.cn/MulanPSL2/",
True,
),
("Multics", "Multics License", "https://opensource.org/licenses/Multics", True),
("Mup", "Mup License", "https://fedoraproject.org/wiki/Licensing/Mup", False),
(
"NAIST-2003",
"Nara Institute of Science and Technology License (2003)",
"https://enterprise.dejacode.com/licenses/public/naist-2003/#license-text",
False,
),
(
"NASA-1.3",
"NASA Open Source Agreement 1.3",
"http://ti.arc.nasa.gov/opensource/nosa/",
True,
),
("Naumen", "Naumen Public License", "https://opensource.org/licenses/Naumen", True),
(
"NBPL-1.0",
"Net Boolean Public License v1",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=37b4b3f6cc4bf34e1d3dec61e69914b9819d8894",
False,
),
(
"Net-SNMP",
"Net-SNMP License",
"http://net-snmp.sourceforge.net/about/license.html",
False,
),
(
"NetCDF",
"NetCDF license",
"http://www.unidata.ucar.edu/software/netcdf/copyright.html",
False,
),
(
"NGPL",
"Nethack General Public License",
"https://opensource.org/licenses/NGPL",
True,
),
(
"NOSL",
"Netizen Open Source License",
"http://bits.netizen.com.au/licenses/NOSL/nosl.txt",
True,
),
(
"NPL-1.0",
"Netscape Public License v1.0",
"http://www.mozilla.org/MPL/NPL/1.0/",
True,
),
(
"NPL-1.1",
"Netscape Public License v1.1",
"http://www.mozilla.org/MPL/NPL/1.1/",
True,
),
(
"Newsletr",
"Newsletr License",
"https://fedoraproject.org/wiki/Licensing/Newsletr",
False,
),
(
"NIST-PD",
"NIST Public Domain Notice",
"https://github.com/tcheneau/simpleRPL/blob/e645e69e38dd4e3ccfeceb2db8cba05b7c2e0cd3/LICENSE.txt",
False,
),
(
"NIST-PD-fallback",
"NIST Public Domain Notice with license fallback",
"https://github.com/usnistgov/jsip/blob/59700e6926cbe96c5cdae897d9a7d2656b42abe3/LICENSE",
False,
),
(
"NLPL",
"No Limit Public License",
"https://fedoraproject.org/wiki/Licensing/NLPL",
False,
),
(
"Nokia",
"Nokia Open Source License",
"https://opensource.org/licenses/nokia",
True,
),
(
"NCGL-UK-2.0",
"Non-Commercial Government Licence",
"http://www.nationalarchives.gov.uk/doc/non-commercial-government-licence/version/2/",
False,
),
(
"NPOSL-3.0",
"Non-Profit Open Software License 3.0",
"https://opensource.org/licenses/NOSL3.0",
True,
),
(
"NLOD-1.0",
"Norwegian Licence for Open Government Data",
"http://data.norge.no/nlod/en/1.0",
False,
),
("Noweb", "Noweb License", "https://fedoraproject.org/wiki/Licensing/Noweb", False),
("NRL", "NRL License", "http://web.mit.edu/network/isakmp/nrllicense.html", False),
("NTP", "NTP License", "https://opensource.org/licenses/NTP", True),
(
"NTP-0",
"NTP No Attribution",
"https://github.com/tytso/e2fsprogs/blob/master/lib/et/et_name.c",
False,
),
(
"OCLC-2.0",
"OCLC Research Public License 2.0",
"http://www.oclc.org/research/activities/software/license/v2final.htm",
True,
),
(
"OGC-1.0",
"OGC Software License, Version 1.0",
"https://www.ogc.org/ogc/software/1.0",
False,
),
(
"OCCT-PL",
"Open CASCADE Technology Public License",
"http://www.opencascade.com/content/occt-public-license",
False,
),
(
"ODC-By-1.0",
"Open Data Commons Attribution License v1.0",
"https://opendatacommons.org/licenses/by/1.0/",
False,
),
(
"ODbL-1.0",
"Open Data Commons Open Database License v1.0",
"http://www.opendatacommons.org/licenses/odbl/1.0/",
True,
),
(
"PDDL-1.0",
"Open Data Commons Public Domain Dedication & License 1.0",
"http://opendatacommons.org/licenses/pddl/1.0/",
False,
),
(
"OGL-Canada-2.0",
"Open Government Licence - Canada",
"https://open.canada.ca/en/open-government-licence-canada",
False,
),
(
"OGL-UK-1.0",
"Open Government Licence v1.0",
"http://www.nationalarchives.gov.uk/doc/open-government-licence/version/1/",
False,
),
(
"OGL-UK-2.0",
"Open Government Licence v2.0",
"http://www.nationalarchives.gov.uk/doc/open-government-licence/version/2/",
False,
),
(
"OGL-UK-3.0",
"Open Government Licence v3.0",
"http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/",
False,
),
(
"OGTSL",
"Open Group Test Suite License",
"http://www.opengroup.org/testing/downloads/The_Open_Group_TSL.txt",
True,
),
(
"OLDAP-2.2.2",
"Open LDAP Public License 2.2.2",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=df2cc1e21eb7c160695f5b7cffd6296c151ba188",
False,
),
(
"OLDAP-1.1",
"Open LDAP Public License v1.1",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=806557a5ad59804ef3a44d5abfbe91d706b0791f",
False,
),
(
"OLDAP-1.2",
"Open LDAP Public License v1.2",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=42b0383c50c299977b5893ee695cf4e486fb0dc7",
False,
),
(
"OLDAP-1.3",
"Open LDAP Public License v1.3",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=e5f8117f0ce088d0bd7a8e18ddf37eaa40eb09b1",
False,
),
(
"OLDAP-1.4",
"Open LDAP Public License v1.4",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=c9f95c2f3f2ffb5e0ae55fe7388af75547660941",
False,
),
(
"OLDAP-2.0",
"Open LDAP Public License v2.0 (or possibly 2.0A and 2.0B)",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=cbf50f4e1185a21abd4c0a54d3f4341fe28f36ea",
False,
),
(
"OLDAP-2.0.1",
"Open LDAP Public License v2.0.1",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=b6d68acd14e51ca3aab4428bf26522aa74873f0e",
False,
),
(
"OLDAP-2.1",
"Open LDAP Public License v2.1",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=b0d176738e96a0d3b9f85cb51e140a86f21be715",
False,
),
(
"OLDAP-2.2",
"Open LDAP Public License v2.2",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=470b0c18ec67621c85881b2733057fecf4a1acc3",
False,
),
(
"OLDAP-2.2.1",
"Open LDAP Public License v2.2.1",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=4bc786f34b50aa301be6f5600f58a980070f481e",
False,
),
(
"OLDAP-2.3",
"Open LDAP Public License v2.3",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=d32cf54a32d581ab475d23c810b0a7fbaf8d63c3",
True,
),
(
"OLDAP-2.4",
"Open LDAP Public License v2.4",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=cd1284c4a91a8a380d904eee68d1583f989ed386",
False,
),
(
"OLDAP-2.5",
"Open LDAP Public License v2.5",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=6852b9d90022e8593c98205413380536b1b5a7cf",
False,
),
(
"OLDAP-2.6",
"Open LDAP Public License v2.6",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=1cae062821881f41b73012ba816434897abf4205",
False,
),
(
"OLDAP-2.7",
"Open LDAP Public License v2.7",
"http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=blob;f=LICENSE;hb=47c2415c1df81556eeb39be6cad458ef87c534a2",
True,
),
(
"OLDAP-2.8",
"Open LDAP Public License v2.8",
"http://www.openldap.org/software/release/license.html",
True,
),
(
"OML",
"Open Market License",
"https://fedoraproject.org/wiki/Licensing/Open_Market_License",
False,
),
(
"OPL-1.0",
"Open Public License v1.0",
"http://old.koalateam.com/jackaroo/OPL_1_0.TXT",
False,
),
(
"OPUBL-1.0",
"Open Publication License v1.0",
"http://opencontent.org/openpub/",
False,
),
(
"OSL-1.0",
"Open Software License 1.0",
"https://opensource.org/licenses/OSL-1.0",
True,
),
(
"OSL-1.1",
"Open Software License 1.1",
"https://fedoraproject.org/wiki/Licensing/OSL1.1",
True,
),
(
"OSL-2.0",
"Open Software License 2.0",
"http://web.archive.org/web/20041020171434/http://www.rosenlaw.com/osl2.0.html",
True,
),
(
"OSL-2.1",
"Open Software License 2.1",
"http://web.archive.org/web/20050212003940/http://www.rosenlaw.com/osl21.htm",
True,
),
(
"OSL-3.0",
"Open Software License 3.0",
"https://web.archive.org/web/20120101081418/http://rosenlaw.com:80/OSL3.0.htm",
True,
),
(
"O-UDA-1.0",
"Open Use of Data Agreement v1.0",
"https://github.com/microsoft/Open-Use-of-Data-Agreement/blob/v1.0/O-UDA-1.0.md",
False,
),
("OpenSSL", "OpenSSL License", "http://www.openssl.org/source/license.html", True),
(
"OSET-PL-2.1",
"OSET Public License version 2.1",
"http://www.osetfoundation.org/public-license",
True,
),
("PHP-3.0", "PHP License v3.0", "http://www.php.net/license/3_0.txt", True),
("PHP-3.01", "PHP License v3.01", "http://www.php.net/license/3_01.txt", True),
(
"Plexus",
"Plexus Classworlds License",
"https://fedoraproject.org/wiki/Licensing/Plexus_Classworlds_License",
False,
),
(
"libpng-2.0",
"PNG Reference Library version 2",
"http://www.libpng.org/pub/png/src/libpng-LICENSE.txt",
False,
),
(
"PolyForm-Noncommercial-1.0.0",
"PolyForm Noncommercial License 1.0.0",
"https://polyformproject.org/licenses/noncommercial/1.0.0",
False,
),
(
"PolyForm-Small-Business-1.0.0",
"PolyForm Small Business License 1.0.0",
"https://polyformproject.org/licenses/small-business/1.0.0",
False,
),
(
"PostgreSQL",
"PostgreSQL License",
"http://www.postgresql.org/about/licence",
True,
),
(
"psfrag",
"psfrag License",
"https://fedoraproject.org/wiki/Licensing/psfrag",
False,
),
(
"psutils",
"psutils License",
"https://fedoraproject.org/wiki/Licensing/psutils",
False,
),
(
"Python-2.0",
"Python License 2.0",
"https://opensource.org/licenses/Python-2.0",
True,
),
(
"PSF-2.0",
"Python Software Foundation License 2.0",
"https://opensource.org/licenses/Python-2.0",
False,
),
(
"QPL-1.0",
"Q Public License 1.0",
"http://doc.qt.nokia.com/3.3/license.html",
True,
),
("Qhull", "Qhull License", "https://fedoraproject.org/wiki/Licensing/Qhull", False),
(
"Rdisc",
"Rdisc License",
"https://fedoraproject.org/wiki/Licensing/Rdisc_License",
False,
),
(
"RPSL-1.0",
"RealNetworks Public Source License v1.0",
"https://helixcommunity.org/content/rpsl",
True,
),
(
"RPL-1.1",
"Reciprocal Public License 1.1",
"https://opensource.org/licenses/RPL-1.1",
True,
),
(
"RPL-1.5",
"Reciprocal Public License 1.5",
"https://opensource.org/licenses/RPL-1.5",
True,
),
(
"RHeCos-1.1",
"Red Hat eCos Public License v1.1",
"http://ecos.sourceware.org/old-license.html",
False,
),
(
"RSCPL",
"Ricoh Source Code Public License",
"http://wayback.archive.org/web/20060715140826/http://www.risource.org/RPL/RPL-1.0A.shtml",
True,
),
(
"RSA-MD",
"RSA Message-Digest License",
"http://www.faqs.org/rfcs/rfc1321.html",
False,
),
("Ruby", "Ruby License", "http://www.ruby-lang.org/en/LICENSE.txt", True),
(
"SAX-PD",
"Sax Public Domain Notice",
"http://www.saxproject.org/copying.html",
False,
),
(
"Saxpath",
"Saxpath License",
"https://fedoraproject.org/wiki/Licensing/Saxpath_License",
False,
),
(
"SCEA",
"SCEA Shared Source License",
"http://research.scea.com/scea_shared_source_license.html",
False,
),
(
"SWL",
"Scheme Widget Library (SWL) Software License Agreement",
"https://fedoraproject.org/wiki/Licensing/SWL",
False,
),
(
"SMPPL",
"Secure Messaging Protocol Public License",
"https://github.com/dcblake/SMP/blob/master/Documentation/License.txt",
False,
),
(
"Sendmail",
"Sendmail License",
"http://www.sendmail.com/pdfs/open_source/sendmail_license.pdf",
False,
),
(
"Sendmail-8.23",
"Sendmail License 8.23",
"https://www.proofpoint.com/sites/default/files/sendmail-license.pdf",
False,
),
(
"SSPL-1.0",
"Server Side Public License, v 1",
"https://www.mongodb.com/licensing/server-side-public-license",
False,
),
(
"SGI-B-1.0",
"SGI Free Software License B v1.0",
"http://oss.sgi.com/projects/FreeB/SGIFreeSWLicB.1.0.html",
False,
),
(
"SGI-B-1.1",
"SGI Free Software License B v1.1",
"http://oss.sgi.com/projects/FreeB/",
False,
),
(
"SGI-B-2.0",
"SGI Free Software License B v2.0",
"http://oss.sgi.com/projects/FreeB/SGIFreeSWLicB.2.0.pdf",
True,
),
(
"OFL-1.0",
"SIL Open Font License 1.0",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL10_web",
False,
),
(
"OFL-1.0-no-RFN",
"SIL Open Font License 1.0 with no Reserved Font Name",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL10_web",
False,
),
(
"OFL-1.0-RFN",
"SIL Open Font License 1.0 with Reserved Font Name",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL10_web",
False,
),
(
"OFL-1.1",
"SIL Open Font License 1.1",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL_web",
True,
),
(
"OFL-1.1-no-RFN",
"SIL Open Font License 1.1 with no Reserved Font Name",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL_web",
True,
),
(
"OFL-1.1-RFN",
"SIL Open Font License 1.1 with Reserved Font Name",
"http://scripts.sil.org/cms/scripts/page.php?item_id=OFL_web",
True,
),
(
"SimPL-2.0",
"Simple Public License 2.0",
"https://opensource.org/licenses/SimPL-2.0",
True,
),
(
"Sleepycat",
"Sleepycat License",
"https://opensource.org/licenses/Sleepycat",
True,
),
(
"SNIA",
"SNIA Public License 1.1",
"https://fedoraproject.org/wiki/Licensing/SNIA_Public_License",
False,
),
(
"SHL-0.5",
"Solderpad Hardware License v0.5",
"https://solderpad.org/licenses/SHL-0.5/",
False,
),
(
"SHL-0.51",
"Solderpad Hardware License, Version 0.51",
"https://solderpad.org/licenses/SHL-0.51/",
False,
),
(
"Spencer-86",
"Spencer License 86",
"https://fedoraproject.org/wiki/Licensing/Henry_Spencer_Reg-Ex_Library_License",
False,
),
(
"Spencer-94",
"Spencer License 94",
"https://fedoraproject.org/wiki/Licensing/Henry_Spencer_Reg-Ex_Library_License",
False,
),
(
"Spencer-99",
"Spencer License 99",
"http://www.opensource.apple.com/source/tcl/tcl-5/tcl/generic/regfronts.c",
False,
),
(
"blessing",
"SQLite Blessing",
"https://www.sqlite.org/src/artifact/e33a4df7e32d742a?ln=4-9",
False,
),
(
"SSH-OpenSSH",
"SSH OpenSSH license",
"https://github.com/openssh/openssh-portable/blob/1b11ea7c58cd5c59838b5fa574cd456d6047b2d4/LICENCE#L10",
False,
),
(
"SSH-short",
"SSH short notice",
"https://github.com/openssh/openssh-portable/blob/1b11ea7c58cd5c59838b5fa574cd456d6047b2d4/pathnames.h",
False,
),
(
"SMLNJ",
"Standard ML of New Jersey License",
"https://www.smlnj.org/license.html",
True,
),
(
"SugarCRM-1.1.3",
"SugarCRM Public License v1.1.3",
"http://www.sugarcrm.com/crm/SPL",
False,
),
(
"SISSL",
"Sun Industry Standards Source License v1.1",
"http://www.openoffice.org/licenses/sissl_license.html",
True,
),
(
"SISSL-1.2",
"Sun Industry Standards Source License v1.2",
"http://gridscheduler.sourceforge.net/Gridengine_SISSL_license.html",
False,
),
(
"SPL-1.0",
"Sun Public License v1.0",
"https://opensource.org/licenses/SPL-1.0",
True,
),
(
"Watcom-1.0",
"Sybase Open Watcom Public License 1.0",
"https://opensource.org/licenses/Watcom-1.0",
True,
),
(
"OGDL-Taiwan-1.0",
"Taiwan Open Government Data License, version 1.0",
"https://data.gov.tw/license",
False,
),
(
"TAPR-OHL-1.0",
"TAPR Open Hardware License v1.0",
"https://www.tapr.org/OHL",
False,
),
("TCL", "TCL/TK License", "http://www.tcl.tk/software/tcltk/license.html", False),
(
"TCP-wrappers",
"TCP Wrappers License",
"http://rc.quest.com/topics/openssh/license.php#tcpwrappers",
False,
),
(
"TU-Berlin-1.0",
"Technische Universitaet Berlin License 1.0",
"https://github.com/swh/ladspa/blob/7bf6f3799fdba70fda297c2d8fd9f526803d9680/gsm/COPYRIGHT",
False,
),
(
"TU-Berlin-2.0",
"Technische Universitaet Berlin License 2.0",
"https://github.com/CorsixTH/deps/blob/fd339a9f526d1d9c9f01ccf39e438a015da50035/licences/libgsm.txt",
False,
),
("MirOS", "The MirOS Licence", "https://opensource.org/licenses/MirOS", True),
(
"Parity-6.0.0",
"The Parity Public License 6.0.0",
"https://paritylicense.com/versions/6.0.0.html",
False,
),
(
"Parity-7.0.0",
"The Parity Public License 7.0.0",
"https://paritylicense.com/versions/7.0.0.html",
False,
),
("Unlicense", "The Unlicense", "https://unlicense.org/", True),
("TMate", "TMate Open Source License", "http://svnkit.com/license.html", False),
(
"TORQUE-1.1",
"TORQUE v2.5+ Software License v1.1",
"https://fedoraproject.org/wiki/Licensing/TORQUEv1.1",
False,
),
(
"TOSL",
"Trusster Open Source License",
"https://fedoraproject.org/wiki/Licensing/TOSL",
False,
),
(
"Unicode-DFS-2015",
"Unicode License Agreement - Data Files and Software (2015)",
"https://web.archive.org/web/20151224134844/http://unicode.org/copyright.html",
False,
),
(
"Unicode-DFS-2016",
"Unicode License Agreement - Data Files and Software (2016)",
"http://www.unicode.org/copyright.html",
True,
),
(
"Unicode-TOU",
"Unicode Terms of Use",
"http://www.unicode.org/copyright.html",
False,
),
(
"UPL-1.0",
"Universal Permissive License v1.0",
"https://opensource.org/licenses/UPL",
True,
),
(
"NCSA",
"University of Illinois/NCSA Open Source License",
"http://otm.illinois.edu/uiuc_openSource",
True,
),
(
"UCL-1.0",
"Upstream Compatibility License v1.0",
"https://opensource.org/licenses/UCL-1.0",
True,
),
("Vim", "Vim License", "http://vimdoc.sourceforge.net/htmldoc/uganda.html", True),
(
"VOSTROM",
"VOSTROM Public License for Open Source",
"https://fedoraproject.org/wiki/Licensing/VOSTROM",
False,
),
(
"VSL-1.0",
"Vovida Software License v1.0",
"https://opensource.org/licenses/VSL-1.0",
True,
),
(
"W3C-20150513",
"W3C Software Notice and Document License (2015-05-13)",
"https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document",
False,
),
(
"W3C-19980720",
"W3C Software Notice and License (1998-07-20)",
"http://www.w3.org/Consortium/Legal/copyright-software-19980720.html",
False,
),
(
"W3C",
"W3C Software Notice and License (2002-12-31)",
"http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231.html",
True,
),
(
"Wsuipa",
"Wsuipa License",
"https://fedoraproject.org/wiki/Licensing/Wsuipa",
False,
),
("Xnet", "X.Net License", "https://opensource.org/licenses/Xnet", True),
("X11", "X11 License", "http://www.xfree86.org/3.3.6/COPYRIGHT2.html#3", True),
("Xerox", "Xerox License", "https://fedoraproject.org/wiki/Licensing/Xerox", False),
(
"XFree86-1.1",
"XFree86 License 1.1",
"http://www.xfree86.org/current/LICENSE4.html",
True,
),
(
"xinetd",
"xinetd License",
"https://fedoraproject.org/wiki/Licensing/Xinetd_License",
True,
),
("xpp", "XPP License", "https://fedoraproject.org/wiki/Licensing/xpp", False),
(
"XSkat",
"XSkat License",
"https://fedoraproject.org/wiki/Licensing/XSkat_License",
False,
),
(
"YPL-1.0",
"Yahoo! Public License v1.0",
"http://www.zimbra.com/license/yahoo_public_license_1.0.html",
False,
),
(
"YPL-1.1",
"Yahoo! Public License v1.1",
"http://www.zimbra.com/license/yahoo_public_license_1.1.html",
True,
),
("Zed", "Zed License", "https://fedoraproject.org/wiki/Licensing/Zed", False),
(
"Zend-2.0",
"Zend License v2.0",
"https://web.archive.org/web/20130517195954/http://www.zend.com/license/2_00.txt",
True,
),
(
"Zimbra-1.3",
"Zimbra Public License v1.3",
"http://web.archive.org/web/20100302225219/http://www.zimbra.com/license/zimbra-public-license-1-3.html",
True,
),
(
"Zimbra-1.4",
"Zimbra Public License v1.4",
"http://www.zimbra.com/legal/zimbra-public-license-1-4",
False,
),
("Zlib", "zlib License", "http://www.zlib.net/zlib_license.html", True),
(
"zlib-acknowledgement",
"zlib/libpng License with Acknowledgement",
"https://fedoraproject.org/wiki/Licensing/ZlibWithAcknowledgement",
False,
),
(
"ZPL-1.1",
"Zope Public License 1.1",
"http://old.zope.org/Resources/License/ZPL-1.1",
False,
),
(
"ZPL-2.0",
"Zope Public License 2.0",
"http://old.zope.org/Resources/License/ZPL-2.0",
True,
),
("ZPL-2.1", "Zope Public License 2.1", "http://old.zope.org/Resources/ZPL/", True),
)
|
phw/weblate
|
weblate/utils/licensedata.py
|
Python
|
gpl-3.0
| 69,321
|
[
"NetCDF"
] |
8a1d01449f963bdfd39c6ff51bd031b941e0a07f052059ce1aadccc6fc310c59
|
"""
Unit tests for masquerade.
"""
import json
import pickle
from datetime import datetime
from django.conf import settings
from django.urls import reverse
from django.test import TestCase
from mock import patch
from pytz import UTC
from capa.tests.response_xml_factory import OptionResponseXMLFactory
from courseware.masquerade import CourseMasquerade, MasqueradingKeyValueStore, get_masquerading_user_group
from courseware.tests.factories import StaffFactory
from courseware.tests.helpers import LoginEnrollmentTestCase, masquerade_as_group_member
from courseware.tests.test_submitting_problems import ProblemSubmissionTestMixin
from nose.plugins.attrib import attr
from openedx.core.djangoapps.lang_pref import LANGUAGE_KEY
from openedx.core.djangoapps.self_paced.models import SelfPacedConfiguration
from openedx.core.djangoapps.user_api.preferences.api import get_user_preference, set_user_preference
from openedx.core.djangoapps.waffle_utils.testutils import override_waffle_flag
from openedx.features.course_experience import UNIFIED_COURSE_TAB_FLAG
from student.tests.factories import UserFactory
from xblock.runtime import DictKeyValueStore
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.django_utils import SharedModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
from xmodule.partitions.partitions import Group, UserPartition
class MasqueradeTestCase(SharedModuleStoreTestCase, LoginEnrollmentTestCase):
"""
Base class for masquerade tests that sets up a test course and enrolls a user in the course.
"""
@classmethod
def setUpClass(cls):
super(MasqueradeTestCase, cls).setUpClass()
cls.course = CourseFactory.create(number='masquerade-test', metadata={'start': datetime.now(UTC)})
cls.info_page = ItemFactory.create(
category="course_info", parent_location=cls.course.location,
data="OOGIE BLOOGIE", display_name="updates"
)
cls.chapter = ItemFactory.create(
parent_location=cls.course.location,
category="chapter",
display_name="Test Section",
)
cls.sequential_display_name = "Test Masquerade Subsection"
cls.sequential = ItemFactory.create(
parent_location=cls.chapter.location,
category="sequential",
display_name=cls.sequential_display_name,
)
cls.vertical = ItemFactory.create(
parent_location=cls.sequential.location,
category="vertical",
display_name="Test Unit",
)
problem_xml = OptionResponseXMLFactory().build_xml(
question_text='The correct answer is Correct',
num_inputs=2,
weight=2,
options=['Correct', 'Incorrect'],
correct_option='Correct'
)
cls.problem_display_name = "TestMasqueradeProblem"
cls.problem = ItemFactory.create(
parent_location=cls.vertical.location,
category='problem',
data=problem_xml,
display_name=cls.problem_display_name
)
def setUp(self):
super(MasqueradeTestCase, self).setUp()
self.test_user = self.create_user()
self.login(self.test_user.email, 'test')
self.enroll(self.course, True)
def get_courseware_page(self):
"""
Returns the server response for the courseware page.
"""
url = reverse(
'courseware_section',
kwargs={
'course_id': unicode(self.course.id),
'chapter': self.chapter.location.block_id,
'section': self.sequential.location.block_id,
}
)
return self.client.get(url)
def get_course_info_page(self):
"""
Returns the server response for the course info page.
"""
url = reverse(
'info',
kwargs={
'course_id': unicode(self.course.id),
}
)
return self.client.get(url)
def get_progress_page(self):
"""
Returns the server response for the progress page.
"""
url = reverse(
'progress',
kwargs={
'course_id': unicode(self.course.id),
}
)
return self.client.get(url)
def verify_staff_debug_present(self, staff_debug_expected):
"""
Verifies that the staff debug control visibility is as expected (for staff only).
"""
content = self.get_courseware_page().content
self.assertIn(self.sequential_display_name, content, "Subsection should be visible")
self.assertEqual(staff_debug_expected, 'Staff Debug Info' in content)
def get_problem(self):
"""
Returns the JSON content for the problem in the course.
"""
problem_url = reverse(
'xblock_handler',
kwargs={
'course_id': unicode(self.course.id),
'usage_id': unicode(self.problem.location),
'handler': 'xmodule_handler',
'suffix': 'problem_get'
}
)
return self.client.get(problem_url)
def verify_show_answer_present(self, show_answer_expected):
"""
Verifies that "Show Answer" is only present when expected (for staff only).
"""
problem_html = json.loads(self.get_problem().content)['html']
self.assertIn(self.problem_display_name, problem_html)
self.assertEqual(show_answer_expected, "Show Answer" in problem_html)
def ensure_masquerade_as_group_member(self, partition_id, group_id):
"""
Installs a masquerade for the test_user and test course, to enable the
user to masquerade as belonging to the specific partition/group combination.
Also verifies that the call to install the masquerade was successful.
Arguments:
partition_id (int): the integer partition id, referring to partitions already
configured in the course.
group_id (int): the integer group id, within the specified partition.
"""
self.assertEqual(200, masquerade_as_group_member(self.test_user, self.course, partition_id, group_id))
@attr(shard=1)
class NormalStudentVisibilityTest(MasqueradeTestCase):
"""
Verify the course displays as expected for a "normal" student (to ensure test setup is correct).
"""
def create_user(self):
"""
Creates a normal student user.
"""
return UserFactory()
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_staff_debug_not_visible(self):
"""
Tests that staff debug control is not present for a student.
"""
self.verify_staff_debug_present(False)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_show_answer_not_visible(self):
"""
Tests that "Show Answer" is not visible for a student.
"""
self.verify_show_answer_present(False)
class StaffMasqueradeTestCase(MasqueradeTestCase):
"""
Base class for tests of the masquerade behavior for a staff member.
"""
def create_user(self):
"""
Creates a staff user.
"""
return StaffFactory(course_key=self.course.id)
def update_masquerade(self, role, group_id=None, user_name=None):
"""
Toggle masquerade state.
"""
masquerade_url = reverse(
'masquerade_update',
kwargs={
'course_key_string': unicode(self.course.id),
}
)
response = self.client.post(
masquerade_url,
json.dumps({"role": role, "group_id": group_id, "user_name": user_name}),
"application/json"
)
self.assertEqual(response.status_code, 200)
return response
@attr(shard=1)
class TestStaffMasqueradeAsStudent(StaffMasqueradeTestCase):
"""
Check for staff being able to masquerade as student.
"""
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_staff_debug_with_masquerade(self):
"""
Tests that staff debug control is not visible when masquerading as a student.
"""
# Verify staff initially can see staff debug
self.verify_staff_debug_present(True)
# Toggle masquerade to student
self.update_masquerade(role='student')
self.verify_staff_debug_present(False)
# Toggle masquerade back to staff
self.update_masquerade(role='staff')
self.verify_staff_debug_present(True)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_show_answer_for_staff(self):
"""
Tests that "Show Answer" is not visible when masquerading as a student.
"""
# Verify that staff initially can see "Show Answer".
self.verify_show_answer_present(True)
# Toggle masquerade to student
self.update_masquerade(role='student')
self.verify_show_answer_present(False)
# Toggle masquerade back to staff
self.update_masquerade(role='staff')
self.verify_show_answer_present(True)
@attr(shard=1)
class TestStaffMasqueradeAsSpecificStudent(StaffMasqueradeTestCase, ProblemSubmissionTestMixin):
"""
Check for staff being able to masquerade as a specific student.
"""
def setUp(self):
super(TestStaffMasqueradeAsSpecificStudent, self).setUp()
self.student_user = self.create_user()
self.login_student()
self.enroll(self.course, True)
def login_staff(self):
""" Login as a staff user """
self.logout()
self.login(self.test_user.email, 'test')
def login_student(self):
""" Login as a student """
self.logout()
self.login(self.student_user.email, 'test')
def submit_answer(self, response1, response2):
"""
Submit an answer to the single problem in our test course.
"""
return self.submit_question_answer(
self.problem_display_name,
{'2_1': response1, '2_2': response2}
)
def get_progress_detail(self):
"""
Return the reported progress detail for the problem in our test course.
The return value is a string like u'1/2'.
"""
json_data = json.loads(self.look_at_question(self.problem_display_name).content)
progress = '%s/%s' % (str(json_data['current_score']), str(json_data['total_possible']))
return progress
def assertExpectedLanguageInPreference(self, user, expected_language_code):
"""
This method is a custom assertion verifies that a given user has expected
language code in the preference and in cookies.
Arguments:
user: User model instance
expected_language_code: string indicating a language code
"""
self.assertEqual(
get_user_preference(user, LANGUAGE_KEY), expected_language_code
)
self.assertEqual(
self.client.cookies[settings.LANGUAGE_COOKIE].value, expected_language_code
)
@override_waffle_flag(UNIFIED_COURSE_TAB_FLAG, active=False)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_masquerade_as_specific_user_on_self_paced(self):
"""
Test masquerading as a specific user on the course info page when the self-paced
configuration flag "enable_course_home_improvements" is set.
Log in as a staff user and visit the course info page, then set masquerade to view
the same page as a specific student and revisit the course info page.
"""
# Log in as staff, and check we can see the info page.
self.login_staff()
response = self.get_course_info_page()
self.assertEqual(response.status_code, 200)
content = response.content
self.assertIn("OOGIE BLOOGIE", content)
# Masquerade as the student, enable the self-paced configuration, and check we can see the info page.
SelfPacedConfiguration(enable_course_home_improvements=True).save()
self.update_masquerade(role='student', user_name=self.student_user.username)
response = self.get_course_info_page()
self.assertEqual(response.status_code, 200)
content = response.content
self.assertIn("OOGIE BLOOGIE", content)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_masquerade_as_specific_student(self):
"""
Test masquerading as a specific user.
We answer the problem in our test course as the student and as the staff user, and we
use the progress as a proxy to determine whose state we currently see.
"""
# Answer correctly as the student, and check progress.
self.login_student()
self.submit_answer('Correct', 'Correct')
self.assertEqual(self.get_progress_detail(), u'2/2')
# Log in as staff, and check the problem is unanswered.
self.login_staff()
self.assertEqual(self.get_progress_detail(), u'0/2')
# Masquerade as the student, and check we can see the student state.
self.update_masquerade(role='student', user_name=self.student_user.username)
self.assertEqual(self.get_progress_detail(), u'2/2')
# Temporarily override the student state.
self.submit_answer('Correct', 'Incorrect')
self.assertEqual(self.get_progress_detail(), u'1/2')
# Reload the page and check we see the student state again.
self.get_courseware_page()
self.assertEqual(self.get_progress_detail(), u'2/2')
# Become the staff user again, and check the problem is still unanswered.
self.update_masquerade(role='staff')
self.assertEqual(self.get_progress_detail(), u'0/2')
# Verify the student state did not change.
self.login_student()
self.assertEqual(self.get_progress_detail(), u'2/2')
def test_masquerading_with_language_preference(self):
"""
Tests that masquerading as a specific user in the course does not update the
staff user's language preference.
Log in as a staff user, set the user's language preference to English, and visit the
courseware page. Then set masquerade to view the same page as a specific student with
a different language preference, and revisit the courseware page.
"""
english_language_code = 'en'
set_user_preference(self.test_user, preference_key=LANGUAGE_KEY, preference_value=english_language_code)
self.login_staff()
# Reload the page and check we have expected language preference in system and in cookies.
self.get_courseware_page()
self.assertExpectedLanguageInPreference(self.test_user, english_language_code)
# Set the student's language preference and set masquerade to view the same page as the student.
set_user_preference(self.student_user, preference_key=LANGUAGE_KEY, preference_value='es-419')
self.update_masquerade(role='student', user_name=self.student_user.username)
# Reload the page and check we have expected language preference in system and in cookies.
self.get_courseware_page()
self.assertExpectedLanguageInPreference(self.test_user, english_language_code)
@override_waffle_flag(UNIFIED_COURSE_TAB_FLAG, active=False)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_masquerade_as_specific_student_course_info(self):
"""
Test masquerading as a specific user on the course info page.
We log in with login_staff and check that the course info page content renders, then
set masquerade to view the same page as a specific student and verify it still renders.
"""
# Log in as staff, and check we can see the info page.
self.login_staff()
content = self.get_course_info_page().content
self.assertIn("OOGIE BLOOGIE", content)
# Masquerade as the student, and check we can see the info page.
self.update_masquerade(role='student', user_name=self.student_user.username)
content = self.get_course_info_page().content
self.assertIn("OOGIE BLOOGIE", content)
def test_masquerade_as_specific_student_progress(self):
"""
Test masquerading as a specific user for progress page.
"""
# Give the student some correct answers, check their progress page
self.login_student()
self.submit_answer('Correct', 'Correct')
student_progress = self.get_progress_page().content
self.assertNotIn("1 of 2 possible points", student_progress)
self.assertIn("2 of 2 possible points", student_progress)
# Staff answers are slightly different
self.login_staff()
self.submit_answer('Incorrect', 'Correct')
staff_progress = self.get_progress_page().content
self.assertNotIn("2 of 2 possible points", staff_progress)
self.assertIn("1 of 2 possible points", staff_progress)
# Should now see the student's scores
self.update_masquerade(role='student', user_name=self.student_user.username)
masquerade_progress = self.get_progress_page().content
self.assertNotIn("1 of 2 possible points", masquerade_progress)
self.assertIn("2 of 2 possible points", masquerade_progress)
@attr(shard=1)
class TestGetMasqueradingGroupId(StaffMasqueradeTestCase):
"""
Check for staff being able to masquerade as belonging to a group.
"""
def setUp(self):
super(TestGetMasqueradingGroupId, self).setUp()
self.user_partition = UserPartition(
0, 'Test User Partition', '',
[Group(0, 'Group 1'), Group(1, 'Group 2')],
scheme_id='cohort'
)
self.course.user_partitions.append(self.user_partition)
modulestore().update_item(self.course, self.test_user.id)
@patch.dict('django.conf.settings.FEATURES', {'DISABLE_START_DATES': False})
def test_get_masquerade_group(self):
"""
Tests that a staff member can masquerade as being in a group in a user partition
"""
# Verify there is no masquerading group initially
group = get_masquerading_user_group(self.course.id, self.test_user, self.user_partition)
self.assertIsNone(group)
# Install a masquerading group
self.ensure_masquerade_as_group_member(0, 1)
# Verify that the masquerading group is returned
group = get_masquerading_user_group(self.course.id, self.test_user, self.user_partition)
self.assertEqual(group.id, 1)
class ReadOnlyKeyValueStore(DictKeyValueStore):
"""
A KeyValueStore that raises an exception on attempts to modify it.
Used to make sure MasqueradingKeyValueStore does not try to modify the underlying KeyValueStore.
"""
def set(self, key, value):
assert False, "ReadOnlyKeyValueStore may not be modified."
def delete(self, key):
assert False, "ReadOnlyKeyValueStore may not be modified."
def set_many(self, update_dict): # pylint: disable=unused-argument
assert False, "ReadOnlyKeyValueStore may not be modified."
class FakeSession(dict):
""" Mock for Django session object. """
modified = False # We need dict semantics with a writable 'modified' property
class MasqueradingKeyValueStoreTest(TestCase):
"""
Unit tests for the MasqueradingKeyValueStore class.
"""
def setUp(self):
super(MasqueradingKeyValueStoreTest, self).setUp()
self.ro_kvs = ReadOnlyKeyValueStore({'a': 42, 'b': None, 'c': 'OpenCraft'})
self.session = FakeSession()
self.kvs = MasqueradingKeyValueStore(self.ro_kvs, self.session)
def test_all(self):
self.assertEqual(self.kvs.get('a'), 42)
self.assertEqual(self.kvs.get('b'), None)
self.assertEqual(self.kvs.get('c'), 'OpenCraft')
with self.assertRaises(KeyError):
self.kvs.get('d')
self.assertTrue(self.kvs.has('a'))
self.assertTrue(self.kvs.has('b'))
self.assertTrue(self.kvs.has('c'))
self.assertFalse(self.kvs.has('d'))
self.kvs.set_many({'a': 'Norwegian Blue', 'd': 'Giraffe'})
self.kvs.set('b', 7)
self.assertEqual(self.kvs.get('a'), 'Norwegian Blue')
self.assertEqual(self.kvs.get('b'), 7)
self.assertEqual(self.kvs.get('c'), 'OpenCraft')
self.assertEqual(self.kvs.get('d'), 'Giraffe')
for key in 'abd':
self.assertTrue(self.kvs.has(key))
self.kvs.delete(key)
with self.assertRaises(KeyError):
self.kvs.get(key)
self.assertEqual(self.kvs.get('c'), 'OpenCraft')
class CourseMasqueradeTest(TestCase):
"""
Unit tests for the CourseMasquerade class.
"""
def test_unpickling_sets_all_attributes(self):
"""
Make sure that old CourseMasquerade objects receive missing attributes when unpickled from
the session.
"""
cmasq = CourseMasquerade(7)
del cmasq.user_name
pickled_cmasq = pickle.dumps(cmasq)
unpickled_cmasq = pickle.loads(pickled_cmasq)
self.assertEqual(unpickled_cmasq.user_name, None)
|
Stanford-Online/edx-platform
|
lms/djangoapps/courseware/tests/test_masquerade.py
|
Python
|
agpl-3.0
| 21,539
|
[
"VisIt"
] |
dfeb73e6c99cf0c5071906186943912c32bacde5c5e1739ba86b6081d63a9bb8
|
#!/usr/bin/env python
#
# Electrum - lightweight Bitcoin client
# Copyright (C) 2011 thomasv@gitorious
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# list of words from http://en.wiktionary.org/wiki/Wiktionary:Frequency_lists/Contemporary_poetry
words = [
"like",
"just",
"love",
"know",
"never",
"want",
"time",
"out",
"there",
"make",
"look",
"eye",
"down",
"only",
"think",
"heart",
"back",
"then",
"into",
"about",
"more",
"away",
"still",
"them",
"take",
"thing",
"even",
"through",
"long",
"always",
"world",
"too",
"friend",
"tell",
"try",
"hand",
"thought",
"over",
"here",
"other",
"need",
"smile",
"again",
"much",
"cry",
"been",
"night",
"ever",
"little",
"said",
"end",
"some",
"those",
"around",
"mind",
"people",
"girl",
"leave",
"dream",
"left",
"turn",
"myself",
"give",
"nothing",
"really",
"off",
"before",
"something",
"find",
"walk",
"wish",
"good",
"once",
"place",
"ask",
"stop",
"keep",
"watch",
"seem",
"everything",
"wait",
"got",
"yet",
"made",
"remember",
"start",
"alone",
"run",
"hope",
"maybe",
"believe",
"body",
"hate",
"after",
"close",
"talk",
"stand",
"own",
"each",
"hurt",
"help",
"home",
"god",
"soul",
"new",
"many",
"two",
"inside",
"should",
"true",
"first",
"fear",
"mean",
"better",
"play",
"another",
"gone",
"change",
"use",
"wonder",
"someone",
"hair",
"cold",
"open",
"best",
"any",
"behind",
"happen",
"water",
"dark",
"laugh",
"stay",
"forever",
"name",
"work",
"show",
"sky",
"break",
"came",
"deep",
"door",
"put",
"black",
"together",
"upon",
"happy",
"such",
"great",
"white",
"matter",
"fill",
"past",
"please",
"burn",
"cause",
"enough",
"touch",
"moment",
"soon",
"voice",
"scream",
"anything",
"stare",
"sound",
"red",
"everyone",
"hide",
"kiss",
"truth",
"death",
"beautiful",
"mine",
"blood",
"broken",
"very",
"pass",
"next",
"forget",
"tree",
"wrong",
"air",
"mother",
"understand",
"lip",
"hit",
"wall",
"memory",
"sleep",
"free",
"high",
"realize",
"school",
"might",
"skin",
"sweet",
"perfect",
"blue",
"kill",
"breath",
"dance",
"against",
"fly",
"between",
"grow",
"strong",
"under",
"listen",
"bring",
"sometimes",
"speak",
"pull",
"person",
"become",
"family",
"begin",
"ground",
"real",
"small",
"father",
"sure",
"feet",
"rest",
"young",
"finally",
"land",
"across",
"today",
"different",
"guy",
"line",
"fire",
"reason",
"reach",
"second",
"slowly",
"write",
"eat",
"smell",
"mouth",
"step",
"learn",
"three",
"floor",
"promise",
"breathe",
"darkness",
"push",
"earth",
"guess",
"save",
"song",
"above",
"along",
"both",
"color",
"house",
"almost",
"sorry",
"anymore",
"brother",
"okay",
"dear",
"game",
"fade",
"already",
"apart",
"warm",
"beauty",
"heard",
"notice",
"question",
"shine",
"began",
"piece",
"whole",
"shadow",
"secret",
"street",
"within",
"finger",
"point",
"morning",
"whisper",
"child",
"moon",
"green",
"story",
"glass",
"kid",
"silence",
"since",
"soft",
"yourself",
"empty",
"shall",
"angel",
"answer",
"baby",
"bright",
"dad",
"path",
"worry",
"hour",
"drop",
"follow",
"power",
"war",
"half",
"flow",
"heaven",
"act",
"chance",
"fact",
"least",
"tired",
"children",
"near",
"quite",
"afraid",
"rise",
"sea",
"taste",
"window",
"cover",
"nice",
"trust",
"lot",
"sad",
"cool",
"force",
"peace",
"return",
"blind",
"easy",
"ready",
"roll",
"rose",
"drive",
"held",
"music",
"beneath",
"hang",
"mom",
"paint",
"emotion",
"quiet",
"clear",
"cloud",
"few",
"pretty",
"bird",
"outside",
"paper",
"picture",
"front",
"rock",
"simple",
"anyone",
"meant",
"reality",
"road",
"sense",
"waste",
"bit",
"leaf",
"thank",
"happiness",
"meet",
"men",
"smoke",
"truly",
"decide",
"self",
"age",
"book",
"form",
"alive",
"carry",
"escape",
"damn",
"instead",
"able",
"ice",
"minute",
"throw",
"catch",
"leg",
"ring",
"course",
"goodbye",
"lead",
"poem",
"sick",
"corner",
"desire",
"known",
"problem",
"remind",
"shoulder",
"suppose",
"toward",
"wave",
"drink",
"jump",
"woman",
"pretend",
"sister",
"week",
"human",
"joy",
"crack",
"grey",
"pray",
"surprise",
"dry",
"knee",
"less",
"search",
"bleed",
"caught",
"clean",
"embrace",
"future",
"king",
"son",
"sorrow",
"chest",
"hug",
"remain",
"sat",
"worth",
"blow",
"daddy",
"final",
"parent",
"tight",
"also",
"create",
"lonely",
"safe",
"cross",
"dress",
"evil",
"silent",
"bone",
"fate",
"perhaps",
"anger",
"class",
"scar",
"snow",
"tiny",
"tonight",
"continue",
"control",
"dog",
"edge",
"mirror",
"month",
"suddenly",
"comfort",
"given",
"loud",
"quickly",
"gaze",
"plan",
"rush",
"stone",
"town",
"battle",
"ignore",
"spirit",
"stood",
"stupid",
"yours",
"brown",
"build",
"dust",
"hey",
"kept",
"pay",
"phone",
"twist",
"although",
"ball",
"beyond",
"hidden",
"nose",
"taken",
"fail",
"float",
"pure",
"somehow",
"wash",
"wrap",
"angry",
"cheek",
"creature",
"forgotten",
"heat",
"rip",
"single",
"space",
"special",
"weak",
"whatever",
"yell",
"anyway",
"blame",
"job",
"choose",
"country",
"curse",
"drift",
"echo",
"figure",
"grew",
"laughter",
"neck",
"suffer",
"worse",
"yeah",
"disappear",
"foot",
"forward",
"knife",
"mess",
"somewhere",
"stomach",
"storm",
"beg",
"idea",
"lift",
"offer",
"breeze",
"field",
"five",
"often",
"simply",
"stuck",
"win",
"allow",
"confuse",
"enjoy",
"except",
"flower",
"seek",
"strength",
"calm",
"grin",
"gun",
"heavy",
"hill",
"large",
"ocean",
"shoe",
"sigh",
"straight",
"summer",
"tongue",
"accept",
"crazy",
"everyday",
"exist",
"grass",
"mistake",
"sent",
"shut",
"surround",
"table",
"ache",
"brain",
"destroy",
"heal",
"nature",
"shout",
"sign",
"stain",
"choice",
"doubt",
"glance",
"glow",
"mountain",
"queen",
"stranger",
"throat",
"tomorrow",
"city",
"either",
"fish",
"flame",
"rather",
"shape",
"spin",
"spread",
"ash",
"distance",
"finish",
"image",
"imagine",
"important",
"nobody",
"shatter",
"warmth",
"became",
"feed",
"flesh",
"funny",
"lust",
"shirt",
"trouble",
"yellow",
"attention",
"bare",
"bite",
"money",
"protect",
"amaze",
"appear",
"born",
"choke",
"completely",
"daughter",
"fresh",
"friendship",
"gentle",
"probably",
"six",
"deserve",
"expect",
"grab",
"middle",
"nightmare",
"river",
"thousand",
"weight",
"worst",
"wound",
"barely",
"bottle",
"cream",
"regret",
"relationship",
"stick",
"test",
"crush",
"endless",
"fault",
"itself",
"rule",
"spill",
"art",
"circle",
"join",
"kick",
"mask",
"master",
"passion",
"quick",
"raise",
"smooth",
"unless",
"wander",
"actually",
"broke",
"chair",
"deal",
"favorite",
"gift",
"note",
"number",
"sweat",
"box",
"chill",
"clothes",
"lady",
"mark",
"park",
"poor",
"sadness",
"tie",
"animal",
"belong",
"brush",
"consume",
"dawn",
"forest",
"innocent",
"pen",
"pride",
"stream",
"thick",
"clay",
"complete",
"count",
"draw",
"faith",
"press",
"silver",
"struggle",
"surface",
"taught",
"teach",
"wet",
"bless",
"chase",
"climb",
"enter",
"letter",
"melt",
"metal",
"movie",
"stretch",
"swing",
"vision",
"wife",
"beside",
"crash",
"forgot",
"guide",
"haunt",
"joke",
"knock",
"plant",
"pour",
"prove",
"reveal",
"steal",
"stuff",
"trip",
"wood",
"wrist",
"bother",
"bottom",
"crawl",
"crowd",
"fix",
"forgive",
"frown",
"grace",
"loose",
"lucky",
"party",
"release",
"surely",
"survive",
"teacher",
"gently",
"grip",
"speed",
"suicide",
"travel",
"treat",
"vein",
"written",
"cage",
"chain",
"conversation",
"date",
"enemy",
"however",
"interest",
"million",
"page",
"pink",
"proud",
"sway",
"themselves",
"winter",
"church",
"cruel",
"cup",
"demon",
"experience",
"freedom",
"pair",
"pop",
"purpose",
"respect",
"shoot",
"softly",
"state",
"strange",
"bar",
"birth",
"curl",
"dirt",
"excuse",
"lord",
"lovely",
"monster",
"order",
"pack",
"pants",
"pool",
"scene",
"seven",
"shame",
"slide",
"ugly",
"among",
"blade",
"blonde",
"closet",
"creek",
"deny",
"drug",
"eternity",
"gain",
"grade",
"handle",
"key",
"linger",
"pale",
"prepare",
"swallow",
"swim",
"tremble",
"wheel",
"won",
"cast",
"cigarette",
"claim",
"college",
"direction",
"dirty",
"gather",
"ghost",
"hundred",
"loss",
"lung",
"orange",
"present",
"swear",
"swirl",
"twice",
"wild",
"bitter",
"blanket",
"doctor",
"everywhere",
"flash",
"grown",
"knowledge",
"numb",
"pressure",
"radio",
"repeat",
"ruin",
"spend",
"unknown",
"buy",
"clock",
"devil",
"early",
"false",
"fantasy",
"pound",
"precious",
"refuse",
"sheet",
"teeth",
"welcome",
"add",
"ahead",
"block",
"bury",
"caress",
"content",
"depth",
"despite",
"distant",
"marry",
"purple",
"threw",
"whenever",
"bomb",
"dull",
"easily",
"grasp",
"hospital",
"innocence",
"normal",
"receive",
"reply",
"rhyme",
"shade",
"someday",
"sword",
"toe",
"visit",
"asleep",
"bought",
"center",
"consider",
"flat",
"hero",
"history",
"ink",
"insane",
"muscle",
"mystery",
"pocket",
"reflection",
"shove",
"silently",
"smart",
"soldier",
"spot",
"stress",
"train",
"type",
"view",
"whether",
"bus",
"energy",
"explain",
"holy",
"hunger",
"inch",
"magic",
"mix",
"noise",
"nowhere",
"prayer",
"presence",
"shock",
"snap",
"spider",
"study",
"thunder",
"trail",
"admit",
"agree",
"bag",
"bang",
"bound",
"butterfly",
"cute",
"exactly",
"explode",
"familiar",
"fold",
"further",
"pierce",
"reflect",
"scent",
"selfish",
"sharp",
"sink",
"spring",
"stumble",
"universe",
"weep",
"women",
"wonderful",
"action",
"ancient",
"attempt",
"avoid",
"birthday",
"branch",
"chocolate",
"core",
"depress",
"drunk",
"especially",
"focus",
"fruit",
"honest",
"match",
"palm",
"perfectly",
"pillow",
"pity",
"poison",
"roar",
"shift",
"slightly",
"thump",
"truck",
"tune",
"twenty",
"unable",
"wipe",
"wrote",
"coat",
"constant",
"dinner",
"drove",
"egg",
"eternal",
"flight",
"flood",
"frame",
"freak",
"gasp",
"glad",
"hollow",
"motion",
"peer",
"plastic",
"root",
"screen",
"season",
"sting",
"strike",
"team",
"unlike",
"victim",
"volume",
"warn",
"weird",
"attack",
"await",
"awake",
"built",
"charm",
"crave",
"despair",
"fought",
"grant",
"grief",
"horse",
"limit",
"message",
"ripple",
"sanity",
"scatter",
"serve",
"split",
"string",
"trick",
"annoy",
"blur",
"boat",
"brave",
"clearly",
"cling",
"connect",
"fist",
"forth",
"imagination",
"iron",
"jock",
"judge",
"lesson",
"milk",
"misery",
"nail",
"naked",
"ourselves",
"poet",
"possible",
"princess",
"sail",
"size",
"snake",
"society",
"stroke",
"torture",
"toss",
"trace",
"wise",
"bloom",
"bullet",
"cell",
"check",
"cost",
"darling",
"during",
"footstep",
"fragile",
"hallway",
"hardly",
"horizon",
"invisible",
"journey",
"midnight",
"mud",
"nod",
"pause",
"relax",
"shiver",
"sudden",
"value",
"youth",
"abuse",
"admire",
"blink",
"breast",
"bruise",
"constantly",
"couple",
"creep",
"curve",
"difference",
"dumb",
"emptiness",
"gotta",
"honor",
"plain",
"planet",
"recall",
"rub",
"ship",
"slam",
"soar",
"somebody",
"tightly",
"weather",
"adore",
"approach",
"bond",
"bread",
"burst",
"candle",
"coffee",
"cousin",
"crime",
"desert",
"flutter",
"frozen",
"grand",
"heel",
"hello",
"language",
"level",
"movement",
"pleasure",
"powerful",
"random",
"rhythm",
"settle",
"silly",
"slap",
"sort",
"spoken",
"steel",
"threaten",
"tumble",
"upset",
"aside",
"awkward",
"bee",
"blank",
"board",
"button",
"card",
"carefully",
"complain",
"crap",
"deeply",
"discover",
"drag",
"dread",
"effort",
"entire",
"fairy",
"giant",
"gotten",
"greet",
"illusion",
"jeans",
"leap",
"liquid",
"march",
"mend",
"nervous",
"nine",
"replace",
"rope",
"spine",
"stole",
"terror",
"accident",
"apple",
"balance",
"boom",
"childhood",
"collect",
"demand",
"depression",
"eventually",
"faint",
"glare",
"goal",
"group",
"honey",
"kitchen",
"laid",
"limb",
"machine",
"mere",
"mold",
"murder",
"nerve",
"painful",
"poetry",
"prince",
"rabbit",
"shelter",
"shore",
"shower",
"soothe",
"stair",
"steady",
"sunlight",
"tangle",
"tease",
"treasure",
"uncle",
"begun",
"bliss",
"canvas",
"cheer",
"claw",
"clutch",
"commit",
"crimson",
"crystal",
"delight",
"doll",
"existence",
"express",
"fog",
"football",
"gay",
"goose",
"guard",
"hatred",
"illuminate",
"mass",
"math",
"mourn",
"rich",
"rough",
"skip",
"stir",
"student",
"style",
"support",
"thorn",
"tough",
"yard",
"yearn",
"yesterday",
"advice",
"appreciate",
"autumn",
"bank",
"beam",
"bowl",
"capture",
"carve",
"collapse",
"confusion",
"creation",
"dove",
"feather",
"girlfriend",
"glory",
"government",
"harsh",
"hop",
"inner",
"loser",
"moonlight",
"neighbor",
"neither",
"peach",
"pig",
"praise",
"screw",
"shield",
"shimmer",
"sneak",
"stab",
"subject",
"throughout",
"thrown",
"tower",
"twirl",
"wow",
"army",
"arrive",
"bathroom",
"bump",
"cease",
"cookie",
"couch",
"courage",
"dim",
"guilt",
"howl",
"hum",
"husband",
"insult",
"led",
"lunch",
"mock",
"mostly",
"natural",
"nearly",
"needle",
"nerd",
"peaceful",
"perfection",
"pile",
"price",
"remove",
"roam",
"sanctuary",
"serious",
"shiny",
"shook",
"sob",
"stolen",
"tap",
"vain",
"void",
"warrior",
"wrinkle",
"affection",
"apologize",
"blossom",
"bounce",
"bridge",
"cheap",
"crumble",
"decision",
"descend",
"desperately",
"dig",
"dot",
"flip",
"frighten",
"heartbeat",
"huge",
"lazy",
"lick",
"odd",
"opinion",
"process",
"puzzle",
"quietly",
"retreat",
"score",
"sentence",
"separate",
"situation",
"skill",
"soak",
"square",
"stray",
"taint",
"task",
"tide",
"underneath",
"veil",
"whistle",
"anywhere",
"bedroom",
"bid",
"bloody",
"burden",
"careful",
"compare",
"concern",
"curtain",
"decay",
"defeat",
"describe",
"double",
"dreamer",
"driver",
"dwell",
"evening",
"flare",
"flicker",
"grandma",
"guitar",
"harm",
"horrible",
"hungry",
"indeed",
"lace",
"melody",
"monkey",
"nation",
"object",
"obviously",
"rainbow",
"salt",
"scratch",
"shown",
"shy",
"stage",
"stun",
"third",
"tickle",
"useless",
"weakness",
"worship",
"worthless",
"afternoon",
"beard",
"boyfriend",
"bubble",
"busy",
"certain",
"chin",
"concrete",
"desk",
"diamond",
"doom",
"drawn",
"due",
"felicity",
"freeze",
"frost",
"garden",
"glide",
"harmony",
"hopefully",
"hunt",
"jealous",
"lightning",
"mama",
"mercy",
"peel",
"physical",
"position",
"pulse",
"punch",
"quit",
"rant",
"respond",
"salty",
"sane",
"satisfy",
"savior",
"sheep",
"slept",
"social",
"sport",
"tuck",
"utter",
"valley",
"wolf",
"aim",
"alas",
"alter",
"arrow",
"awaken",
"beaten",
"belief",
"brand",
"ceiling",
"cheese",
"clue",
"confidence",
"connection",
"daily",
"disguise",
"eager",
"erase",
"essence",
"everytime",
"expression",
"fan",
"flag",
"flirt",
"foul",
"fur",
"giggle",
"glorious",
"ignorance",
"law",
"lifeless",
"measure",
"mighty",
"muse",
"north",
"opposite",
"paradise",
"patience",
"patient",
"pencil",
"petal",
"plate",
"ponder",
"possibly",
"practice",
"slice",
"spell",
"stock",
"strife",
"strip",
"suffocate",
"suit",
"tender",
"tool",
"trade",
"velvet",
"verse",
"waist",
"witch",
"aunt",
"bench",
"bold",
"cap",
"certainly",
"click",
"companion",
"creator",
"dart",
"delicate",
"determine",
"dish",
"dragon",
"drama",
"drum",
"dude",
"everybody",
"feast",
"forehead",
"former",
"fright",
"fully",
"gas",
"hook",
"hurl",
"invite",
"juice",
"manage",
"moral",
"possess",
"raw",
"rebel",
"royal",
"scale",
"scary",
"several",
"slight",
"stubborn",
"swell",
"talent",
"tea",
"terrible",
"thread",
"torment",
"trickle",
"usually",
"vast",
"violence",
"weave",
"acid",
"agony",
"ashamed",
"awe",
"belly",
"blend",
"blush",
"character",
"cheat",
"common",
"company",
"coward",
"creak",
"danger",
"deadly",
"defense",
"define",
"depend",
"desperate",
"destination",
"dew",
"duck",
"dusty",
"embarrass",
"engine",
"example",
"explore",
"foe",
"freely",
"frustrate",
"generation",
"glove",
"guilty",
"health",
"hurry",
"idiot",
"impossible",
"inhale",
"jaw",
"kingdom",
"mention",
"mist",
"moan",
"mumble",
"mutter",
"observe",
"ode",
"pathetic",
"pattern",
"pie",
"prefer",
"puff",
"rape",
"rare",
"revenge",
"rude",
"scrape",
"spiral",
"squeeze",
"strain",
"sunset",
"suspend",
"sympathy",
"thigh",
"throne",
"total",
"unseen",
"weapon",
"weary"
]
n = 1626
# Note about US patent no 5892470: Here each word does not represent a given digit.
# Instead, the digit represented by a word is variable; it depends on the previous word.
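# A small worked example of that scheme (illustrative comment; the values can be
# checked against the functions below): with n = 1626, the chunk x = 0x00000001
# has digits d1 = x % n = 1, d2 = (x // n) % n = 0, d3 = (x // n // n) % n = 0.
# Each digit is offset by the previous word's index before lookup:
#   w1 = d1 = 1, w2 = (d2 + w1) % n = 1, w3 = (d3 + w2) % n = 1
# so the chunk encodes to ("just", "just", "just"), and decoding subtracts the
# offsets again to recover x.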
def mn_encode( message ):
assert len(message) % 8 == 0
out = []
for i in range(len(message)//8):
word = message[8*i:8*i+8]
x = int(word, 16)
w1 = (x%n)
w2 = ((x//n) + w1)%n
w3 = ((x//n//n) + w2)%n
out += [ words[w1], words[w2], words[w3] ]
return out
def mn_decode( wlist ):
out = ''
for i in range(len(wlist)//3):
word1, word2, word3 = wlist[3*i:3*i+3]
w1 = words.index(word1)
w2 = (words.index(word2))%n
w3 = (words.index(word3))%n
x = w1 +n*((w2-w1)%n) +n*n*((w3-w2)%n)
out += '%08x'%x
return out
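# A minimal round-trip sanity check (hypothetical helper, not part of the
# original module): any 8-hex-digit chunk fits in three words, since
# n**3 = 1626**3 exceeds 2**32.
def _roundtrip_demo(seed='0f00baaa'):
    encoded = mn_encode(seed)          # three words per 8 hex digits
    assert mn_decode(encoded) == seed  # decoding recovers the input
    return encoded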
if __name__ == '__main__':
import sys
if len(sys.argv) == 1:
print('I need arguments: a hex string to encode, or a list of words to decode')
elif len(sys.argv) == 2:
print(' '.join(mn_encode(sys.argv[1])))
else:
print(mn_decode(sys.argv[1:]))
|
romanz/electrum
|
lib/old_mnemonic.py
|
Python
|
mit
| 17,747
|
[
"CRYSTAL",
"VisIt"
] |
78bd7cbcb9fbc36ddbb6b410149694742053760d68e985db3e03d33c9512ba4a
|
import sys
from ase import Atoms
from gpaw import GPAW, FermiDirac
from gpaw import KohnShamConvergenceError
from gpaw.utilities import devnull, compiled_with_sl
from ase.data.molecules import molecule
# Calculates energy and forces for various parallelizations
tolerance = 4e-5
parallel = dict()
basekwargs = dict(mode='fd',
maxiter=3,
#basis='dzp',
#nbands=18,
nbands=6,
kpts=(4,4,4), # 8 kpts in the IBZ
parallel=parallel)
Eref = None
Fref_av = None
def run(formula='H2O', vacuum=1.5, cell=None, pbc=1, **morekwargs):
print formula, parallel
system = molecule(formula)
kwargs = dict(basekwargs)
kwargs.update(morekwargs)
calc = GPAW(**kwargs)
system.set_calculator(calc)
system.center(vacuum)
if cell is None:
system.center(vacuum)
else:
system.set_cell(cell)
system.set_pbc(pbc)
try:
system.get_potential_energy()
except KohnShamConvergenceError:
pass
E = calc.hamiltonian.Etot
F_av = calc.forces.calculate(calc.wfs, calc.density,
calc.hamiltonian)
global Eref, Fref_av
if Eref is None:
Eref = E
Fref_av = F_av
eerr = abs(E - Eref)
ferr = abs(F_av - Fref_av).max()
if calc.wfs.world.rank == 0:
print 'Energy', E
print
print 'Forces'
print F_av
print
print 'Errs', eerr, ferr
if eerr > tolerance or ferr > tolerance:
if calc.wfs.world.rank == 0:
stderr = sys.stderr
else:
stderr = devnull
if eerr > tolerance:
print >> stderr, 'Failed!'
print >> stderr, 'E = %f, Eref = %f' % (E, Eref)
msg = 'Energy err larger than tolerance: %f' % eerr
if ferr > tolerance:
print >> stderr, 'Failed!'
print >> stderr, 'Forces:'
print >> stderr, F_av
print >> stderr
print >> stderr, 'Ref forces:'
print >> stderr, Fref_av
print >> stderr
msg = 'Force err larger than tolerance: %f' % ferr
print >> stderr
print >> stderr, 'Args:'
print >> stderr, formula, vacuum, cell, pbc, morekwargs
print >> stderr, parallel
raise AssertionError(msg)
# only kpt-parallelization, this is the reference
run()
# kpt-parallelization=2, state-parallelization=2,
# domain-decomposition=(1,2,1)
parallel['band'] = 2
parallel['domain'] = (1, 2, 1)
run()
if compiled_with_sl():
# kpt-parallelization=2, state-parallelization=2,
# domain-decomposition=(1,2,1)
# with blacs
parallel['sl_default'] = (2, 2, 2)
run()
# perform spin polarization test
parallel = dict()
basekwargs = dict(mode='fd',
maxiter=3,
nbands=6,
kpts=(4,4,4), # 8 kpts in the IBZ
parallel=parallel)
Eref = None
Fref_av = None
OH_kwargs = dict(formula='NH2', vacuum=1.5, pbc=1, spinpol=1,
occupations=FermiDirac(width=0.1))
# reference:
# kpt-parallelization = 4, spin-polarization = 2,
run(**OH_kwargs)
# kpt-parallelization = 2, spin-polarization = 2,
# domain-decomposition = (1, 2, 1)
parallel['domain'] = (1, 2, 1)
run(**OH_kwargs)
# kpt-parallelization = 2, spin-polarization = 2,
# state-parallelization = 2,
# domain-decomposition=(1, 1, 1)
del parallel['domain']
parallel['band'] = 2
run(**OH_kwargs)
# do last test plus buffer_size keyword
parallel['buffer_size'] = 150
run(**OH_kwargs)
if compiled_with_sl():
# kpt-parallelization=2, spin-polarization=2,
# state-parallelization = 2
# domain-decomposition=(1, 2, 1)
# with blacs
del parallel['buffer_size']
parallel['domain'] = (1, 2, 1)
parallel['sl_default'] = (2, 1, 2)
run(**OH_kwargs)
# kpt-parallelization=2, state-parallelization=2,
# domain-decomposition = (1, 2, 1)
# with blacs
parallel['sl_default'] = (2, 2, 2)
run(**OH_kwargs)
# do last test plus buffer_size keyword
parallel['buffer_size'] = 150
run(**OH_kwargs)
|
qsnake/gpaw
|
gpaw/test/parallel/fd_parallel_kpt.py
|
Python
|
gpl-3.0
| 4,179
|
[
"ASE",
"GPAW"
] |
6237cf5f062eddd78ceaad94a29adb256e751ec0bcb0aa46b07949736230a1a8
|
# -*- coding: utf-8 -*-
# HORTON: Helpful Open-source Research TOol for N-fermion systems.
# Copyright (C) 2011-2016 The HORTON Development Team
#
# This file is part of HORTON.
#
# HORTON is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# HORTON is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <http://www.gnu.org/licenses/>
#
# --
'''Periodic table of elements
This module contains an object ``periodic`` that can be used as a Pythonic
periodic table. It can be used as follows::
>>> from horton import periodic
>>> periodic['si'].number
14
>>> periodic['He'].number
2
>>> periodic['h'].symbol
'H'
>>> periodic[3].symbol
'Li'
>>> periodic['5'].symbol
'B'
'''
from horton.context import context
from horton.units import angstrom, amu
__all__ = ['periodic', 'Element', 'Periodic']
class Element(object):
'''Represents an element from the periodic table.
The following attributes are supported for all elements:
number
The atomic number.
symbol
A string with the symbol of the element.
name
The full element name.
group
The group of the element (not for actinides and lanthanides).
period
The row of the periodic system.
The following attributes are present for some elements. When a parameter
is not known for a given element, the attribute is set to None.
cov_radius_cordero
Covalent radius. B. Cordero, V. Gomez, A. E. Platero-Prats, M.
Reves, J. Echeverria, E. Cremades, F. Barragan, and S. Alvarez,
Dalton Trans. pp. 2832--2838 (2008), URL
http://dx.doi.org/10.1039/b801115j
cov_radius_bragg
Covalent radius. W. L. Bragg, Phil. Mag. 40, 169 (1920), URL
http://dx.doi.org/10.1080/14786440808636111
cov_radius_slater
Covalent radius. J. C. Slater, J. Chem. Phys. 41, 3199 (1964), URL
http://dx.doi.org/10.1063/1.1725697
vdw_radius_bondi
van der Waals radius. A. Bondi, J. Phys. Chem. 68, 441 (1964), URL
http://dx.doi.org/10.1021/j100785a001
vdw_radius_truhlar
van der Waals radius. M. Mantina A. C. Chamberlin R. Valero C. J.
Cramer D. G. Truhlar J. Phys. Chem. A 113 5806 (2009), URL
http://dx.doi.org/10.1021/jp8111556
vdw_radius_rt
van der Waals radius. R. S. Rowland and R. Taylor, J. Phys. Chem.
100, 7384 (1996), URL http://dx.doi.org/10.1021/jp953141+
vdw_radius_batsanov
van der Waals radius. S. S. Batsanov Inorganic Materials 37 871
(2001), URL http://dx.doi.org/10.1023/a%3a1011625728803
vdw_radius_dreiding
van der Waals radius. Stephen L. Mayo, Barry D. Olafson, and William
A. Goddard III J. Phys. Chem. 94 8897 (1990), URL
http://dx.doi.org/10.1021/j100389a010
vdw_radius_uff
van der Waals radius. A. K. Rappe, C. J. Casewit, K. S. Colwell, W.
A. Goddard III, and W. M. Skiff, J. Am. Chem. Soc. 114 10024 (1992),
URL http://dx.doi.org/10.1021/ja00051a040
vdw_radius_mm3
van der Waals radius. N. L. Allinger, X. Zhou, and J. Bergsma,
Journal of Molecular Structure: THEOCHEM 312, 69 (1994),
http://dx.doi.org/10.1016/s0166-1280(09)80008-0
wc_radius
Waber-Cromer radius of the outermost orbital maximum. J. T. Waber
and D. T. Cromer, J. Chem. Phys. 42, 4116 (1965), URL
http://dx.doi.org/10.1063/1.1695904
cr_radius
Clementi-Raimondi radius. E. Clementi, D. L. Raimondi, W. P.
Reinhardt, J. Chem. Phys. 47, 1300 (1967), URL
http://dx.doi.org/10.1063/1.1712084
pold_crc
Isolated atom dipole polarizability. CRC Handbook of Chemistry and
Physics (CRC, Boca Raton, FL, 2003). If multiple values were present
in the CRC book, the value used in Erin's postg code is taken.
pold_chu
Isolated atom dipole polarizability. X. Chu & A. Dalgarno, J. Chem.
Phys., 121(9), 4083--4088 (2004), URL
http://dx.doi.org/10.1063/1.1779576 Theoretical value for hydrogen
from this paper: A.D. Buckingham, K.L. Clarke; Chem. Phys. Lett.
57(3), 321--325 (1978), URL
http://dx.doi.org/10.1016/0009-2614(78)85517-1
c6_chu
Isolated atom C_6 dispersion coefficient. X. Chu & A. Dalgarno, J. Chem.
Phys., 121(9), 4083--4088 (2004), URL
http://dx.doi.org/10.1063/1.1779576 Theoretical value for hydrogen
from this paper: K. T. Tang, J. M. Norbeck and P. R. Certain; J.
Chem. Phys. 64, 3063 (1976), URL
http://dx.doi.org/10.1063/1.432569
mass
The IUPAC atomic masses (weights) of 2013.
T.B. Coplen, W.A. Brand, J. Meija, M. Gröning, N.E. Holden, M.
Berglund, P. De Bièvre, R.D. Loss, T. Prohaska, and T. Walczyk.
http://ciaaw.org, http://www.ciaaw.org/pubs/TSAW2013_xls.xls,
When ranges are provided, the middle of the range is used.
The following attributes are derived from the data given above:
cov_radius:
| equals cov_radius_cordero
vdw_radius:
| vdw_radius_truhlar if present
| else vdw_radius_bondi if present
| else vdw_radius_batsanov if present
| else vdw_radius_mm3 if present
| else None
becke_radius:
| cov_radius_slater if present
| else cov_radius_cordero if present
| else None
pold:
| pold_crc
c6:
| c6_chu
'''
def __init__(self, number=None, symbol=None, **kwargs):
self.number = number
self.symbol = symbol
for name, value in kwargs.iteritems():
setattr(self, name, value)
self.cov_radius = self.cov_radius_cordero
if self.vdw_radius_truhlar is not None:
self.vdw_radius = self.vdw_radius_truhlar
elif self.vdw_radius_bondi is not None:
self.vdw_radius = self.vdw_radius_bondi
elif self.vdw_radius_batsanov is not None:
self.vdw_radius = self.vdw_radius_batsanov
elif self.vdw_radius_mm3 is not None:
self.vdw_radius = self.vdw_radius_mm3
else:
self.vdw_radius = None
if self.cov_radius_slater is not None:
self.becke_radius = self.cov_radius_slater
elif self.cov_radius_cordero is not None:
self.becke_radius = self.cov_radius_cordero
else:
self.becke_radius = None
self.pold = self.pold_crc
self.c6 = self.c6_chu
class Periodic(object):
'''A periodic table data structure.'''
def __init__(self, elements):
'''**Arguments:**
elements
A list of :class:`Element` instances.
'''
self.elements = elements
self._lookup = {}
for element in elements:
self._lookup[element.number] = element
self._lookup[element.symbol.lower()] = element
def __getitem__(self, index):
'''Get an element from the table based on a flexible index.
**Argument:**
index
This can be either an integer atomic number, a string with the
elemental symbol (any case), or a string with the atomic number.
**Returns:** the corresponding :class:`Element` instance
'''
result = self._lookup.get(index)
if result is None and isinstance(index, basestring):
index = index.strip()
result = self._lookup.get(index.lower())
if result is None and index.isdigit():
result = self._lookup.get(int(index))
if result is None:
raise KeyError('Could not find element %s.' % index)
return result
def load_periodic():
import csv
convertor_types = {
'int': (lambda s: int(s)),
'float': (lambda s : float(s)),
'au': (lambda s : float(s)), # just for clarity, atomic units
'str': (lambda s: s.strip()),
'angstrom': (lambda s: float(s)*angstrom),
'2angstrom': (lambda s: float(s)*angstrom/2),
'angstrom**3': (lambda s: float(s)*angstrom**3),
'amu': (lambda s: float(s)*amu),
}
with open(context.get_fn('elements.csv'),'r') as f:
r = csv.reader(f)
# go to the actual data
for row in r:
if len(row[1]) > 0:
break
# parse the first two header rows
names = row
convertors = [convertor_types[key] for key in r.next()]
elements = []
for row in r:
if len(row) == 0:
break
kwargs = {}
for i in xrange(len(row)):
cell = row[i]
if len(cell) > 0:
kwargs[names[i]] = convertors[i](cell)
else:
kwargs[names[i]] = None
elements.append(Element(**kwargs))
return Periodic(elements)
periodic = load_periodic()
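# Illustrative lookups into the table built above (a sketch; it assumes the
# bundled elements.csv parsed correctly and that silicon is present):
#
#     si = periodic['si']      # elemental symbol, any case
#     si = periodic[14]        # integer atomic number
#     si = periodic['14']      # atomic number as a string
#     print si.symbol, si.cov_radius, si.mass
#
# All three indices resolve to the same Element instance through the _lookup
# dictionary populated in Periodic.__init__.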
|
crisely09/horton
|
horton/periodic.py
|
Python
|
gpl-3.0
| 9,840
|
[
"Dalton"
] |
c8ecb5397328963748cf3f20873bd144ee819e736817cdff9d9c688c8afc8570
|
__author__ = 'Christoph Heindl'
__copyright__ = 'Copyright 2017, Profactor GmbH'
__license__ = 'BSD'
import glob
import os
import numpy as np
import re
import matplotlib.pyplot as plt
import matplotlib
from mpl_toolkits.axes_grid1 import make_axes_locatable
from sensor_correction.utils import mask_outliers
import seaborn as sbn
sbn.set_context('paper')
sbn.set(font_scale=3)
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='Evaluate Gaussian Process')
parser.add_argument('depth', type=str, help='Preprocessed depth')
parser.add_argument('corrected', type=str, help='Corrected depth')
parser.add_argument('--no-show', action='store_true', help='Do not display results, just save image')
parser.add_argument('--temps', nargs='*', type=int)
parser.add_argument('--poses', nargs='*', type=int)
args = parser.parse_args()
#matplotlib.rcParams.update({'font.size': 20})
# Load depth data
data = np.load(args.depth)
temps = data['temps']
poses = data['poses']
if args.temps:
temps = np.array(args.temps)
if args.poses:
poses = np.array(args.poses)
all_depths_ir = data['depth_ir'][()]
all_depths_rgb = data['depth_rgb'][()]
data = np.load(args.corrected)
all_corrected = data['depth_corrected'][()]
all_deltae = data['depth_deltae'][()]
for p in poses:
depth_target = all_depths_rgb[(p, temps[0])]
for t in temps:
print('Processing pos {}, temperature {}'.format(p, t))
depth_ir = all_depths_ir[(p, t)] # Actual
depth_c = all_corrected[(p, t)] # Corrected
depth_delta = all_deltae[(p, t)] # Applied correction (delta)
errbefore = np.abs(depth_ir - depth_target)
errafter = np.abs(depth_c - depth_target)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(30, 8), dpi=60)
#fig.suptitle('Gaussian Process Regression', fontsize=24, fontweight='bold')
ax1.set_title('RGB/IR Depth Error Before')
ax1.axis('off')
img1 = ax1.imshow(errbefore, cmap='cubehelix', interpolation='nearest', aspect='equal', vmin=0.0, vmax=0.02)
d1 = make_axes_locatable(ax1)
cax1 = d1.append_axes("right", size="7%", pad=0.05)
ax2.set_title('Depth Correction by GP')
ax2.axis('off')
img2 = ax2.imshow(depth_delta, cmap='cubehelix', interpolation='nearest', aspect='equal', vmin=0.0, vmax=0.02)
d2 = make_axes_locatable(ax2)
cax2 = d2.append_axes("right", size="7%", pad=0.05)
ax3.set_title('RGB/IR Depth Error After')
ax3.axis('off')
img3 = ax3.imshow(errafter, cmap='cubehelix', aspect='equal', interpolation='nearest', vmin=0.0, vmax=0.02)
d3 = make_axes_locatable(ax3)
cax3 = d3.append_axes("right", size="7%", pad=0.05)
cbar1 = fig.colorbar(img1, cax = cax1)
fig.delaxes(fig.axes[3])
cbar2 = fig.colorbar(img2, cax = cax2)
fig.delaxes(fig.axes[3])
cbar3 = fig.colorbar(img3, cax = cax3)
cbar3.set_label('Error/Correction (m)', labelpad=3.)
fig.tight_layout(pad=0.05)
fig.subplots_adjust(wspace=0.0)
plt.savefig('correction_t{}_p{:04d}.png'.format(t, p), bbox_inches='tight')
plt.savefig('correction_t{}_p{:04d}.pdf'.format(t, p), bbox_inches='tight', transparent=True, dpi=300)
if not args.no_show:
plt.show()
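# Illustrative invocation (a sketch; the .npz file names are hypothetical, the
# positional arguments and flags come from the argparse definition above):
#
#     python plot_corrected_depth.py depth.npz corrected.npz \
#         --temps 25 30 35 --poses 0 1 --no-show
#
# This writes correction_t<temp>_p<pose>.png/.pdf for each selected
# pose/temperature pair without opening an interactive window.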
|
cheind/rgbd-correction
|
sensor_correction/apps/plot_corrected_depth.py
|
Python
|
bsd-3-clause
| 3,751
|
[
"Gaussian"
] |
4ae989ed1a85fb0f05d65121b45a4e01837683c6752b337841b4eebbc1f5282a
|
import os, subprocess
import tkMessageBox
def popupInfo(title, message):
tkMessageBox.showinfo(title=title, message=message)
def popupError(title, message):
tkMessageBox.showerror(title=title, icon=tkMessageBox.WARNING, message=message)
def startFiles(*filepaths):
"""Takes a sequence of filepath strings."""
if not filepaths: return
subprocess.call(('open',) + filepaths)
def residuesFromPdb(filepath):
if not filepath or not os.path.exists(filepath): return False
chains, residues = [], {}
chain, residue, newResidue = '', '', ''
f = open(filepath, 'rb')
for line in f:
if line.startswith('ATOM'):
chain = line[21]
if not chain in chains: chains.append(chain)
newResidue = line[17:20]+line[22:26].strip()
if newResidue != residue:
residue = newResidue
residues.setdefault(chain, []).append(residue)
f.close()
return [chains]+[residues[chain] for chain in chains]
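# Illustrative output (the two-chain PDB file is hypothetical; residue names
# and numbers depend entirely on the input):
#
#     residuesFromPdb('complex.pdb')
#     # -> [['A', 'B'], ['MET1', 'ALA2', ...], ['GLY1', 'SER2', ...]]
#
# The first element lists the chain IDs in order of appearance; each following
# list holds that chain's residues as name+number strings parsed from the
# fixed-column ATOM records.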
class ScoresDict(dict):
""" The scoreType() method indicates what type of score file has been opened, if
known. Returns a string, one of 'docking', 'abinitio', 'homology', or
'floppytail'. metrics()
returns as a list of strings the scoring metrics in this score file."""
def __init__(self, filepath):
dict.__init__(self)
self.filepath = self.__resolveFilepath(filepath)
if not self.filepath: return
self.dirpath = os.path.dirname(self.filepath)
self.lines = {}
self.__header = ''
self.__metrics = []
self.__rankingMetric = None
self.__type = None
self.__parseScoresFile()
self.__determineType()
def remove(self, name):
if name in self:
del self[name]
del self.lines[name]
def saveScores(self, filepath=None):
if filepath == None: filepath = self.filepath
buff = [self.__header]
for name in sorted(self):
buff.append(self.lines[name])
buff.append('') # So the file can be appended to.
f = open(filepath, 'wb')
f.write('\n'.join(buff)); f.close()
def sort(self, metric=None):
if metric not in self.__metrics:
metric = self.__rankingMetric
return sorted(self, key=lambda name: self[name][metric])
def score(self, filename):
return self[filename][self.__rankingMetric]
def scoreType(self):
return self.__type
def metrics(self):
return self.__metrics
def markDeleted(self, name, basename=None):
if not basename: basename = name[:-9]
nameTemplate = '%s_deleted_%04d.pdb'
i = 1
while nameTemplate % (basename, i) in self: i += 1
self.rename(name, nameTemplate % (basename, i))
def rename(self, old, new):
if old == new: return
oldScoreLine = self.lines.pop(old)
newScoreLine = '%s %s' % (oldScoreLine.rpartition(' ')[0], new)
self.lines[new] = newScoreLine
self[new] = self.pop(old)
def swap(self, nameA, nameB):
if nameA == nameB: return
tempName = nameA+'.temp'
self.rename(nameA, tempName)
self.rename(nameB, nameA)
self.rename(tempName, nameB)
# # # # # File handling methods # # # # #
def orderDecoys(self, basename=None, rankingMetric=None):
self.orderAndTrimDecoys(0, basename)
def orderAndTrimDecoys(self, numToKeep, basename=None):
files = os.listdir(os.path.dirname(self.filepath))
newOrder = [fname for fname in self.sort() if fname in files]
if numToKeep:
toDelete = newOrder[numToKeep:]
newOrder = newOrder[:numToKeep]
for fname in toDelete:
os.remove(os.path.join(self.dirpath, fname))
self.markDeleted(fname, basename)
self.__reorderDecoys(newOrder, basename)
def renameFiles(self, old, new):
if old == new: return
self.rename(old, new)
oldPath = os.path.join(self.dirpath, old)
newPath = os.path.join(self.dirpath, new)
os.rename(oldPath, newPath)
def swapFiles(self, nameA, nameB):
if nameA == nameB: return
tempName = nameA+'.temp'
self.renameFiles(nameA, tempName)
self.renameFiles(nameB, nameA)
self.renameFiles(tempName, nameB)
# # # # # Private Methods # # # # #
def __reorderDecoys(self, newOrder, basename=None):
l = newOrder[:]
if not basename: basename = l[0].rpartition('_')[0]
for i, oldFilename in enumerate(l):
l[i] = False
filename = '%s_%.4d.pdb' % (basename, i+1)
if oldFilename == filename: continue
if filename in l: # Need to rename existing file
l[l.index(filename)] = oldFilename
self.swapFiles(oldFilename, filename)
else:
self.renameFiles(oldFilename, filename)
self.saveScores()
def __isScoreFile(self, filename):
suffixes = ('fasc', 'fsc', 'sc')
if filename.lower().split('.')[-1] in suffixes:
return True
return False
def __resolveFilepath(self, filepath):
if type(filepath) is not str: return False
if os.path.isfile(filepath) and self.__isScoreFile(filepath):
return os.path.realpath(filepath)
elif os.path.isdir(filepath):
for f in os.listdir(filepath):
if self.__isScoreFile(f):
return os.path.realpath(os.path.join(filepath, f))
return False
def __parseScoresFile(self):
f = open(self.filepath, 'r')
def floatIfIs(score):
try: return float(score)
except ValueError: return score
try:
temp = f.readline()
if temp.startswith('SEQUENCE:'): header = f.readline().strip()
else: header, temp = temp.strip(), ''
scoresHeader = [score.strip() for score in header.split()[1:]
if not score.isspace() and 'description' not in score]
self.__metrics = scoresHeader
self.__header = temp+header
self.__rankingMetric = scoresHeader[0] if scoresHeader else None # provisional; refined in __determineType()
for line in f:
resultDict = {}
segs = [seg.strip() for seg in line.split()[1:]
if not seg.isspace()]
name = os.path.basename(segs.pop().strip())
if not name.endswith('.pdb'): name += '.pdb' # Results must be pdbs.
resultDict = dict((metric, floatIfIs(score)) for metric, score in
zip(scoresHeader, segs))
self[name] = resultDict.copy()
self.lines[name] = line.strip()
return True
except:
print '\nError occurred attempting to parse %s.\n' % self.filepath
self.__header = ''
self.lines = {}
self.clear()
raise
finally:
f.close()
def __determineType(self):
suffix = self.filepath.rpartition('.')[-1]
metrics = self.__metrics
if 'total_score' in metrics:
self.__rankingMetric = 'total_score'
if 'I_sc' in metrics:
self.__type = 'docking'
elif 'aln_len' in metrics:
self.__type = 'homology'
elif suffix == 'sc':
self.__type = 'floppytail'
elif 'total_energy' in metrics:
self.__type = 'loopmodel'
self.__rankingMetric = 'total_energy'
elif 'score' in metrics:
self.__rankingMetric = 'score'
if suffix == 'fsc':
self.__type = 'abinitio'
else:
self.__type = 'unknown'
self.__rankingMetric = metrics[0]
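# Illustrative ScoresDict usage (a sketch; the score-file path is hypothetical,
# the method names are those defined above):
#
#     scores = ScoresDict('/path/to/run/score.fasc')
#     if scores.scoreType() == 'docking':
#         best10 = scores.sort()[:10]        # ranked by the ranking metric
#         scores.orderAndTrimDecoys(100)     # keep only the 100 best decoy files
#
# sort() falls back to the ranking metric chosen in __determineType() when no
# explicit metric is given.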
rosedockMainHelpMessage = ' Very important note: This program has been tested with PyMol for Mac version 1.3, and the rosetta suite versions 3.1 and 3.3. Other versions of rosetta may or may not work, as with other versions of PyMol. It is however known that PyMol version 0.99 DOES NOT WORK. Once rosetta is installed make sure that RoseDock knows where it is located. If it has not found it automatically, you can set the paths in the preferences menu.\n' + \
' This program was written to run Rosetta protein docking, and to be able to parse the multitudes of results. To start a new run, create one pdb file containing the two proteins to be docked together. If possible, remove sections of the structures that you are confident are not involved in the docking, as this can save hours of run-time. The two proteins should be separated from each other by enough space to allow each to rotate. If you have a rough idea of how the proteins interact, rotate each so the appropriate face is towards its partner.\n' + \
' The number of decoys depends on the type of run; for a global run (where both structures are randomized and no constraints are defined) it is recommended to have 10,000-100,000 decoys, while for a local run (where the proteins have already been roughly positioned and/or constraints have been entered) 1,000-5,000 decoys are recommended. This modelling can take a very long time; by way of example 3,000 decoys of 2 proteins 300 amino acids each generated by 7 cores at 2.8GHz takes approximately 4-5 hours, and up to 8-16 hours with constraints defined.\n' + \
' For best performance the number of parallel processes should be equal to or one less than the number of processors in your computer. Randomize Structures should be checked if you want to do a global docking run, in which case you should be generating 10,000 decoys or more. Leave it unchecked for a local docking run. The output folder specifies where to save all of the decoys, their scores, and the docking options. The number of top decoys to keep allows you to delete the low-scoring decoys after the run is finished, as thousands of them can quickly consume a lot of hard-drive space. When they are deleted the score file will not be altered, so their information will show up in the results display.\n' + \
' Docking constraints allow you to incorporate previous information about an amino acid into a docking run. If you have mutagenesis information, or can infer importance via homology, you can specify that certain amino acids must be within a certain distance (in Angstroms) of the other protein. The Edit button becomes active after choosing an input pdb, and you can specify amino acids from one or both proteins in the pdb. Remember to exit with the Done button once finished, so that the constraints are saved. After generating the previously specified number of decoys, all of those that do not meet every constraint are deleted, and docking is begun again in order to again reach that number. This continues until every decoy satisfies every constraint. Note: If the constraints are too restrictive this cycle could go on forever, or at least for a very very long time.\n' + \
' Once options (including constraints) have been filled out, they can be saved for future reference. Likewise, options can be loaded from a .dockfile. The options for a run are automatically saved into the output folder when a run is begun, and the results will be opened once it has completed.\n' + \
' When examining the results of a docking run, several scores are presented. Total Score is a combination of several scores for the complex, the lower the better. The Interface Score is a separate measure, and describes the difference in score between each individual protein and that of their complex. Again the lower the better, with good models usually from -5 to -10. RMS is a measure of the difference between that decoy and the starting structure; its value has little meaning, but the greater the difference in RMS between 2 decoys, the more different they are from each other. The results can be sorted, and double-clicking a decoy will open it in your default pdb viewing program.\n' + \
' The Tabulate Results button brings up a text box with the RMS and sum of the Total and Interface scores, separated by a tab. Pressing select all and then command-c will copy the data so you can paste it into your favourite graphing or spreadsheet program. This allows the creation of graphs overviewing the docking run, so funnel clusters can be shown.\n' + \
' Since each decoy is generated independently, many of them may be very similar. Clustering the results attempts to separate the decoys into clusters of similar structures, allowing a better understanding of the results. This process is computationally intensive, and increases semi-exponentially the more decoys that are considered. 30-200 is a reasonable number, taking a minute or two at most. The cluster radius allows you to define how similar structures must be (in Angstroms) to be considered a cluster.\n'
pathsHelpMessage = ' These paths allow RoseDock to find the Rosetta programs it requires to run.\n Rosetta_Bundles should be the path to the main directory, and rosetta_database should be just within that folder.\n The docking executable is usually in rosetta3.x_Bundles/rosetta_source/bin/docking_protocol.your_particular_version. If you cannot pick that alias, the true file should be found in rosetta_Bundles/rosetta_source/build/src.../docking_protocol.your_particular_version.\n\nRemember to press "Save Paths" when you are done.'
|
dave-the-scientist/molecbio
|
rosettaApps/util.py
|
Python
|
gpl-3.0
| 13,477
|
[
"PyMOL"
] |
26ef0de0a8b8bdbb2bd46a2267b27974ad4f19df6a38050da7807b07e5fc6997
|
"""@See preprocessed data
"""
from numpy import arange, sin, pi
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
from matplotlib.path import Path
import matplotlib.patches as patches
from numpy import *
from numpy.linalg import *
from scipy import interpolate
from scipy.signal import filtfilt, lfilter
from scipy.signal import medfilt
from scipy.signal import filter_design as ifd
from scipy.stats import multivariate_normal
import scipy.spatial
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
import os
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import math
from sklearn.mixture import GMM
class Creator():
"""
Creator assumes the trials provided to be the
modelling dataset of a Human Motion Primitive (HMP) and returns the model
of the HMP computed by executing Gaussian Mixture Modelling (GMM) and
Gaussian Mixture Regression (GMR) over the modelling dataset. The
model is defined by the expected curve and associated set of covariance
matrices of the features extracted from the trials.
Actually considered features are:
- 4D gravity (time, gravity components on the 3 axes)
- 4D body acceleration (time, body acc. components on the 3 axes)
"""
def __init__(self):
self.lista = []
#Acceleration set in X
self.x_set_acc = []
#Acceleration set in Y
self.y_set_acc = []
#Acceleration set in Z
self.z_set_acc = []
#list of acceleration files
def ReadFiles(self,names_acc, names_gyro, acc = True, gyro = False, dfilter = "median"):
""" Read the data from a list of files
:param names_acc: list of files that contains the accelerometer data to be used as modelling dataset for a HMP
:param names_gyro: list of files that contains the gyroscope data to be used as modelling dataset for a HMP
:param acc: bool parameter that indicates if the accelerometer data will be used
:param gyro: bool parameter that indicates if the gyroscope data will be used
:param dfilter: type of filter used to reduce noise in the data
"""
self.names_acc = names_acc
self.names_gyro = names_gyro
"""Read the acceleration data from txt files"""
if(acc == True):
try:
self.num = 0
for name in self.names_acc:
self.data = genfromtxt(name, delimiter=' ')
self.lista.append(self.data)
noisy_x = self.lista[self.num].transpose()[0]
noisy_y = self.lista[self.num].transpose()[1]
noisy_z = self.lista[self.num].transpose()[2]
self.num = self.num + 1
if(dfilter == 'median'):
n = 3 #order of the median filter
x_set = medfilt(noisy_x,n)
y_set = medfilt(noisy_y,n)
z_set = medfilt(noisy_z,n)
self.x_set_acc.append(x_set)
self.y_set_acc.append(y_set)
self.z_set_acc.append(z_set)
self.numSamples,m = self.lista[0].shape
except:
print("\n \n ------- Error in the files -------------- \n")
"""Read the gyroscopre data from txt files"""
def CreateDatasets_Acc(self):
""" CreateDatasets computes the gravity and body acceleration components of the trials given in the [dataset]s by calling the function GetComponents for each trial and reshapes the results into one set of gravity components and one set of body acceleration components according to the requirements of Gaussian Mixture Modelling.
"""
#% SEPARATE THE GRAVITY AND BODY-MOTION ACCELERATION COMPONENTS
#Obtain the number of files
numFiles = len(self.x_set_acc)
gravity_trial,body_trial = self.GetComponents(self.x_set_acc[0], self.y_set_acc[0], self.z_set_acc[0])
self.shortNumSamples, m = gravity_trial.shape
#print self.shortNumSamples
#initial values of the dataset arrays
n = self.shortNumSamples
time = ones((1,n))*arange(0,self.shortNumSamples,1)
g_x_s = ones((1,n))*gravity_trial[0:self.shortNumSamples,0].transpose()
g_y_s = ones((1,n))*gravity_trial[0:self.shortNumSamples,1].transpose()
g_z_s = ones((1,n))*gravity_trial[0:self.shortNumSamples,2].transpose()
b_x_s = ones((1,n))*body_trial[0:self.shortNumSamples,0].transpose()
b_y_s = ones((1,n))*body_trial[0:self.shortNumSamples,1].transpose()
b_z_s = ones((1,n))*body_trial[0:self.shortNumSamples,2].transpose()
i = 1
while(i < self.num):
gravity_trial,body_trial = self.GetComponents(self.x_set_acc[i], self.y_set_acc[i], self.z_set_acc[i])
# CREATE THE DATASETS FOR THE GMMs
timec = ones((1,n))*arange(0,self.shortNumSamples,1)
time = concatenate((time,timec),axis=1)
g_x_s = concatenate((g_x_s,ones((1,n))*gravity_trial[0:self.shortNumSamples,0].transpose()),axis=1)
g_y_s = concatenate((g_y_s,ones((1,n))*gravity_trial[0:self.shortNumSamples,1].transpose()),axis=1)
g_z_s = concatenate((g_z_s,ones((1,n))*gravity_trial[0:self.shortNumSamples,2].transpose()),axis=1)
b_x_s= concatenate((b_x_s,ones((1,n))*body_trial[0:self.shortNumSamples,0].transpose()),axis=1)
b_y_s = concatenate((b_y_s,ones((1,n))*body_trial[0:self.shortNumSamples,1].transpose()),axis=1)
b_z_s = concatenate((b_z_s,ones((1,n))*body_trial[0:self.shortNumSamples,2].transpose()),axis=1)
i = i +1
gravity = concatenate((time,g_x_s), axis = 0);
gravity = concatenate((gravity, g_y_s), axis = 0);
self.gravity = concatenate((gravity, g_z_s), axis = 0);
body = concatenate((time,b_x_s), axis = 0);
body = concatenate((body,b_y_s), axis = 0);
self.body = concatenate((body,b_z_s), axis = 0);
#2.1
def GetComponents(self, x_axis, y_axis, z_axis):
""" GetComponents discriminates between gravity and body acceleration by
applying an infinite impulse response (IIR) filter to the raw
acceleration data (one trial) given in input.
:param x_axis: acceleration data in the axis x
:param y_axis: acceleration data in the axis y
:param z_axis: acceleration data in the axis z
:return gravity: gravity component of the acceleration data
:return body: body component of the acceleration data
"""
#APPLY IIR FILTER TO GET THE GRAVITY COMPONENTS
#IIR filter parameters (all frequencies are in Hz)
Fs = 32; # sampling frequency
Fpass = 0.25; # passband frequency
Fstop = 2; # stopband frequency
Apass = 0.001; # passband ripple (dB)
Astop = 100; # stopband attenuation (dB)
match = 'pass'; # band to match exactly
delay = 64; # delay (# samples) introduced by filtering
#Create the IIR filter
# iirdesign arguments
Wip = (Fpass)/(Fs/2)
Wis = (Fstop+1e-6)/(Fs/2) # normalized stopband edge; the original 1e6 offset would push it far outside the valid (0, 1) range, so a tiny epsilon is assumed instead
Rp = Apass # passband ripple
As = Astop # stopband attenuation
# The iirdesign takes passband, stopband, passband ripple,
# and stop attenuation.
bb, ab = ifd.iirdesign(Wip, Wis, Rp, As, ftype='cheby1')
g1 = lfilter(bb,ab,x_axis)
g2 = lfilter(bb,ab,y_axis)
g3 = lfilter(bb,ab,z_axis)
#COMPUTE THE BODY-ACCELERATION COMPONENTS BY SUBTRACTION (QUESTION)
gravity = zeros((self.numSamples -delay,3));
body = zeros((self.numSamples -delay,3));
i = 0
while(i < self.numSamples-delay):
#shift & reshape gravity to reduce the delaying effect of filtering
gravity[i,0] = g1[i+delay];
gravity[i,1] = g2[i+delay];
gravity[i,2] = g3[i+delay];
body[i,0] = x_axis[i] - gravity[i,0];
body[i,1] = y_axis[i] - gravity[i,1];
body[i,2] = z_axis[i] - gravity[i,2];
i = i + 1
#COMPUTE THE BODY-ACCELERATION COMPONENTS BY SUBTRACTION
return gravity, body
#3
def ObtainNumberOfCluster(self,acc = True, gyro = False, algorithm = "KMeans", save = True, path = ""):
"""Compute the expected curve for each dataset
:param acc: bool parameter that indicates if the accelerometer data will be used
:param gyro: bool parameter that indicates if the gyroscope data will be used
:param algorithm: algorithm used to obtain the number of clusters
:param save: bool parameter that indicates if the plots are saved
:param path: path where the plots will be saved
"""
#Determine the number of Gaussians to be used in the GMM
if (acc):
self.K_gravity = self.TuneK(self.gravity,100,'gravity',save,path)
print "K gravity = ", self.K_gravity, "\n"
self.K_body = self.TuneK(self.body,100,'body',save,path)
print "K body = ", self.K_body, "\n"
def TuneK(self,set_, maxK, name, save = True, path = ''):
""" TuneK determines the optimal number of clusters to be used to cluster
the given [set] with K-means algorithm. It cycles from K = 2 to [maxK].
The optimization criterion adopted is a variant of the elbow method: at
each iteration TuneK computes the silhouette values of the clusters
determined by the K-means algorithm and compares them with the values
obtained at the previous iteration. When the quality of the
clustering falls below a fixed threshold, TuneK stops.
:param set_: either the gravity or the body acc. dataset retrieved from
CreateDatasets
:param maxK: maximum number of clusters to be used to cluster the given
dataset.
:param name: component name (gravity or body)
:param save: bool parameter that indicates if the plots are saved
:param path: path where the plots will be saved
:return Koptimal: optimal number of clusters to be used to cluster the data
of the given dataset"""
# DETERMINE THE OPTIMAL NUMBER OF CLUSTERS (K) FOR THE GIVEN DATASET
# tuning parameters
threshold = 0.69 # threshold on the FITNESS of the current clustering
minK = 2 # initial number of clusters to be used
#first step is outside of the loop to have meaningful initial values
data = set_.transpose()#[:,1:]
n_samples, n_features = data.shape
print "samples= ", n_samples, "features", n_features
#n_digits = len(unique(digits.target))
#labels = digits.target
print(79 * '_')
print(name + '\n')
return self.bench_k_means(data,name,save, path)
def bench_k_means(self, data, name, save = False, path = '', plot = True):
""" Silhouette analysis
:param data: dataset trasposed
:param name: component name (gravity or body)
:param save: bool parameter that indicates if the plots are saved
:param path: path where the plots will be saved
:return Koptimal: optimal number of clusters to be used to cluster the data of the given dataset"""
#In this example the silhouette analysis is used to choose an optimal value for n_clusters.
#Bad pick for the given data due to the presence of clusters with
#below average silhouette scores and also due to wide fluctuations in the size of the silhouette plots.
threshold = 0.69;
#t0 = time()
X = data
cmin = 2
cmax = 50
for n_clusters in range(cmin,cmax):
# Create a subplot with 1 row and 2 columns
if(plot == True):
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
#cluster_labels = clusterer.fit(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels, metric='sqeuclidean')
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
#print sample_silhouette_values
Koptimal = n_clusters
#if (Koptimal == maxK):
#print('MATLAB:noConvergence','Failed to converge to the optimal K: increase maxK.')
if(silhouette_avg < threshold):
return (Koptimal)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.spectral(float(i) / n_clusters)
if(plot == True):
ax1.fill_betweenx(arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
if(plot == True):
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for the average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1],
marker='o', c="white", alpha=1, s=200)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.savefig(path + str(name) + "_c_"+ str(n_clusters) + '.png')
return Koptimal
def runGMM(self,dataset,K):
""" Execute GMM in a data set given a K number of clusters
:param dataset: either the gravity or the body acc. dataset retrived from CreateDatasets
:param K: number of clusters
:return priors: apriori probavbility
:return mu: mean
:return sigma: covariance
"""
gmm = GMM(n_components=K,covariance_type='full')
gmm.fit(dataset.transpose())
priors = gmm.weights_
mu = gmm.means_
sigma = gmm.covars_
return priors, mu, sigma
def GetExpected(self,set_,K,numGMRPoints):
"""GetExpected performs Gaussian Mixture Modeling (GMM) and Gaussian Mixture
Regression (GMR) over the given dataset. It returns the expected curve
(expected mean for each point, computed over the values given in the
[set]) and associated set of covariance matrices defining the "model"
of the given dataset.
:param set_: either the gravity or the body acc. dataset retrieved from CreateDatasets
:param K: optimal number of clusters to be used to cluster the data of the given dataset retrieved from TuneK
:param numGMRPoints: number of data points composing the expected curves to be computed by GMR
:return expData: expected curve obtained by modelling the given dataset with the GMM+GMR procedure
:return expSigma: associated covariance matrices obtained by modelling the given dataset with the GMM+GMR procedure
"""
# PARAMETERS OF THE GMM+GMR PROCEDURE
numVar = 4; # number of variables in the system (time & 3 accelerations)
m,numData = set_.shape;# number of points in the dataset
priors, mu, sigma = self.runGMM(set_,K)
#APPLY GAUSSIAN MIXTURE REGRESSION TO FIND THE EXPECTED CURVE
#define the points to be used for the regression
#(assumption: CONSTANT SPACING)
expData1 = np.linspace(1,numGMRPoints+1, num=numGMRPoints+1);
expMeans, expSigma = self.RetrieveModel(K,priors,mu,sigma,expData1,0,np.arange(1,numVar),numVar)
return expMeans.transpose(), expSigma
def RetrieveModel(self,K,priors,mu,sigma,points,in_,out,numVar):
""" Performs Gaussian Mixture Regression (GMR) over the GM model defined by its parameters. By providing temporal values as inputs, it returns a smooth generalized version of the data encoded in the GMM and the associated constraints expressed by the covariance matrices.
:param K: number of clusters
:param priors: a priori probability
:param mu: mean
:param sigma: covariance
:param points: input data (starting points to be used for GMR)
:param in_: input dimension
:param out: output dimension
:param numVar: number of variables, e.g. time plus the x, y, z accelerations gives numVar = 4
:return expMeans: set of expected means for the given GM model
:return expSigma: covariance matrices of the expected points in expMeans
"""
numData = size(points)
pdf_point = zeros((numData,K))
beta = zeros((numData,K))
exp_point_k = zeros((numVar-1,numData,K))
exp_sigma_k = {}
for i in range(K):
# compute the probability of each point to belong to the actual GM
# model (probability density function of the point) --> p(point)
pdf_point_temp = multivariate_normal.pdf(points,mu[i,in_],sigma[i,in_,in_])
#compute p(Gaussians) * p(point|Gaussians)
pdf_point[:,i] = priors[i]* pdf_point_temp
#estimate the parameters beta
for i in range(K):
beta[:,i] = pdf_point[:,i]/sum(pdf_point,1)
for j in range (K):
temp = (ones((numData,1))*mu[j,out]).transpose()+(ones((1,numVar-1))*(sigma[j,out,in_]*1/(sigma[j,in_,in_]))).transpose()*(points-tile(mu[j,in_],[1,numData]))
exp_point_k[:,:,j] = temp
beta_tmp = reshape(beta,(1,numData,K))
#print tile(beta_tmp,[size(out),1,1]).shape
exp_point_k2 = tile(beta_tmp,[size(out),1,1])*exp_point_k;
#compute the set of expected means
expMeans = sum(exp_point_k2,2)
for j in range (K):
temp = sigma[j,1:numVar,1:numVar] -(ones((1,numVar-1))*(sigma[j,out,in_]*1/(sigma[j,in_,in_]))).transpose()*sigma[j,in_,out]
exp_sigma_k[j] = temp
expSigma_temp = {}
for i in range (numData):
expSigma_temp[i] = zeros((numVar-1,numVar-1))
for j in range (K):
expSigma_temp[i] = expSigma_temp[i] + beta[i,j]* beta[i,j]*exp_sigma_k[j]
expSigma = expSigma_temp[0]
for i in range (1,numData):
expSigma = concatenate((expSigma, expSigma_temp[i]), axis=0)
return expMeans, expSigma
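# A minimal end-to-end sketch (illustrative; the file names are hypothetical
# and the call order follows the methods defined above):
#
#     c = Creator()
#     c.ReadFiles(['acc_trial1.txt', 'acc_trial2.txt'], [], acc=True)
#     c.CreateDatasets_Acc()
#     c.ObtainNumberOfCluster(save=False)
#     expG, sigG = c.GetExpected(c.gravity, c.K_gravity, 100)
#     expB, sigB = c.GetExpected(c.body, c.K_body, 100)
#
# RetrieveModel implements standard Gaussian Mixture Regression: for an input
# time t, each component k is weighted by beta_k(t), proportional to
# priors[k] * N(t; mu_k_t, sigma_k_tt), and the expected output is
# sum_k beta_k(t) * (mu_k_x + sigma_k_xt / sigma_k_tt * (t - mu_k_t)),
# which is the combination computed in the loops above.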
if __name__ == "__main__":
import doctest
doctest.testmod()
|
enriquecoronadozu/HMPy
|
src/Creator.py
|
Python
|
gpl-3.0
| 22,258
|
[
"Gaussian"
] |
208e3072225d47837e27ce39c57117bc378e7f22c951584143f7113b85238fe6
|
##############################################################################
# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class RZlibbioc(RPackage):
"""This package uses the source code of zlib-1.2.5 to create libraries
for systems that do not have these available via other means (most
Linux and Mac users should have system-level access to zlib, and no
direct need for this package). See the vignette for instructions
on use."""
homepage = "http://bioconductor.org/packages/release/bioc/html/Zlibbioc.html"
url = "https://bioconductor.org/packages/3.5/bioc/src/contrib/zlibbioc_1.22.0.tar.gz"
version('1.22.0', '2e9496b860270d2e73d1305b8c6c69a5')
depends_on('r@3.4.0:3.4.9', when='@1.22.0')
|
lgarren/spack
|
var/spack/repos/builtin/packages/r-zlibbioc/package.py
|
Python
|
lgpl-2.1
| 1,882
|
[
"Bioconductor"
] |
61749bcf5836c8bcc25a0add1e56bce3cdcb85426b6cc9009d21a790eb3ae0f3
|
#!/bin/env python
"""
Module simtk.unit.unit_operators
Physical quantities with units, intended to produce similar functionality
to Boost.Units package in C++ (but with a runtime cost).
Uses similar API as Scientific.Physics.PhysicalQuantities
but different internals to satisfy our local requirements.
In particular, there is no underlying set of 'canonical' base
units, whereas in Scientific.Physics.PhysicalQuantities all
units are secretly in terms of SI units. Also, it is easier
to add new fundamental dimensions to simtk.dimensions. You
might want to make new dimensions for, say, "currency" or
"information".
Two possible enhancements that have not been implemented are
1) Include uncertainties with propagation of errors
2) Incorporate offsets for celsius <-> kelvin conversion
This is part of the OpenMM molecular simulation toolkit originating from
Simbios, the NIH National Center for Physics-Based Simulation of
Biological Structures at Stanford, funded under the NIH Roadmap for
Medical Research, grant U54 GM072970. See https://simtk.org.
Portions copyright (c) 2012 Stanford University and the Authors.
Authors: Christopher M. Bruns
Contributors: Peter Eastman
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS, CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
__author__ = "Christopher M. Bruns"
__version__ = "0.5"
from .unit import Unit, is_unit
from .quantity import Quantity, is_quantity
# Attach methods of Unit class that return a Quantity to Unit class.
# I put them here to avoid circular dependence in imports.
# i.e. Quantity depends on Unit, but not vice versa
def _unit_class_rdiv(self, other):
"""
Divide another object type by a Unit.
Returns a new Quantity with a value of other and units
of the inverse of self.
"""
if is_unit(other):
raise NotImplementedError('programmer is surprised __rtruediv__ was called instead of __truediv__')
else:
# print "R scalar / unit"
unit = pow(self, -1.0)
value = other
return Quantity(value, unit).reduce_unit(self)
Unit.__rtruediv__ = _unit_class_rdiv
Unit.__rdiv__ = _unit_class_rdiv
def _unit_class_mul(self, other):
"""Multiply a Unit by an object.
If other is another Unit, returns a new composite Unit.
Exponents of similar dimensions are added. If self and
other share similar BaseDimension, but
with different BaseUnits, the resulting BaseUnit for that
BaseDimension will be that used in self.
If other is not another Unit, this method returns a
new Quantity... UNLESS other is a Quantity and the resulting
unit is dimensionless, in which case the underlying value type
of the Quantity is returned.
"""
if is_unit(other):
if self in Unit._multiplication_cache:
if other in Unit._multiplication_cache[self]:
return Unit._multiplication_cache[self][other]
else:
Unit._multiplication_cache[self] = {}
# print "unit * unit"
result1 = {} # dictionary of dimensionTuple: (BaseOrScaledUnit, exponent)
for unit, exponent in self.iter_base_or_scaled_units():
d = unit.get_dimension_tuple()
if d not in result1:
result1[d] = {}
assert unit not in result1[d]
result1[d][unit] = exponent
for unit, exponent in other.iter_base_or_scaled_units():
d = unit.get_dimension_tuple()
if d not in result1:
result1[d] = {}
if unit not in result1[d]:
result1[d][unit] = 0
result1[d][unit] += exponent
result2 = {} # stripped of zero exponents
for d in result1:
for unit in result1[d]:
exponent = result1[d][unit]
if exponent != 0:
assert unit not in result2
result2[unit] = exponent
new_unit = Unit(result2)
Unit._multiplication_cache[self][other] = new_unit
return new_unit
elif is_quantity(other):
# print "unit * quantity"
value = other._value
unit = self * other.unit
return Quantity(value, unit).reduce_unit(self)
else:
# print "scalar * unit"
value = other
unit = self
# Is reduce_unit needed here? I hope not, there is a performance issue...
# return Quantity(other, self).reduce_unit(self)
return Quantity(other, self)
Unit.__mul__ = _unit_class_mul
Unit.__rmul__ = Unit.__mul__
Unit._multiplication_cache = {}
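# Illustrative behavior of the operators attached above (a sketch; the unit
# names meter and second are assumed to come from the accompanying unit
# definitions module, which is not part of this file):
#
#     meter * second          # Unit * Unit     -> composite Unit
#     2.0 * meter             # scalar * Unit   -> Quantity(2.0, meter)
#     2.0 / meter             # scalar / Unit   -> Quantity(2.0, meter**-1)
#     meter * (2.0 * meter)   # Unit * Quantity -> Quantity(2.0, meter**2)
#
# Composite results are cached in Unit._multiplication_cache so repeated
# products of the same pair of units reuse the same Unit instance.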
# run module directly for testing
if __name__=='__main__':
# Test the examples in the docstrings
import doctest, sys
doctest.testmod(sys.modules[__name__])
|
swails/mdtraj
|
mdtraj/utils/unit/unit_operators.py
|
Python
|
lgpl-2.1
| 5,698
|
[
"OpenMM"
] |
eb25f2fdd2f318caa426f8873faed5ca92d0f57c073aee5f9172c59c577023ba
|
"""
chatbot.py
Ask Cleverbot something via CloudBot! This one is way shorter!
Created By:
- Foxlet <http://furcode.tk/>
License:
GNU General Public License (Version 3)
"""
import urllib.parse
import hashlib
import collections
import html
import requests
from cloudbot import hook
SESSION = collections.OrderedDict()
API_URL = "http://www.cleverbot.com/webservicemin?uc=321&"
HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',
'Accept-Language': 'en-us;q=0.8,en;q=0.5',
'Pragma': 'no-cache',
'Referer': 'http://www.cleverbot.com',
'User-Agent': 'Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like '
'Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19',
'X-Moz': 'prefetch'
}
sess = requests.Session()
sess.get("http://www.cleverbot.com")
@hook.on_start()
def init_vars():
SESSION['stimulus'] = ""
SESSION['sessionid'] = ""
SESSION['start'] = 'y'
SESSION['icognoid'] = 'wsf'
SESSION['fno'] = '0'
SESSION['sub'] = 'Say'
SESSION['islearning'] = '1'
SESSION['cleanslate'] = 'false'
def cb_think(text):
SESSION['stimulus'] = text
payload = urllib.parse.urlencode(SESSION)
digest = hashlib.md5(payload[9:35].encode('utf-8')).hexdigest()
target_url = "{}&icognocheck={}".format(payload, digest)
parsed = sess.post(API_URL, data=target_url, headers=HEADERS)
data = parsed.text.split('\r')
SESSION['sessionid'] = data[1]
if parsed.status_code == 200:
return html.unescape(str(data[0]))
else:
print("CleverBot API Returned "+str(parsed.status_code))
return "Error: API returned "+str(parsed.status_code)
@hook.command("ask", "cleverbot", "cb")
def ask(text):
""" <question> -- Asks Cleverbot <question> """
return cb_think(text)
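# Illustrative IRC usage (a sketch; the command aliases come from the
# @hook.command decorator above, the reply text is made up):
#
#     <user> .ask how are you?
#     <bot>  I am fine, how are you?
#
# cb_think() keeps one shared Cleverbot session: the icognocheck token is the
# md5 digest of characters 9:35 of the urlencoded SESSION payload, mimicking
# the request signing done by the Cleverbot web client itself.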
|
CrushAndRun/Cloudbot-Fluke
|
plugins/chatbot.py
|
Python
|
gpl-3.0
| 1,912
|
[
"Galaxy"
] |
781349a977cc9df453fbe615d1a1b73eca43ebce54241d666c9646528d387e3a
|
""" :mod: GFAL2_SRM2Storage
=================
.. module: python
:synopsis: SRM2 module based on the GFAL2_StorageBase class.
"""
# pylint: disable=invalid-name
import errno
import json
import gfal2 # pylint: disable=import-error
# from DIRAC
from DIRAC import gLogger, gConfig, S_OK, S_ERROR
from DIRAC.Resources.Storage.GFAL2_StorageBase import GFAL2_StorageBase
from DIRAC.Resources.Storage.Utilities import checkArgumentFormat
__RCSID__ = "$Id$"
class GFAL2_SRM2Storage(GFAL2_StorageBase):
""" SRM2 SE class that inherits from GFAL2StorageBase
"""
_INPUT_PROTOCOLS = ['file', 'root', 'srm', 'gsiftp']
_OUTPUT_PROTOCOLS = ['file', 'root', 'dcap', 'gsidcap', 'rfio', 'srm', 'gsiftp']
def __init__(self, storageName, parameters):
""" """
super(GFAL2_SRM2Storage, self).__init__(storageName, parameters)
self.log = gLogger.getSubLogger("GFAL2_SRM2Storage", True)
self.log.debug("GFAL2_SRM2Storage.__init__: Initializing object")
self.pluginName = 'GFAL2_SRM2'
# This attribute is used to know the file status (OFFLINE,NEARLINE,ONLINE)
self._defaultExtendedAttributes = ['user.status']
# ##
# Setting the default SRM parameters here. For methods where this
# is not the default there is a method defined in this class, setting
# the proper values and then calling the base class method.
# ##
self.gfal2requestLifetime = gConfig.getValue('/Resources/StorageElements/RequestLifeTime', 100)
self.__setSRMOptionsToDefault()
# This list contains the protocols to request from SRM when asking for a URL
# It can be either defined in the plugin of the SE, or as a global option
if 'ProtocolsList' in parameters:
self.protocolsList = parameters['ProtocolsList'].split(',')
else:
self.log.debug("GFAL2_SRM2Storage: No protocols provided, using the default protocols.")
self.protocolsList = self.defaultLocalProtocols
self.log.debug('GFAL2_SRM2Storage: protocolsList = %s' % self.protocolsList)
def __setSRMOptionsToDefault(self):
''' Resetting the SRM options back to default
'''
self.ctx.set_opt_integer("SRM PLUGIN", "OPERATION_TIMEOUT", self.gfal2Timeout)
if self.spaceToken:
self.ctx.set_opt_string("SRM PLUGIN", "SPACETOKENDESC", self.spaceToken)
self.ctx.set_opt_integer("SRM PLUGIN", "REQUEST_LIFETIME", self.gfal2requestLifetime)
# Setting the TURL protocol to gsiftp because with other protocols we have authorisation problems
# self.ctx.set_opt_string_list( "SRM PLUGIN", "TURL_PROTOCOLS", self.defaultLocalProtocols )
self.ctx.set_opt_string_list("SRM PLUGIN", "TURL_PROTOCOLS", ['gsiftp'])
def _updateMetadataDict(self, metadataDict, attributeDict):
""" Updating the metadata dictionary with srm specific attributes
:param self: self reference
:param dict: metadataDict we want add the SRM specific attributes to
:param dict: attributeDict contains 'user.status' which we then fill in the metadataDict
"""
# 'user.status' is the extended attribute we are interested in
user_status = attributeDict.get('user.status', '')
metadataDict['Cached'] = int('ONLINE' in user_status)
metadataDict['Migrated'] = int('NEARLINE' in user_status)
metadataDict['Lost'] = int(user_status == 'LOST')
metadataDict['Unavailable'] = int(user_status == 'UNAVAILABLE')
metadataDict['Accessible'] = not metadataDict['Lost'] and metadataDict['Cached'] and not metadataDict['Unavailable']
def getTransportURL(self, path, protocols=False):
""" obtain the tURLs for the supplied path and protocols
:param self: self reference
:param str path: path on storage
:param mixed protocols: protocols to use
:returns: Failed dict {path : error message}
Successful dict {path : transport url}
S_ERROR in case of argument problems
"""
res = checkArgumentFormat(path)
if not res['OK']:
return res
urls = res['Value']
self.log.debug(
'GFAL2_SRM2Storage.getTransportURL: Attempting to retrieve tURL for %s paths' %
len(urls))
failed = {}
successful = {}
if not protocols:
listProtocols = self.protocolsList
if not listProtocols:
return S_ERROR(
"GFAL2_SRM2Storage.getTransportURL: No local protocols defined and no defaults found.")
elif isinstance(protocols, basestring):
listProtocols = [protocols]
elif isinstance(protocols, list):
listProtocols = protocols
else:
return S_ERROR("getTransportURL: Must supply desired protocols to this plug-in.")
# Compatibility because of castor returning a castor: url if you ask
# for a root URL, and a root: url if you ask for a xroot url...
if 'root' in listProtocols and 'xroot' not in listProtocols:
listProtocols.insert(listProtocols.index('root'), 'xroot')
elif 'xroot' in listProtocols and 'root' not in listProtocols:
listProtocols.insert(listProtocols.index('xroot') + 1, 'root')
if self.protocolParameters['Protocol'] in listProtocols:
successful = {}
failed = {}
for url in urls:
if self.isURL(url)['Value']:
successful[url] = url
else:
failed[url] = 'getTransportURL: Failed to obtain turls.'
return S_OK({'Successful': successful, 'Failed': failed})
for url in urls:
res = self.__getSingleTransportURL(url, listProtocols)
self.log.debug('res = %s' % res)
if not res['OK']:
failed[url] = res['Message']
else:
successful[url] = res['Value']
return S_OK({'Failed': failed, 'Successful': successful})
def __getSingleTransportURL(self, path, protocols=False):
""" Get the tURL from path with getxattr from gfal2
:param self: self reference
:param str path: path on the storage
:returns: S_OK( Transport_URL ) in case of success
S_ERROR( errStr ) in case of a failure
"""
self.log.debug(
'GFAL2_SRM2Storage.__getSingleTransportURL: trying to retrieve tURL for %s' %
path)
if protocols:
self.ctx.set_opt_string_list("SRM PLUGIN", "TURL_PROTOCOLS", protocols)
res = self._getExtendedAttributes(path, attributes=['user.replicas'])
self.__setSRMOptionsToDefault()
if res['OK']:
return S_OK(res['Value']['user.replicas'])
errStr = 'GFAL2_SRM2Storage.__getSingleTransportURL: Extended attribute tURL is not set.'
self.log.debug(errStr, res['Message'])
return res
def getOccupancy(self, *parms, **kws):
""" Gets the GFAL2_SRM2Storage occupancy info.
TODO: needs gfal2.15 because of bugs:
https://its.cern.ch/jira/browse/DMC-979
https://its.cern.ch/jira/browse/DMC-977
It queries the srm interface for a given space token.
Out of the results, we keep totalsize, guaranteedsize, and unusedsize all in MB.
"""
# Gfal2 extended parameter name to query the space token occupancy
spaceTokenAttr = 'spacetoken.description?%s' % self.protocolParameters['SpaceToken']
# gfal2 can take any srm url as a base.
spaceTokenEndpoint = self.getURLBase(withWSUrl=True)['Value']
try:
occupancyStr = self.ctx.getxattr(spaceTokenEndpoint, spaceTokenAttr)
try:
occupancyDict = json.loads(occupancyStr)[0]
except ValueError:
# https://its.cern.ch/jira/browse/DMC-977
# a closing bracket is missing, so we retry after adding it
occupancyStr = occupancyStr[:-1] + '}]'
occupancyDict = json.loads(occupancyStr)[0]
# https://its.cern.ch/jira/browse/DMC-979
# We set totalsize to guaranteed size
# (it is anyway true for all the SEs I could test)
occupancyDict['totalsize'] = occupancyDict.get('guaranteedsize', 0)
except (gfal2.GError, ValueError) as e:
errStr = 'Something went wrong while checking for spacetoken occupancy.'
self.log.verbose(errStr, e.message)
return S_ERROR(getattr(e, 'code', errno.EINVAL), "%s %s" % (errStr, repr(e)))
sTokenDict = {}
sTokenDict['Total'] = float(occupancyDict.get('totalsize', '0')) / 1e6
sTokenDict['Free'] = float(occupancyDict.get('unusedsize', '0')) / 1e6
return S_OK(sTokenDict)
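# Illustrative behavior (a sketch; storage name, parameters and paths are
# hypothetical):
#
#     se = GFAL2_SRM2Storage('SOME-SRM-SE', parameters)
#     res = se.getTransportURL('srm://host/path/file', protocols=['gsiftp'])
#     # -> S_OK({'Successful': {'srm://host/path/file': 'gsiftp://...'},
#     #          'Failed': {}})
#
# For the mapping in _updateMetadataDict: a 'user.status' of
# 'ONLINE_AND_NEARLINE' gives Cached=1, Migrated=1, Lost=0, Unavailable=0 and
# therefore a truthy Accessible, while a pure 'NEARLINE' file is Migrated but
# not Accessible because it is not Cached.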
|
petricm/DIRAC
|
Resources/Storage/GFAL2_SRM2Storage.py
|
Python
|
gpl-3.0
| 8,242
|
[
"DIRAC"
] |
ad3bd76e64418f3bc5b8acb3880e29fcc2015340546e7829c89c0f4169e646a2
|
# Copyright 2012, SIL International
# All rights reserved.
#
# This library is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation; either version 2.1 of License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should also have received a copy of the GNU Lesser General Public
# License along with this library in the file named "LICENSE".
# If not, write to the Free Software Foundation, 51 Franklin Street,
# suite 500, Boston, MA 02110-1335, USA or visit their web page on the
# internet at http://www.fsf.org/licenses/lgpl.html.
import freetype
from ttfrename.glyph import GlyphItem
from qtpy import QtCore, QtWidgets
from fontTools import ttLib
import re
pgetTableModule = ttLib.getTableModule
def getTableModule(tag) :
#if tag in ("post", "cmap", "maxp", 'glyf', 'loca', 'head', 'hmtx', 'hhea') :
if tag in ("post", "cmap", 'maxp') :
return pgetTableModule(tag)
return None
ttLib.getTableModule = getTableModule
class Namedit(QtWidgets.QDialog) :
def __init__(self, name, uid, parent = None) :
super(Namedit, self).__init__(parent)
self.layout = QtWidgets.QGridLayout(self)
self.name = QtWidgets.QLineEdit(self)
self.name.setText(name)
self.name.setSelection(0, len(name))
self.layout.addWidget(QtWidgets.QLabel('Name'), 0, 0)
self.layout.addWidget(self.name, 0, 1)
self.uid = QtWidgets.QLineEdit(self)
if uid :
self.uid.setText("%04X" % uid)
self.layout.addWidget(QtWidgets.QLabel('Unicode'), 1, 0)
self.layout.addWidget(self.uid, 1, 1)
o = QtWidgets.QDialogButtonBox(QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel)
o.accepted.connect(self.accept)
o.rejected.connect(self.reject)
self.layout.addWidget(o, 2, 0, 1, 2)
def getValues(self) :
t = self.uid.text()
if re.match(u'^[0-9a-fA-F]+$', t) :
uid = int(t, 16)
else :
uid = 0
return (str(self.name.text()), uid)
def dictkeymv(d, kin, kout) :
x = d[kin]
del d[kin]
d[kout] = x
def isUnicodeCmap(t) :
p = t.platformID
e = t.platEncID
if p == 3 and e == 1 : return True
if p == 0 : return True
return False
class Ttx(ttLib.TTFont) :
def _writeTables(self, tag, writer, done) :
if tag in ("post", 'glyf', 'loca', 'hmtx', 'maxp') :
gorder = self.getGlyphOrder()
self.setGlyphOrder(self.psGlyphs)
ttLib.TTFont._writeTable(self,tag, writer, done)
self.setGlyphOrder(gorder)
else :
ttLib.TTFont._writeTable(self,tag, writer, done)
class Font(object) :
def __init__(self) :
super(Font, self).__init__()
self.glyphItems = []
self.pixrect = QtCore.QRect()
self.ttx = None
def loadFont(self, fontfile, size = 40) :
self.glyphItems = []
self.pixrect = QtCore.QRect()
self.gnames = {}
self.top = 0
self.size = size
self.fname = fontfile
face = freetype.Face(fontfile)
self.upem = face.units_per_EM
self.numGlyphs = face.num_glyphs
for i in range(self.numGlyphs) :
g = GlyphItem(face, i, size)
self.gnames[g.name] = i
self.glyphItems.append(g)
if g.pixmap :
grect = g.pixmap.rect()
grect.moveBottom(grect.height() - g.top)
self.pixrect = self.pixrect | grect
if g.top > self.top : self.top = g.top
self.ttx = Ttx(fontfile)
self.cmaps = []
self.bytemaps = []
for c in self.ttx['cmap'].tables :
if isUnicodeCmap(c) :
self.cmaps.append(c.cmap)
else :
self.bytemaps.append(c.cmap)
cmap = self.ttx['cmap'].getcmap(3, 1)
if not cmap : cmap = self.ttx['cmap'].getcmap(3, 0)
if cmap : cmap = cmap.cmap
for k, v in cmap.items() :
if v in self.gnames :
self.glyphItems[self.gnames[v]].uid = k
for k in self.ttx.keys() :
dummy = self.ttx[k] # trigger a read of each table
self.ttx.close()
def save(self, filename = None) :
if filename : self.fname = filename
self.ttx.psGlyphs = order = map(lambda g: g.name, self.glyphItems)
self.ttx.setGlyphOrder(order)
#self.ttx['glyf'].glyphOrder = order
self.ttx['post'].extraNames = []
self.ttx.recalcBBoxes = None
self.ttx.save(self.fname)
def __len__(self) :
return len(self.glyphItems)
def __getitem__(self, y) :
try :
return self.glyphItems[y]
except IndexError :
return None
def emunits(self) : return self.upem
def editGlyph(self, g) :
d = Namedit(g.name, g.uid)
if d.exec_() :
(name, uid) = d.getValues()
else :
return
if g.name != name or g.uid != uid :
for c in self.cmaps :
if g.uid != uid and g.uid in c : del c[g.uid]
c[uid] = name
if g.uid != uid and g.uid and g.uid < 256 :
for c in self.bytemaps :
c[g.uid] = '.notdef'
if uid and uid < 256 :
for c in self.bytemaps :
c[uid] = name
if g.name != name :
gid = self.gnames[g.name]
del self.gnames[g.name]
self.gnames[name] = gid
#dictkeymv(self.ttx['glyf'].glyphs, g.name, name)
#dictkeymv(self.ttx['hmtx'].metrics, g.name, name)
g.uid = uid
g.name = name
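# Illustrative use of the Font wrapper above (the font paths are hypothetical):
#
#     f = Font()
#     f.loadFont('MyFont.ttf', size=40)
#     print(len(f), f.emunits())       # glyph count and units per em
#     f.editGlyph(f[3])                # pops up the Namedit rename dialog
#     f.save('MyFont-renamed.ttf')     # writes via the Ttx glyph-order override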
|
silnrsi/graide
|
lib/ttfrename/font.py
|
Python
|
lgpl-2.1
| 6,108
|
[
"VisIt"
] |
6077464983eabfd1150fe1aa26fc02305b376ebbe8496d3559a1d1d58cc34a16
|
#!/usr/bin/env python
from __future__ import print_function
from builtins import str
import argparse
import os
import sys
import math
import hashlib
import pandas as pd
import glob
import sibispy
from sibispy import sibislogger as slog
from sibispy import check_dti_gradients as chk_dti
def get_cases(cases_root, arm, event, case=None):
"""
Get a list of cases from root dir, optionally for a single case
"""
match = 'NCANDA_S*'
if case:
match = case
case_list = list()
for cpath in glob.glob(os.path.join(cases_root, match)):
if os.path.isdir(os.path.join(cpath,arm,event)) :
case_list.append(cpath)
case_list.sort()
return case_list
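# Illustrative call (the cases root is hypothetical):
#
#     get_cases('/fs/ncanda/cases', arm='standard', event='baseline')
#     # -> ['/fs/ncanda/cases/NCANDA_S00033', '/fs/ncanda/cases/NCANDA_S00061', ...]
#
# Only case directories that contain the requested arm/event subdirectory are
# returned, sorted alphabetically.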
def main(args,sibis_session):
# Get the gradient tables for all cases and compare to ground truth
slog.startTimer1()
cases_dir = sibis_session.get_cases_dir()
if args.verbose:
print("Checking cases in " + cases_dir)
cases = get_cases(cases_dir, arm=args.arm, event=args.event, case=args.case)
    if cases == [] :
        if args.case :
            case = args.case
        else :
            case = "*"
        print("Error: Did not find any cases matching: " + "/".join([cases_dir, case, args.arm, args.event]))
        sys.exit(1)
# Demographics from pipeline to grab case to scanner mapping
demo_path = os.path.join(sibis_session.get_summaries_dir(),'redcap/demographics.csv')
demographics = pd.read_csv(demo_path, index_col=['subject',
'arm',
'visit'])
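    # demographics is a (subject, arm, visit) MultiIndex frame; the xs() calls
    # in the loop below cross-section it to fetch one case's scanner columns.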
checker = chk_dti.check_dti_gradients()
if not checker.configure(sibis_session,check_decimals = args.decimals) :
slog.info('exec_check_dti_gradients.main',"Configuration of check_dti_gradients failed !")
sys.exit(1)
for case in cases:
# Get the case's site
dti_path = os.path.join(case, args.arm, args.event,'diffusion/native',args.sequence)
if not os.path.exists(dti_path) :
if args.verbose:
print("Warning: " + dti_path + " does not exist!")
continue
if args.verbose:
print("Processing: " + "/".join([case,args.arm, args.event]))
sid = os.path.basename(case)
try:
scanner = demographics.xs([sid, args.arm, args.event])['scanner']
scanner_model = demographics.xs([sid, args.arm, args.event])['scanner_model']
        except KeyError:  # case not present in the demographics index
print("Error: case " + case + "," + args.arm + "," + args.event +" not in " + demo_path +"!")
error = 'Case, arm and event not in demo_path'
slog.info(hashlib.sha1('check_gradient_tables {} {} {}'.format(case, args.arm, args.event).encode()).hexdigest()[0:6], error,
case=str(case),
arm=str(args.arm),
event=str(args.event),
demo_path=str(demo_path))
continue
if (isinstance(scanner, float) and math.isnan(scanner)) or (isinstance(scanner_model, float) and math.isnan(scanner_model)) :
print("Error: Did not find scanner or model for " + sid + "/" + args.arm + "/" + args.event +" so cannot check gradient for that scan!")
error = "Did not find any cases matching cases_dir, case, arm, event"
slog.info(hashlib.sha1('check_gradient_tables {} {} {}'.format(args.base_dir, args.arm, args.event).encode()).hexdigest()[0:6], error,
cases_dir=cases_dir,
case=str(case),
arm=str(args.arm),
event=str(args.event))
continue
xml_file_path = checker.get_dti_stack_path(args.sequence, case, arm=args.arm, event=args.event)
checker.check_diffusion(dti_path,"",glob.glob(xml_file_path),scanner, scanner_model, "", args.sequence)
slog.takeTimer1("script_time", "{'records': " + str(len(cases)) + "}")
#
# =======================================
#
if __name__ == "__main__":
sibis_session = sibispy.Session()
    if not sibis_session.configure() :
        # "verbose" does not exist yet (argparse runs below), so report the
        # failure unconditionally instead of raising a NameError here.
        print("Error: session configure file was not found")
        sys.exit()
formatter = argparse.RawDescriptionHelpFormatter
default = 'default: %(default)s'
parser = argparse.ArgumentParser(prog="check_gradient_tables.py",
description=__doc__,
formatter_class=formatter)
parser.add_argument('-a', '--arm', dest="arm",
help="Study arm. {}".format(default),
default='standard')
    parser.add_argument('-d', '--decimals', dest="decimals",
                        help="Number of decimals. {}".format(default),
                        type=int, default=2)
parser.add_argument('-e', '--event', dest="event",
help="Study event. {}".format(default),
default='baseline')
parser.add_argument('-c', '--case', dest="case",
help="Case to check - if none are defined then it checks all cases in that directory. {}".format(default), default=None)
parser.add_argument('-v', '--verbose', dest="verbose",
help="Turn on verbose", action='store_true')
parser.add_argument("-p", "--post-to-github", help="Post all issues to GitHub instead of std out.",
action = "store_true", default = False)
parser.add_argument('-s', '--sequence',
help="Type of sequence to check: dti6b500pepolar, dti30b400, dti60b1000 . {}".format(default),
default='dti60b1000')
parser.add_argument("-t", "--time-log-dir",help = "If set then time logs are written to that directory",
action = "store",
default = None)
argv = parser.parse_args()
# Setting up logging
slog.init_log(argv.verbose, argv.post_to_github, 'NCANDA XNAT', 'check_gradient_tables', argv.time_log_dir)
sys.exit(main(argv,sibis_session))
|
sibis-platform/sibispy
|
cmds/exec_check_dti_gradients.py
|
Python
|
bsd-3-clause
| 6,144
|
[
"VisIt"
] |
445fa8a820884fdeecdc008ff525b15d087216b39aba0ae55de7e1545abf76ca
|
# -*- coding: utf-8 -*-
#!/usr/bin/env python
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2007 Johan Gonqvist <johan.gronqvist@gmail.com>
# Copyright (C) 2007-2009 Gary Burton <gary.burton@zen.co.uk>
# Copyright (C) 2007-2009 Stephane Charette <stephanecharette@gmail.com>
# Copyright (C) 2008-2009 Brian G. Matherly
# Copyright (C) 2008 Jason M. Simanek <jason@bohemianalps.com>
# Copyright (C) 2008-2011 Rob G. Healey <robhealey1@gmail.com>
# Copyright (C) 2010 Doug Blank <doug.blank@gmail.com>
# Copyright (C) 2010 Jakim Friant
# Copyright (C) 2010- Serge Noiraud
# Copyright (C) 2011 Tim G L Lyons
# Copyright (C) 2013 Benny Malengier
# Copyright (C) 2016 Allen Crider
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""
Narrative Web Page generator.
Class:
StatisticsPage
"""
#------------------------------------------------
# python modules
#------------------------------------------------
import os
from decimal import getcontext
import logging
#------------------------------------------------
# Gramps module
#------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
from gramps.gen.lib import (Person, Family, Event, Place, Source,
Citation, Repository)
from gramps.gen.plug.report import Bibliography
from gramps.gen.utils.file import media_path_full
from gramps.plugins.lib.libhtml import Html
#------------------------------------------------
# specific narrative web import
#------------------------------------------------
from gramps.plugins.webreport.basepage import BasePage
from gramps.plugins.webreport.common import FULLCLEAR
LOG = logging.getLogger(".NarrativeWeb")
getcontext().prec = 8
_ = glocale.translation.sgettext
class StatisticsPage(BasePage):
"""
Create one page for statistics
"""
def __init__(self, report, title, step):
"""
@param: report -- The instance of the main report class
for this report
@param: title -- Is the title of the web page
"""
BasePage.__init__(self, report, title)
self.bibli = Bibliography()
self.uplink = False
self.report = report
# set the file name and open file
output_file, sio = self.report.create_file("statistics")
result = self.write_header(_("Statistics"))
addressbookpage, dummy_head, dummy_body, outerwrapper = result
(males,
females,
unknown) = self.get_gender(report.database.iter_person_handles())
step()
mobjects = report.database.get_number_of_media()
npersons = report.database.get_number_of_people()
nfamilies = report.database.get_number_of_families()
nsurnames = len(set(report.database.surname_list))
notfound = []
total_media = 0
mbytes = "0"
chars = 0
for media in report.database.iter_media():
total_media += 1
fullname = media_path_full(report.database, media.get_path())
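            # Note: the string slicing below is integer division by 10**6 in
            # disguise; it keeps all but the last six digits of the byte count,
            # i.e. the total media size in whole megabytes.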
try:
chars += os.path.getsize(fullname)
length = len(str(chars))
if chars <= 999999:
mbytes = _("less than 1")
else:
mbytes = str(chars)[:(length-6)]
except OSError:
notfound.append(media.get_path())
with Html("div", class_="content", id='EventDetail') as section:
section += Html("h3", self._("Database overview"), inline=True)
outerwrapper += section
with Html("div", class_="content", id='subsection narrative') as sec11:
sec11 += Html("h4", self._("Individuals"), inline=True)
outerwrapper += sec11
with Html("div", class_="content", id='subsection narrative') as sec1:
sec1 += Html("br", self._("Number of individuals") + self.colon +
"%d" % npersons, inline=True)
sec1 += Html("br", self._("Males") + self.colon +
"%d" % males, inline=True)
sec1 += Html("br", self._("Females") + self.colon +
"%d" % females, inline=True)
sec1 += Html("br", self._("Individuals with unknown gender") +
self.colon + "%d" % unknown, inline=True)
outerwrapper += sec1
with Html("div", class_="content", id='subsection narrative') as sec2:
sec2 += Html("h4", self._("Family Information"), inline=True)
sec2 += Html("br", self._("Number of families") + self.colon +
"%d" % nfamilies, inline=True)
sec2 += Html("br", self._("Unique surnames") + self.colon +
"%d" % nsurnames, inline=True)
outerwrapper += sec2
with Html("div", class_="content", id='subsection narrative') as sec3:
sec3 += Html("h4", self._("Media Objects"), inline=True)
sec3 += Html("br",
self._("Total number of media object references") +
self.colon + "%d" % total_media, inline=True)
sec3 += Html("br", self._("Number of unique media objects") +
self.colon + "%d" % mobjects, inline=True)
sec3 += Html("br", self._("Total size of media objects") +
self.colon +
"%8s %s" % (mbytes, self._("Megabyte|MB")),
inline=True)
sec3 += Html("br", self._("Missing Media Objects") +
self.colon + "%d" % len(notfound), inline=True)
outerwrapper += sec3
with Html("div", class_="content", id='subsection narrative') as sec4:
sec4 += Html("h4", self._("Miscellaneous"), inline=True)
sec4 += Html("br", self._("Number of events") + self.colon +
"%d" % report.database.get_number_of_events(),
inline=True)
sec4 += Html("br", self._("Number of places") + self.colon +
"%d" % report.database.get_number_of_places(),
inline=True)
nsources = report.database.get_number_of_sources()
sec4 += Html("br", self._("Number of sources") +
self.colon + "%d" % nsources,
inline=True)
ncitations = report.database.get_number_of_citations()
sec4 += Html("br", self._("Number of citations") +
self.colon + "%d" % ncitations,
inline=True)
nrepo = report.database.get_number_of_repositories()
sec4 += Html("br", self._("Number of repositories") +
self.colon + "%d" % nrepo,
inline=True)
outerwrapper += sec4
(males,
females,
unknown) = self.get_gender(self.report.bkref_dict[Person].keys())
origin = " :<br/>" + report.filter.get_name(self.rlocale)
with Html("div", class_="content", id='EventDetail') as section:
section += Html("h3",
self._("Narrative web content report for") + origin,
inline=True)
outerwrapper += section
with Html("div", class_="content", id='subsection narrative') as sec5:
sec5 += Html("h4", self._("Individuals"), inline=True)
sec5 += Html("br", self._("Number of individuals") + self.colon +
"%d" % len(self.report.bkref_dict[Person]),
inline=True)
sec5 += Html("br", self._("Males") + self.colon +
"%d" % males, inline=True)
sec5 += Html("br", self._("Females") + self.colon +
"%d" % females, inline=True)
sec5 += Html("br", self._("Individuals with unknown gender") +
self.colon + "%d" % unknown, inline=True)
outerwrapper += sec5
with Html("div", class_="content", id='subsection narrative') as sec6:
sec6 += Html("h4", self._("Family Information"), inline=True)
sec6 += Html("br", self._("Number of families") + self.colon +
"%d" % len(self.report.bkref_dict[Family]),
inline=True)
outerwrapper += sec6
with Html("div", class_="content", id='subsection narrative') as sec7:
sec7 += Html("h4", self._("Miscellaneous"), inline=True)
sec7 += Html("br", self._("Number of events") + self.colon +
"%d" % len(self.report.bkref_dict[Event]),
inline=True)
sec7 += Html("br", self._("Number of places") + self.colon +
"%d" % len(self.report.bkref_dict[Place]),
inline=True)
sec7 += Html("br", self._("Number of sources") + self.colon +
"%d" % len(self.report.bkref_dict[Source]),
inline=True)
sec7 += Html("br", self._("Number of citations") + self.colon +
"%d" % len(self.report.bkref_dict[Citation]),
inline=True)
sec7 += Html("br", self._("Number of repositories") + self.colon +
"%d" % len(self.report.bkref_dict[Repository]),
inline=True)
outerwrapper += sec7
# add fullclear for proper styling
# and footer section to page
footer = self.write_footer(None)
outerwrapper += (FULLCLEAR, footer)
# send page out for processing
# and close the file
self.xhtml_writer(addressbookpage, output_file, sio, 0)
def get_gender(self, person_list):
"""
This function return the number of males, females and unknown gender
from a person list.
"""
males = 0
females = 0
unknown = 0
for person_handle in person_list:
person = self.report.database.get_person_from_handle(person_handle)
gender = person.get_gender()
if gender == Person.MALE:
males += 1
elif gender == Person.FEMALE:
females += 1
else:
unknown += 1
return (males, females, unknown)
|
sam-m888/gramps
|
gramps/plugins/webreport/statistics.py
|
Python
|
gpl-2.0
| 11,183
|
[
"Brian"
] |
d8f83fa9a82aed2b2b281ebaf46e013d13ad142606b7754c953a2246014a5628
|
# Dan Blankenberg
# Takes a command-line tree definition and an input multiple FASTA alignment file and runs the branch length analysis.
import os, sys
from galaxy import eggs
from galaxy.tools.util import hyphy_util
#Retrieve hyphy path, this will need to be the same across the cluster
tool_data = sys.argv.pop()
HYPHY_PATH = os.path.join( tool_data, "HYPHY" )
HYPHY_EXECUTABLE = os.path.join( HYPHY_PATH, "HYPHY" )
#Read command line arguments
input_filename = os.path.abspath(sys.argv[1].strip())
output_filename = os.path.abspath(sys.argv[2].strip())
tree_contents = sys.argv[3].strip()
nuc_model = sys.argv[4].strip()
base_freq = sys.argv[5].strip()
model_options = sys.argv[6].strip()
#Set up Temporary files for hyphy run
#set up tree file
tree_filename = hyphy_util.get_filled_temp_filename(tree_contents)
#Guess if this is a single or multiple FASTA input file
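# (Heuristic: a blank line followed by another ">" header means the file
#  contains more than one alignment block.)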
found_blank = False
is_multiple = False
for line in open(input_filename):
line = line.strip()
if line == "": found_blank = True
elif line.startswith(">") and found_blank:
is_multiple = True
break
else: found_blank = False
#set up BranchLengths file
BranchLengths_filename = hyphy_util.get_filled_temp_filename(hyphy_util.BranchLengths)
if is_multiple:
os.unlink(BranchLengths_filename)
BranchLengths_filename = hyphy_util.get_filled_temp_filename(hyphy_util.BranchLengthsMF)
print "Multiple Alignment Analyses"
else: print "Single Alignment Analyses"
#setup Config file
config_filename = hyphy_util.get_branch_lengths_config_filename(input_filename, nuc_model, model_options, base_freq, tree_filename, output_filename, BranchLengths_filename)
#Run Hyphy
hyphy_cmd = "%s BASEPATH=%s USEPATH=/dev/null %s" % (HYPHY_EXECUTABLE, HYPHY_PATH, config_filename)
hyphy = os.popen(hyphy_cmd, 'r')
#print hyphy.read()
hyphy.close()
#remove temporary files
os.unlink(BranchLengths_filename)
os.unlink(tree_filename)
os.unlink(config_filename)
|
volpino/Yeps-EURAC
|
tools/hyphy/hyphy_branch_lengths_wrapper.py
|
Python
|
mit
| 1,995
|
[
"Galaxy"
] |
043e7ab3ba034db6020994e476bd512ca7119c8a36b258da9372accb65b55505
|
#
# Copyright (C) 2013-2018 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
""" Visualization sample for a Lennard Jones liquid with live plotting via matplotlib.
"""
from __future__ import print_function
import numpy as np
from matplotlib import pyplot
from threading import Thread
import espressomd
from espressomd import thermostat
from espressomd import integrate
from espressomd import visualization
required_features = ["LENNARD_JONES"]
espressomd.assert_features(required_features)
print("""
=======================================================
= lj_liquid.py =
=======================================================
Program Information:""")
print(espressomd.features())
dev = "cpu"
# System parameters
#############################################################
# 10 000 Particles
box_l = 10.7437
density = 0.7
# Interaction parameters (repulsive Lennard Jones)
#############################################################
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 1.12246
lj_cap = 20
# Integration parameters
#############################################################
system = espressomd.System(box_l=[box_l] * 3)
system.set_random_state_PRNG()
#system.seed = system.cell_system.get_state()['n_nodes'] * [1234]
np.random.seed(seed=system.seed)
system.time_step = 0.001
system.cell_system.skin = 0.4
#es._espressoHandle.Tcl_Eval('thermostat langevin 1.0 1.0')
system.thermostat.set_langevin(kT=1.0, gamma=1.0)
# warmup integration (with capped LJ potential)
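# Capping the force keeps randomly placed, overlapping particles from exerting
# huge forces that would blow up the integration during warmup.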
warm_steps = 100
warm_n_times = 30
# do the warmup until the particles have at least the distance min_dist
min_dist = 0.9
# integration
int_steps = 10
int_n_times = 50000
#############################################################
# Setup System #
#############################################################
# Interaction setup
#############################################################
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig,
cutoff=lj_cut, shift="auto")
system.force_cap = lj_cap
print("LJ-parameters:")
print(system.non_bonded_inter[0, 0].lennard_jones.get_params())
# Particle setup
#############################################################
volume = box_l * box_l * box_l
n_part = int(volume * density)
for i in range(n_part):
system.part.add(id=i, pos=np.random.random(3) * system.box_l)
system.analysis.dist_to(0)
print("Simulate {} particles in a cubic simulation box {} at density {}."
.format(n_part, box_l, density).strip())
print("Interactions:\n")
act_min_dist = system.analysis.min_dist()
print("Start with minimal distance {}".format(act_min_dist))
system.cell_system.max_num_cells = 2744
# Switch between openGl/Mayavi
#visualizer = visualization.mayaviLive(system)
visualizer = visualization.openGLLive(system)
#############################################################
# Warmup Integration #
#############################################################
print("""
Start warmup integration:
At maximum {} times {} steps
Stop if minimal distance is larger than {}
""".strip().format(warm_n_times, warm_steps, min_dist))
# set LJ cap
lj_cap = 20
system.force_cap = lj_cap
print(system.non_bonded_inter[0, 0].lennard_jones)
# Warmup Integration Loop
i = 0
while (i < warm_n_times and act_min_dist < min_dist):
system.integrator.run(warm_steps)
# Warmup criterion
act_min_dist = system.analysis.min_dist()
# print("\rrun %d at time=%f (LJ cap=%f) min dist = %f\r" %
# (i,system.time,lj_cap,act_min_dist), end=' ')
i += 1
# Increase LJ cap
lj_cap = lj_cap + 10
system.force_cap = lj_cap
visualizer.update()
# Just to see what else we may get from the c code
# print("""
# ro variables:
# cell_grid {0.cell_grid}
# cell_size {0.cell_size}
# local_box_l {0.local_box_l}
# max_cut {0.max_cut}
# max_part {0.max_part}
# max_range {0.max_range}
# max_skin {0.max_skin}
# n_nodes {0.n_nodes}
# n_part {0.n_part}
# n_part_types {0.n_part_types}
# periodicity {0.periodicity}
# verlet_reuse {0.verlet_reuse}
#""".format(system))
#############################################################
# Integration #
#############################################################
print("\nStart integration: run %d times %d steps" % (int_n_times, int_steps))
# remove force capping
lj_cap = 0
system.force_cap = lj_cap
print(system.non_bonded_inter[0, 0].lennard_jones)
# print initial energies
energies = system.analysis.energy()
print(energies)
plot, = pyplot.plot([0], [energies['total']], label="total")
pyplot.xlabel("Time")
pyplot.ylabel("Energy")
pyplot.legend()
pyplot.show(block=False)
j = 0
def main_loop():
global energies
print("run %d at time=%f " % (i, system.time))
system.integrator.run(int_steps)
visualizer.update()
energies = system.analysis.energy()
plot.set_xdata(np.append(plot.get_xdata(), system.time))
plot.set_ydata(np.append(plot.get_ydata(), energies['total']))
def main_thread():
for i in range(0, int_n_times):
main_loop()
last_plotted = 0
def update_plot():
global last_plotted
current_time = plot.get_xdata()[-1]
if last_plotted == current_time:
return
last_plotted = current_time
pyplot.xlim(0, plot.get_xdata()[-1])
pyplot.ylim(plot.get_ydata().min(), plot.get_ydata().max())
pyplot.draw()
pyplot.pause(0.01)
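# Run the integration in a background thread so the visualizer, which must own
# the main thread, stays responsive; update_plot is polled by the visualizer
# roughly once per second (the interval argument is in milliseconds).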
t = Thread(target=main_thread)
t.daemon = True
t.start()
visualizer.register_callback(update_plot, interval=1000)
visualizer.start()
# terminate program
print("\nFinished.")
|
hmenke/espresso
|
samples/visualization_ljliquid.py
|
Python
|
gpl-3.0
| 6,372
|
[
"ESPResSo",
"Mayavi"
] |
07ccd6c06e13ed956ec17b8fc355d6c230731393a085ee358ae7b52d06a9cf3a
|
"""
Bok choy acceptance tests for conditionals in the LMS
"""
from capa.tests.response_xml_factory import StringResponseXMLFactory
from common.test.acceptance.tests.helpers import UniqueCourseTest
from common.test.acceptance.fixtures.course import CourseFixture, XBlockFixtureDesc
from common.test.acceptance.pages.lms.courseware import CoursewarePage
from common.test.acceptance.pages.lms.conditional import ConditionalPage, POLL_ANSWER
from common.test.acceptance.pages.lms.problem import ProblemPage
from common.test.acceptance.pages.studio.auto_auth import AutoAuthPage
class ConditionalTest(UniqueCourseTest):
"""
Test the conditional module in the lms.
"""
def setUp(self):
super(ConditionalTest, self).setUp()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
AutoAuthPage(
self.browser,
course_id=self.course_id,
staff=False
).visit()
def install_course_fixture(self, block_type='problem'):
"""
Install a course fixture
"""
course_fixture = CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name'],
)
vertical = XBlockFixtureDesc('vertical', 'Test Unit')
# populate the course fixture with the right conditional modules
course_fixture.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
vertical
)
)
)
course_fixture.install()
# Construct conditional block
conditional_metadata = {}
source_block = None
if block_type == 'problem':
problem_factory = StringResponseXMLFactory()
            problem_xml = problem_factory.build_xml(
                question_text='The answer is "correct string"',
                case_sensitive=False,
                answer='correct string',
            )
            problem = XBlockFixtureDesc('problem', 'Test Problem', data=problem_xml)
conditional_metadata = {
'xml_attributes': {
'attempted': 'True'
}
}
source_block = problem
elif block_type == 'poll':
poll = XBlockFixtureDesc(
'poll_question',
'Conditional Poll',
question='Is this a good poll?',
answers=[
{'id': 'yes', 'text': POLL_ANSWER},
{'id': 'no', 'text': 'Of course not!'}
],
)
conditional_metadata = {
'xml_attributes': {
'poll_answer': 'yes'
}
}
source_block = poll
else:
raise NotImplementedError()
course_fixture.create_xblock(vertical.locator, source_block)
# create conditional
conditional = XBlockFixtureDesc(
'conditional',
'Test Conditional',
metadata=conditional_metadata,
sources_list=[source_block.locator],
)
result_block = XBlockFixtureDesc(
'html', 'Conditional Contents',
            data='<html><div class="hidden-contents">Hidden Contents</div></html>'
)
course_fixture.create_xblock(vertical.locator, conditional)
course_fixture.create_xblock(conditional.locator, result_block)
def test_conditional_hides_content(self):
self.install_course_fixture()
self.courseware_page.visit()
conditional_page = ConditionalPage(self.browser)
self.assertFalse(conditional_page.is_content_visible())
def test_conditional_displays_content(self):
self.install_course_fixture()
self.courseware_page.visit()
# Answer the problem
problem_page = ProblemPage(self.browser)
problem_page.fill_answer('correct string')
problem_page.click_submit()
# The conditional does not update on its own, so we need to reload the page.
self.courseware_page.visit()
# Verify that we can see the content.
conditional_page = ConditionalPage(self.browser)
self.assertTrue(conditional_page.is_content_visible())
def test_conditional_handles_polls(self):
self.install_course_fixture(block_type='poll')
self.courseware_page.visit()
# Fill in the conditional page poll
conditional_page = ConditionalPage(self.browser)
conditional_page.fill_in_poll()
# The conditional does not update on its own, so we need to reload the page.
self.courseware_page.visit()
self.assertTrue(conditional_page.is_content_visible())
|
tanmaykm/edx-platform
|
common/test/acceptance/tests/lms/test_conditional.py
|
Python
|
agpl-3.0
| 4,879
|
[
"VisIt"
] |
471ec05108261c0540f6bb52ccc42fe734301f39717bdb0f2276c59d0a197ebb
|
"""
sewpy: Source Extractor Wrapper for Python
Recent improvements (latest on top):
- new loglevel option to adjust sewpy's overall "verbosity" on instantiation.
- better verbosity about masked output of ASSOC procedure
- ASSOC helper implemented
- run() now returns a dict containing several objects, such as the output astropy table, catfilepath, workdir, and logfilepath.
- now also works with vector parameters such as MAG_APER(4)
- possibility to "nice" SExtractor
- a log file is written for every run() if not told otherwise
- filenames change according to FITS image file name where required
- but you can also pass an "imgname" argument to run, and this will be used instead.
- params and config files are written only once, as discussed
- appropriate warnings and behaviour when a workdir already exists, or when you rerun on the same file
- possibility to use existing param / config / conv / nnw files
- run() returns either the catalog, or the filepath to the catalog
To do:
- move "config" to run ?
- check that all masked columns of ASSOC do indeed share the same mask.
- implement _check_config()
- better detection of SExtractor failures
- implement raising Exceptions when SExtractor fails
- implement CHECK IMAGE "helper" ?
- give access to several conv and nnw settings (if needed)
"""
import os
import astropy
import astropy.table
import subprocess
import tempfile
import re
import copy
from datetime import datetime
import numpy as np
from astropy.io import fits
import logging
logger = logging.getLogger(__name__)
defaultparams = ["XWIN_IMAGE", "YWIN_IMAGE", "AWIN_IMAGE", "BWIN_IMAGE", "THETAWIN_IMAGE", "BACKGROUND",
"FLUX_AUTO"]
defaultconfig = {}
class SEW():
"""
Holds together all the settings to run SExtractor executable on one or several images.
"""
def __init__(self, workdir=None, sexpath="sex", params=None, config=None, configfilepath=None, nice=None, loglevel=None):
"""
All arguments have default values and are optional.
:param workdir: where I'll write my files. Specify this (e.g., "test") if you care about the
output files.
If None, I create a unique temporary directory myself, usually in /tmp.
:param sexpath: path to the sextractor executable (e.g., "sex" or "sextractor", if in your PATH)
:param params: the parameters you want SExtractor to measure (i.e., what you would write in the
"default.param" file)
:type params: list of strings
:param config: config settings that will supersede the default config (e.g., what you would
change in the "default.sex" file)
:type config: dict
:param configfilepath: specify this if you want me to use an existing SExtractor config file as
"default" (instead of the sextractor -d one)
        :param nice: niceness with which I should run SExtractor. Use e.g. ``19`` to set the lowest priority.
:type nice: int
:param loglevel: verbosity, e.g. the python-level logging threshold for the sewpy module logger.
For example, set this to "WARNING" and sewpy will no longer log simple INFOs.
Choices are "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL".
To disable logging, set ``loglevel="CRITICAL"``
:type loglevel: string or int or logging.level...
To use an existing SExtractor param-, conv-, or nnw-file, simply specify these in the config
dict, using the appropriate SExtractor keys (PARAMETERS_NAME, FILTER_NAME, ...)
.. warning:: When using *vector*-type params resulting in multiple columns (such as "FLUX_RADIUS(3)"
in the example above), do not put these in the last position of the params list, otherwise astropy
fails reading the catalog! This is probably due to the fact that the SExtractor header doesn't give
a hint that multiple columns are expected when a vector-type param comes last. A workaround would be
way too complicated.
"""
# We start by setting the log "verbosity":
        if loglevel is not None:
            logger.setLevel(loglevel)
# We set up the trivial things:
self.sexpath = sexpath
self.configfilepath = configfilepath
self.nice = nice
logger.info("SExtractor version is %s" % (self.get_version()))
# ... and the workdir
if workdir is not None:
self.workdir = workdir
self.tmp = False
if os.path.isdir(workdir):
#logger.warning("SExtractor workdir '%s' exists, be careful! I will (maybe silently) delete or overwrite stuff." % (workdir))
pass
else:
logger.info("Making new SExtractor workdir '%s'..." % (workdir))
os.makedirs(workdir)
else:
self.workdir = tempfile.mkdtemp(prefix='sewpy_workdir_')
self.tmp = True
#self._clean_workdir()
# No, don't clean it ! This is an obvious race conditions when several processes use the same workdir !
# Commenting this is just a quick fix, we need to clean this up.
# ... and the params:
        if params is None:
            self.params = defaultparams
        else:
            self.params = params
        self._check_params()
        # ... and the config:
        if config is None:
            self.config = defaultconfig
        else:
            self.config = config
self._set_instance_config() # Adds some fixed stuff to self.config
self._check_config()
def get_version(self):
"""
To find the SExtractor version, we call it without arguments and parse the stdout.
:returns: a string (e.g. '2.4.4')
"""
try:
p = subprocess.Popen([self.sexpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except:
raise RuntimeError("Could not run SExtractor. Is the path '%s' correct ? If not, specify sexpath='/path/to/sextractor'" % self.sexpath)
out, err = p.communicate()
        version_match = re.search(r"[Vv]ersion ([0-9\.])+", err.decode(encoding='UTF-8'))
        if version_match is None:  # re.search() returns None on no match, never False
            raise RuntimeError("Could not determine SExtractor version, check the output of running '%s'" % (self.sexpath))
version = str(version_match.group()[8:])
assert len(version) != 0
return version
def __str__(self):
"""
A string summary representing the instance
"""
return "'SEW object with workdir %s'" % (self.workdir)
def _check_params(self):
"""
Compares the params to a list of known params, and spits out a useful warning if
something seems fishy.
"""
strange_param_helper = False
for param in self.params:
# It could be that the param encapsulates several values (e.g., "FLUX_RADIUS(10)")
# So we have to dissect this
            match = re.compile(r"(\w*)\(\d*\)").match(param)
if match:
cleanparam = match.group(1)
else:
cleanparam = param
if cleanparam not in self.fullparamlist:
logger.warning("Parameter '%s' seems strange and might be unknown to SExtractor" \
% (param))
strange_param_helper = True
if strange_param_helper:
logger.warning("Known parameters are: %s" % (self.fullparamtxt))
def _check_config(self):
"""
Not yet implemented
"""
pass
def _set_instance_config(self):
"""
Sets config parameters that remain fixed for this instance.
Called by __init__(). If needed, you could still mess with this config after __init__() has run.
"""
if "PARAMETERS_NAME" in self.config.keys():
logger.info("You specified your own PARAMETERS_NAME, I will use it.")
else:
self.config["PARAMETERS_NAME"] = self._get_params_filepath()
if "FILTER_NAME" in self.config.keys():
logger.info("You specified your own FILTER_NAME, I will use it.")
else:
self.config["FILTER_NAME"] = self._get_conv_filepath()
if "CATALOG_NAME" in self.config.keys():
logger.warning("You specified your own CATALOG_NAME, but I will *NOT* use it !")
del self.config["CATALOG_NAME"]
if "PSF_NAME" in self.config.keys():
logger.info("You specified your own PSF_NAME, I will use it.")
else:
self.config["PSF_NAME"] = self._get_psf_filepath()
def _get_params_filepath(self):
"""
Stays the same for a given instance.
"""
return os.path.join(self.workdir, "params.txt")
def _get_config_filepath(self):
"""
Idem, stays the same for a given instance.
Might return the non-default configfilepath, if set.
"""
if self.configfilepath is None:
return os.path.join(self.workdir, "config.txt")
else:
return self.configfilepath
def _get_conv_filepath(self):
"""
Stays the same for a given instance.
"""
return os.path.join(self.workdir, "conv.txt")
def _get_psf_filepath(self):
"""
Stays the same for a given instance.
"""
return os.path.join(self.workdir, "default.psf")
def _get_cat_filepath(self, imgname):
"""
This changes from image to image
"""
return os.path.join(self.workdir, imgname + ".cat.txt")
def _get_assoc_filepath(self, imgname):
"""
Changes from image to image
"""
return os.path.join(self.workdir, imgname + ".assoc.txt")
def _get_log_filepath(self, imgname):
"""
Changes from image to image
"""
return os.path.join(self.workdir, imgname + ".log.txt")
def _write_params(self, force=False):
"""
Writes the parameters to the file, if needed.
:param force: if True, I overwrite any existing file.
"""
if force or not os.path.exists(self._get_params_filepath()):
f = open(self._get_params_filepath(), 'w')
f.write("\n".join(self.params))
f.write("\n")
f.close()
logger.debug("Wrote %s" % (self._get_params_filepath()))
else:
logger.debug("The params file already exists, I don't overwrite it.")
def _write_default_config(self, force=False):
"""
Writes the *default* config file, if needed.
I don't write this file if a specific config file is set.
:param force: if True, I overwrite any existing file.
"""
if self.configfilepath is not None:
logger.debug("You use the existing config file %s, I don't have to write one." % \
(self._get_config_filepath()))
return
if force or not os.path.exists(self._get_config_filepath()):
p = subprocess.Popen([self.sexpath, "-dd"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if err != "":
logger.warning("Ouch, SExtractor complains :")
logger.warning(err)
f = open(self._get_config_filepath(), 'w')
f.write(out.decode(encoding='UTF-8'))
f.close()
logger.debug("Wrote %s" % (self._get_config_filepath()))
else:
logger.debug("Default config file already exists, I don't overwrite it.")
def _write_default_conv(self):
"""
Writes the default convolution matrix, if needed.
"""
if not os.path.exists(self._get_conv_filepath()):
f = open(self._get_conv_filepath(), 'w')
f.write("""CONV NORM
# 3x3 ``all-ground'' convolution mask with FWHM = 2 pixels.
1 2 1
2 4 2
1 2 1""")
f.close()
logger.debug("Wrote %s" % (self._get_conv_filepath()))
else:
logger.debug("Default conv file already exists, I don't overwrite it.")
def _write_default_psf(self):
"""Writes the default psf file, if needed.
"""
if not os.path.exists(self._get_psf_filepath()):
arr= np.array([[[ 2.40381883e-06, 7.60069597e-05, 8.54981554e-05,
7.54465946e-05, 8.13425067e-05, 8.50538563e-05,
7.86255987e-05, 3.32223935e-05, -2.53305316e-05,
-2.43271165e-06, 1.11580441e-04, 1.85113196e-04,
1.11781279e-04, -5.22066912e-05, -1.48545674e-04,
-6.13458324e-05, 7.43288838e-05, 2.76304218e-05,
-1.35628201e-04, -1.57351358e-04, -5.41317095e-05,
-3.80859383e-05, -8.01880524e-05, 1.93700293e-06,
1.28125030e-04, 9.64317587e-05, -1.00769712e-05,
-3.07009032e-05, 2.51779438e-05, 7.89448313e-05,
1.31651745e-04],
[ 1.07151063e-04, 7.80015980e-05, 5.20944668e-05,
1.96801047e-05, 2.58915697e-05, 4.22252306e-05,
2.61278237e-05, 1.09607536e-05, 2.22668241e-05,
1.83793582e-05, 1.69202158e-05, 7.20969401e-05,
1.13053175e-04, 2.51594101e-05, -6.95991839e-05,
4.80339113e-05, 2.57156673e-04, 2.34465711e-04,
3.90260866e-05, 3.77108845e-05, 1.98102425e-04,
1.63195917e-04, -5.02058392e-05, -9.12948817e-05,
8.09195080e-06, -3.04424921e-05, -1.54503243e-04,
-1.23093574e-04, 3.82040671e-05, 1.51542059e-04,
1.55646892e-04],
[ 1.17171738e-04, 9.47605004e-05, 8.96408383e-05,
1.84595974e-05, -6.76616692e-05, -6.04558227e-05,
3.12718876e-05, 1.18101605e-04, 1.40572258e-04,
7.31113250e-05, -2.21763585e-05, -1.00581055e-05,
9.72477792e-05, 1.23102203e-04, 7.73502907e-05,
1.37092138e-04, 2.56634929e-04, 2.35879590e-04,
1.30117784e-04, 1.90375547e-04, 3.48268659e-04,
2.85394461e-04, 1.23960363e-05, -1.02018013e-04,
-1.04167375e-05, -6.52167273e-06, -1.33219000e-04,
-1.12960566e-04, 8.09253615e-05, 2.08761281e-04,
1.66711587e-04],
[ 1.32234691e-05, 1.02787519e-04, 1.83809068e-04,
9.58435703e-05, -9.55243813e-05, -9.70988549e-05,
1.35439012e-04, 2.94848898e-04, 2.40708367e-04,
1.22713420e-04, 5.07345976e-05, 6.42455634e-05,
1.56476934e-04, 1.97447749e-04, 1.18205535e-04,
4.94819542e-05, 6.14344171e-05, 9.58741002e-05,
1.28766158e-04, 1.94652850e-04, 2.41393209e-04,
1.60364230e-04, -1.41999890e-05, -6.13596058e-05,
6.79572622e-05, 1.21097153e-04, -9.76944037e-09,
-5.49086435e-05, 5.58508291e-05, 1.36878050e-04,
1.21550351e-04],
[ 5.26016038e-07, 1.24271712e-04, 2.35970729e-04,
1.74356275e-04, -1.32342866e-05, -5.57084786e-05,
1.29754219e-04, 2.72921199e-04, 2.43686882e-04,
1.98139867e-04, 1.90322971e-04, 1.92822670e-04,
2.19771260e-04, 1.89034996e-04, 2.74552949e-05,
-1.07212341e-04, -5.95865968e-05, 1.05343766e-04,
2.24970223e-04, 2.19185677e-04, 1.25688792e-04,
4.09510940e-05, 5.42922544e-06, 3.56244018e-05,
1.26195810e-04, 1.44066478e-04, 3.65264459e-05,
-3.75594900e-05, 6.59612351e-06, 4.50785592e-05,
7.68707177e-05],
[ 9.01028179e-05, 1.57428833e-04, 2.07729943e-04,
1.83902201e-04, 8.59787106e-05, -2.93249741e-05,
-4.01443904e-05, 8.65905531e-05, 2.74296646e-04,
3.97606462e-04, 3.76625831e-04, 3.10488453e-04,
3.03925684e-04, 2.50657264e-04, 8.78350329e-05,
-1.62432534e-05, 8.49524658e-05, 2.89930234e-04,
4.05957981e-04, 3.60714155e-04, 2.31550643e-04,
1.86596124e-04, 2.22587347e-04, 1.94238193e-04,
9.77661039e-05, 1.30580702e-05, -3.79030898e-05,
-2.51111196e-05, 5.09598831e-05, 1.17862386e-04,
1.58784678e-04],
[ 1.26586805e-04, 1.47764193e-04, 1.21227618e-04,
1.07567241e-04, 9.73561691e-05, -2.62960634e-06,
-7.90764097e-05, 8.23809023e-05, 4.21559729e-04,
5.81062224e-04, 4.64305049e-04, 3.95985029e-04,
5.04385040e-04, 5.14040526e-04, 3.40078899e-04,
2.08895421e-04, 2.48318480e-04, 3.65095621e-04,
4.52472566e-04, 4.69565508e-04, 4.18782613e-04,
4.12627007e-04, 4.26682149e-04, 3.09842784e-04,
1.15190043e-04, 1.26203922e-05, 1.67655326e-05,
9.60616308e-05, 2.24732503e-04, 3.24832916e-04,
3.07584094e-04],
[ 8.70921285e-05, 8.99088554e-05, 4.57445130e-05,
1.47284136e-05, 2.48302695e-05, 3.75261916e-05,
1.04437058e-04, 3.04165005e-04, 5.74388076e-04,
6.31482864e-04, 5.24375762e-04, 6.48113259e-04,
1.03455526e-03, 1.23596319e-03, 1.10962230e-03,
9.19762475e-04, 8.06013879e-04, 7.29060732e-04,
6.80224970e-04, 6.54205156e-04, 5.85820526e-04,
5.15840831e-04, 4.28209722e-04, 3.26885085e-04,
2.66392512e-04, 2.65364826e-04, 2.54026818e-04,
2.83679867e-04, 3.82139726e-04, 4.34142858e-04,
3.40681698e-04],
[ 2.67705618e-05, 3.52060233e-05, 5.01444010e-05,
2.07116755e-05, -1.47125493e-05, 2.03489180e-05,
1.74667264e-04, 3.91519163e-04, 6.15471101e-04,
7.03603029e-04, 7.29316904e-04, 9.43472434e-04,
1.35301880e-03, 1.60744193e-03, 1.57620630e-03,
1.42798119e-03, 1.28054479e-03, 1.16847351e-03,
1.07864384e-03, 9.50956077e-04, 7.26292899e-04,
5.19602501e-04, 3.67075088e-04, 3.11377604e-04,
3.44564585e-04, 4.15781105e-04, 4.12696565e-04,
3.71932460e-04, 3.34744342e-04, 2.89331714e-04,
2.17575871e-04],
[ -6.19701359e-06, 2.79479227e-05, 1.08629363e-04,
1.10446766e-04, 4.28918393e-05, -1.59790234e-05,
6.37421181e-05, 2.94988626e-04, 6.72983995e-04,
9.75804869e-04, 1.23918452e-03, 1.58980582e-03,
2.15687836e-03, 2.84912600e-03, 3.44874267e-03,
3.64848366e-03, 3.35466117e-03, 2.84754485e-03,
2.30689673e-03, 1.70514034e-03, 1.04716129e-03,
6.41009770e-04, 4.67646430e-04, 4.08837222e-04,
2.92016397e-04, 2.86833878e-04, 3.29357979e-04,
2.67972850e-04, 1.00064397e-04, 3.63361505e-05,
9.96371964e-05],
[ -5.67882466e-07, 5.08060548e-05, 1.30551591e-04,
1.57827846e-04, 1.27582811e-04, 4.91817409e-05,
9.40121536e-05, 3.15737299e-04, 7.65093428e-04,
1.24707678e-03, 1.83331280e-03, 2.57003610e-03,
3.45921540e-03, 4.29388555e-03, 4.88529447e-03,
5.04447240e-03, 4.70967358e-03, 4.11674846e-03,
3.41683673e-03, 2.55684974e-03, 1.53652672e-03,
8.98711965e-04, 6.89776614e-04, 6.15979312e-04,
3.43337771e-04, 1.73978668e-04, 1.21991347e-04,
3.66753629e-05, -8.58800995e-05, -3.52677125e-05,
1.08530170e-04],
[ 3.37872116e-05, 6.98544100e-05, 1.03699349e-04,
1.31374065e-04, 1.63710123e-04, 1.81806943e-04,
3.33123491e-04, 5.07342280e-04, 9.24890337e-04,
1.47657737e-03, 2.67444947e-03, 4.68453299e-03,
7.47570582e-03, 1.05615249e-02, 1.58804134e-02,
1.49398511e-02, 1.19190756e-02, 9.18036141e-03,
6.40049716e-03, 4.11758851e-03, 2.35498301e-03,
1.46076444e-03, 9.99144861e-04, 8.69806274e-04,
5.58527070e-04, 2.71378696e-04, 1.69032792e-05,
-1.02712445e-04, -7.02594843e-05, 7.04579725e-05,
1.29349166e-04],
[ 1.50863212e-04, 1.31067063e-04, 1.07944987e-04,
1.11346009e-04, 1.84717428e-04, 2.83048459e-04,
5.35565021e-04, 6.50145288e-04, 1.27295032e-03,
1.84973318e-03, 4.02115844e-03, 8.64035450e-03,
1.85660869e-02, 2.67588310e-02, 4.13986742e-02,
3.75211351e-02, 3.99451032e-02, 2.19815839e-02,
1.46964323e-02, 7.59503990e-03, 3.71277868e-03,
2.43205787e-03, 1.36608526e-03, 1.10033224e-03,
7.15283619e-04, 4.24796075e-04, 9.07893118e-05,
-3.93107002e-05, 3.73856092e-05, 1.34933362e-04,
8.41469518e-05],
[ 3.12657416e-04, 2.02556097e-04, 1.18483724e-04,
1.16338248e-04, 2.58727494e-04, 3.82861763e-04,
6.47869776e-04, 7.22532743e-04, 1.85770064e-03,
2.38643191e-03, 5.77186933e-03, 1.30834254e-02,
3.21811736e-02, 5.02903536e-02, 7.60164186e-02,
8.23163912e-02, 6.55187890e-02, 5.04501201e-02,
2.74458304e-02, 1.12441424e-02, 5.38016111e-03,
3.53081268e-03, 1.61524036e-03, 1.24434254e-03,
7.16076291e-04, 4.93183674e-04, 2.25529220e-04,
8.95171906e-05, 7.44353310e-05, 9.23250846e-05,
5.80607739e-05],
[ 3.23481683e-04, 1.60379801e-04, 6.94236369e-05,
1.16357034e-04, 3.34398763e-04, 4.58175607e-04,
7.41398602e-04, 8.16271116e-04, 2.42795702e-03,
2.75838282e-03, 7.24995369e-03, 2.07453221e-02,
4.65088971e-02, 7.89608508e-02, 1.19249240e-01,
1.33808449e-01, 1.13654882e-01, 8.49788636e-02,
4.27414551e-02, 2.24782787e-02, 6.53822487e-03,
4.25153133e-03, 1.61689415e-03, 1.29075698e-03,
6.18802325e-04, 4.10659268e-04, 2.27874494e-04,
1.22942773e-04, 4.52497516e-05, 1.28073461e-05,
1.60166855e-05],
[ 1.58397612e-04, 4.24731224e-05, 3.01601358e-05,
1.21667144e-04, 3.14779114e-04, 4.15842806e-04,
7.71404011e-04, 8.69169366e-04, 2.61580618e-03,
2.77252262e-03, 7.77245732e-03, 2.10336000e-02,
5.16618006e-02, 8.90334770e-02, 1.32652745e-01,
1.57979146e-01, 1.25354961e-01, 1.05975635e-01,
4.87471484e-02, 2.40709223e-02, 6.75325841e-03,
4.41299053e-03, 1.51851110e-03, 1.27687806e-03,
5.12708386e-04, 2.49664765e-04, 1.04957260e-04,
6.98268996e-05, 1.52905995e-05, -5.66302078e-05,
-1.03157217e-04],
[ 2.37388813e-05, -9.44329622e-06, 7.28973610e-05,
1.45914135e-04, 2.14597429e-04, 2.61879875e-04,
6.45024644e-04, 7.76131405e-04, 2.32133828e-03,
2.64355657e-03, 7.23631820e-03, 1.91156585e-02,
3.87151986e-02, 7.85027444e-02, 1.04771100e-01,
1.32386833e-01, 1.13936760e-01, 8.50518569e-02,
4.59707938e-02, 1.98367629e-02, 6.28389278e-03,
4.06758115e-03, 1.48181198e-03, 1.24518829e-03,
5.25051320e-04, 2.38819033e-04, 6.78987781e-05,
5.05482931e-05, 4.51459491e-05, -2.35327320e-06,
-8.10507045e-05],
[ 5.06010019e-06, 3.72947142e-07, 1.22372061e-04,
1.55802540e-04, 1.24519778e-04, 1.23926759e-04,
4.53011831e-04, 6.43062172e-04, 1.81168271e-03,
2.53919838e-03, 6.02304796e-03, 1.33035155e-02,
3.03080510e-02, 4.51316275e-02, 6.80428073e-02,
7.87105411e-02, 6.83350638e-02, 5.41879274e-02,
3.03043667e-02, 1.30727030e-02, 5.33299660e-03,
3.26684839e-03, 1.41233020e-03, 1.17955718e-03,
6.65704778e-04, 4.21560457e-04, 1.78371061e-04,
9.97985844e-05, 1.39496973e-04, 2.18161294e-04,
2.08089550e-04],
[ 2.39601777e-05, 2.21635291e-05, 1.14513583e-04,
1.19043048e-04, 7.76750167e-05, 1.02026192e-04,
4.02551843e-04, 6.65154657e-04, 1.42666139e-03,
2.27654330e-03, 4.46224120e-03, 8.39890260e-03,
1.30807683e-02, 2.55265832e-02, 3.33221108e-02,
3.61914933e-02, 3.77908386e-02, 2.31504384e-02,
1.68734826e-02, 8.33798666e-03, 4.00891248e-03,
2.29511573e-03, 1.17400289e-03, 9.92648304e-04,
7.22233963e-04, 5.48228098e-04, 2.67189520e-04,
1.37489595e-04, 2.13896274e-04, 3.94635805e-04,
4.39600262e-04],
[ 6.23589949e-05, 8.92760654e-05, 9.79828910e-05,
6.91207024e-05, 8.57825144e-05, 2.02807147e-04,
5.06308512e-04, 7.66013341e-04, 1.16996281e-03,
1.70234416e-03, 2.81825359e-03, 4.55184840e-03,
6.95564412e-03, 9.65203252e-03, 1.53076285e-02,
1.43200746e-02, 1.65908057e-02, 1.03640771e-02,
7.39802886e-03, 4.66627069e-03, 2.64005945e-03,
1.50676281e-03, 8.05904623e-04, 6.65010943e-04,
5.85204456e-04, 4.94408188e-04, 2.34044710e-04,
1.05874708e-04, 2.03739924e-04, 3.64033622e-04,
3.66085063e-04],
[ 8.67988638e-05, 1.41086144e-04, 1.11501467e-04,
1.00657169e-04, 1.85763391e-04, 3.11478914e-04,
4.91909974e-04, 6.40244340e-04, 8.55031773e-04,
1.11193419e-03, 1.64164591e-03, 2.48288224e-03,
3.66801536e-03, 4.93864343e-03, 5.92702301e-03,
6.39919844e-03, 6.21232018e-03, 5.31890662e-03,
3.92624969e-03, 2.61738058e-03, 1.64109841e-03,
1.00608368e-03, 5.06966840e-04, 3.94159695e-04,
4.27827297e-04, 3.86294152e-04, 1.48756444e-04,
4.65777812e-05, 1.58774055e-04, 2.52897677e-04,
2.00576906e-04],
[ 3.37751662e-05, 7.41251060e-05, 1.17368654e-04,
2.04129843e-04, 3.13290016e-04, 3.23200540e-04,
2.81042245e-04, 2.99956737e-04, 4.86716832e-04,
7.54022039e-04, 1.09415397e-03, 1.52351952e-03,
2.05146684e-03, 2.48926901e-03, 2.69005261e-03,
2.72322842e-03, 2.66885129e-03, 2.44992133e-03,
1.99378422e-03, 1.47749775e-03, 1.01475196e-03,
6.71428046e-04, 4.07902524e-04, 3.35486606e-04,
3.32384778e-04, 2.58078508e-04, 8.59883512e-05,
4.60952906e-05, 1.45739934e-04, 1.80587085e-04,
1.13000431e-04],
[ -4.24637801e-06, -1.40777156e-05, 8.70996955e-05,
2.22028204e-04, 2.97378720e-04, 2.60808825e-04,
1.91989529e-04, 1.85491386e-04, 3.34152573e-04,
5.63626410e-04, 8.59337917e-04, 1.17473991e-03,
1.52407016e-03, 1.90831302e-03, 2.20574974e-03,
2.22596643e-03, 1.98955438e-03, 1.70866319e-03,
1.43378158e-03, 1.09570054e-03, 7.16906914e-04,
5.33116632e-04, 5.02312207e-04, 4.49821993e-04,
2.42741953e-04, 9.23747939e-05, 7.99639820e-05,
1.36120565e-04, 1.63458535e-04, 1.22343510e-04,
3.93863447e-05],
[ 5.69609983e-05, 1.73523185e-05, 6.40260187e-05,
1.20879384e-04, 1.43646175e-04, 2.16061788e-04,
3.54024611e-04, 4.07661224e-04, 3.58933845e-04,
3.38174548e-04, 4.71632462e-04, 6.41264895e-04,
7.74718646e-04, 9.90612898e-04, 1.26875984e-03,
1.31991541e-03, 1.06533815e-03, 8.48297495e-04,
8.23326525e-04, 7.31756329e-04, 4.78470349e-04,
4.04468388e-04, 5.31740312e-04, 4.86872537e-04,
1.58969007e-04, 6.51523487e-06, 1.51630229e-04,
2.65381881e-04, 1.91450192e-04, 7.43656929e-05,
-1.58241182e-05],
[ 1.17583440e-04, 7.81745766e-05, 6.28807975e-05,
5.76429666e-05, 8.64068497e-05, 2.51753721e-04,
4.96915658e-04, 5.61750669e-04, 3.79482983e-04,
1.83867858e-04, 1.96382141e-04, 3.05051944e-04,
3.97037104e-04, 5.88906172e-04, 8.83791305e-04,
9.97585594e-04, 7.86893943e-04, 5.53074176e-04,
5.21226437e-04, 4.94938286e-04, 3.33534525e-04,
2.57302396e-04, 3.20619816e-04, 2.87285424e-04,
8.89307703e-05, 4.46845261e-05, 2.19037989e-04,
3.19396699e-04, 2.09293052e-04, 8.27520344e-05,
3.74381343e-05],
[ 8.29820856e-05, 6.90407614e-05, 7.88107500e-05,
1.12841059e-04, 1.79206720e-04, 2.88871757e-04,
3.90221598e-04, 3.79044010e-04, 2.56998552e-04,
1.32744783e-04, 9.68408785e-05, 1.29585766e-04,
2.00170762e-04, 3.32332536e-04, 4.80854040e-04,
5.01023198e-04, 3.60588630e-04, 2.58127344e-04,
3.06765636e-04, 3.57608777e-04, 2.83811503e-04,
1.61550328e-04, 6.34512617e-05, 4.94756819e-07,
-1.60358031e-05, 4.90231869e-05, 1.74128450e-04,
2.60341651e-04, 2.19396825e-04, 1.49694664e-04,
1.41322962e-04],
[ 1.49950465e-05, 7.63047574e-05, 1.41467142e-04,
1.94792388e-04, 2.02353724e-04, 1.83372205e-04,
1.58346622e-04, 1.04015133e-04, 7.28349551e-05,
7.21577380e-05, 5.38176573e-05, 7.33939814e-05,
1.82003147e-04, 2.95805483e-04, 2.92530225e-04,
1.55183196e-04, 2.42199985e-05, 6.74540934e-05,
2.36359934e-04, 3.27022077e-04, 2.74324237e-04,
1.71450374e-04, 4.91856554e-05, -4.54968213e-05,
-4.42001874e-05, 1.27510330e-05, 7.70893821e-05,
1.61725053e-04, 2.11200968e-04, 1.84710239e-04,
1.30427681e-04],
[ 3.82523067e-05, 1.62776094e-04, 2.29700861e-04,
2.13740321e-04, 8.46937110e-05, 2.72549073e-06,
3.94645940e-05, 1.42122672e-05, -6.94230403e-05,
-9.39216843e-05, -6.60941514e-05, 3.26379231e-05,
2.00269962e-04, 2.96857586e-04, 2.09510399e-04,
1.20158993e-05, -1.01647180e-04, -1.86980396e-05,
1.49961386e-04, 2.03447606e-04, 1.50521664e-04,
1.42593373e-04, 1.54100853e-04, 1.37039591e-04,
1.27139734e-04, 1.02055506e-04, 4.87680190e-05,
6.11553332e-05, 1.30138273e-04, 9.36011056e-05,
-3.46084780e-05],
[ 1.45188038e-04, 2.12477811e-04, 2.30793623e-04,
1.70628540e-04, 3.61743005e-05, 2.19909125e-05,
1.38719668e-04, 1.01310732e-04, -9.27221336e-05,
-1.97339192e-04, -1.34774629e-04, 3.83440201e-05,
2.23026072e-04, 2.67371885e-04, 1.41579862e-04,
4.79614926e-07, -3.39062790e-05, 1.89720467e-05,
8.16576139e-05, 7.70369734e-05, 4.06700310e-05,
7.07829677e-05, 1.45996106e-04, 2.27258963e-04,
2.88130395e-04, 2.59734894e-04, 1.29991880e-04,
3.11676340e-05, 1.03388302e-05, -7.12372421e-05,
-1.81315132e-04],
[ 1.74572386e-04, 1.21468227e-04, 1.03557053e-04,
9.54290008e-05, 1.19859134e-04, 2.27726166e-04,
3.10518721e-04, 2.02525058e-04, -9.97376901e-06,
-1.25466555e-04, -8.44838651e-05, 5.78080217e-05,
1.99554896e-04, 1.96891691e-04, 6.75892225e-05,
6.29681153e-07, 4.65245939e-05, 1.00516285e-04,
1.16317417e-04, 9.37591758e-05, 5.00349415e-05,
3.41828090e-05, 6.40527724e-05, 1.28187123e-04,
2.03160991e-04, 2.64978939e-04, 2.48178956e-04,
1.35484006e-04, -2.65322942e-05, -1.60183437e-04,
-1.74708708e-04],
[ 7.79956536e-05, 4.45946062e-05, 6.88394939e-05,
1.20791679e-04, 2.06197074e-04, 2.92086275e-04,
2.88313808e-04, 1.82209784e-04, 7.11210378e-05,
9.09970095e-06, -1.04216479e-05, 2.26099164e-05,
9.15258279e-05, 9.49135938e-05, 2.28152076e-05,
-9.77456921e-07, 6.45484615e-05, 1.56735507e-04,
2.20143396e-04, 2.00825802e-04, 1.07963591e-04,
4.76653659e-05, 5.46586489e-05, 7.16102004e-05,
9.44839630e-05, 1.83187702e-04, 2.52839702e-04,
1.77972455e-04, -1.64299454e-05, -1.33052861e-04,
-7.75216977e-05]]], dtype=np.float32)
tbhdu=fits.BinTableHDU.from_columns(fits.ColDefs([fits.Column(name='PSF_MASK' , format='961E' , dim='(31, 31, 1)' , array=arr)]))
hdr=tbhdu.header
hdr.set('EXTNAME' , 'PSF_DATA', 'TABLE NAME')
hdr.set('LOADED' , 36 , 'Number of loaded sources')
hdr.set('ACCEPTED' , 32 , 'Number of accepted sources')
hdr.set('CHI2' , 1.4190 , 'Final Chi2')
hdr.set('POLNAXIS' , 0 , 'Number of context parameters')
hdr.set('POLNGRP' , 0 , 'Number of context groups')
hdr.set('PSF_FWHM' , 2.5813 , 'PSF FWHM')
hdr.set('PSF_SAMP' , 0.5000 , 'Sampling step of the PSF data')
hdr.set('PSFNAXIS' , 3 , 'Dimensionality of the PSF data')
hdr.set('PSFAXIS1' , 31 , 'Number of element along this axis')
hdr.set('PSFAXIS2' , 31 , 'Number of element along this axis')
hdr.set('PSFAXIS3' , 1 , 'Number of element along this axis')
thdulist=fits.HDUList()
thdulist.append(tbhdu)
thdulist.writeto(self._get_psf_filepath())
logger.debug("Wrote %s" % (self._get_psf_filepath()))
else:
logger.debug("Default psf file already exists, I don't overwrite it.")
def _clean_workdir(self):
"""
Removes the config/param files related to this instance, to allow for a fresh restart.
Files related to specific images are not removed.
"""
toremove = [self._get_config_filepath(), self._get_params_filepath(), self._get_conv_filepath(), self._get_psf_filepath()]
for filepath in toremove:
if os.path.exists(filepath):
logger.debug("Removing existing file %s..." % (filepath))
os.remove(filepath)
def _write_assoc(self, cat, xname, yname, imgname):
"""
Writes a plain text file which can be used as sextractor input for the ASSOC identification.
And "index" for each source is generated, it gets used to identify galaxies.
"""
#if assoc_xname not in assoc_cat.colnames or assoc_yname not in assoc_cat.colnames:
# raise RuntimeError("I don't have columns %s or %s" % (assoc_xname, assoc_yname))
if os.path.exists(self._get_assoc_filepath(imgname)):
logger.warning("ASSOC file already exists, I will overwrite it")
lines = []
for (number, row) in enumerate(cat):
# Seems safe(r) to not use row.index but our own number.
lines.append("%.3f\t%.3f\t%i\n" % (row[xname], row[yname], number))
lines = "".join(lines)
f = open(self._get_assoc_filepath(imgname), "w")
f.writelines(lines)
f.close()
logger.debug("Wrote ASSOC file %s..." % (self._get_assoc_filepath(imgname)))
def _add_prefix(self, table, prefix):
"""
Modifies the column names of a table by prepending the prefix *in place*.
Skips the VECTOR_ASSOC stuff !
"""
if prefix == "":
return
for colname in table.colnames:
if colname not in ["VECTOR_ASSOC", "VECTOR_ASSOC_1", "VECTOR_ASSOC_2"]:
table.rename_column(colname, prefix + colname)
def __call__(self, imgfilepath, imgname=None, assoc_cat=None, assoc_xname="x", assoc_yname="y",
returncat=True, prefix="", writelog=True):
"""
Runs SExtractor on a given image.
:param imgfilepath: Path to the input FITS image I should run on
:param assoc_cat: optional input catalog (astropy table), if you want to use the ASSOC helper
:param assoc_xname: x coordinate name I should use in the ASSOC helper
:param assoc_yname: idem
:param returncat: by default I read the SExtractor output catalog and return it as an astropy
table.
If set to False, I do not attempt to read it.
:param prefix: will be prepended to the column names of the astropy table that I return
:type prefix: string
:param writelog: if True I save the sextractor command line input and output into a dedicated
log file in the workdir.
:returns: a dict containing the keys:
* **catfilepath**: the path to the sextractor output catalog file
* **table**: the astropy table of the output catalog (if returncat was not set to False)
* **workdir**: the path to the workdir (all my internal files are there)
* **logfilepath**: the path to the SExtractor log file (in the workdir)
Everything related to this particular image stays within this method, the SExtractor instance
(in particular config) is not modified !
"""
starttime = datetime.now()
# Let's first check if the image file exists.
if not os.path.exists(imgfilepath):
raise IOError("The image file %s does not exist." % imgfilepath)
logger.info("Preparing to run SExtractor on %s..." % imgfilepath)
if imgname == None:
imgname = os.path.splitext(os.path.basename(imgfilepath))[0]
logger.debug("Using imgname '%s'..." % (imgname))
# We make a deep copy of the config, that we can modify with settings related to this particular
# image.
imgconfig = copy.deepcopy(self.config)
# We set the catalog name :
imgconfig["CATALOG_NAME"] = self._get_cat_filepath(imgname)
if os.path.exists(self._get_cat_filepath(imgname)):
logger.warning("Output catalog %s already exists, I will overwrite it" % (self._get_cat_filepath(imgname)))
# We prepare the ASSOC catalog file, if needed
if assoc_cat is not None:
logger.info("I will run in ASSOC mode, trying to find %i sources..." % (len(assoc_cat)))
if "VECTOR_ASSOC(3)" not in self.params:
raise RuntimeError("To use the ASSOC helper, you have to add 'VECTOR_ASSOC(3)' to the params")
if assoc_xname not in assoc_cat.colnames or assoc_yname not in assoc_cat.colnames:
raise RuntimeError("I don't have columns %s or %s" % (assoc_xname, assoc_yname))
if "VECTOR_ASSOC_2" in assoc_cat.colnames:
raise RuntimeError("Do not give me an assoc_cat that already contains a column VECTOR_ASSOC_2")
for param in self.params + [prefix + "assoc_flag"]:
# This is not 100% correct, as some params might be vectors.
if prefix + param in assoc_cat.colnames:
raise RuntimeError("Your assoc_cat already has a column named %s, fix this" % (prefix + param))
self._write_assoc(cat=assoc_cat, xname=assoc_xname, yname=assoc_yname, imgname=imgname)
imgconfig["ASSOC_DATA"] = "1, 2, 3"
imgconfig["ASSOC_NAME"] = self._get_assoc_filepath(imgname)
imgconfig["ASSOC_PARAMS"] = "1, 2"
if "ASSOC_RADIUS" not in imgconfig:
logger.warning("ASSOC_RADIUS not specified, using a default of 10.0")
imgconfig["ASSOC_RADIUS"] = 10.0
if "ASSOC_TYPE" not in imgconfig:
logger.warning("ASSOC_TYPE not specified, using a default NEAREST")
imgconfig["ASSOC_TYPE"] = "NEAREST"
if "ASSOCSELEC_TYPE" in imgconfig:
raise RuntimeError("Sorry, you cannot mess with ASSOCSELEC_TYPE yourself when using the helper. I'm using MATCHED.")
imgconfig["ASSOCSELEC_TYPE"] = "MATCHED"
# We write the input files (if needed)
self._write_default_config()
self._write_params()
self._write_default_conv()
self._write_default_psf()
# We build the command line arguments
popencmd = [self.sexpath, imgfilepath, "-c", self._get_config_filepath()]
if self.nice is not None: # We prepend the nice command
popencmd[:0] = ["nice", "-n", str(self.nice)]
# We add the current state of config
for (key, value) in imgconfig.items():
popencmd.append("-"+str(key))
popencmd.append(str(value).replace(' ',''))
# And we run
logger.info("Starting SExtractor now, with niceness %s..." % (self.nice))
logger.debug("Running with command %s..." % (popencmd))
p = subprocess.Popen(popencmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if writelog:
logfile = open(self._get_log_filepath(imgname), "w")
logfile.write("SExtractor was called with :\n")
logfile.write(" ".join(popencmd))
logfile.write("\n\nA nicer view of the config:\n")
logfile.write("\n".join(["%30s : %30s" % (str(key), str(value)) for (key, value) in imgconfig.items()]))
logfile.write("\n\n####### stdout #######\n")
logfile.write(out.decode(encoding='UTF-8'))
logfile.write("\n####### stderr #######\n")
logfile.write(err.decode(encoding='UTF-8'))
logfile.write("\n")
logfile.close()
logger.info("SExtractor stderr:")
logger.info(err.decode(encoding='UTF-8'))
if "All done" not in err.decode(encoding='UTF-8'):
logger.warning("Ouch, something seems wrong, check SExtractor log: %s" % self._get_log_filepath(imgname))
endtime = datetime.now()
logger.info("Running SExtractor done, it took %.2f seconds." % \
((endtime - starttime).total_seconds()))
# Let's check if this worked.
if not os.path.isfile(self._get_cat_filepath(imgname)):
raise RuntimeError("It seems that SExtractor did not write the file '%s'. Check SExtractor log: %s" % (self._get_cat_filepath(imgname), self._get_log_filepath(imgname)))
# We return a dict. It always contains at least the path to the sextractor catalog:
output = {"catfilepath":self._get_cat_filepath(imgname), "workdir":self.workdir}
if writelog:
output["logfilepath"] = self._get_log_filepath(imgname)
# And we read the output, if asked for:
if returncat:
if assoc_cat is None:
sextable = astropy.table.Table.read(self._get_cat_filepath(imgname),
format="ascii.sextractor")
logger.info("Read %i objects from the SExtractor output catalog" % (len(sextable)))
self._add_prefix(sextable, prefix)
output["table"] = sextable
else: # We have to process the output catalog, merging it.
# We add the "number" column to the assoc_cat, calling it VECTOR_ASSOC_2:
intable = copy.deepcopy(assoc_cat)
intable["VECTOR_ASSOC_2"] = range(len(assoc_cat))
# We read in the SExtractor output:
sextable = astropy.table.Table.read(self._get_cat_filepath(imgname),
format="ascii.sextractor")
logger.info("Read %i objects from the SExtractor output catalog" % (len(sextable)))
self._add_prefix(sextable, prefix)
sextable.remove_columns(["VECTOR_ASSOC", "VECTOR_ASSOC_1"])
# Due to what seems to be a bug in SExtractor (version 2.19.5 and earlier),
# we need to kick out "duplicated" (same VECTOR_ASSOC_2) rows.
# That's weird, as in principle we asked to keep the NEAREST !
sortedassoc = np.sort(sextable["VECTOR_ASSOC_2"].data)
duplassoc = list(np.unique(sortedassoc[:-1][sortedassoc[1:] == sortedassoc[:-1]]))
# The unique is here as there might be more than 2 identical numbers...
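# Illustrative example of the detection above: if sortedassoc is [0, 1, 1, 2],
# then sortedassoc[1:] == sortedassoc[:-1] gives [False, True, False], so
# duplassoc becomes [1], flagging the VECTOR_ASSOC_2 value that occurs twice.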
if len(duplassoc) > 0:
logger.warning("%i sources from the SExtractor catalog are strange duplicates (bug ?), I discard them." % (len(duplassoc)))
rowindices_to_remove = []
for row in sextable:
if row["VECTOR_ASSOC_2"] in duplassoc:
rowindices_to_remove.append(row.index)
sextable.remove_rows(rowindices_to_remove)
if len(sextable) == 0:
raise RuntimeError("SExtractor has returned no ASSOC match")
# We merge the tables, keeping all entries of the "intable"
joined = astropy.table.join(intable, sextable,
join_type='left', keys='VECTOR_ASSOC_2',
# raises an error in case of metadata conflict.
metadata_conflicts = "error",
# Will only be used in case of column name conflicts.
table_names = ['ASSOC', 'SEx'],
uniq_col_name = "{table_name}_{col_name}"
)
# This join does not mix the order, as the output is sorted according to our own
# VECTOR_ASSOC_2
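# Illustrative example: if intable has VECTOR_ASSOC_2 values 0..4 and SExtractor
# only matched 0, 2 and 3, this left join keeps all 5 rows and masks the
# SExtractor columns of the unmatched rows 1 and 4.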
# We remove the last ASSOC column:
joined.remove_columns(["VECTOR_ASSOC_2"])
#assert len(intable) == len(joined)
# More explicit:
if not len(intable) == len(joined):
raise RuntimeError("Problem with joined tables: intable has %i rows, joined has %i. %s %s" % (len(intable), len(joined), intable.colnames, joined.colnames))
# The join might return a **masked** table.
# In any case, we add one simply-named column with a flag telling if the
# identification has worked.
if joined.masked:
logger.info("ASSOC join done, my output is a masked table.")
joined[prefix + "assoc_flag"] = joined[joined.colnames[-1]].mask == False
nfound = sum(joined[prefix + "assoc_flag"])
logger.info("I could find %i out of %i sources (%i are missing)" % \
(nfound, len(assoc_cat), len(assoc_cat)-nfound))
else:
logger.info("ASSOC join done, I could find all your sources, my output is not masked.")
joined[prefix + "assoc_flag"] = [True] * len(joined)
output["table"] = joined
return output
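# Hypothetical usage sketch (assuming this method belongs to sewpy's SEW class;
# the file name and parameter choices below are illustrative, not from this file):
# sew = SEW(params=["X_IMAGE", "Y_IMAGE", "FLUX_AUTO"], config={"DETECT_MINAREA": 10})
# out = sew("image.fits", prefix="sex_")
# print(out["table"]) # astropy table with columns sex_X_IMAGE, sex_Y_IMAGE, sex_FLUX_AUTO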
# def destroy(self):
# """
# Removes the complete working dir, careful with this.
# """
# # No, this is way too dangerous, workdir could be "."
# #shutil.rmtree(self.workdir)
# Some class attributes:
# We give this fullparamtxt here as some earlier versions of SExtractor are not able to spit it out.
# It's only used to check your params for typos, anyway.
fullparamtxt = """
#NUMBER Running object number
#EXT_NUMBER FITS extension number
#FLUX_ISO Isophotal flux [count]
#FLUXERR_ISO RMS error for isophotal flux [count]
#MAG_ISO Isophotal magnitude [mag]
#MAGERR_ISO RMS error for isophotal magnitude [mag]
#FLUX_ISOCOR Corrected isophotal flux [count]
#FLUXERR_ISOCOR RMS error for corrected isophotal flux [count]
#MAG_ISOCOR Corrected isophotal magnitude [mag]
#MAGERR_ISOCOR RMS error for corrected isophotal magnitude [mag]
#FLUX_APER Flux vector within fixed circular aperture(s) [count]
#FLUXERR_APER RMS error vector for aperture flux(es) [count]
#MAG_APER Fixed aperture magnitude vector [mag]
#MAGERR_APER RMS error vector for fixed aperture mag. [mag]
#FLUX_AUTO Flux within a Kron-like elliptical aperture [count]
#FLUXERR_AUTO RMS error for AUTO flux [count]
#MAG_AUTO Kron-like elliptical aperture magnitude [mag]
#MAGERR_AUTO RMS error for AUTO magnitude [mag]
#FLUX_PETRO Flux within a Petrosian-like elliptical aperture [count]
#FLUXERR_PETRO RMS error for PETROsian flux [count]
#MAG_PETRO Petrosian-like elliptical aperture magnitude [mag]
#MAGERR_PETRO RMS error for PETROsian magnitude [mag]
#FLUX_BEST Best of FLUX_AUTO and FLUX_ISOCOR [count]
#FLUXERR_BEST RMS error for BEST flux [count]
#MAG_BEST Best of MAG_AUTO and MAG_ISOCOR [mag]
#MAGERR_BEST RMS error for MAG_BEST [mag]
#FLUX_WIN Gaussian-weighted flux [count]
#FLUXERR_WIN RMS error for WIN flux [count]
#MAG_WIN Gaussian-weighted magnitude [mag]
#MAGERR_WIN RMS error for MAG_WIN [mag]
#FLUX_SOMFIT Flux derived from SOM fit [count]
#FLUXERR_SOMFIT RMS error for SOMFIT flux [count]
#MAG_SOMFIT Magnitude derived from SOM fit [mag]
#MAGERR_SOMFIT Magnitude error derived from SOM fit [mag]
#ERROR_SOMFIT Reduced Chi-square error of the SOM fit
#VECTOR_SOMFIT Position vector of the winning SOM node
#KRON_RADIUS Kron apertures in units of A or B
#PETRO_RADIUS Petrosian apertures in units of A or B
#BACKGROUND Background at centroid position [count]
#THRESHOLD Detection threshold above background [count]
#FLUX_MAX Peak flux above background [count]
#ISOAREA_IMAGE Isophotal area above Analysis threshold [pixel**2]
#ISOAREAF_IMAGE Isophotal area (filtered) above Detection threshold [pixel**2]
#XMIN_IMAGE Minimum x-coordinate among detected pixels [pixel]
#YMIN_IMAGE Minimum y-coordinate among detected pixels [pixel]
#XMAX_IMAGE Maximum x-coordinate among detected pixels [pixel]
#YMAX_IMAGE Maximum y-coordinate among detected pixels [pixel]
#XPEAK_IMAGE x-coordinate of the brightest pixel [pixel]
#YPEAK_IMAGE y-coordinate of the brightest pixel [pixel]
#XPEAK_WORLD World-x coordinate of the brightest pixel [deg]
#YPEAK_WORLD World-y coordinate of the brightest pixel [deg]
#ALPHAPEAK_SKY Right ascension of brightest pix (native) [deg]
#DELTAPEAK_SKY Declination of brightest pix (native) [deg]
#ALPHAPEAK_J2000 Right ascension of brightest pix (J2000) [deg]
#DELTAPEAK_J2000 Declination of brightest pix (J2000) [deg]
#ALPHAPEAK_B1950 Right ascension of brightest pix (B1950) [deg]
#DELTAPEAK_B1950 Declination of brightest pix (B1950) [deg]
#X_IMAGE Object position along x [pixel]
#Y_IMAGE Object position along y [pixel]
#X_IMAGE_DBL Object position along x (double precision) [pixel]
#Y_IMAGE_DBL Object position along y (double precision) [pixel]
#X_WORLD Barycenter position along world x axis [deg]
#Y_WORLD Barycenter position along world y axis [deg]
#X_MAMA Barycenter position along MAMA x axis [m**(-6)]
#Y_MAMA Barycenter position along MAMA y axis [m**(-6)]
#ALPHA_SKY Right ascension of barycenter (native) [deg]
#DELTA_SKY Declination of barycenter (native) [deg]
#ALPHA_J2000 Right ascension of barycenter (J2000) [deg]
#DELTA_J2000 Declination of barycenter (J2000) [deg]
#ALPHA_B1950 Right ascension of barycenter (B1950) [deg]
#DELTA_B1950 Declination of barycenter (B1950) [deg]
#X2_IMAGE Variance along x [pixel**2]
#Y2_IMAGE Variance along y [pixel**2]
#XY_IMAGE Covariance between x and y [pixel**2]
#X2_WORLD Variance along X-WORLD (alpha) [deg**2]
#Y2_WORLD Variance along Y-WORLD (delta) [deg**2]
#XY_WORLD Covariance between X-WORLD and Y-WORLD [deg**2]
#CXX_IMAGE Cxx object ellipse parameter [pixel**(-2)]
#CYY_IMAGE Cyy object ellipse parameter [pixel**(-2)]
#CXY_IMAGE Cxy object ellipse parameter [pixel**(-2)]
#CXX_WORLD Cxx object ellipse parameter (WORLD units) [deg**(-2)]
#CYY_WORLD Cyy object ellipse parameter (WORLD units) [deg**(-2)]
#CXY_WORLD Cxy object ellipse parameter (WORLD units) [deg**(-2)]
#A_IMAGE Profile RMS along major axis [pixel]
#B_IMAGE Profile RMS along minor axis [pixel]
#THETA_IMAGE Position angle (CCW/x) [deg]
#A_WORLD Profile RMS along major axis (world units) [deg]
#B_WORLD Profile RMS along minor axis (world units) [deg]
#THETA_WORLD Position angle (CCW/world-x) [deg]
#THETA_SKY Position angle (east of north) (native) [deg]
#THETA_J2000 Position angle (east of north) (J2000) [deg]
#THETA_B1950 Position angle (east of north) (B1950) [deg]
#ERRX2_IMAGE Variance of position along x [pixel**2]
#ERRY2_IMAGE Variance of position along y [pixel**2]
#ERRXY_IMAGE Covariance of position between x and y [pixel**2]
#ERRX2_WORLD Variance of position along X-WORLD (alpha) [deg**2]
#ERRY2_WORLD Variance of position along Y-WORLD (delta) [deg**2]
#ERRXY_WORLD Covariance of position X-WORLD/Y-WORLD [deg**2]
#ERRCXX_IMAGE Cxx error ellipse parameter [pixel**(-2)]
#ERRCYY_IMAGE Cyy error ellipse parameter [pixel**(-2)]
#ERRCXY_IMAGE Cxy error ellipse parameter [pixel**(-2)]
#ERRCXX_WORLD Cxx error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCYY_WORLD Cyy error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCXY_WORLD Cxy error ellipse parameter (WORLD units) [deg**(-2)]
#ERRA_IMAGE RMS position error along major axis [pixel]
#ERRB_IMAGE RMS position error along minor axis [pixel]
#ERRTHETA_IMAGE Error ellipse position angle (CCW/x) [deg]
#ERRA_WORLD World RMS position error along major axis [deg]
#ERRB_WORLD World RMS position error along minor axis [deg]
#ERRTHETA_WORLD Error ellipse pos. angle (CCW/world-x) [deg]
#ERRTHETA_SKY Native error ellipse pos. angle (east of north) [deg]
#ERRTHETA_J2000 J2000 error ellipse pos. angle (east of north) [deg]
#ERRTHETA_B1950 B1950 error ellipse pos. angle (east of north) [deg]
#XWIN_IMAGE Windowed position estimate along x [pixel]
#YWIN_IMAGE Windowed position estimate along y [pixel]
#XWIN_WORLD Windowed position along world x axis [deg]
#YWIN_WORLD Windowed position along world y axis [deg]
#ALPHAWIN_SKY Windowed right ascension (native) [deg]
#DELTAWIN_SKY Windowed declination (native) [deg]
#ALPHAWIN_J2000 Windowed right ascension (J2000) [deg]
#DELTAWIN_J2000 windowed declination (J2000) [deg]
#ALPHAWIN_B1950 Windowed right ascension (B1950) [deg]
#DELTAWIN_B1950 Windowed declination (B1950) [deg]
#X2WIN_IMAGE Windowed variance along x [pixel**2]
#Y2WIN_IMAGE Windowed variance along y [pixel**2]
#XYWIN_IMAGE Windowed covariance between x and y [pixel**2]
#X2WIN_WORLD Windowed variance along X-WORLD (alpha) [deg**2]
#Y2WIN_WORLD Windowed variance along Y-WORLD (delta) [deg**2]
#XYWIN_WORLD Windowed covariance between X-WORLD and Y-WORLD [deg**2]
#CXXWIN_IMAGE Windowed Cxx object ellipse parameter [pixel**(-2)]
#CYYWIN_IMAGE Windowed Cyy object ellipse parameter [pixel**(-2)]
#CXYWIN_IMAGE Windowed Cxy object ellipse parameter [pixel**(-2)]
#CXXWIN_WORLD Windowed Cxx object ellipse parameter (WORLD units) [deg**(-2)]
#CYYWIN_WORLD Windowed Cyy object ellipse parameter (WORLD units) [deg**(-2)]
#CXYWIN_WORLD Windowed Cxy object ellipse parameter (WORLD units) [deg**(-2)]
#AWIN_IMAGE Windowed profile RMS along major axis [pixel]
#BWIN_IMAGE Windowed profile RMS along minor axis [pixel]
#THETAWIN_IMAGE Windowed position angle (CCW/x) [deg]
#AWIN_WORLD Windowed profile RMS along major axis (world units) [deg]
#BWIN_WORLD Windowed profile RMS along minor axis (world units) [deg]
#THETAWIN_WORLD Windowed position angle (CCW/world-x) [deg]
#THETAWIN_SKY Windowed position angle (east of north) (native) [deg]
#THETAWIN_J2000 Windowed position angle (east of north) (J2000) [deg]
#THETAWIN_B1950 Windowed position angle (east of north) (B1950) [deg]
#ERRX2WIN_IMAGE Variance of windowed pos along x [pixel**2]
#ERRY2WIN_IMAGE Variance of windowed pos along y [pixel**2]
#ERRXYWIN_IMAGE Covariance of windowed pos between x and y [pixel**2]
#ERRX2WIN_WORLD Variance of windowed pos along X-WORLD (alpha) [deg**2]
#ERRY2WIN_WORLD Variance of windowed pos along Y-WORLD (delta) [deg**2]
#ERRXYWIN_WORLD Covariance of windowed pos X-WORLD/Y-WORLD [deg**2]
#ERRCXXWIN_IMAGE Cxx windowed error ellipse parameter [pixel**(-2)]
#ERRCYYWIN_IMAGE Cyy windowed error ellipse parameter [pixel**(-2)]
#ERRCXYWIN_IMAGE Cxy windowed error ellipse parameter [pixel**(-2)]
#ERRCXXWIN_WORLD Cxx windowed error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCYYWIN_WORLD Cyy windowed error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCXYWIN_WORLD Cxy windowed error ellipse parameter (WORLD units) [deg**(-2)]
#ERRAWIN_IMAGE RMS windowed pos error along major axis [pixel]
#ERRBWIN_IMAGE RMS windowed pos error along minor axis [pixel]
#ERRTHETAWIN_IMAGE Windowed error ellipse pos angle (CCW/x) [deg]
#ERRAWIN_WORLD World RMS windowed pos error along major axis [deg]
#ERRBWIN_WORLD World RMS windowed pos error along minor axis [deg]
#ERRTHETAWIN_WORLD Windowed error ellipse pos. angle (CCW/world-x) [deg]
#ERRTHETAWIN_SKY Native windowed error ellipse pos. angle (east of north) [deg]
#ERRTHETAWIN_J2000 J2000 windowed error ellipse pos. angle (east of north) [deg]
#ERRTHETAWIN_B1950 B1950 windowed error ellipse pos. angle (east of north) [deg]
#NITER_WIN Number of iterations for WIN centering
#MU_THRESHOLD Detection threshold above background [mag * arcsec**(-2)]
#MU_MAX Peak surface brightness above background [mag * arcsec**(-2)]
#ISOAREA_WORLD Isophotal area above Analysis threshold [deg**2]
#ISOAREAF_WORLD Isophotal area (filtered) above Detection threshold [deg**2]
#ISO0 Isophotal area at level 0 [pixel**2]
#ISO1 Isophotal area at level 1 [pixel**2]
#ISO2 Isophotal area at level 2 [pixel**2]
#ISO3 Isophotal area at level 3 [pixel**2]
#ISO4 Isophotal area at level 4 [pixel**2]
#ISO5 Isophotal area at level 5 [pixel**2]
#ISO6 Isophotal area at level 6 [pixel**2]
#ISO7 Isophotal area at level 7 [pixel**2]
#FLAGS Extraction flags
#FLAGS_WEIGHT Weighted extraction flags
#FLAGS_WIN Flags for WINdowed parameters
#IMAFLAGS_ISO FLAG-image flags OR'ed over the iso. profile
#NIMAFLAGS_ISO Number of flagged pixels entering IMAFLAGS_ISO
#FWHM_IMAGE FWHM assuming a gaussian core [pixel]
#FWHM_WORLD FWHM assuming a gaussian core [deg]
#ELONGATION A_IMAGE/B_IMAGE
#ELLIPTICITY 1 - B_IMAGE/A_IMAGE
#POLAR_IMAGE (A_IMAGE^2 - B_IMAGE^2)/(A_IMAGE^2 + B_IMAGE^2)
#POLAR_WORLD (A_WORLD^2 - B_WORLD^2)/(A_WORLD^2 + B_WORLD^2)
#POLARWIN_IMAGE (AWIN^2 - BWIN^2)/(AWIN^2 + BWIN^2)
#POLARWIN_WORLD (AWIN^2 - BWIN^2)/(AWIN^2 + BWIN^2)
#CLASS_STAR S/G classifier output
#VIGNET Pixel data around detection [count]
#VIGNET_SHIFT Pixel data around detection, corrected for shift [count]
#VECTOR_ASSOC ASSOCiated parameter vector
#NUMBER_ASSOC Number of ASSOCiated IDs
#THRESHOLDMAX Maximum threshold possible for detection [count]
#FLUX_GROWTH Cumulated growth-curve [count]
#FLUX_GROWTHSTEP Step for growth-curves [pixel]
#MAG_GROWTH Cumulated magnitude growth-curve [mag]
#MAG_GROWTHSTEP Step for growth-curves [pixel]
#FLUX_RADIUS Fraction-of-light radii [pixel]
#XPSF_IMAGE X coordinate from PSF-fitting [pixel]
#YPSF_IMAGE Y coordinate from PSF-fitting [pixel]
#XPSF_WORLD PSF position along world x axis [deg]
#YPSF_WORLD PSF position along world y axis [deg]
#ALPHAPSF_SKY Right ascension of the fitted PSF (native) [deg]
#DELTAPSF_SKY Declination of the fitted PSF (native) [deg]
#ALPHAPSF_J2000 Right ascension of the fitted PSF (J2000) [deg]
#DELTAPSF_J2000 Declination of the fitted PSF (J2000) [deg]
#ALPHAPSF_B1950 Right ascension of the fitted PSF (B1950) [deg]
#DELTAPSF_B1950 Declination of the fitted PSF (B1950) [deg]
#FLUX_PSF Flux from PSF-fitting [count]
#FLUXERR_PSF RMS flux error for PSF-fitting [count]
#MAG_PSF Magnitude from PSF-fitting [mag]
#MAGERR_PSF RMS magnitude error from PSF-fitting [mag]
#NITER_PSF Number of iterations for PSF-fitting
#CHI2_PSF Reduced chi2 from PSF-fitting
#ERRX2PSF_IMAGE Variance of PSF position along x [pixel**2]
#ERRY2PSF_IMAGE Variance of PSF position along y [pixel**2]
#ERRXYPSF_IMAGE Covariance of PSF position between x and y [pixel**2]
#ERRX2PSF_WORLD Variance of PSF position along X-WORLD (alpha) [deg**2]
#ERRY2PSF_WORLD Variance of PSF position along Y-WORLD (delta) [deg**2]
#ERRXYPSF_WORLD Covariance of PSF position X-WORLD/Y-WORLD [deg**2]
#ERRCXXPSF_IMAGE Cxx PSF error ellipse parameter [pixel**(-2)]
#ERRCYYPSF_IMAGE Cyy PSF error ellipse parameter [pixel**(-2)]
#ERRCXYPSF_IMAGE Cxy PSF error ellipse parameter [pixel**(-2)]
#ERRCXXPSF_WORLD Cxx PSF error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCYYPSF_WORLD Cyy PSF error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCXYPSF_WORLD Cxy PSF error ellipse parameter (WORLD units) [deg**(-2)]
#ERRAPSF_IMAGE PSF RMS position error along major axis [pixel]
#ERRBPSF_IMAGE PSF RMS position error along minor axis [pixel]
#ERRTHTPSF_IMAGE PSF error ellipse position angle (CCW/x) [deg]
#ERRAPSF_WORLD World PSF RMS position error along major axis [pixel]
#ERRBPSF_WORLD World PSF RMS position error along minor axis [pixel]
#ERRTHTPSF_WORLD PSF error ellipse pos. angle (CCW/world-x) [deg]
#ERRTHTPSF_SKY Native PSF error ellipse pos. angle (east of north) [deg]
#ERRTHTPSF_J2000 J2000 PSF error ellipse pos. angle (east of north) [deg]
#ERRTHTPSF_B1950 B1950 PSF error ellipse pos. angle (east of north) [deg]
#VECTOR_MODEL Model-fitting coefficients
#VECTOR_MODELERR Model-fitting coefficient uncertainties
#CHI2_MODEL Reduced Chi2 of the fit
#FLAGS_MODEL Model-fitting flags
#NITER_MODEL Number of iterations for model-fitting
#FLUX_MODEL Flux from model-fitting [count]
#FLUXERR_MODEL RMS error on model-fitting flux [count]
#MAG_MODEL Magnitude from model-fitting [mag]
#MAGERR_MODEL RMS error on model-fitting magnitude [mag]
#XMODEL_IMAGE X coordinate from model-fitting [pixel]
#YMODEL_IMAGE Y coordinate from model-fitting [pixel]
#XMODEL_WORLD Fitted position along world x axis [deg]
#YMODEL_WORLD Fitted position along world y axis [deg]
#ALPHAMODEL_SKY Fitted position along right ascension (native) [deg]
#DELTAMODEL_SKY Fitted position along declination (native) [deg]
#ALPHAMODEL_J2000 Fitted position along right ascension (J2000) [deg]
#DELTAMODEL_J2000 Fitted position along declination (J2000) [deg]
#ALPHAMODEL_B1950 Fitted position along right ascension (B1950) [deg]
#DELTAMODEL_B1950 Fitted position along declination (B1950) [deg]
#ERRX2MODEL_IMAGE Variance of fitted position along x [pixel**2]
#ERRY2MODEL_IMAGE Variance of fitted position along y [pixel**2]
#ERRXYMODEL_IMAGE Covariance of fitted position between x and y [pixel**2]
#ERRX2MODEL_WORLD Variance of fitted position along X-WORLD (alpha) [deg**2]
#ERRY2MODEL_WORLD Variance of fitted position along Y-WORLD (delta) [deg**2]
#ERRXYMODEL_WORLD Covariance of fitted position X-WORLD/Y-WORLD [deg**2]
#ERRCXXMODEL_IMAGE Cxx error ellipse parameter of fitted position [pixel**(-2)]
#ERRCYYMODEL_IMAGE Cyy error ellipse parameter of fitted position [pixel**(-2)]
#ERRCXYMODEL_IMAGE Cxy error ellipse parameter of fitted position [pixel**(-2)]
#ERRCXXMODEL_WORLD Cxx fitted error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCYYMODEL_WORLD Cyy fitted error ellipse parameter (WORLD units) [deg**(-2)]
#ERRCXYMODEL_WORLD Cxy fitted error ellipse parameter (WORLD units) [deg**(-2)]
#ERRAMODEL_IMAGE RMS error of fitted position along major axis [pixel]
#ERRBMODEL_IMAGE RMS error of fitted position along minor axis [pixel]
#ERRTHETAMODEL_IMAGE Error ellipse pos.angle of fitted position (CCW/x) [deg]
#ERRAMODEL_WORLD World RMS error of fitted position along major axis [deg]
#ERRBMODEL_WORLD World RMS error of fitted position along minor axis [deg]
#ERRTHETAMODEL_WORLD Error ellipse pos.angle of fitted position (CCW/world-x) [deg]
#ERRTHETAMODEL_SKY Native fitted error ellipse pos. angle (east of north) [deg]
#ERRTHETAMODEL_J2000 J2000 fitted error ellipse pos. angle (east of north) [deg]
#ERRTHETAMODEL_B1950 B1950 fitted error ellipse pos. angle (east of north) [deg]
#X2MODEL_IMAGE Variance along x from model-fitting [pixel**2]
#Y2MODEL_IMAGE Variance along y from model-fitting [pixel**2]
#XYMODEL_IMAGE Covariance between x and y from model-fitting [pixel**2]
#E1MODEL_IMAGE Ellipticity component from model-fitting
#E2MODEL_IMAGE Ellipticity component from model-fitting
#EPS1MODEL_IMAGE Ellipticity component (quadratic) from model-fitting
#EPS2MODEL_IMAGE Ellipticity component (quadratic) from model-fitting
#CONCENTRATION_MODEL Concentration parameter from model-fitting
#CLASS_STAR_MODEL S/G classifier from model-fitting
#FLUX_BACKOFFSET Background offset from fitting [count]
#FLUXERR_BACKOFFSET RMS error on fitted background offset [count]
#FLUX_SPHEROID Spheroid total flux from fitting [count]
#FLUXERR_SPHEROID RMS error on fitted spheroid total flux [count]
#MAG_SPHEROID Spheroid total magnitude from fitting [mag]
#MAGERR_SPHEROID RMS error on fitted spheroid total magnitude [mag]
#SPHEROID_REFF_IMAGE Spheroid effective radius from fitting [pixel]
#SPHEROID_REFFERR_IMAGE RMS error on fitted spheroid effective radius [pixel]
#SPHEROID_REFF_WORLD Spheroid effective radius from fitting [deg]
#SPHEROID_REFFERR_WORLD RMS error on fitted spheroid effective radius [deg]
#SPHEROID_ASPECT_IMAGE Spheroid aspect ratio from fitting
#SPHEROID_ASPECTERR_IMA RMS error on fitted spheroid aspect ratio
#SPHEROID_ASPECT_WORLD Spheroid aspect ratio from fitting
#SPHEROID_ASPECTERR_WOR RMS error on fitted spheroid aspect ratio
#SPHEROID_THETA_IMAGE Spheroid position angle (CCW/x) from fitting [deg]
#SPHEROID_THETAERR_IMAG RMS error on spheroid position angle [deg]
#SPHEROID_THETA_WORLD Spheroid position angle (CCW/world-x) [deg]
#SPHEROID_THETAERR_WORL RMS error on spheroid position angle [deg]
#SPHEROID_THETA_SKY Spheroid position angle (east of north, native) [deg]
#SPHEROID_THETA_J2000 Spheroid position angle (east of north, J2000) [deg]
#SPHEROID_THETA_B1950 Spheroid position angle (east of north, B1950) [deg]
#SPHEROID_SERSICN Spheroid Sersic index from fitting
#SPHEROID_SERSICNERR RMS error on fitted spheroid Sersic index
#FLUX_DISK Disk total flux from fitting [count]
#FLUXERR_DISK RMS error on fitted disk total flux [count]
#MAG_DISK Disk total magnitude from fitting [mag]
#MAGERR_DISK RMS error on fitted disk total magnitude [mag]
#DISK_SCALE_IMAGE Disk scalelength from fitting [pixel]
#DISK_SCALEERR_IMAGE RMS error on fitted disk scalelength [pixel]
#DISK_SCALE_WORLD Disk scalelength from fitting (world coords) [deg]
#DISK_SCALEERR_WORLD RMS error on fitted disk scalelength (world coords) [deg]
#DISK_ASPECT_IMAGE Disk aspect ratio from fitting
#DISK_ASPECTERR_IMAGE RMS error on fitted disk aspect ratio
#DISK_ASPECT_WORLD Disk aspect ratio from fitting
#DISK_ASPECTERR_WORLD RMS error on disk aspect ratio
#DISK_INCLINATION Disk inclination from fitting [deg]
#DISK_INCLINATIONERR RMS error on disk inclination from fitting [deg]
#DISK_THETA_IMAGE Disk position angle (CCW/x) from fitting [deg]
#DISK_THETAERR_IMAGE RMS error on fitted disk position angle [deg]
#DISK_THETA_WORLD Disk position angle (CCW/world-x) [deg]
#DISK_THETAERR_WORLD RMS error on disk position angle [deg]
#DISK_THETA_SKY Disk position angle (east of north, native) [deg]
#DISK_THETA_J2000 Disk position angle (east of north, J2000) [deg]
#DISK_THETA_B1950 Disk position angle (east of north, B1950) [deg]
#DISK_PATTERN_VECTOR Disk pattern fitted coefficients
#DISK_PATTERNMOD_VECTOR Disk pattern fitted moduli
#DISK_PATTERNARG_VECTOR Disk pattern fitted arguments [deg]
#DISK_PATTERN_SPIRAL Disk pattern spiral index
"""
# We turn this text block into a list of the parameter names:
fullparamlist = list(map(lambda s: s[1:-1], re.compile(r"#\w*\s").findall(fullparamtxt)))
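# For illustration: the regex grabs each "#NAME " prefix and strips the leading "#"
# and trailing whitespace, so the list starts ['NUMBER', 'EXT_NUMBER', 'FLUX_ISO', ...].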
| megalut/sewpy | sewpy/sewpy.py | Python | gpl-3.0 | 74,870 | ["Gaussian"] | 7b1319c2da7e6795741e0b5618d43aafac72ff18938face2727fc3dc5dfdb274 |
import urllib
from scraper import Scraper
class missile(Scraper):
"""Handle usage of interplanetary missiles"""
def launch_missile(self, coordinates, missiles_count=10):
"launch the number of missiles to the coordinates "
url = self.url_provider.get_page_url("missile")
galaxy, system, position = coordinates.split(':')
data = urllib.urlencode({'galaxy': galaxy, 'system': system, 'position': position, 'planetType': 1})
self.open_url(url, data)
self.browser.select_form(name='rocketForm')
self.browser['anz'] = str(missiles_count) # form control values are expected as strings
self.submit_request()
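# Hypothetical usage sketch (the scraper construction and the "galaxy:system:position"
# coordinate format are assumptions based on this file, not the wider project):
# bot = missile(...) # however the surrounding project builds Scraper instances
# bot.launch_missile("1:234:8", missiles_count=5) # galaxy 1, system 234, position 8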
| winiciuscota/OG-Bot | ogbot/scraping/missile_attack.py | Python | mit | 683 | ["Galaxy"] | 41081cbd2ce8c013db601d890ff9c4bd61ed7ffb1b2eefa7e48fdd187247aa2e |
from sympy import (meijerg, I, S, integrate, Integral, oo, gamma,
hyperexpand, exp, simplify, sqrt, pi, erf, sin, cos,
exp_polar, polar_lift, polygamma, hyper, log, expand_func)
from sympy.integrals.meijerint import (_rewrite_single, _rewrite1,
meijerint_indefinite, _inflate_g, _create_lookup_table,
meijerint_definite, meijerint_inversion)
from sympy.utilities import default_sort_key
from sympy.utilities.randtest import (verify_numerically,
random_complex_number as randcplx)
from sympy.abc import x, y, a, b, c, d, s, t, z
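# A minimal sanity-check example of the objects under test (illustrative, not part
# of the original suite): the simplest Meijer G-function reduces to an exponential,
#     hyperexpand(meijerg([], [], [0], [], x)) == exp(-x)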
def test_rewrite_single():
def t(expr, c, m):
e = _rewrite_single(meijerg([a], [b], [c], [d], expr), x)
assert e is not None
assert isinstance(e[0][0][2], meijerg)
assert e[0][0][2].argument.as_coeff_mul(x) == (c, (m,))
def tn(expr):
assert _rewrite_single(meijerg([a], [b], [c], [d], expr), x) is None
t(x, 1, x)
t(x**2, 1, x**2)
t(x**2 + y*x**2, y + 1, x**2)
tn(x**2 + x)
tn(x**y)
def u(expr, x):
from sympy import Add, exp, exp_polar
r = _rewrite_single(expr, x)
e = Add(*[res[0]*res[2] for res in r[0]]).replace(
exp_polar, exp) # XXX Hack?
assert verify_numerically(e, expr, x)
u(exp(-x)*sin(x), x)
# The following has stopped working because hyperexpand changed slightly.
# It is probably not worth fixing
#u(exp(-x)*sin(x)*cos(x), x)
# This one cannot be done numerically, since it comes out as a g-function
# of argument 4*pi
# NOTE This also tests a bug in inverse mellin transform (which used to
# turn exp(4*pi*I*t) into a factor of exp(4*pi*I)**t instead of
# exp_polar).
#u(exp(x)*sin(x), x)
assert _rewrite_single(exp(x)*sin(x), x) == \
([(-sqrt(2)/(2*sqrt(pi)), 0,
meijerg(((-S(1)/2, 0, S(1)/4, S(1)/2, S(3)/4), (1,)),
((), (-S(1)/2, 0)), 64*exp_polar(-4*I*pi)/x**4))], True)
def test_rewrite1():
assert _rewrite1(x**3*meijerg([a], [b], [c], [d], x**2 + y*x**2)*5, x) == \
(5, x**3, [(1, 0, meijerg([a], [b], [c], [d], x**2*(y + 1)))], True)
def test_meijerint_indefinite_numerically():
def t(fac, arg):
g = meijerg([a], [b], [c], [d], arg)*fac
subs = {a: randcplx()/10, b: randcplx()/10 + I,
c: randcplx(), d: randcplx()}
integral = meijerint_indefinite(g, x)
assert integral is not None
assert verify_numerically(g.subs(subs), integral.diff(x).subs(subs), x)
t(1, x)
t(2, x)
t(1, 2*x)
t(1, x**2)
t(5, x**S('3/2'))
t(x**3, x)
t(3*x**S('3/2'), 4*x**S('7/3'))
def test_meijerint_definite():
v, b = meijerint_definite(x, x, 0, 0)
assert v.is_zero and b is True
v, b = meijerint_definite(x, x, oo, oo)
assert v.is_zero and b is True
def test_inflate():
subs = {a: randcplx()/10, b: randcplx()/10 + I, c: randcplx(),
d: randcplx(), y: randcplx()/10}
def t(a, b, arg, n):
from sympy import Mul
m1 = meijerg(a, b, arg)
m2 = Mul(*_inflate_g(m1, n))
# NOTE: (the random number)**9 must still be on the principal sheet.
# Thus make b&d small to create random numbers of small imaginary part.
return verify_numerically(m1.subs(subs), m2.subs(subs), x, b=0.1, d=-0.1)
assert t([[a], [b]], [[c], [d]], x, 3)
assert t([[a, y], [b]], [[c], [d]], x, 3)
assert t([[a], [b]], [[c, y], [d]], 2*x**3, 3)
def test_recursive():
from sympy import symbols, exp_polar, expand
a, b, c = symbols('a b c', positive=True)
r = exp(-(x - a)**2)*exp(-(x - b)**2)
e = integrate(r, (x, 0, oo), meijerg=True)
assert simplify(e.expand()) == (
sqrt(2)*sqrt(pi)*(
(erf(sqrt(2)*(a + b)/2) + 1)*exp(-a**2/2 + a*b - b**2/2))/4)
e = integrate(exp(-(x - a)**2)*exp(-(x - b)**2)*exp(c*x), (x, 0, oo), meijerg=True)
assert simplify(e) == (
sqrt(2)*sqrt(pi)*(erf(sqrt(2)*(2*a + 2*b + c)/4) + 1)*exp(-a**2 - b**2
+ (2*a + 2*b + c)**2/8)/4)
assert simplify(integrate(exp(-(x - a - b - c)**2), (x, 0, oo), meijerg=True)) == \
sqrt(pi)/2*(1 + erf(a + b + c))
assert simplify(integrate(exp(-(x + a + b + c)**2), (x, 0, oo), meijerg=True)) == \
sqrt(pi)/2*(1 - erf(a + b + c))
def test_meijerint():
from sympy import symbols, expand, arg
s, t, mu = symbols('s t mu', real=True)
assert integrate(meijerg([], [], [0], [], s*t)
*meijerg([], [], [mu/2], [-mu/2], t**2/4),
(t, 0, oo)).is_Piecewise
s = symbols('s', positive=True)
assert integrate(x**s*meijerg([[], []], [[0], []], x), (x, 0, oo)) == \
gamma(s + 1)
assert integrate(x**s*meijerg([[], []], [[0], []], x), (x, 0, oo),
meijerg=True) == gamma(s + 1)
assert isinstance(integrate(x**s*meijerg([[], []], [[0], []], x),
(x, 0, oo), meijerg=False),
Integral)
assert meijerint_indefinite(exp(x), x) == exp(x)
# TODO what simplifications should be done automatically?
# This tests "extra case" for antecedents_1.
a, b = symbols('a b', positive=True)
assert simplify(meijerint_definite(x**a, x, 0, b)[0]) == \
b**(a + 1)/(a + 1)
# This tests various conditions and expansions:
assert meijerint_definite((x + 1)**3*exp(-x), x, 0, oo) == (16, True)
# Again, how about simplifications?
sigma, mu = symbols('sigma mu', positive=True)
i, c = meijerint_definite(exp(-((x - mu)/(2*sigma))**2), x, 0, oo)
assert simplify(i) == sqrt(pi)*sigma*(erf(mu/(2*sigma)) + 1)
assert c == True
i, _ = meijerint_definite(exp(-mu*x)*exp(sigma*x), x, 0, oo)
# TODO it would be nice to test the condition
assert simplify(i) == 1/(mu - sigma)
# Test substitutions to change limits
assert meijerint_definite(exp(x), x, -oo, 2) == (exp(2), True)
assert expand(meijerint_definite(exp(x), x, 0, I)[0]) == exp(I) - 1
assert expand(meijerint_definite(exp(-x), x, 0, x)[0]) == \
1 - exp(-exp(I*arg(x))*abs(x))
# Test -oo to oo
assert meijerint_definite(exp(-x**2), x, -oo, oo) == (sqrt(pi), True)
assert meijerint_definite(exp(-abs(x)), x, -oo, oo) == (2, True)
assert meijerint_definite(exp(-(2*x - 3)**2), x, -oo, oo) == \
(sqrt(pi)/2, True)
assert meijerint_definite(exp(-abs(2*x - 3)), x, -oo, oo) == (1, True)
assert meijerint_definite(exp(-((x - mu)/sigma)**2/2)/sqrt(2*pi*sigma**2),
x, -oo, oo) == (1, True)
# Test one of the extra conditions for 2 g-functions
assert meijerint_definite(exp(-x)*sin(x), x, 0, oo) == (S(1)/2, True)
# Test a bug
def res(n):
return (1/(1 + x**2)).diff(x, n).subs(x, 1)*(-1)**n
for n in range(6):
assert integrate(exp(-x)*sin(x)*x**n, (x, 0, oo), meijerg=True) == \
res(n)
# This used to test trigexpand... now it is done by linear substitution
assert simplify(integrate(exp(-x)*sin(x + a), (x, 0, oo), meijerg=True)
) == sqrt(2)*sin(a + pi/4)/2
# Test the condition 14 from prudnikov.
# (This is besselj*besselj in disguise, to stop the product from being
# recognised in the tables.)
a, b, s = symbols('a b s')
from sympy import And, re
assert meijerint_definite(meijerg([], [], [a/2], [-a/2], x/4)
*meijerg([], [], [b/2], [-b/2], x/4)*x**(s - 1), x, 0, oo) == \
(4*2**(2*s - 2)*gamma(-2*s + 1)*gamma(a/2 + b/2 + s)
/(gamma(-a/2 + b/2 - s + 1)*gamma(a/2 - b/2 - s + 1)
*gamma(a/2 + b/2 - s + 1)),
And(0 < -2*re(4*s) + 8, 0 < re(a/2 + b/2 + s), re(2*s) < 1))
# test a bug
assert integrate(sin(x**a)*sin(x**b), (x, 0, oo), meijerg=True) == \
Integral(sin(x**a)*sin(x**b), (x, 0, oo))
# test better hyperexpand
assert integrate(exp(-x**2)*log(x), (x, 0, oo), meijerg=True) == \
(sqrt(pi)*polygamma(0, S(1)/2)/4).expand()
# Test hyperexpand bug.
from sympy import lowergamma
n = symbols('n', integer=True)
assert simplify(integrate(exp(-x)*x**n, x, meijerg=True)) == \
lowergamma(n + 1, x)
# Test a bug with argument 1/x
alpha = symbols('alpha', positive=True)
assert meijerint_definite((2 - x)**alpha*sin(alpha/x), x, 0, 2) == \
(sqrt(pi)*alpha*gamma(alpha + 1)*meijerg(((), (alpha/2 + S(1)/2,
alpha/2 + 1)), ((0, 0, S(1)/2), (-S(1)/2,)), alpha**S(2)/16)/4, True)
# test a bug related to 3016
a, s = symbols('a s', positive=True)
assert simplify(integrate(x**s*exp(-a*x**2), (x, -oo, oo))) == \
a**(-s/2 - S(1)/2)*((-1)**s + 1)*gamma(s/2 + S(1)/2)/2
def test_bessel():
from sympy import (besselj, Heaviside, besseli, polar_lift, exp_polar,
powdenest)
assert simplify(integrate(besselj(a, z)*besselj(b, z)/z, (z, 0, oo),
meijerg=True, conds='none')) == \
2*sin(pi*(a/2 - b/2))/(pi*(a - b)*(a + b))
assert simplify(integrate(besselj(a, z)*besselj(a, z)/z, (z, 0, oo),
meijerg=True, conds='none')) == 1/(2*a)
# TODO more orthogonality integrals
assert simplify(integrate(sin(z*x)*(x**2 - 1)**(-(y + S(1)/2)),
(x, 1, oo), meijerg=True, conds='none')
*2/((z/2)**y*sqrt(pi)*gamma(S(1)/2 - y))) == \
besselj(y, z)
# Werner Rosenheinrich
# SOME INDEFINITE INTEGRALS OF BESSEL FUNCTIONS
assert integrate(x*besselj(0, x), x, meijerg=True) == x*besselj(1, x)
assert integrate(x*besseli(0, x), x, meijerg=True) == x*besseli(1, x)
# TODO can do higher powers, but come out as high order ... should they be
# reduced to order 0, 1?
assert integrate(besselj(1, x), x, meijerg=True) == -besselj(0, x)
assert integrate(besselj(1, x)**2/x, x, meijerg=True) == \
-(besselj(0, x)**2 + besselj(1, x)**2)/2
# TODO more besseli when tables are extended or recursive mellin works
assert integrate(besselj(0, x)**2/x**2, x, meijerg=True) == \
-2*x*besselj(0, x)**2 - 2*x*besselj(1, x)**2 \
+ 2*besselj(0, x)*besselj(1, x) - besselj(0, x)**2/x
assert integrate(besselj(0, x)*besselj(1, x), x, meijerg=True) == \
-besselj(0, x)**2/2
assert integrate(x**2*besselj(0, x)*besselj(1, x), x, meijerg=True) == \
x**2*besselj(1, x)**2/2
assert integrate(besselj(0, x)*besselj(1, x)/x, x, meijerg=True) == \
(x*besselj(0, x)**2 + x*besselj(1, x)**2 -
besselj(0, x)*besselj(1, x))
# TODO how does besselj(0, a*x)*besselj(0, b*x) work?
# TODO how does besselj(0, x)**2*besselj(1, x)**2 work?
# TODO sin(x)*besselj(0, x) etc come out a mess
# TODO can x*log(x)*besselj(0, x) be done?
# TODO how does besselj(1, x)*besselj(0, x+a) work?
# TODO more indefinite integrals when struve functions etc are implemented
# test a substitution
assert integrate(besselj(1, x**2)*x, x, meijerg=True) == \
-besselj(0, x**2)/2
def test_inversion():
from sympy import piecewise_fold, besselj, sqrt, I, sin, cos, Heaviside
def inv(f):
return piecewise_fold(meijerint_inversion(f, s, t))
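# inv(f) is the inverse Laplace transform of f(s), computed via Meijer
# G-functions; e.g. the first assertion below recovers sin(t)*Heaviside(t)
# from its transform 1/(s**2 + 1).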
assert inv(1/(s**2 + 1)) == sin(t)*Heaviside(t)
assert inv(s/(s**2 + 1)) == cos(t)*Heaviside(t)
assert inv(exp(-s)/s) == Heaviside(t - 1)
assert inv(1/sqrt(1 + s**2)) == besselj(0, t)*Heaviside(t)
# Test some antecedents checking.
assert meijerint_inversion(sqrt(s)/sqrt(1 + s**2), s, t) is None
assert inv(exp(s**2)) is None
assert meijerint_inversion(exp(-s**2), s, t) is None
def test_lookup_table():
from random import uniform, randrange
from sympy import Add, unpolarify, exp_polar, exp
from sympy.integrals.meijerint import z as z_dummy
table = {}
_create_lookup_table(table)
for _, l in sorted(table.items()):
for formula, terms, cond, hint in sorted(l, key=default_sort_key):
subs = {}
for a in list(formula.free_symbols) + [z_dummy]:
if hasattr(a, 'properties') and a.properties:
# these Wilds match positive integers
subs[a] = randrange(1, 10)
else:
subs[a] = uniform(1.5, 2.0)
if not isinstance(terms, list):
terms = terms(subs)
# First test that hyperexpand can do this.
expanded = [hyperexpand(g) for (_, g) in terms]
assert all(x.is_Piecewise or not x.has(meijerg) for x in expanded)
# Now test that the meijer g-function is indeed as advertised.
expanded = Add(*[f*x for (f, x) in terms])
a, b = formula.n(subs=subs), expanded.n(subs=subs)
r = min(abs(a), abs(b))
if r < 1:
assert abs(a - b).n() <= 1e-10
else:
assert (abs(a - b)/r).n() <= 1e-10
def test_branch_bug():
from sympy import powdenest, lowergamma
# TODO combsimp cannot prove that the factor is unity
assert powdenest(integrate(erf(x**3), x, meijerg=True).diff(x),
polar=True) == 2*erf(x**3)*gamma(S(2)/3)/3/gamma(S(5)/3)
assert integrate(erf(x**3), x, meijerg=True) == \
2*x*erf(x**3)*gamma(S(2)/3)/(3*gamma(S(5)/3)) \
- 2*gamma(S(2)/3)*lowergamma(S(2)/3, x**6)/(3*sqrt(pi)*gamma(S(5)/3))
def test_linear_subs():
from sympy import besselj
assert integrate(sin(x - 1), x, meijerg=True) == -cos(1 - x)
assert integrate(besselj(1, x - 1), x, meijerg=True) == -besselj(0, 1 - x)
def test_probability():
# various integrals from probability theory
from sympy.abc import x, y, z
from sympy import symbols, Symbol, Abs, expand_mul, combsimp, powsimp, sin
mu1, mu2 = symbols('mu1 mu2', real=True, nonzero=True, finite=True)
sigma1, sigma2 = symbols('sigma1 sigma2', real=True, nonzero=True,
finite=True, positive=True)
rate = Symbol('lambda', real=True, positive=True, finite=True)
def normal(x, mu, sigma):
return 1/sqrt(2*pi*sigma**2)*exp(-(x - mu)**2/2/sigma**2)
def exponential(x, rate):
return rate*exp(-rate*x)
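# (For reference: normal(x, mu, sigma) is the Gaussian pdf of N(mu, sigma**2),
# and exponential(x, rate) is the Exp(rate) pdf supported on x >= 0.)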
assert integrate(normal(x, mu1, sigma1), (x, -oo, oo), meijerg=True) == 1
assert integrate(x*normal(x, mu1, sigma1), (x, -oo, oo), meijerg=True) == \
mu1
assert integrate(x**2*normal(x, mu1, sigma1), (x, -oo, oo), meijerg=True) \
== mu1**2 + sigma1**2
assert integrate(x**3*normal(x, mu1, sigma1), (x, -oo, oo), meijerg=True) \
== mu1**3 + 3*mu1*sigma1**2
assert integrate(normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == 1
assert integrate(x*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == mu1
assert integrate(y*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == mu2
assert integrate(x*y*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == mu1*mu2
assert integrate((x + y + 1)*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == 1 + mu1 + mu2
assert integrate((x + y - 1)*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == \
-1 + mu1 + mu2
i = integrate(x**2*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True)
assert not i.has(Abs)
assert simplify(i) == mu1**2 + sigma1**2
assert integrate(y**2*normal(x, mu1, sigma1)*normal(y, mu2, sigma2),
(x, -oo, oo), (y, -oo, oo), meijerg=True) == \
sigma2**2 + mu2**2
assert integrate(exponential(x, rate), (x, 0, oo), meijerg=True) == 1
assert integrate(x*exponential(x, rate), (x, 0, oo), meijerg=True) == \
1/rate
assert integrate(x**2*exponential(x, rate), (x, 0, oo), meijerg=True) == \
2/rate**2
def E(expr):
res1 = integrate(expr*exponential(x, rate)*normal(y, mu1, sigma1),
(x, 0, oo), (y, -oo, oo), meijerg=True)
res2 = integrate(expr*exponential(x, rate)*normal(y, mu1, sigma1),
(y, -oo, oo), (x, 0, oo), meijerg=True)
assert expand_mul(res1) == expand_mul(res2)
return res1
assert E(1) == 1
assert E(x*y) == mu1/rate
assert E(x*y**2) == mu1**2/rate + sigma1**2/rate
ans = sigma1**2 + 1/rate**2
assert simplify(E((x + y + 1)**2) - E(x + y + 1)**2) == ans
assert simplify(E((x + y - 1)**2) - E(x + y - 1)**2) == ans
assert simplify(E((x + y)**2) - E(x + y)**2) == ans
# Beta' distribution
alpha, beta = symbols('alpha beta', positive=True)
betadist = x**(alpha - 1)*(1 + x)**(-alpha - beta)*gamma(alpha + beta) \
/gamma(alpha)/gamma(beta)
assert integrate(betadist, (x, 0, oo), meijerg=True) == 1
i = integrate(x*betadist, (x, 0, oo), meijerg=True, conds='separate')
assert (combsimp(i[0]), i[1]) == (alpha/(beta - 1), 1 < beta)
j = integrate(x**2*betadist, (x, 0, oo), meijerg=True, conds='separate')
assert j[1] == (1 < beta - 1)
assert combsimp(j[0] - i[0]**2) == (alpha + beta - 1)*alpha \
/(beta - 2)/(beta - 1)**2
# Beta distribution
# NOTE: this is evaluated using antiderivatives. It also tests that
# meijerint_indefinite returns the simplest possible answer.
a, b = symbols('a b', positive=True)
betadist = x**(a - 1)*(-x + 1)**(b - 1)*gamma(a + b)/(gamma(a)*gamma(b))
assert simplify(integrate(betadist, (x, 0, 1), meijerg=True)) == 1
assert simplify(integrate(x*betadist, (x, 0, 1), meijerg=True)) == \
a/(a + b)
assert simplify(integrate(x**2*betadist, (x, 0, 1), meijerg=True)) == \
a*(a + 1)/(a + b)/(a + b + 1)
assert simplify(integrate(x**y*betadist, (x, 0, 1), meijerg=True)) == \
gamma(a + b)*gamma(a + y)/gamma(a)/gamma(a + b + y)
# Chi distribution
k = Symbol('k', integer=True, positive=True)
chi = 2**(1 - k/2)*x**(k - 1)*exp(-x**2/2)/gamma(k/2)
assert powsimp(integrate(chi, (x, 0, oo), meijerg=True)) == 1
assert simplify(integrate(x*chi, (x, 0, oo), meijerg=True)) == \
sqrt(2)*gamma((k + 1)/2)/gamma(k/2)
assert simplify(integrate(x**2*chi, (x, 0, oo), meijerg=True)) == k
# Chi^2 distribution
chisquared = 2**(-k/2)/gamma(k/2)*x**(k/2 - 1)*exp(-x/2)
assert powsimp(integrate(chisquared, (x, 0, oo), meijerg=True)) == 1
assert simplify(integrate(x*chisquared, (x, 0, oo), meijerg=True)) == k
assert simplify(integrate(x**2*chisquared, (x, 0, oo), meijerg=True)) == \
k*(k + 2)
assert combsimp(integrate(((x - k)/sqrt(2*k))**3*chisquared, (x, 0, oo),
meijerg=True)) == 2*sqrt(2)/sqrt(k)
# Dagum distribution
a, b, p = symbols('a b p', positive=True)
# XXX (x/b)**a does not work
dagum = a*p/x*(x/b)**(a*p)/(1 + x**a/b**a)**(p + 1)
assert simplify(integrate(dagum, (x, 0, oo), meijerg=True)) == 1
# XXX conditions are a mess
arg = x*dagum
assert simplify(integrate(arg, (x, 0, oo), meijerg=True, conds='none')
) == a*b*gamma(1 - 1/a)*gamma(p + 1 + 1/a)/(
(a*p + 1)*gamma(p))
assert simplify(integrate(x*arg, (x, 0, oo), meijerg=True, conds='none')
) == a*b**2*gamma(1 - 2/a)*gamma(p + 1 + 2/a)/(
(a*p + 2)*gamma(p))
# F-distribution
d1, d2 = symbols('d1 d2', positive=True)
f = sqrt(((d1*x)**d1 * d2**d2)/(d1*x + d2)**(d1 + d2))/x \
/gamma(d1/2)/gamma(d2/2)*gamma((d1 + d2)/2)
assert simplify(integrate(f, (x, 0, oo), meijerg=True)) == 1
# TODO conditions are a mess
assert simplify(integrate(x*f, (x, 0, oo), meijerg=True, conds='none')
) == d2/(d2 - 2)
assert simplify(integrate(x**2*f, (x, 0, oo), meijerg=True, conds='none')
) == d2**2*(d1 + 2)/d1/(d2 - 4)/(d2 - 2)
# TODO gamma, rayleigh
# inverse gaussian
lamda, mu = symbols('lamda mu', positive=True)
dist = sqrt(lamda/2/pi)*x**(-S(3)/2)*exp(-lamda*(x - mu)**2/x/2/mu**2)
mysimp = lambda expr: simplify(expr.rewrite(exp))
assert mysimp(integrate(dist, (x, 0, oo))) == 1
assert mysimp(integrate(x*dist, (x, 0, oo))) == mu
assert mysimp(integrate((x - mu)**2*dist, (x, 0, oo))) == mu**3/lamda
assert mysimp(integrate((x - mu)**3*dist, (x, 0, oo))) == 3*mu**5/lamda**2
# Levi
c = Symbol('c', positive=True)
assert integrate(sqrt(c/2/pi)*exp(-c/2/(x - mu))/(x - mu)**S('3/2'),
(x, mu, oo)) == 1
# higher moments diverge (oo)
# log-logistic
distn = (beta/alpha)*x**(beta - 1)/alpha**(beta - 1)/ \
(1 + x**beta/alpha**beta)**2
assert simplify(integrate(distn, (x, 0, oo))) == 1
# NOTE the conditions are a mess, but correctly state beta > 1
assert simplify(integrate(x*distn, (x, 0, oo), conds='none')) == \
pi*alpha/beta/sin(pi/beta)
# (similar comment for conditions applies)
assert simplify(integrate(x**y*distn, (x, 0, oo), conds='none')) == \
pi*alpha**y*y/beta/sin(pi*y/beta)
# weibull
k = Symbol('k', positive=True)
n = Symbol('n', positive=True)
distn = k/lamda*(x/lamda)**(k - 1)*exp(-(x/lamda)**k)
assert simplify(integrate(distn, (x, 0, oo))) == 1
assert simplify(integrate(x**n*distn, (x, 0, oo))) == \
lamda**n*gamma(1 + n/k)
# rice distribution
from sympy import besseli
nu, sigma = symbols('nu sigma', positive=True)
rice = x/sigma**2*exp(-(x**2 + nu**2)/2/sigma**2)*besseli(0, x*nu/sigma**2)
assert integrate(rice, (x, 0, oo), meijerg=True) == 1
# can someone verify higher moments?
# Laplace distribution
mu = Symbol('mu', real=True)
b = Symbol('b', positive=True)
laplace = exp(-abs(x - mu)/b)/2/b
assert integrate(laplace, (x, -oo, oo), meijerg=True) == 1
assert integrate(x*laplace, (x, -oo, oo), meijerg=True) == mu
assert integrate(x**2*laplace, (x, -oo, oo), meijerg=True) == \
2*b**2 + mu**2
# TODO are there other distributions supported on (-oo, oo) that we can do?
# misc tests
k = Symbol('k', positive=True)
assert combsimp(expand_mul(integrate(log(x)*x**(k - 1)*exp(-x)/gamma(k),
(x, 0, oo)))) == polygamma(0, k)
def test_expint():
""" Test various exponential integrals. """
from sympy import (expint, unpolarify, Symbol, Ci, Si, Shi, Chi,
sin, cos, sinh, cosh, Ei)
assert simplify(unpolarify(integrate(exp(-z*x)/x**y, (x, 1, oo),
meijerg=True, conds='none'
).rewrite(expint).expand(func=True))) == expint(y, z)
assert integrate(exp(-z*x)/x, (x, 1, oo), meijerg=True,
conds='none').rewrite(expint).expand() == \
expint(1, z)
assert integrate(exp(-z*x)/x**2, (x, 1, oo), meijerg=True,
conds='none').rewrite(expint).expand() == \
expint(2, z).rewrite(Ei).rewrite(expint)
assert integrate(exp(-z*x)/x**3, (x, 1, oo), meijerg=True,
conds='none').rewrite(expint).expand() == \
expint(3, z).rewrite(Ei).rewrite(expint).expand()
t = Symbol('t', positive=True)
assert integrate(-cos(x)/x, (x, t, oo), meijerg=True).expand() == Ci(t)
assert integrate(-sin(x)/x, (x, t, oo), meijerg=True).expand() == \
Si(t) - pi/2
assert integrate(sin(x)/x, (x, 0, z), meijerg=True) == Si(z)
assert integrate(sinh(x)/x, (x, 0, z), meijerg=True) == Shi(z)
assert integrate(exp(-x)/x, x, meijerg=True).expand().rewrite(expint) == \
I*pi - expint(1, x)
assert integrate(exp(-x)/x**2, x, meijerg=True).rewrite(expint).expand() \
== expint(1, x) - exp(-x)/x - I*pi
u = Symbol('u', polar=True)
assert integrate(cos(u)/u, u, meijerg=True).expand().as_independent(u)[1] \
== Ci(u)
assert integrate(cosh(u)/u, u, meijerg=True).expand().as_independent(u)[1] \
== Chi(u)
assert integrate(expint(1, x), x, meijerg=True
).rewrite(expint).expand() == x*expint(1, x) - exp(-x)
assert integrate(expint(2, x), x, meijerg=True
).rewrite(expint).expand() == \
-x**2*expint(1, x)/2 + x*exp(-x)/2 - exp(-x)/2
assert simplify(unpolarify(integrate(expint(y, x), x,
meijerg=True).rewrite(expint).expand(func=True))) == \
-expint(y + 1, x)
assert integrate(Si(x), x, meijerg=True) == x*Si(x) + cos(x)
assert integrate(Ci(u), u, meijerg=True).expand() == u*Ci(u) - sin(u)
assert integrate(Shi(x), x, meijerg=True) == x*Shi(x) - cosh(x)
assert integrate(Chi(u), u, meijerg=True).expand() == u*Chi(u) - sinh(u)
assert integrate(Si(x)*exp(-x), (x, 0, oo), meijerg=True) == pi/4
assert integrate(expint(1, x)*sin(x), (x, 0, oo), meijerg=True) == log(2)/2
def test_messy():
from sympy import (laplace_transform, Si, Ci, Shi, Chi, atan, Piecewise,
atanh, acoth, E1, besselj, acosh, asin, Ne, And, re,
fourier_transform, sqrt, Abs)
assert laplace_transform(Si(x), x, s) == ((-atan(s) + pi/2)/s, 0, True)
assert laplace_transform(Shi(x), x, s) == (acoth(s)/s, 1, True)
# where should the logs be simplified?
assert laplace_transform(Chi(x), x, s) == \
((log(s**(-2)) - log((s**2 - 1)/s**2))/(2*s), 1, True)
# TODO maybe simplify the inequalities?
assert laplace_transform(besselj(a, x), x, s)[1:] == \
(0, And(S(0) < re(a/2) + S(1)/2, S(0) < re(a/2) + 1))
# NOTE s < 0 can be done, but argument reduction is not good enough yet
assert fourier_transform(besselj(1, x)/x, x, s, noconds=False) == \
(Piecewise((0, 4*abs(pi**2*s**2) > 1),
(2*sqrt(-4*pi**2*s**2 + 1), True)), s > 0)
# TODO FT(besselj(0,x)) - conditions are messy (but for acceptable reasons)
# - folding could be better
assert integrate(E1(x)*besselj(0, x), (x, 0, oo), meijerg=True) == \
log(1 + sqrt(2))
assert integrate(E1(x)*besselj(1, x), (x, 0, oo), meijerg=True) == \
log(S(1)/2 + sqrt(2)/2)
assert integrate(1/x/sqrt(1 - x**2), x, meijerg=True) == \
Piecewise((-acosh(1/x), 1 < abs(x**(-2))), (I*asin(1/x), True))
def test_issue_6122():
assert integrate(exp(-I*x**2), (x, -oo, oo), meijerg=True) == \
-I*sqrt(pi)*exp(I*pi/4)
def test_issue_6252():
expr = 1/x/(a + b*x)**(S(1)/3)
anti = integrate(expr, x, meijerg=True)
assert not anti.has(hyper)
# XXX the expression is a mess, but actually upon differentiation and
# putting in numerical values seems to work...
def test_issue_6348():
assert integrate(exp(I*x)/(1 + x**2), (x, -oo, oo)).simplify().rewrite(exp) \
== pi*exp(-1)
def test_fresnel():
from sympy import fresnels, fresnelc
assert expand_func(integrate(sin(pi*x**2/2), x)) == fresnels(x)
assert expand_func(integrate(cos(pi*x**2/2), x)) == fresnelc(x)
def test_issue_6860():
assert meijerint_indefinite(x**x**x, x) is None
| beni55/sympy | sympy/integrals/tests/test_meijerint.py | Python | bsd-3-clause | 27,319 | ["Gaussian"] | 73c5e280e3b7d61731279e6648d965fe97585ea094a5d57800901ee67a91f3fc |
# Copyright 2003-2008 by Leighton Pritchard. All rights reserved.
# Revisions copyright 2008-2012 by Peter Cock.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
#
# Contact: Leighton Pritchard, Scottish Crop Research Institute,
# Invergowrie, Dundee, Scotland, DD2 5DA, UK
# L.Pritchard@scri.ac.uk
################################################################################
"""CircularDrawer module for GenomeDiagram."""
# ReportLab imports
from __future__ import print_function
from reportlab.graphics.shapes import Drawing, String, Group, Line, Circle, Polygon
from reportlab.lib import colors
from reportlab.graphics.shapes import ArcPath
from Bio._py3k import range
# GenomeDiagram imports
from ._AbstractDrawer import AbstractDrawer, draw_polygon, intermediate_points
from ._AbstractDrawer import _stroke_and_fill_colors
from ._FeatureSet import FeatureSet
from ._GraphSet import GraphSet
from math import pi, cos, sin
class CircularDrawer(AbstractDrawer):
"""Object for drawing circular diagrams.
o __init__(self, ...) Called on instantiation
o set_page_size(self, pagesize, orientation) Set the page size to the
passed size and orientation
o set_margins(self, x, y, xl, xr, yt, yb) Set the drawable area of the
page
o set_bounds(self, start, end) Set the bounds for the elements to be
drawn
o is_in_bounds(self, value) Returns a boolean for whether the position
is actually to be drawn
o __len__(self) Returns the length of sequence that will be drawn
o draw(self) Place the drawing elements on the diagram
o init_fragments(self) Calculate information
about sequence fragment locations on the drawing
o set_track_heights(self) Calculate information about the offset of
each track from the fragment base
o draw_test_tracks(self) Add lines demarcating each track to the
drawing
o draw_track(self, track) Return the contents of the passed track as
drawing elements
o draw_scale(self, track) Return a scale for the passed track as
drawing elements
o draw_greytrack(self, track) Return a grey background and superposed
label for the passed track as drawing
elements
o draw_feature_set(self, set) Return the features in the passed set as
drawing elements
o draw_feature(self, feature) Return a single feature as drawing
elements
o get_feature_sigil(self, feature, x0, x1, fragment) Return a single
feature as its sigil in drawing elements
o draw_graph_set(self, set) Return the data in a set of graphs as
drawing elements
o draw_line_graph(self, graph) Return the data in a graph as a line
graph in drawing elements
o draw_heat_graph(self, graph) Return the data in a graph as a heat
graph in drawing elements
o draw_bar_graph(self, graph) Return the data in a graph as a bar
graph in drawing elements
o canvas_angle(self, base) Return the angle, and cos and sin of
that angle, subtended by the passed
base position at the diagram center
o draw_arc(self, inner_radius, outer_radius, startangle, endangle,
color) Return a drawable element describing an arc
Attributes:
o tracklines Boolean for whether to draw lines delineating tracks
o pagesize Tuple describing the size of the page in pixels
o x0 Float X co-ord for leftmost point of drawable area
o xlim Float X co-ord for rightmost point of drawable area
o y0 Float Y co-ord for lowest point of drawable area
o ylim Float Y co-ord for topmost point of drawable area
o pagewidth Float pixel width of drawable area
o pageheight Float pixel height of drawable area
o xcenter Float X co-ord of center of drawable area
o ycenter Float Y co-ord of center of drawable area
o start Int, base to start drawing from
o end Int, base to stop drawing at
o length Size of sequence to be drawn
o track_size Float (0->1) the proportion of the track height to
draw in
o drawing Drawing canvas
o drawn_tracks List of ints denoting which tracks are to be drawn
o current_track_level Int denoting which track is currently being
drawn
o track_offsets Dictionary of number of pixels that each track top,
center and bottom is offset from the base of a
fragment, keyed by track
o sweep Float (0->1) the proportion of the circle circumference to
use for the diagram
o cross_track_links List of tuples each with four entries (track A,
feature A, track B, feature B) to be linked.
"""
def __init__(self, parent=None, pagesize='A3', orientation='landscape',
x=0.05, y=0.05, xl=None, xr=None, yt=None, yb=None,
start=None, end=None, tracklines=0, track_size=0.75,
circular=1, circle_core=0.0, cross_track_links=None):
"""Create CircularDrawer object.
o parent Diagram object containing the data that the drawer
draws
o pagesize String describing the ISO size of the image, or a tuple
of pixels
o orientation String describing the required orientation of the
final drawing ('landscape' or 'portrait')
o x Float (0->1) describing the relative size of the X
margins to the page
o y Float (0->1) describing the relative size of the Y
margins to the page
o xl Float (0->1) describing the relative size of the left X
margin to the page (overrides x)
o xl Float (0->1) describing the relative size of the left X
margin to the page (overrides x)
o xr Float (0->1) describing the relative size of the right X
margin to the page (overrides x)
o yt Float (0->1) describing the relative size of the top Y
margin to the page (overrides y)
o yb Float (0->1) describing the relative size of the lower Y
margin to the page (overrides y)
o start Int, the position to begin drawing the diagram at
o end Int, the position to stop drawing the diagram at
o tracklines Boolean flag to show (or not) lines delineating tracks
on the diagram
o track_size The proportion of the available track height that
should be taken up in drawing
o circular Boolean flag to show whether the passed sequence is
circular or not
o circle_core The proportion of the available radius to leave
empty at the center of a circular diagram (0 to 1).
o cross_track_links List of tuples each with four entries (track A,
feature A, track B, feature B) to be linked.
"""
# Use the superclass' instantiation method
AbstractDrawer.__init__(self, parent, pagesize, orientation,
x, y, xl, xr, yt, yb, start, end,
tracklines, cross_track_links)
# Useful measurements on the page
self.track_size = track_size
self.circle_core = circle_core
if not circular: # Determine the proportion of the circumference
self.sweep = 0.9 # around which information will be drawn
else:
self.sweep = 1
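# Example (illustrative numbers): with circular=0 a linear fragment is
# drawn over only 0.9 * 360 = 324 degrees of the circle, leaving a visible
# wedge-shaped gap at the top; with circular=1 the full 360 degrees are used.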
def set_track_heights(self):
"""Initialise track heights.
Since tracks may not be of identical heights, the bottom and top
radii for each track are stored in a dictionary, self.track_radii,
keyed by track number
"""
bot_track = min(min(self.drawn_tracks), 1)
top_track = max(self.drawn_tracks) # The 'highest' track to draw
trackunit_sum = 0 # Total number of 'units' taken up by all tracks
trackunits = {} # Start and end units for each track, keyed by track number
heightholder = 0 # placeholder variable
for track in range(bot_track, top_track + 1): # track numbers to 'draw'
try:
trackheight = self._parent[track].height # Get track height
except Exception: # TODO: ValueError? IndexError?
trackheight = 1
trackunit_sum += trackheight # increment total track unit height
trackunits[track] = (heightholder, heightholder + trackheight)
heightholder += trackheight # move to next height
max_radius = 0.5 * min(self.pagewidth, self.pageheight)
trackunit_height = max_radius * (1 - self.circle_core) / trackunit_sum
track_core = max_radius * self.circle_core
# Calculate top and bottom radii for each track
self.track_radii = {} # The inner, outer and center radii for each track
track_crop = trackunit_height * (1 - self.track_size) / 2. # 'step back' in pixels
for track in trackunits:
top = trackunits[track][1] * trackunit_height - track_crop + track_core
btm = trackunits[track][0] * trackunit_height + track_crop + track_core
ctr = btm + (top - btm) / 2.
self.track_radii[track] = (btm, ctr, top)
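# Worked example (assumed values, for illustration only): with
# max_radius=300, circle_core=0.2 and two tracks of unit height,
# track_core = 60 and trackunit_height = 300 * 0.8 / 2 = 120; with
# track_size=0.75, track_crop = 120 * 0.25 / 2 = 15, so track 1 gets
# radii (btm, ctr, top) = (75, 120, 165) and track 2 gets (195, 240, 285).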
def draw(self):
"""Draw a circular diagram of the stored data."""
# Instantiate the drawing canvas
self.drawing = Drawing(self.pagesize[0], self.pagesize[1])
feature_elements = [] # holds feature elements
feature_labels = [] # holds feature labels
greytrack_bgs = [] # holds track background
greytrack_labels = [] # holds track foreground labels
scale_axes = [] # holds scale axes
scale_labels = [] # holds scale axis labels
# Get tracks to be drawn and set track sizes
self.drawn_tracks = self._parent.get_drawn_levels()
self.set_track_heights()
# Go through each track in the parent (if it is to be drawn) one by
# one and collate the data as drawing elements
for track_level in self._parent.get_drawn_levels():
self.current_track_level = track_level
track = self._parent[track_level]
gbgs, glabels = self.draw_greytrack(track) # Greytracks
greytrack_bgs.append(gbgs)
greytrack_labels.append(glabels)
features, flabels = self.draw_track(track) # Features and graphs
feature_elements.append(features)
feature_labels.append(flabels)
if track.scale:
axes, slabels = self.draw_scale(track) # Scale axes
scale_axes.append(axes)
scale_labels.append(slabels)
feature_cross_links = []
for cross_link_obj in self.cross_track_links:
cross_link_elements = self.draw_cross_link(cross_link_obj)
if cross_link_elements:
feature_cross_links.append(cross_link_elements)
# Groups listed in order of addition to page (from back to front)
# Draw track backgrounds
# Draw feature cross track links
# Draw features and graphs
# Draw scale axes
# Draw scale labels
# Draw feature labels
# Draw track labels
element_groups = [greytrack_bgs, feature_cross_links,
feature_elements,
scale_axes, scale_labels,
feature_labels, greytrack_labels
]
for element_group in element_groups:
for element_list in element_group:
for element in element_list:
self.drawing.add(element)
if self.tracklines:
# Draw test tracks over top of diagram
self.draw_test_tracks()
def draw_track(self, track):
"""Returns list of track elements and list of track labels."""
track_elements = [] # Holds elements for features and graphs
track_labels = [] # Holds labels for features and graphs
# Distribution dictionary for dealing with different set types
set_methods = {FeatureSet: self.draw_feature_set,
GraphSet: self.draw_graph_set
}
for set in track.get_sets(): # Draw the feature or graph sets
elements, labels = set_methods[set.__class__](set)
track_elements += elements
track_labels += labels
return track_elements, track_labels
def draw_feature_set(self, set):
"""Returns list of feature elements and list of labels for them."""
# print 'draw feature set'
feature_elements = [] # Holds diagram elements belonging to the features
label_elements = [] # Holds diagram elements belonging to feature labels
# Collect all the elements for the feature set
for feature in set.get_features():
if self.is_in_bounds(feature.start) or self.is_in_bounds(feature.end):
features, labels = self.draw_feature(feature)
feature_elements += features
label_elements += labels
return feature_elements, label_elements
def draw_feature(self, feature):
"""Returns list of feature elements and list of labels for them."""
feature_elements = [] # Holds drawable elements for a single feature
label_elements = [] # Holds labels for a single feature
if feature.hide: # Don't show feature: return early
return feature_elements, label_elements
start, end = self._current_track_start_end()
# A single feature may be split into subfeatures, so loop over them
for locstart, locend in feature.locations:
if locend < start:
continue
locstart = max(locstart, start)
if end < locstart:
continue
locend = min(locend, end)
# Get sigil for the feature/ each subfeature
feature_sigil, label = self.get_feature_sigil(feature, locstart, locend)
feature_elements.append(feature_sigil)
if label is not None: # If there's a label
label_elements.append(label)
return feature_elements, label_elements
def get_feature_sigil(self, feature, locstart, locend, **kwargs):
"""Returns graphics for feature, and any required label for it.
o feature Feature object
o locstart The start position of the feature
o locend The end position of the feature
"""
# Establish the co-ordinates for the sigil
btm, ctr, top = self.track_radii[self.current_track_level]
startangle, startcos, startsin = self.canvas_angle(locstart)
endangle, endcos, endsin = self.canvas_angle(locend)
midangle, midcos, midsin = self.canvas_angle(float(locend + locstart) / 2)
# Distribution dictionary for various ways of drawing the feature
# Each method takes the inner and outer radii, the start and end angle
# subtended at the diagram center, and the color as arguments
draw_methods = {'BOX': self._draw_sigil_box,
'OCTO': self._draw_sigil_cut_corner_box,
'JAGGY': self._draw_sigil_jaggy,
'ARROW': self._draw_sigil_arrow,
'BIGARROW': self._draw_sigil_big_arrow,
}
# Get sigil for the feature, location dependent on the feature strand
method = draw_methods[feature.sigil]
kwargs['head_length_ratio'] = feature.arrowhead_length
kwargs['shaft_height_ratio'] = feature.arrowshaft_height
# Support for clickable links... needs ReportLab 2.4 or later
# which added support for links in SVG output.
if hasattr(feature, "url"):
kwargs["hrefURL"] = feature.url
kwargs["hrefTitle"] = feature.name
sigil = method(btm, ctr, top, startangle, endangle, feature.strand,
color=feature.color, border=feature.border, **kwargs)
if feature.label: # Feature needs a label
# The spaces are a hack to force a little space between the label
# and the edge of the feature
label = String(0, 0, " %s " % feature.name.strip(),
fontName=feature.label_font,
fontSize=feature.label_size,
fillColor=feature.label_color)
labelgroup = Group(label)
if feature.label_strand:
strand = feature.label_strand
else:
strand = feature.strand
if feature.label_position in ('start', "5'", 'left'):
# Position the label at the feature's start
if strand != -1:
label_angle = startangle + 0.5 * pi # Make text radial
sinval, cosval = startsin, startcos
else:
label_angle = endangle + 0.5 * pi # Make text radial
sinval, cosval = endsin, endcos
elif feature.label_position in ('middle', 'center', 'centre'):
# Position the label at the feature's midpoint
label_angle = midangle + 0.5 * pi # Make text radial
sinval, cosval = midsin, midcos
elif feature.label_position in ('end', "3'", 'right'):
# Position the label at the feature's end
if strand != -1:
label_angle = endangle + 0.5 * pi # Make text radial
sinval, cosval = endsin, endcos
else:
label_angle = startangle + 0.5 * pi # Make text radial
sinval, cosval = startsin, startcos
elif startangle < pi:
# Default to placing the label at the bottom of the feature
# as drawn on the page, meaning the feature end on the right half
label_angle = endangle + 0.5 * pi # Make text radial
sinval, cosval = endsin, endcos
else:
# Default to placing the label at the bottom of the feature,
# which means the feature start when on the left-hand half
label_angle = startangle + 0.5 * pi # Make text radial
sinval, cosval = startsin, startcos
if strand != -1:
# Feature label on top
radius = top
if startangle < pi: # Turn text round
label_angle -= pi
else:
labelgroup.contents[0].textAnchor = 'end'
else:
# Feature label on bottom
radius = btm
if startangle < pi: # Turn text round and anchor end
label_angle -= pi
labelgroup.contents[0].textAnchor = 'end'
x_pos = self.xcenter + radius * sinval
y_pos = self.ycenter + radius * cosval
coslabel = cos(label_angle)
sinlabel = sin(label_angle)
labelgroup.transform = (coslabel, -sinlabel, sinlabel, coslabel,
x_pos, y_pos)
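# Note: ReportLab's Group.transform is the affine tuple (a, b, c, d, e, f)
# mapping (x, y) -> (a*x + c*y + e, b*x + d*y + f), so the tuple above
# rotates the label clockwise by label_angle and then translates it to
# (x_pos, y_pos) on the track radius.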
else:
# No label required
labelgroup = None
# if locstart > locend:
# print locstart, locend, feature.strand, sigil, feature.name
# print locstart, locend, feature.name
return sigil, labelgroup
def draw_cross_link(self, cross_link):
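"""Return drawing elements for a cross-track link (or None if the link falls outside the drawn region or is clipped away)."""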
startA = cross_link.startA
startB = cross_link.startB
endA = cross_link.endA
endB = cross_link.endB
if not self.is_in_bounds(startA) \
and not self.is_in_bounds(endA):
return None
if not self.is_in_bounds(startB) \
and not self.is_in_bounds(endB):
return None
if startA < self.start:
startA = self.start
if startB < self.start:
startB = self.start
if self.end < endA:
endA = self.end
if self.end < endB:
endB = self.end
trackobjA = cross_link._trackA(list(self._parent.tracks.values()))
trackobjB = cross_link._trackB(list(self._parent.tracks.values()))
assert trackobjA is not None
assert trackobjB is not None
if trackobjA == trackobjB:
raise NotImplementedError()
if trackobjA.start is not None:
if endA < trackobjA.start:
return
startA = max(startA, trackobjA.start)
if trackobjA.end is not None:
if trackobjA.end < startA:
return
endA = min(endA, trackobjA.end)
if trackobjB.start is not None:
if endB < trackobjB.start:
return
startB = max(startB, trackobjB.start)
if trackobjB.end is not None:
if trackobjB.end < startB:
return
endB = min(endB, trackobjB.end)
for track_level in self._parent.get_drawn_levels():
track = self._parent[track_level]
if track == trackobjA:
trackA = track_level
if track == trackobjB:
trackB = track_level
if trackA == trackB:
raise NotImplementedError()
startangleA, startcosA, startsinA = self.canvas_angle(startA)
startangleB, startcosB, startsinB = self.canvas_angle(startB)
endangleA, endcosA, endsinA = self.canvas_angle(endA)
endangleB, endcosB, endsinB = self.canvas_angle(endB)
btmA, ctrA, topA = self.track_radii[trackA]
btmB, ctrB, topB = self.track_radii[trackB]
if ctrA < ctrB:
return [self._draw_arc_poly(topA, btmB,
startangleA, endangleA,
startangleB, endangleB,
cross_link.color, cross_link.border, cross_link.flip)]
else:
return [self._draw_arc_poly(btmA, topB,
startangleA, endangleA,
startangleB, endangleB,
cross_link.color, cross_link.border, cross_link.flip)]
def draw_graph_set(self, set):
"""Returns list of graph elements and list of their labels.
o set GraphSet object
"""
# print 'draw graph set'
elements = [] # Holds graph elements
# Distribution dictionary for how to draw the graph
style_methods = {'line': self.draw_line_graph,
'heat': self.draw_heat_graph,
'bar': self.draw_bar_graph
}
for graph in set.get_graphs():
elements += style_methods[graph.style](graph)
return elements, []
def draw_line_graph(self, graph):
"""Returns line graph as list of drawable elements.
o graph GraphData object
"""
line_elements = [] # holds drawable elements
# Get graph data
data_quartiles = graph.quartiles()
minval, maxval = data_quartiles[0], data_quartiles[4]
btm, ctr, top = self.track_radii[self.current_track_level]
trackheight = 0.5 * (top - btm)
datarange = maxval - minval
if datarange == 0:
datarange = trackheight
start, end = self._current_track_start_end()
data = graph[start:end]
if not data:
return []
# midval is the value at which the x-axis is plotted, and is the
# central ring in the track
if graph.center is None:
midval = (maxval + minval) / 2.
else:
midval = graph.center
# Whichever is the greatest difference: max-midval or min-midval, is
# taken to specify the number of pixel units resolved along the
# y-axis
resolution = max((midval - minval), (maxval - midval))
# Start from first data point
pos, val = data[0]
lastangle, lastcos, lastsin = self.canvas_angle(pos)
# Calculate the radial position of the first data point
posheight = trackheight * (val - midval) / resolution + ctr
lastx = self.xcenter + posheight * lastsin # start xy coords
lasty = self.ycenter + posheight * lastcos
for pos, val in data:
posangle, poscos, possin = self.canvas_angle(pos)
posheight = trackheight * (val - midval) / resolution + ctr
x = self.xcenter + posheight * possin # next xy coords
y = self.ycenter + posheight * poscos
line_elements.append(Line(lastx, lasty, x, y,
strokeColor=graph.poscolor,
strokeWidth=graph.linewidth))
lastx, lasty = x, y
return line_elements
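# Example (assumed data): with minval=0, maxval=10 and graph.center=None,
# midval=5 and resolution=5, so a value of 10 is plotted at radius
# ctr + trackheight (the track's outer edge) and a value of 0 at
# ctr - trackheight (the inner edge).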
def draw_bar_graph(self, graph):
"""Returns list of drawable elements for a bar graph.
o graph Graph object
"""
# At each point contained in the graph data, we draw a vertical bar
# from the track center to the height of the datapoint value (positive
# values go up in one color, negative go down in the alternative
# color).
bar_elements = []
# Set the number of pixels per unit for the data
data_quartiles = graph.quartiles()
minval, maxval = data_quartiles[0], data_quartiles[4]
btm, ctr, top = self.track_radii[self.current_track_level]
trackheight = 0.5 * (top - btm)
datarange = maxval - minval
if datarange == 0:
datarange = trackheight
data = graph[self.start:self.end]
# midval is the value at which the x-axis is plotted, and is the
# central ring in the track
if graph.center is None:
midval = (maxval + minval) / 2.
else:
midval = graph.center
# Convert data into 'binned' blocks, covering half the distance to the
# next data point on either side, accounting for the ends of fragments
# and tracks
start, end = self._current_track_start_end()
data = intermediate_points(start, end, graph[start:end])
if not data:
return []
# Whichever is the greatest difference: max-midval or min-midval, is
# taken to specify the number of pixel units resolved along the
# y-axis
resolution = max((midval - minval), (maxval - midval))
if resolution == 0:
resolution = trackheight
# Create elements for the bar graph based on newdata
for pos0, pos1, val in data:
pos0angle, pos0cos, pos0sin = self.canvas_angle(pos0)
pos1angle, pos1cos, pos1sin = self.canvas_angle(pos1)
barval = trackheight * (val - midval) / resolution
if barval >= 0:
barcolor = graph.poscolor
else:
barcolor = graph.negcolor
# Draw bar
bar_elements.append(self._draw_arc(ctr, ctr + barval, pos0angle,
pos1angle, barcolor))
return bar_elements
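# Example (assumed data): with minval=0, maxval=10 and no explicit center,
# midval=5 and resolution=5, so a datapoint of 10 gives barval=+trackheight
# (an arc from the centre ring out to the top radius, in poscolor), while a
# datapoint of 0 gives barval=-trackheight (an arc down to the bottom
# radius, in negcolor).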
def draw_heat_graph(self, graph):
"""Returns list of drawable elements for the heat graph.
o graph Graph object
"""
# At each point contained in the graph data, we draw a box that is the
# full height of the track, extending from the midpoint between the
# previous and current data points to the midpoint between the current
# and next data points
heat_elements = [] # holds drawable elements
# Get graph data
data_quartiles = graph.quartiles()
minval, maxval = data_quartiles[0], data_quartiles[4]
midval = (maxval + minval) / 2. # mid is the value at the X-axis
btm, ctr, top = self.track_radii[self.current_track_level]
trackheight = (top - btm)
start, end = self._current_track_start_end()
data = intermediate_points(start, end, graph[start:end])
# Create elements on the graph, indicating a large positive value by
# the graph's poscolor, and a large negative value by the graph's
# negcolor attributes
for pos0, pos1, val in data:
pos0angle, pos0cos, pos0sin = self.canvas_angle(pos0)
pos1angle, pos1cos, pos1sin = self.canvas_angle(pos1)
# Calculate the heat color, based on the differential between
# the value and the median value
heat = colors.linearlyInterpolatedColor(graph.poscolor,
graph.negcolor,
maxval, minval, val)
# Draw heat box
heat_elements.append(self._draw_arc(btm, top, pos0angle, pos1angle,
heat, border=heat))
return heat_elements
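# Example (assumed data): linearlyInterpolatedColor returns poscolor when
# val == maxval, negcolor when val == minval, and a proportional blend in
# between, so a value halfway between the extremes gives a 50:50 mix.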
def draw_scale(self, track):
"""Returns list of elements in the scale and list of their labels.
o track Track object
"""
scale_elements = [] # holds axes and ticks
scale_labels = [] # holds labels
if not track.scale:
# no scale required, exit early
return [], []
# Get track locations
btm, ctr, top = self.track_radii[self.current_track_level]
trackheight = (top - ctr)
# X-axis
start, end = self._current_track_start_end()
if track.start is not None or track.end is not None:
# Draw an arc, leaving out the wedge
p = ArcPath(strokeColor=track.scale_color, fillColor=None)
startangle, startcos, startsin = self.canvas_angle(start)
endangle, endcos, endsin = self.canvas_angle(end)
p.addArc(self.xcenter, self.ycenter, ctr,
90 - (endangle * 180 / pi),
90 - (startangle * 180 / pi))
scale_elements.append(p)
del p
# Y-axis start marker
x0, y0 = self.xcenter + btm * startsin, self.ycenter + btm * startcos
x1, y1 = self.xcenter + top * startsin, self.ycenter + top * startcos
scale_elements.append(Line(x0, y0, x1, y1, strokeColor=track.scale_color))
# Y-axis end marker
x0, y0 = self.xcenter + btm * endsin, self.ycenter + btm * endcos
x1, y1 = self.xcenter + top * endsin, self.ycenter + top * endcos
scale_elements.append(Line(x0, y0, x1, y1, strokeColor=track.scale_color))
elif self.sweep < 1:
# Draw an arc, leaving out the wedge
p = ArcPath(strokeColor=track.scale_color, fillColor=None)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# in degrees.
p.addArc(self.xcenter, self.ycenter, ctr,
startangledegrees=90 - 360 * self.sweep,
endangledegrees=90)
scale_elements.append(p)
del p
# Y-axis start marker
x0, y0 = self.xcenter, self.ycenter + btm
x1, y1 = self.xcenter, self.ycenter + top
scale_elements.append(Line(x0, y0, x1, y1, strokeColor=track.scale_color))
# Y-axis end marker
alpha = 2 * pi * self.sweep
x0, y0 = self.xcenter + btm * sin(alpha), self.ycenter + btm * cos(alpha)
x1, y1 = self.xcenter + top * sin(alpha), self.ycenter + top * cos(alpha)
scale_elements.append(Line(x0, y0, x1, y1, strokeColor=track.scale_color))
else:
# Draw a full circle
scale_elements.append(Circle(self.xcenter, self.ycenter, ctr,
strokeColor=track.scale_color,
fillColor=None))
start, end = self._current_track_start_end()
if track.scale_ticks: # Ticks are required on the scale
# Draw large ticks
# I want the ticks to be consistently positioned relative to
# the start of the sequence (position 0), not relative to the
# current viewpoint (self.start and self.end)
ticklen = track.scale_largeticks * trackheight
tickiterval = int(track.scale_largetick_interval)
# Note that we could just start the list of ticks using
# range(0, self.end, tickiterval) and then filter out the
# ones before self.start - but this seems wasteful.
# Using tickiterval * (self.start // tickiterval) is a shortcut.
for tickpos in range(tickiterval * (self.start // tickiterval),
int(self.end), tickiterval):
if tickpos <= start or end <= tickpos:
continue
tick, label = self.draw_tick(tickpos, ctr, ticklen,
track,
track.scale_largetick_labels)
scale_elements.append(tick)
if label is not None: # If there's a label, add it
scale_labels.append(label)
# Draw small ticks
ticklen = track.scale_smallticks * trackheight
tickiterval = int(track.scale_smalltick_interval)
for tickpos in range(tickiterval * (self.start // tickiterval),
int(self.end), tickiterval):
if tickpos <= start or end <= tickpos:
continue
tick, label = self.draw_tick(tickpos, ctr, ticklen,
track,
track.scale_smalltick_labels)
scale_elements.append(tick)
if label is not None: # If there's a label, add it
scale_labels.append(label)
# Check to see if the track contains a graph - if it does, get the
# minimum and maximum values, and put them on the scale Y-axis
# at 60 degree intervals, ordering the labels by graph_id
startangle, startcos, startsin = self.canvas_angle(start)
endangle, endcos, endsin = self.canvas_angle(end)
if track.axis_labels:
for set in track.get_sets():
if set.__class__ is GraphSet:
# Y-axis
for n in range(7):
angle = n * 1.0471975511965976 # pi/3 radians, i.e. 60 degree steps
if angle < startangle or endangle < angle:
continue
ticksin, tickcos = sin(angle), cos(angle)
x0, y0 = self.xcenter + btm * ticksin, self.ycenter + btm * tickcos
x1, y1 = self.xcenter + top * ticksin, self.ycenter + top * tickcos
scale_elements.append(Line(x0, y0, x1, y1,
strokeColor=track.scale_color))
graph_label_min = []
graph_label_max = []
graph_label_mid = []
for graph in set.get_graphs():
quartiles = graph.quartiles()
minval, maxval = quartiles[0], quartiles[4]
if graph.center is None:
midval = (maxval + minval) / 2.
graph_label_min.append("%.3f" % minval)
graph_label_max.append("%.3f" % maxval)
graph_label_mid.append("%.3f" % midval)
else:
diff = max((graph.center - minval),
(maxval - graph.center))
minval = graph.center - diff
maxval = graph.center + diff
midval = graph.center
graph_label_mid.append("%.3f" % midval)
graph_label_min.append("%.3f" % minval)
graph_label_max.append("%.3f" % maxval)
xmid, ymid = (x0 + x1) / 2., (y0 + y1) / 2.
for limit, x, y, in [(graph_label_min, x0, y0),
(graph_label_max, x1, y1),
(graph_label_mid, xmid, ymid)]:
label = String(0, 0, ";".join(limit),
fontName=track.scale_font,
fontSize=track.scale_fontsize,
fillColor=track.scale_color)
label.textAnchor = 'middle'
labelgroup = Group(label)
labelgroup.transform = (tickcos, -ticksin,
ticksin, tickcos,
x, y)
scale_labels.append(labelgroup)
return scale_elements, scale_labels
def draw_tick(self, tickpos, ctr, ticklen, track, draw_label):
"""Returns drawing element for a tick on the scale.
o tickpos Int, position of the tick on the sequence
o ctr Float, Y co-ord of the center of the track
o ticklen How long to draw the tick
o track Track, the track the tick is drawn on
o draw_label Boolean, write the tick label?
"""
# Calculate tick co-ordinates
tickangle, tickcos, ticksin = self.canvas_angle(tickpos)
x0, y0 = self.xcenter + ctr * ticksin, self.ycenter + ctr * tickcos
x1, y1 = self.xcenter + (ctr + ticklen) * ticksin, self.ycenter + (ctr + ticklen) * tickcos
# Calculate height of text label so it can be offset on lower half
# of diagram
# LP: not used, as not all fonts have ascent_descent data in reportlab.pdfbase._fontdata
# label_offset = _fontdata.ascent_descent[track.scale_font][0]*\
# track.scale_fontsize/1000.
tick = Line(x0, y0, x1, y1, strokeColor=track.scale_color)
if draw_label:
# Put tick position on as label
if track.scale_format == 'SInt':
if tickpos >= 1000000:
tickstring = str(tickpos // 1000000) + " Mbp"
elif tickpos >= 1000:
tickstring = str(tickpos // 1000) + " Kbp"
else:
tickstring = str(tickpos)
else:
tickstring = str(tickpos)
label = String(0, 0, tickstring, # Make label string
fontName=track.scale_font,
fontSize=track.scale_fontsize,
fillColor=track.scale_color)
if tickangle > pi:
label.textAnchor = 'end'
# LP: This label_offset depends on ascent_descent data, which is not available for all
# fonts, so has been deprecated.
# if 0.5*pi < tickangle < 1.5*pi:
# y1 -= label_offset
labelgroup = Group(label)
labelgroup.transform = (1, 0, 0, 1, x1, y1)
else:
labelgroup = None
return tick, labelgroup
def draw_test_tracks(self):
"""Draw blue test tracks with grene line down their center."""
# Add lines only for drawn tracks
for track in self.drawn_tracks:
btm, ctr, top = self.track_radii[track]
self.drawing.add(Circle(self.xcenter, self.ycenter, top,
strokeColor=colors.blue,
fillColor=None)) # top line
self.drawing.add(Circle(self.xcenter, self.ycenter, ctr,
strokeColor=colors.green,
fillColor=None)) # middle line
self.drawing.add(Circle(self.xcenter, self.ycenter, btm,
strokeColor=colors.blue,
fillColor=None)) # bottom line
def draw_greytrack(self, track):
"""Drawing element for grey background to passed track.
o track Track object
"""
greytrack_bgs = [] # Holds track backgrounds
greytrack_labels = [] # Holds track foreground labels
if not track.greytrack: # No greytrack required, return early
return [], []
# Get track location
btm, ctr, top = self.track_radii[self.current_track_level]
start, end = self._current_track_start_end()
startangle, startcos, startsin = self.canvas_angle(start)
endangle, endcos, endsin = self.canvas_angle(end)
# Make background
if track.start is not None or track.end is not None:
# Draw an arc, leaving out the wedge
p = ArcPath(strokeColor=track.scale_color, fillColor=None)
greytrack_bgs.append(self._draw_arc(btm, top, startangle, endangle,
colors.Color(0.96, 0.96, 0.96)))
elif self.sweep < 1:
# Make a partial circle, a large arc box
# This method assumes the correct center for us.
greytrack_bgs.append(self._draw_arc(btm, top, 0, 2 * pi * self.sweep,
colors.Color(0.96, 0.96, 0.96)))
else:
# Make a full circle (using a VERY thick linewidth)
greytrack_bgs.append(Circle(self.xcenter, self.ycenter, ctr,
strokeColor=colors.Color(0.96, 0.96, 0.96),
fillColor=None, strokeWidth=top - btm))
if track.greytrack_labels:
# Labels are required for this track
labelstep = self.length // track.greytrack_labels # label interval
for pos in range(self.start, self.end, labelstep):
label = String(0, 0, track.name, # Add a new label at
fontName=track.greytrack_font, # each interval
fontSize=track.greytrack_fontsize,
fillColor=track.greytrack_fontcolor)
theta, costheta, sintheta = self.canvas_angle(pos)
if theta < startangle or endangle < theta:
continue
x, y = self.xcenter + btm * sintheta, self.ycenter + btm * costheta # start text halfway up marker
labelgroup = Group(label)
labelangle = self.sweep * 2 * pi * (pos - self.start) / self.length - pi / 2
if theta > pi:
label.textAnchor = 'end' # Anchor end of text to inner radius
labelangle += pi # and reorient it
cosA, sinA = cos(labelangle), sin(labelangle)
labelgroup.transform = (cosA, -sinA, sinA,
cosA, x, y)
if not self.length - x <= labelstep: # Don't overrun the circle
greytrack_labels.append(labelgroup)
return greytrack_bgs, greytrack_labels
def canvas_angle(self, base):
"""Given base-pair position, return (angle, cosine, sin)."""
angle = self.sweep * 2 * pi * (base - self.start) / self.length
return (angle, cos(angle), sin(angle))
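# Example: for a 1000 bp sequence drawn with sweep=1 and start=0,
# canvas_angle(250) returns roughly (pi/2, 0.0, 1.0), i.e. a quarter
# of the way clockwise around the circle from the top.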
def _draw_sigil_box(self, bottom, center, top,
startangle, endangle, strand,
**kwargs):
"""Draw BOX sigil."""
if strand == 1:
inner_radius = center
outer_radius = top
elif strand == -1:
inner_radius = bottom
outer_radius = center
else:
inner_radius = bottom
outer_radius = top
return self._draw_arc(inner_radius, outer_radius, startangle, endangle, **kwargs)
def _draw_arc(self, inner_radius, outer_radius, startangle, endangle,
color, border=None, colour=None, **kwargs):
"""Returns close path describing an arc box.
o inner_radius Float distance of inside of arc from drawing center
o outer_radius Float distance of outside of arc from drawing center
o startangle Float angle subtended by start of arc at drawing center
(in radians)
o endangle Float angle subtended by end of arc at drawing center
(in radians)
o color colors.Color object for arc (overridden by backwards
compatible argument with UK spelling, colour).
Returns a closed path object describing an arced box corresponding to
the passed values. For very small angles, a simple four sided
polygon is used.
"""
# Let the UK spelling (colour) override the USA spelling (color)
if colour is not None:
color = colour
strokecolor, color = _stroke_and_fill_colors(color, border)
if abs(float(endangle - startangle)) > .01:
# Wide arc, must use full curves
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
strokewidth=0)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# but we use clockwise from the vertical. Also reportlab uses
# degrees, but we use radians.
p.addArc(self.xcenter, self.ycenter, inner_radius,
90 - (endangle * 180 / pi), 90 - (startangle * 180 / pi),
moveTo=True)
p.addArc(self.xcenter, self.ycenter, outer_radius,
90 - (endangle * 180 / pi), 90 - (startangle * 180 / pi),
reverse=True)
p.closePath()
return p
else:
# Cheat and just use a four sided polygon.
# Calculate trig values for angle and coordinates
startcos, startsin = cos(startangle), sin(startangle)
endcos, endsin = cos(endangle), sin(endangle)
x0, y0 = self.xcenter, self.ycenter # origin of the circle
x1, y1 = (x0 + inner_radius * startsin, y0 + inner_radius * startcos)
x2, y2 = (x0 + inner_radius * endsin, y0 + inner_radius * endcos)
x3, y3 = (x0 + outer_radius * endsin, y0 + outer_radius * endcos)
x4, y4 = (x0 + outer_radius * startsin, y0 + outer_radius * startcos)
return draw_polygon([(x1, y1), (x2, y2), (x3, y3), (x4, y4)], color, border)
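# The 0.01 radian (~0.57 degree) threshold above means the straight-edged
# polygon is only used where the chord is visually indistinguishable from
# the true arc, so no curvature is noticeably lost.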
def _draw_arc_line(self, path, start_radius, end_radius, start_angle, end_angle,
move=False):
"""Adds a list of points to a path object.
Assumes angles given are in degrees!
Represents what would be a straight line on a linear diagram.
"""
x0, y0 = self.xcenter, self.ycenter # origin of the circle
radius_diff = end_radius - start_radius
angle_diff = end_angle - start_angle
dx = 0.01 # heuristic
a = start_angle * pi / 180
if move:
path.moveTo(x0 + start_radius * cos(a), y0 + start_radius * sin(a))
else:
path.lineTo(x0 + start_radius * cos(a), y0 + start_radius * sin(a))
x = dx
if 0.01 <= abs(dx):
while x < 1:
r = start_radius + x * radius_diff
a = (start_angle + x * (angle_diff)) * pi / 180 # to radians for sin/cos
# print x0+r*cos(a), y0+r*sin(a)
path.lineTo(x0 + r * cos(a), y0 + r * sin(a))
x += dx
a = end_angle * pi / 180
path.lineTo(x0 + end_radius * cos(a), y0 + end_radius * sin(a))
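# With dx = 0.01 the loop above approximates the (generally spiral) path
# between the two radii/angles with about 100 short line segments.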
def _draw_arc_poly(self, inner_radius, outer_radius,
inner_startangle, inner_endangle,
outer_startangle, outer_endangle,
color, border=None, flip=False,
**kwargs):
"""Returns polygon path describing an arc."""
strokecolor, color = _stroke_and_fill_colors(color, border)
x0, y0 = self.xcenter, self.ycenter # origin of the circle
if abs(inner_endangle - outer_startangle) > 0.01 \
or abs(outer_endangle - inner_startangle) > 0.01 \
or abs(inner_startangle - outer_startangle) > 0.01 \
or abs(inner_endangle - outer_endangle) > 0.01:
# Wide arc, must use full curves
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
# default is mitre/miter which can stick out too much:
strokeLineJoin=1, # 1=round
strokewidth=0)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# but we use clockwise from the vertical. Also reportlab uses
# degrees, but we use radians.
i_start = 90 - (inner_startangle * 180 / pi)
i_end = 90 - (inner_endangle * 180 / pi)
o_start = 90 - (outer_startangle * 180 / pi)
o_end = 90 - (outer_endangle * 180 / pi)
p.addArc(x0, y0, inner_radius, i_end, i_start,
moveTo=True, reverse=True)
if flip:
# Flipped, join end to start,
self._draw_arc_line(p, inner_radius, outer_radius, i_end, o_start)
p.addArc(x0, y0, outer_radius, o_end, o_start, reverse=True)
self._draw_arc_line(p, outer_radius, inner_radius, o_end, i_start)
else:
# Not flipped, join start to start, end to end
self._draw_arc_line(p, inner_radius, outer_radius, i_end, o_end)
p.addArc(x0, y0, outer_radius, o_end, o_start,
reverse=False)
self._draw_arc_line(p, outer_radius, inner_radius, o_start, i_start)
p.closePath()
return p
else:
# Cheat and just use a four sided polygon.
# Calculate trig values for angle and coordinates
inner_startcos, inner_startsin = cos(inner_startangle), sin(inner_startangle)
inner_endcos, inner_endsin = cos(inner_endangle), sin(inner_endangle)
outer_startcos, outer_startsin = cos(outer_startangle), sin(outer_startangle)
outer_endcos, outer_endsin = cos(outer_endangle), sin(outer_endangle)
x1, y1 = (x0 + inner_radius * inner_startsin, y0 + inner_radius * inner_startcos)
x2, y2 = (x0 + inner_radius * inner_endsin, y0 + inner_radius * inner_endcos)
x3, y3 = (x0 + outer_radius * outer_endsin, y0 + outer_radius * outer_endcos)
x4, y4 = (x0 + outer_radius * outer_startsin, y0 + outer_radius * outer_startcos)
return draw_polygon([(x1, y1), (x2, y2), (x3, y3), (x4, y4)], color, border,
# default is mitre/miter which can stick out too much:
strokeLineJoin=1, # 1=round
)
def _draw_sigil_cut_corner_box(self, bottom, center, top,
startangle, endangle, strand,
color, border=None, corner=0.5,
**kwargs):
"""Draw OCTO sigil, box with corners cut off."""
if strand == 1:
inner_radius = center
outer_radius = top
elif strand == -1:
inner_radius = bottom
outer_radius = center
else:
inner_radius = bottom
outer_radius = top
strokecolor, color = _stroke_and_fill_colors(color, border)
startangle, endangle = min(startangle, endangle), max(startangle, endangle)
angle = float(endangle - startangle)
middle_radius = 0.5 * (inner_radius + outer_radius)
boxheight = outer_radius - inner_radius
corner_len = min(0.5 * boxheight, 0.5 * boxheight * corner)
shaft_inner_radius = inner_radius + corner_len
shaft_outer_radius = outer_radius - corner_len
cornerangle_delta = max(0.0, min(abs(boxheight) * 0.5 * corner / middle_radius, abs(angle * 0.5)))
if angle < 0:
cornerangle_delta *= -1 # reverse it
# Calculate trig values for angle and coordinates
startcos, startsin = cos(startangle), sin(startangle)
endcos, endsin = cos(endangle), sin(endangle)
x0, y0 = self.xcenter, self.ycenter # origin of the circle
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
strokeLineJoin=1, # 1=round
strokewidth=0,
**kwargs)
# Inner curved edge
p.addArc(self.xcenter, self.ycenter, inner_radius,
90 - ((endangle - cornerangle_delta) * 180 / pi),
90 - ((startangle + cornerangle_delta) * 180 / pi),
moveTo=True)
# Corner edge - straight lines assume a small angle!
# TODO - Use self._draw_arc_line(p, ...) here if we expose corner setting
p.lineTo(x0 + shaft_inner_radius * startsin, y0 + shaft_inner_radius * startcos)
p.lineTo(x0 + shaft_outer_radius * startsin, y0 + shaft_outer_radius * startcos)
# Outer curved edge
p.addArc(self.xcenter, self.ycenter, outer_radius,
90 - ((endangle - cornerangle_delta) * 180 / pi),
90 - ((startangle + cornerangle_delta) * 180 / pi),
reverse=True)
# Corner edges
p.lineTo(x0 + shaft_outer_radius * endsin, y0 + shaft_outer_radius * endcos)
p.lineTo(x0 + shaft_inner_radius * endsin, y0 + shaft_inner_radius * endcos)
p.closePath()
return p
def _draw_sigil_arrow(self, bottom, center, top,
startangle, endangle, strand,
**kwargs):
"""Draw ARROW sigil."""
if strand == 1:
inner_radius = center
outer_radius = top
orientation = "right"
elif strand == -1:
inner_radius = bottom
outer_radius = center
orientation = "left"
else:
inner_radius = bottom
outer_radius = top
orientation = "right" # backwards compatibility
return self._draw_arc_arrow(inner_radius, outer_radius, startangle, endangle,
orientation=orientation, **kwargs)
def _draw_sigil_big_arrow(self, bottom, center, top,
startangle, endangle, strand,
**kwargs):
"""Draw BIGARROW sigil, like ARROW but straddles the axis."""
if strand == -1:
orientation = "left"
else:
orientation = "right"
return self._draw_arc_arrow(bottom, top, startangle, endangle,
orientation=orientation, **kwargs)
def _draw_arc_arrow(self, inner_radius, outer_radius, startangle, endangle,
color, border=None,
shaft_height_ratio=0.4, head_length_ratio=0.5, orientation='right',
colour=None, **kwargs):
"""Draw an arrow along an arc."""
# Let the UK spelling (colour) override the USA spelling (color)
if colour is not None:
color = colour
strokecolor, color = _stroke_and_fill_colors(color, border)
# if orientation == 'right':
# startangle, endangle = min(startangle, endangle), max(startangle, endangle)
# elif orientation == 'left':
# startangle, endangle = max(startangle, endangle), min(startangle, endangle)
# else:
startangle, endangle = min(startangle, endangle), max(startangle, endangle)
if orientation != "left" and orientation != "right":
raise ValueError("Invalid orientation %s, should be 'left' or 'right'"
% repr(orientation))
angle = float(endangle - startangle) # angle subtended by arc
middle_radius = 0.5 * (inner_radius + outer_radius)
boxheight = outer_radius - inner_radius
shaft_height = boxheight * shaft_height_ratio
shaft_inner_radius = middle_radius - 0.5 * shaft_height
shaft_outer_radius = middle_radius + 0.5 * shaft_height
headangle_delta = max(0.0, min(abs(boxheight) * head_length_ratio / middle_radius, abs(angle)))
if angle < 0:
headangle_delta *= -1 # reverse it
if orientation == "right":
headangle = endangle - headangle_delta
else:
headangle = startangle + headangle_delta
if startangle <= endangle:
headangle = max(min(headangle, endangle), startangle)
else:
headangle = max(min(headangle, startangle), endangle)
assert startangle <= headangle <= endangle \
or endangle <= headangle <= startangle, \
(startangle, headangle, endangle, angle)
# Calculate trig values for angle and coordinates
startcos, startsin = cos(startangle), sin(startangle)
headcos, headsin = cos(headangle), sin(headangle)
endcos, endsin = cos(endangle), sin(endangle)
x0, y0 = self.xcenter, self.ycenter # origin of the circle
if 0.5 >= abs(angle) and abs(headangle_delta) >= abs(angle):
# If the angle is small, and the arrow is all head,
# cheat and just use a triangle.
if orientation == "right":
x1, y1 = (x0 + inner_radius * startsin, y0 + inner_radius * startcos)
x2, y2 = (x0 + outer_radius * startsin, y0 + outer_radius * startcos)
x3, y3 = (x0 + middle_radius * endsin, y0 + middle_radius * endcos)
else:
x1, y1 = (x0 + inner_radius * endsin, y0 + inner_radius * endcos)
x2, y2 = (x0 + outer_radius * endsin, y0 + outer_radius * endcos)
x3, y3 = (x0 + middle_radius * startsin, y0 + middle_radius * startcos)
# return draw_polygon([(x1,y1),(x2,y2),(x3,y3)], color, border,
# stroke_line_join=1)
return Polygon([x1, y1, x2, y2, x3, y3],
strokeColor=border or color,
fillColor=color,
strokeLineJoin=1, # 1=round, not mitre!
strokewidth=0)
elif orientation == "right":
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
# default is mitre/miter which can stick out too much:
strokeLineJoin=1, # 1=round
strokewidth=0,
**kwargs)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# but we use clockwise from the vertical. Also reportlab uses
# degrees, but we use radians.
p.addArc(self.xcenter, self.ycenter, shaft_inner_radius,
90 - (headangle * 180 / pi), 90 - (startangle * 180 / pi),
moveTo=True)
p.addArc(self.xcenter, self.ycenter, shaft_outer_radius,
90 - (headangle * 180 / pi), 90 - (startangle * 180 / pi),
reverse=True)
if abs(angle) < 0.5:
p.lineTo(x0 + outer_radius * headsin, y0 + outer_radius * headcos)
p.lineTo(x0 + middle_radius * endsin, y0 + middle_radius * endcos)
p.lineTo(x0 + inner_radius * headsin, y0 + inner_radius * headcos)
else:
self._draw_arc_line(p, outer_radius, middle_radius,
90 - (headangle * 180 / pi),
90 - (endangle * 180 / pi))
self._draw_arc_line(p, middle_radius, inner_radius,
90 - (endangle * 180 / pi),
90 - (headangle * 180 / pi))
p.closePath()
return p
else:
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
# default is mitre/miter which can stick out too much:
strokeLineJoin=1, # 1=round
strokewidth=0,
**kwargs)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# but we use clockwise from the vertical. Also reportlab uses
# degrees, but we use radians.
p.addArc(self.xcenter, self.ycenter, shaft_inner_radius,
90 - (endangle * 180 / pi), 90 - (headangle * 180 / pi),
moveTo=True, reverse=True)
p.addArc(self.xcenter, self.ycenter, shaft_outer_radius,
90 - (endangle * 180 / pi), 90 - (headangle * 180 / pi),
reverse=False)
# Note - two straight lines are only a good approximation for a small
# head angle; in general curved lines are needed here:
if abs(angle) < 0.5:
p.lineTo(x0 + outer_radius * headsin, y0 + outer_radius * headcos)
p.lineTo(x0 + middle_radius * startsin, y0 + middle_radius * startcos)
p.lineTo(x0 + inner_radius * headsin, y0 + inner_radius * headcos)
else:
self._draw_arc_line(p, outer_radius, middle_radius,
90 - (headangle * 180 / pi),
90 - (startangle * 180 / pi))
self._draw_arc_line(p, middle_radius, inner_radius,
90 - (startangle * 180 / pi),
90 - (headangle * 180 / pi))
p.closePath()
return p
def _draw_sigil_jaggy(self, bottom, center, top,
startangle, endangle, strand,
color, border=None,
**kwargs):
"""Draw JAGGY sigil.
Although we may in future expose the head/tail jaggy lengths, for now
both the left and right edges are drawn jagged.
"""
if strand == 1:
inner_radius = center
outer_radius = top
teeth = 2
elif strand == -1:
inner_radius = bottom
outer_radius = center
teeth = 2
else:
inner_radius = bottom
outer_radius = top
teeth = 4
# TODO, expose these settings?
tail_length_ratio = 1.0
head_length_ratio = 1.0
strokecolor, color = _stroke_and_fill_colors(color, border)
startangle, endangle = min(startangle, endangle), max(startangle, endangle)
angle = float(endangle - startangle) # angle subtended by arc
height = outer_radius - inner_radius
assert startangle <= endangle and angle >= 0
if head_length_ratio and tail_length_ratio:
headangle = max(endangle - min(height * head_length_ratio / (center * teeth), angle * 0.5), startangle)
tailangle = min(startangle + min(height * tail_length_ratio / (center * teeth), angle * 0.5), endangle)
# With very small features, floating point calculations can
# violate the assertion below that start <= tail <= head <= end
tailangle = min(tailangle, headangle)
elif head_length_ratio:
headangle = max(endangle - min(height * head_length_ratio / (center * teeth), angle), startangle)
tailangle = startangle
else:
headangle = endangle
tailangle = min(startangle + min(height * tail_length_ratio / (center * teeth), angle), endangle)
assert startangle <= tailangle <= headangle <= endangle, \
(startangle, tailangle, headangle, endangle, angle)
# Calculate trig values for angle and coordinates
startcos, startsin = cos(startangle), sin(startangle)
headcos, headsin = cos(headangle), sin(headangle)
endcos, endsin = cos(endangle), sin(endangle)
x0, y0 = self.xcenter, self.ycenter # origin of the circle
p = ArcPath(strokeColor=strokecolor,
fillColor=color,
# default is mitre/miter which can stick out too much:
strokeLineJoin=1, # 1=round
strokewidth=0,
**kwargs)
# Note reportlab counts angles anti-clockwise from the horizontal
# (as in mathematics, e.g. complex numbers and polar coordinates)
# but we use clockwise from the vertical. Also reportlab uses
# degrees, but we use radians.
p.addArc(self.xcenter, self.ycenter, inner_radius,
90 - (headangle * 180 / pi), 90 - (tailangle * 180 / pi),
moveTo=True)
for i in range(0, teeth):
p.addArc(self.xcenter, self.ycenter, inner_radius + i * height / teeth,
90 - (tailangle * 180 / pi), 90 - (startangle * 180 / pi))
# Curved line needed when drawing long jaggies
self._draw_arc_line(p,
inner_radius + i * height / teeth,
inner_radius + (i + 1) * height / teeth,
90 - (startangle * 180 / pi),
90 - (tailangle * 180 / pi))
p.addArc(self.xcenter, self.ycenter, outer_radius,
90 - (headangle * 180 / pi), 90 - (tailangle * 180 / pi),
reverse=True)
for i in range(0, teeth):
p.addArc(self.xcenter, self.ycenter,
outer_radius - i * height / teeth,
90 - (endangle * 180 / pi), 90 - (headangle * 180 / pi),
reverse=True)
# Curved line needed when drawing long jaggies
self._draw_arc_line(p,
outer_radius - i * height / teeth,
outer_radius - (i + 1) * height / teeth,
90 - (endangle * 180 / pi),
90 - (headangle * 180 / pi))
p.closePath()
return p
|
zjuchenyuan/BioWeb
|
Lib/Bio/Graphics/GenomeDiagram/_CircularDrawer.py
|
Python
|
mit
| 68,642
|
[
"Biopython"
] |
b25e06af8b19e30aea8a2800c92b6514f26dcb3de1093dbcffb4f4e6924b17a4
|
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import nest # import NEST module
def single_neuron(spike_times, sim_duration):
nest.set_verbosity('M_WARNING') # reduce NEST output
nest.ResetKernel() # reset simulation kernel
# create LIF neuron with exponential synaptic currents
neuron = nest.Create('iaf_psc_exp')
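# (Neuron parameters are left at NEST's defaults for iaf_psc_exp - assumed
# here, not set explicitly: resting potential -70.0 mV, threshold -55.0 mV.)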
# create a voltmeter
voltmeter = nest.Create('voltmeter', params={'interval': 0.1})
# create a spike generator
spikegenerator = nest.Create('spike_generator')
# ... and let it spike at predefined times
nest.SetStatus(spikegenerator, {'spike_times': spike_times})
# connect spike generator and voltmeter to the neuron
nest.Connect(spikegenerator, neuron)
nest.Connect(voltmeter, neuron)
# run simulation for sim_duration
nest.Simulate(sim_duration)
# read out recording time and voltage from voltmeter
times = nest.GetStatus(voltmeter)[0]['events']['times']
voltage = nest.GetStatus(voltmeter)[0]['events']['V_m']
# plot results
plt.plot(times, voltage)
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
filename = 'single_neuron.png'
plt.savefig(filename, dpi=300)
if __name__ == '__main__':
spike_times = [10., 50.]
sim_duration = 100.
single_neuron(spike_times, sim_duration)
|
jhnnsnk/UP-Tasks
|
NEST/single_neuron_task/single_neuron.py
|
Python
|
gpl-2.0
| 1,344
|
[
"NEURON"
] |
336719797bd7c2284a9d6b45523db3cf088fce8471102f9e2fc84481a153c0f5
|
from setuptools import setup, find_packages
setup(
name='dfttopif',
version='1.1.0',
description='Library for parsing Density Functional Theory calculations',
url='https://github.com/CitrineInformatics/pif-dft',
install_requires=[
'ase<=3.17.0',
'pypif>=2.0.1,<3',
'dftparse>=0.2.1'
],
extras_require={
'report': ["requests"],
},
packages=find_packages(exclude=('tests', 'docs')),
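# Register the dfttopif module under the 'citrine.dice.converter'
# entry-point group, so tools that scan this group (assumed to be
# Citrine's Dice pipeline) can discover the converter under the name 'dft':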
entry_points={
'citrine.dice.converter': [
'dft = dfttopif'
]
}
)
|
CitrineInformatics/pif-dft
|
setup.py
|
Python
|
apache-2.0
| 552
|
[
"ASE"
] |
ba9aaae0542047c56c6ad055d8c04d99a4366aa9a79ae3a3102c0911e18fe633
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
from __future__ import division, unicode_literals
"""
This module provides so-called "strategies" to determine the coordination environments of an atom in a structure.
Some strategies can favour larger or smaller environments. Some strategies uniquely identify the environments, while
others can identify the environment as a "mix" of several environments, each of which is assigned a given
fraction. The choice of strategy depends on the purpose of the user.
"""
__author__ = "David Waroquiers"
__copyright__ = "Copyright 2012, The Materials Project"
__credits__ = "Geoffroy Hautier"
__version__ = "2.0"
__maintainer__ = "David Waroquiers"
__email__ = "david.waroquiers@gmail.com"
__date__ = "Feb 20, 2016"
import abc
import os
import json
from monty.json import MSONable
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from pymatgen.core.operations import SymmOp
from pymatgen.core.sites import PeriodicSite
import numpy as np
from scipy.stats import gmean
from pymatgen.analysis.chemenv.coordination_environments.coordination_geometries import UNCLEAR_ENVIRONMENT_SYMBOL
from pymatgen.analysis.chemenv.utils.coordination_geometry_utils import get_lower_and_upper_f
from pymatgen.analysis.chemenv.utils.func_utils import CSMFiniteRatioFunction
from pymatgen.analysis.chemenv.utils.func_utils import CSMInfiniteRatioFunction
from pymatgen.analysis.chemenv.utils.func_utils import DeltaCSMRatioFunction
from pymatgen.analysis.chemenv.utils.func_utils import RatioFunction
from pymatgen.analysis.chemenv.utils.chemenv_errors import EquivalentSiteSearchError
from pymatgen.analysis.chemenv.coordination_environments.coordination_geometries import AllCoordinationGeometries
from pymatgen.analysis.chemenv.utils.defs_utils import AdditionalConditions
from six import with_metaclass
from pymatgen.analysis.chemenv.coordination_environments.voronoi import DetailedVoronoiContainer
from collections import OrderedDict
module_dir = os.path.dirname(os.path.abspath(__file__))
MPSYMBOL_TO_CN = AllCoordinationGeometries().get_symbol_cn_mapping()
ALLCG = AllCoordinationGeometries()
class StrategyOption(with_metaclass(abc.ABCMeta, MSONable)):
allowed_values = None
@abc.abstractmethod
def as_dict(self):
"""
A JSON serializable dict representation of this strategy option.
"""
pass
class DistanceCutoffFloat(float, StrategyOption):
allowed_values = 'Real number between 1.0 and +infinity'
def __new__(cls, myfloat):
flt = float.__new__(cls, myfloat)
if flt < 1.0:
raise ValueError("Distance cutoff should be between 1.0 and +infinity")
return flt
def as_dict(self):
return {'@module': self.__class__.__module__,
'@class': self.__class__.__name__,
'value': self}
@classmethod
def from_dict(cls, d):
return cls(d['value'])
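# Example (illustrative): DistanceCutoffFloat(1.4) behaves exactly like the
# float 1.4, while DistanceCutoffFloat(0.8) raises ValueError, so strategy
# options are validated at construction time.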
class AngleCutoffFloat(float, StrategyOption):
allowed_values = 'Real number between 0.0 and 1.0'
def __new__(cls, myfloat):
flt = float.__new__(cls, myfloat)
if flt < 0.0 or flt > 1.0:
raise ValueError("Angle cutoff should be between 0.0 and 1.0")
return flt
def as_dict(self):
return {'@module': self.__class__.__module__,
'@class': self.__class__.__name__,
'value': self}
@classmethod
def from_dict(cls, d):
return cls(d['value'])
class CSMFloat(float, StrategyOption):
allowed_values = 'Real number between 0.0 and 100.0'
def __new__(cls, myfloat):
flt = float.__new__(cls, myfloat)
if flt < 0.0 or flt > 100.0:
raise ValueError("Continuous symmetry measure limits should be between 0.0 and 100.0")
return flt
def as_dict(self):
return {'@module': self.__class__.__module__,
'@class': self.__class__.__name__,
'value': self}
@classmethod
def from_dict(cls, d):
return cls(d['value'])
class AdditionalConditionInt(int, StrategyOption):
allowed_values = 'Integer amongst :\n'
for integer, description in AdditionalConditions.CONDITION_DESCRIPTION.items():
allowed_values += ' - {:d} for "{}"\n'.format(integer, description)
def __new__(cls, integer):
if str(int(integer)) != str(integer):
raise ValueError("Additional condition {} is not an integer".format(str(integer)))
value = int.__new__(cls, integer)
if value not in AdditionalConditions.ALL:
raise ValueError("Additional condition {:d} is not allowed".format(integer))
return value
def as_dict(self):
return {'@module': self.__class__.__module__,
'@class': self.__class__.__name__,
'value': self}
@classmethod
def from_dict(cls, d):
return cls(d['value'])
class AbstractChemenvStrategy(with_metaclass(abc.ABCMeta, MSONable)):
"""
Class used to define a Chemenv strategy for the neighbors and coordination environment to be applied to a
StructureEnvironments object
"""
AC = AdditionalConditions()
STRATEGY_OPTIONS = OrderedDict()
STRATEGY_DESCRIPTION = None
STRATEGY_INFO_FIELDS = []
DEFAULT_SYMMETRY_MEASURE_TYPE = 'csm_wcs_ctwcc'
def __init__(self, structure_environments=None, symmetry_measure_type=DEFAULT_SYMMETRY_MEASURE_TYPE):
"""
Abstract constructor for the all chemenv strategies.
:param structure_environments: StructureEnvironments object containing all the information on the
coordination of the sites in a structure
"""
self.structure_environments = None
if structure_environments is not None:
self.set_structure_environments(structure_environments)
self._symmetry_measure_type = symmetry_measure_type
@property
def symmetry_measure_type(self):
return self._symmetry_measure_type
def set_structure_environments(self, structure_environments):
self.structure_environments = structure_environments
if not isinstance(self.structure_environments.voronoi, DetailedVoronoiContainer):
raise ValueError('Voronoi Container not of type "DetailedVoronoiContainer"')
self.prepare_symmetries()
def prepare_symmetries(self):
try:
self.spg_analyzer = SpacegroupAnalyzer(self.structure_environments.structure)
self.symops = self.spg_analyzer.get_symmetry_operations()
except Exception: # symmetry analysis may fail; fall back to no symmetry operations
self.symops = []
def equivalent_site_index_and_transform(self, psite):
# Get the index of the site in the unit cell of which the PeriodicSite psite is a replica.
try:
isite = self.structure_environments.structure.index(psite)
except ValueError:
try:
uc_psite = psite.to_unit_cell
isite = self.structure_environments.structure.index(uc_psite)
except ValueError:
for isite2, site2 in enumerate(self.structure_environments.structure):
if psite.is_periodic_image(site2):
isite = isite2
break
# Get the translation between psite and its corresponding site in the unit cell (Translation I)
thissite = self.structure_environments.structure[isite]
dthissite = psite.frac_coords - thissite.frac_coords
# Get the translation between the equivalent site for which the neighbors have been computed and the site in
# the unit cell that corresponds to psite (Translation II)
equivsite = self.structure_environments.structure[self.structure_environments.sites_map[isite]].to_unit_cell
#equivsite = self.structure_environments.structure[self.structure_environments.sites_map[isite]]
dequivsite = (self.structure_environments.structure[self.structure_environments.sites_map[isite]].frac_coords
- equivsite.frac_coords)
found = False
# Find the symmetry that applies the site in the unit cell to the equivalent site, as well as the translation
# that gets back the site to the unit cell (Translation III)
#TODO: check that these tolerances are needed, now that the structures are refined before analyzing environments
tolerances = [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]
for tolerance in tolerances:
for symop in self.symops:
newsite = PeriodicSite(equivsite._species, symop.operate(equivsite.frac_coords), equivsite._lattice)
if newsite.is_periodic_image(thissite, tolerance=tolerance):
mysym = symop
dthissite2 = thissite.frac_coords - newsite.frac_coords
found = True
break
if not found:
symops = [SymmOp.from_rotation_and_translation()]
for symop in symops:
newsite = PeriodicSite(equivsite._species, symop.operate(equivsite.frac_coords), equivsite._lattice)
#if newsite.is_periodic_image(thissite):
if newsite.is_periodic_image(thissite, tolerance=tolerance):
mysym = symop
dthissite2 = thissite.frac_coords - newsite.frac_coords
found = True
break
if found:
break
if not found:
raise EquivalentSiteSearchError(psite)
return [self.structure_environments.sites_map[isite], dequivsite, dthissite + dthissite2, mysym]
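# Taken together: psite can be reconstructed from the mapped unit-cell site
# by applying mysym and then the translations dequivsite and
# (dthissite + dthissite2).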
@abc.abstractmethod
def get_site_neighbors(self, site):
"""
Applies the strategy to the structure_environments object in order to get the neighbors of a given site.
:param site: Site for which the neighbors are looked for
:param structure_environments: StructureEnvironments object containing all the information needed to get the
neighbors of the site
:return: The list of neighbors of the site. For complex strategies, where one allows multiple solutions, this
can return a list of list of neighbors
"""
raise NotImplementedError()
@property
def uniquely_determines_coordination_environments(self):
"""
Returns True if the strategy leads to a unique coordination environment, False otherwise.
:return: True if the strategy leads to a unique coordination environment, False otherwise.
"""
raise NotImplementedError()
@abc.abstractmethod
def get_site_coordination_environment(self, site):
"""
Applies the strategy to the structure_environments object in order to define the coordination environment of
a given site.
:param site: Site for which the coordination environment is looked for
:return: The coordination environment of the site. For complex strategies, where one allows multiple
solutions, this can return a list of coordination environments for the site
"""
raise NotImplementedError()
@abc.abstractmethod
def get_site_coordination_environments(self, site):
"""
Applies the strategy to the structure_environments object in order to define the coordination environment of
a given site.
:param site: Site for which the coordination environment is looked for
:return: The coordination environment of the site. For complex strategies, where one allows multiple
solutions, this can return a list of coordination environments for the site
"""
raise NotImplementedError()
@abc.abstractmethod
def get_site_coordination_environments_fractions(self, site, isite=None, dequivsite=None, dthissite=None,
mysym=None, ordered=True, min_fraction=0.0, return_maps=True,
return_strategy_dict_info=False):
"""
Applies the strategy to the structure_environments object in order to define the coordination environment of
a given site.
:param site: Site for which the coordination environment is looked for
:return: The coordination environment of the site. For complex strategies, where one allows multiple
solutions, this can return a list of coordination environments for the site
"""
raise NotImplementedError()
def get_site_ce_fractions_and_neighbors(self, site, full_ce_info=False, strategy_info=False):
"""
Applies the strategy to the structure_environments object in order to get coordination environments, their
fraction, csm, geometry_info, and neighbors
        :param site: Site for which the above information is sought
:return: The list of neighbors of the site. For complex strategies, where one allows multiple solutions, this
can return a list of list of neighbors
"""
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
geoms_and_maps_list = self.get_site_coordination_environments_fractions(site=site, isite=isite,
dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym,
return_maps=True,
return_strategy_dict_info=True)
if geoms_and_maps_list is None:
return None
site_nbs_sets = self.structure_environments.neighbors_sets[isite]
ce_and_neighbors = []
for fractions_dict in geoms_and_maps_list:
ce_map = fractions_dict['ce_map']
ce_nb_set = site_nbs_sets[ce_map[0]][ce_map[1]]
neighbors = [{'site': nb_site_and_index['site'],
'index': nb_site_and_index['index']}
for nb_site_and_index in ce_nb_set.neighb_sites_and_indices]
fractions_dict['neighbors'] = neighbors
ce_and_neighbors.append(fractions_dict)
return ce_and_neighbors
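    # Example usage (illustrative sketch, assuming a concrete strategy set up with a
    # StructureEnvironments object) :
    #
    #   for entry in strategy.get_site_ce_fractions_and_neighbors(site):
    #       print(entry['ce_symbol'], entry['ce_fraction'], len(entry['neighbors']))
    #
    # Each entry combines a coordination environment symbol, its fraction and the corresponding
    # neighbors (as dicts with 'site' and 'index' keys).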
    def set_option(self, option_name, option_value):
        setattr(self, option_name, option_value)
def setup_options(self, all_options_dict):
for option_name, option_value in all_options_dict.items():
self.set_option(option_name, option_value)
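    # Example of setting several options in one call (illustrative; the option names must be
    # keys of the STRATEGY_OPTIONS dict of the concrete strategy, e.g. SimplestChemenvStrategy) :
    #
    #   strategy.setup_options({'distance_cutoff': 1.6, 'angle_cutoff': 0.25})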
@abc.abstractmethod
def __eq__(self, other):
"""
Equality method that should be implemented for any strategy
:param other: strategy to be compared with the current one
:return:
"""
raise NotImplementedError()
def __str__(self):
out = ' Chemenv Strategy "{}"\n'.format(self.__class__.__name__)
out += ' {}\n\n'.format('='*(19+len(self.__class__.__name__)))
out += ' Description :\n {}\n'.format('-'*13)
out += self.STRATEGY_DESCRIPTION
out += '\n\n'
out += ' Options :\n {}\n'.format('-'*9)
for option_name, option_dict in self.STRATEGY_OPTIONS.items():
out += ' - {} : {}\n'.format(option_name, str(getattr(self, option_name)))
return out
@abc.abstractmethod
def as_dict(self):
"""
Bson-serializable dict representation of the SimplestChemenvStrategy object.
:return: Bson-serializable dict representation of the SimplestChemenvStrategy object.
"""
raise NotImplementedError()
@classmethod
def from_dict(cls, d):
"""
Reconstructs the SimpleAbundanceChemenvStrategy object from a dict representation of the
SimpleAbundanceChemenvStrategy object created using the as_dict method.
:param d: dict representation of the SimpleAbundanceChemenvStrategy object
:return: StructureEnvironments object
"""
raise NotImplementedError()
class SimplestChemenvStrategy(AbstractChemenvStrategy):
"""
Simplest ChemenvStrategy using fixed angle and distance parameters for the definition of neighbors in the
Voronoi approach. The coordination environment is then given as the one with the lowest continuous symmetry measure
"""
    # Default values for the distance and angle cutoffs, csm cutoff and additional condition
DEFAULT_DISTANCE_CUTOFF = 1.4
DEFAULT_ANGLE_CUTOFF = 0.3
DEFAULT_CONTINUOUS_SYMMETRY_MEASURE_CUTOFF = 10.0
DEFAULT_ADDITIONAL_CONDITION = AbstractChemenvStrategy.AC.ONLY_ACB
STRATEGY_OPTIONS = OrderedDict()
STRATEGY_OPTIONS['distance_cutoff'] = {'type': DistanceCutoffFloat, 'internal': '_distance_cutoff',
'default': DEFAULT_DISTANCE_CUTOFF}
STRATEGY_OPTIONS['angle_cutoff'] = {'type': AngleCutoffFloat, 'internal': '_angle_cutoff',
'default': DEFAULT_ANGLE_CUTOFF}
STRATEGY_OPTIONS['additional_condition'] = {'type': AdditionalConditionInt,
'internal': '_additional_condition',
'default': DEFAULT_ADDITIONAL_CONDITION}
STRATEGY_OPTIONS['continuous_symmetry_measure_cutoff'] = {'type': CSMFloat,
'internal': '_continuous_symmetry_measure_cutoff',
'default': DEFAULT_CONTINUOUS_SYMMETRY_MEASURE_CUTOFF}
STRATEGY_DESCRIPTION = ' Simplest ChemenvStrategy using fixed angle and distance parameters \n' \
' for the definition of neighbors in the Voronoi approach. \n' \
' The coordination environment is then given as the one with the \n' \
' lowest continuous symmetry measure.'
def __init__(self, structure_environments=None, distance_cutoff=DEFAULT_DISTANCE_CUTOFF,
angle_cutoff=DEFAULT_ANGLE_CUTOFF, additional_condition=DEFAULT_ADDITIONAL_CONDITION,
continuous_symmetry_measure_cutoff=DEFAULT_CONTINUOUS_SYMMETRY_MEASURE_CUTOFF,
symmetry_measure_type=AbstractChemenvStrategy.DEFAULT_SYMMETRY_MEASURE_TYPE):
"""
Constructor for this SimplestChemenvStrategy.
:param distance_cutoff: Distance cutoff used
:param angle_cutoff: Angle cutoff used
"""
AbstractChemenvStrategy.__init__(self, structure_environments, symmetry_measure_type=symmetry_measure_type)
self.distance_cutoff = distance_cutoff
self.angle_cutoff = angle_cutoff
self.additional_condition = additional_condition
self.continuous_symmetry_measure_cutoff = continuous_symmetry_measure_cutoff
@property
def uniquely_determines_coordination_environments(self):
return True
@property
def distance_cutoff(self):
return self._distance_cutoff
@distance_cutoff.setter
def distance_cutoff(self, distance_cutoff):
self._distance_cutoff = DistanceCutoffFloat(distance_cutoff)
@property
def angle_cutoff(self):
return self._angle_cutoff
@angle_cutoff.setter
def angle_cutoff(self, angle_cutoff):
self._angle_cutoff = AngleCutoffFloat(angle_cutoff)
@property
def additional_condition(self):
return self._additional_condition
@additional_condition.setter
def additional_condition(self, additional_condition):
self._additional_condition = AdditionalConditionInt(additional_condition)
@property
def continuous_symmetry_measure_cutoff(self):
return self._continuous_symmetry_measure_cutoff
@continuous_symmetry_measure_cutoff.setter
def continuous_symmetry_measure_cutoff(self, continuous_symmetry_measure_cutoff):
self._continuous_symmetry_measure_cutoff = CSMFloat(continuous_symmetry_measure_cutoff)
    def get_site_neighbors(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None):
if isite is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
ce, cn_map = self.get_site_coordination_environment(site=site, isite=isite,
dequivsite=dequivsite, dthissite=dthissite, mysym=mysym,
return_map=True)
nb_set = self.structure_environments.neighbors_sets[isite][cn_map[0]][cn_map[1]]
eqsite_ps = nb_set.neighb_sites
coordinated_neighbors = []
for ips, ps in enumerate(eqsite_ps):
coords = mysym.operate(ps.frac_coords + dequivsite) + dthissite
ps_site = PeriodicSite(ps._species, coords, ps._lattice)
coordinated_neighbors.append(ps_site)
return coordinated_neighbors
def get_site_coordination_environment(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_map=False):
if isite is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
neighbors_normalized_distances = self.structure_environments.voronoi.neighbors_normalized_distances[isite]
neighbors_normalized_angles = self.structure_environments.voronoi.neighbors_normalized_angles[isite]
idist = None
for iwd, wd in enumerate(neighbors_normalized_distances):
if self.distance_cutoff >= wd['min']:
idist = iwd
else:
break
iang = None
for iwa, wa in enumerate(neighbors_normalized_angles):
if self.angle_cutoff <= wa['max']:
iang = iwa
else:
break
        if idist is None or iang is None:
            raise ValueError('Distance and/or angle parameters not found within the given cutoffs')
my_cn = None
my_inb_set = None
found = False
for cn, nb_sets in self.structure_environments.neighbors_sets[isite].items():
for inb_set, nb_set in enumerate(nb_sets):
sources = [src for src in nb_set.sources
if src['origin'] == 'dist_ang_ac_voronoi' and src['ac'] == self.additional_condition]
for src in sources:
if src['idp'] == idist and src['iap'] == iang:
my_cn = cn
my_inb_set = inb_set
found = True
break
if found:
break
if found:
break
if not found:
return None
cn_map = (my_cn, my_inb_set)
ce = self.structure_environments.ce_list[self.structure_environments.sites_map[isite]][cn_map[0]][cn_map[1]]
if ce is None:
return None
coord_geoms = ce.coord_geoms
if return_map:
if coord_geoms is None:
return cn_map[0], cn_map
return (ce.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type), cn_map)
else:
if coord_geoms is None:
return cn_map[0]
return ce.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type)
def get_site_coordination_environments_fractions(self, site, isite=None, dequivsite=None, dthissite=None,
mysym=None, ordered=True, min_fraction=0.0, return_maps=True,
return_strategy_dict_info=False):
if isite is None or dequivsite is None or dthissite is None or mysym is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
site_nb_sets = self.structure_environments.neighbors_sets[isite]
if site_nb_sets is None:
return None
ce_and_map = self.get_site_coordination_environment(site=site, isite=isite, dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym,
return_map=True)
if ce_and_map is None:
return None
ce, ce_map = ce_and_map
if ce is None:
ce_dict = {'ce_symbol': 'UNKNOWN:{:d}'.format(ce_map[0]), 'ce_dict': None, 'ce_fraction': 1.0}
else:
ce_dict = {'ce_symbol': ce[0], 'ce_dict': ce[1], 'ce_fraction': 1.0}
if return_maps:
ce_dict['ce_map'] = ce_map
if return_strategy_dict_info:
ce_dict['strategy_info'] = {}
fractions_info_list = [ce_dict]
return fractions_info_list
def get_site_coordination_environments(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_maps=False):
return [self.get_site_coordination_environment(site=site, isite=isite, dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym, return_map=return_maps)]
def add_strategy_visualization_to_subplot(self, subplot, visualization_options=None, plot_type=None):
subplot.plot(self._distance_cutoff, self._angle_cutoff, 'o', mec=None, mfc='w', markersize=12)
subplot.plot(self._distance_cutoff, self._angle_cutoff, 'x', linewidth=2, markersize=12)
def __eq__(self, other):
return (self.__class__.__name__ == other.__class__.__name__ and
self._distance_cutoff == other._distance_cutoff and self._angle_cutoff == other._angle_cutoff and
self._additional_condition == other._additional_condition and
self._continuous_symmetry_measure_cutoff == other._continuous_symmetry_measure_cutoff and
self.symmetry_measure_type == other.symmetry_measure_type)
def as_dict(self):
"""
Bson-serializable dict representation of the SimplestChemenvStrategy object.
:return: Bson-serializable dict representation of the SimplestChemenvStrategy object.
"""
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"distance_cutoff": float(self._distance_cutoff),
"angle_cutoff": float(self._angle_cutoff),
"additional_condition": int(self._additional_condition),
"continuous_symmetry_measure_cutoff": float(self._continuous_symmetry_measure_cutoff),
"symmetry_measure_type": self._symmetry_measure_type}
@classmethod
def from_dict(cls, d):
"""
Reconstructs the SimplestChemenvStrategy object from a dict representation of the SimplestChemenvStrategy object
created using the as_dict method.
:param d: dict representation of the SimplestChemenvStrategy object
        :return: SimplestChemenvStrategy object
"""
return cls(distance_cutoff=d["distance_cutoff"], angle_cutoff=d["angle_cutoff"],
additional_condition=d["additional_condition"],
continuous_symmetry_measure_cutoff=d["continuous_symmetry_measure_cutoff"],
symmetry_measure_type=d["symmetry_measure_type"])
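# Typical usage of SimplestChemenvStrategy (illustrative sketch, assuming a StructureEnvironments
# object "se" computed beforehand, e.g. with the LocalGeometryFinder, and assuming the
# set_structure_environments method defined on AbstractChemenvStrategy, not shown in this excerpt) :
#
#   strategy = SimplestChemenvStrategy(distance_cutoff=1.4, angle_cutoff=0.3)
#   strategy.set_structure_environments(se)
#   ce = strategy.get_site_coordination_environment(se.structure[0])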
class SimpleAbundanceChemenvStrategy(AbstractChemenvStrategy):
"""
Simple ChemenvStrategy using the neighbors that are the most "abundant" in the grid of angle and distance
parameters for the definition of neighbors in the Voronoi approach.
The coordination environment is then given as the one with the lowest continuous symmetry measure
"""
DEFAULT_MAX_DIST = 2.0
DEFAULT_ADDITIONAL_CONDITION = AbstractChemenvStrategy.AC.ONLY_ACB
STRATEGY_OPTIONS = OrderedDict()
STRATEGY_OPTIONS['additional_condition'] = {'type': AdditionalConditionInt,
'internal': '_additional_condition',
'default': DEFAULT_ADDITIONAL_CONDITION}
STRATEGY_OPTIONS['surface_calculation_type'] = {}
STRATEGY_DESCRIPTION = ' Simple Abundance ChemenvStrategy using the most "abundant" neighbors map \n' \
' for the definition of neighbors in the Voronoi approach. \n' \
' The coordination environment is then given as the one with the \n' \
' lowest continuous symmetry measure.'
def __init__(self, structure_environments=None,
additional_condition=AbstractChemenvStrategy.AC.ONLY_ACB,
symmetry_measure_type=AbstractChemenvStrategy.DEFAULT_SYMMETRY_MEASURE_TYPE):
"""
Constructor for the SimpleAbundanceChemenvStrategy.
:param structure_environments: StructureEnvironments object containing all the information on the
coordination of the sites in a structure
"""
raise NotImplementedError('SimpleAbundanceChemenvStrategy not yet implemented')
AbstractChemenvStrategy.__init__(self, structure_environments, symmetry_measure_type=symmetry_measure_type)
self._additional_condition = additional_condition
@property
def uniquely_determines_coordination_environments(self):
return True
def get_site_neighbors(self, site):
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
cn_map = self._get_map(isite)
        eqsite_ps = self.structure_environments.unique_coordinated_neighbors(isite, cn_map=cn_map)
coordinated_neighbors = []
for ips, ps in enumerate(eqsite_ps):
coords = mysym.operate(ps.frac_coords + dequivsite) + dthissite
ps_site = PeriodicSite(ps._species, coords, ps._lattice)
coordinated_neighbors.append(ps_site)
return coordinated_neighbors
def get_site_coordination_environment(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_map=False):
if isite is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
cn_map = self._get_map(isite)
if cn_map is None:
return None
coord_geoms = (self.structure_environments.
ce_list[self.structure_environments.sites_map[isite]][cn_map[0]][cn_map[1]])
if return_map:
if coord_geoms is None:
return cn_map[0], cn_map
return coord_geoms.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type), cn_map
else:
if coord_geoms is None:
return cn_map[0]
return coord_geoms.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type)
def get_site_coordination_environments(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_maps=False):
return [self.get_site_coordination_environment(site=site, isite=isite, dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym, return_map=return_maps)]
def _get_map(self, isite):
maps_and_surfaces = self._get_maps_surfaces(isite)
if maps_and_surfaces is None:
return None
surface_max = 0.0
imax = -1
for ii, map_and_surface in enumerate(maps_and_surfaces):
all_additional_conditions = [ac[2] for ac in map_and_surface['parameters_indices']]
if self._additional_condition in all_additional_conditions and map_and_surface['surface'] > surface_max:
surface_max = map_and_surface['surface']
imax = ii
return maps_and_surfaces[imax]['map']
def _get_maps_surfaces(self, isite, surface_calculation_type=None):
if surface_calculation_type is None:
surface_calculation_type = {'distance_parameter': ('initial_normalized', None),
'angle_parameter': ('initial_normalized', None)}
return self.structure_environments.voronoi.maps_and_surfaces(isite=isite,
surface_calculation_type=surface_calculation_type,
max_dist=self.DEFAULT_MAX_DIST)
def __eq__(self, other):
return (self.__class__.__name__ == other.__class__.__name__ and
                self._additional_condition == other._additional_condition)
def as_dict(self):
"""
Bson-serializable dict representation of the SimpleAbundanceChemenvStrategy object.
:return: Bson-serializable dict representation of the SimpleAbundanceChemenvStrategy object.
"""
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"additional_condition": self._additional_condition}
@classmethod
def from_dict(cls, d):
"""
Reconstructs the SimpleAbundanceChemenvStrategy object from a dict representation of the
SimpleAbundanceChemenvStrategy object created using the as_dict method.
:param d: dict representation of the SimpleAbundanceChemenvStrategy object
        :return: SimpleAbundanceChemenvStrategy object
"""
return cls(additional_condition=d["additional_condition"])
class TargettedPenaltiedAbundanceChemenvStrategy(SimpleAbundanceChemenvStrategy):
"""
Simple ChemenvStrategy using the neighbors that are the most "abundant" in the grid of angle and distance
parameters for the definition of neighbors in the Voronoi approach, with a bias for a given list of target
environments. This can be useful in the case of, e.g. connectivity search of some given environment.
The coordination environment is then given as the one with the lowest continuous symmetry measure
"""
DEFAULT_TARGET_ENVIRONMENTS = ['O:6']
def __init__(self, structure_environments=None, truncate_dist_ang=True,
additional_condition=AbstractChemenvStrategy.AC.ONLY_ACB,
max_nabundant=5, target_environments=DEFAULT_TARGET_ENVIRONMENTS, target_penalty_type='max_csm',
max_csm=5.0, symmetry_measure_type=AbstractChemenvStrategy.DEFAULT_SYMMETRY_MEASURE_TYPE):
raise NotImplementedError('TargettedPenaltiedAbundanceChemenvStrategy not yet implemented')
SimpleAbundanceChemenvStrategy.__init__(self, structure_environments,
additional_condition=additional_condition,
symmetry_measure_type=symmetry_measure_type)
self.max_nabundant = max_nabundant
self.target_environments = target_environments
self.target_penalty_type = target_penalty_type
self.max_csm = max_csm
def get_site_coordination_environment(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_map=False):
if isite is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
cn_map = self._get_map(isite)
if cn_map is None:
return None
chemical_environments = (self.structure_environments.ce_list
[self.structure_environments.sites_map[isite]][cn_map[0]][cn_map[1]])
if return_map:
if chemical_environments.coord_geoms is None or len(chemical_environments) == 0:
return cn_map[0], cn_map
return chemical_environments.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type), cn_map
else:
if chemical_environments.coord_geoms is None:
return cn_map[0]
return chemical_environments.minimum_geometry(symmetry_measure_type=self._symmetry_measure_type)
def _get_map(self, isite):
maps_and_surfaces = SimpleAbundanceChemenvStrategy._get_maps_surfaces(self, isite)
if maps_and_surfaces is None:
return SimpleAbundanceChemenvStrategy._get_map(self, isite)
current_map = None
current_target_env_csm = 100.0
surfaces = [map_and_surface['surface'] for map_and_surface in maps_and_surfaces]
order = np.argsort(surfaces)[::-1]
target_cgs = [AllCoordinationGeometries().get_geometry_from_mp_symbol(mp_symbol)
for mp_symbol in self.target_environments]
target_cns = [cg.coordination_number for cg in target_cgs]
for ii in range(min([len(maps_and_surfaces), self.max_nabundant])):
my_map_and_surface = maps_and_surfaces[order[ii]]
mymap = my_map_and_surface['map']
cn = mymap[0]
if cn not in target_cns or cn > 12 or cn == 0:
continue
all_conditions = [params[2] for params in my_map_and_surface['parameters_indices']]
if self._additional_condition not in all_conditions:
continue
cg, cgdict = (self.structure_environments.ce_list
[self.structure_environments.sites_map[isite]]
[mymap[0]][mymap[1]].minimum_geometry(symmetry_measure_type=self._symmetry_measure_type))
if (cg in self.target_environments and cgdict['symmetry_measure'] <= self.max_csm and
cgdict['symmetry_measure'] < current_target_env_csm):
current_map = mymap
current_target_env_csm = cgdict['symmetry_measure']
if current_map is not None:
return current_map
else:
return SimpleAbundanceChemenvStrategy._get_map(self, isite)
@property
def uniquely_determines_coordination_environments(self):
return True
def as_dict(self):
"""
Bson-serializable dict representation of the TargettedPenaltiedAbundanceChemenvStrategy object.
:return: Bson-serializable dict representation of the TargettedPenaltiedAbundanceChemenvStrategy object.
"""
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"additional_condition": self._additional_condition,
"max_nabundant": self.max_nabundant,
"target_environments": self.target_environments,
"target_penalty_type": self.target_penalty_type,
"max_csm": self.max_csm}
def __eq__(self, other):
return (self.__class__.__name__ == other.__class__.__name__ and
                self._additional_condition == other._additional_condition and
self.max_nabundant == other.max_nabundant and
self.target_environments == other.target_environments and
self.target_penalty_type == other.target_penalty_type and
self.max_csm == other.max_csm)
@classmethod
def from_dict(cls, d):
"""
Reconstructs the TargettedPenaltiedAbundanceChemenvStrategy object from a dict representation of the
TargettedPenaltiedAbundanceChemenvStrategy object created using the as_dict method.
:param d: dict representation of the TargettedPenaltiedAbundanceChemenvStrategy object
:return: TargettedPenaltiedAbundanceChemenvStrategy object
"""
return cls(additional_condition=d["additional_condition"],
max_nabundant=d["max_nabundant"],
target_environments=d["target_environments"],
target_penalty_type=d["target_penalty_type"],
max_csm=d["max_csm"])
class NbSetWeight(with_metaclass(abc.ABCMeta, MSONable)):
@abc.abstractmethod
def as_dict(self):
"""
A JSON serializable dict representation of this neighbors set weight.
"""
pass
    @abc.abstractmethod
    def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
        """
        Computes the weight of a given neighbors set.
        :param nb_set: Neighbors set for which the weight is computed
        :param structure_environments: StructureEnvironments object containing the information about the structure
        :param cn_map: Mapping index of the neighbors set
        :param additional_info: Additional information used (and possibly filled in) during the computation
        :return: Weight of the neighbors set
        """
        pass
class AngleNbSetWeight(NbSetWeight):
SHORT_NAME = 'AngleWeight'
def __init__(self, aa=1.0):
self.aa = aa
if self.aa == 1.0:
self.aw = self.angle_sum
else:
self.aw = self.angle_sumn
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
return self.aw(nb_set=nb_set)
def angle_sum(self, nb_set):
return np.sum(nb_set.angles) / (4.0 * np.pi)
def angle_sumn(self, nb_set):
return np.power(self.angle_sum(nb_set=nb_set), self.aa)
def __eq__(self, other):
return self.aa == other.aa
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"aa": self.aa
}
@classmethod
def from_dict(cls, dd):
return cls(aa=dd['aa'])
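# Worked example for AngleNbSetWeight (illustrative) : with aa=1.0, the weight of a neighbors set
# is the sum of the solid angles of its Voronoi faces normalized by 4*pi, i.e. the fraction of
# the full sphere covered by the neighbors. A complete Voronoi shell thus gets a weight close to
# 1.0, while a neighbors set covering half of the sphere gets ~0.5. With aa != 1.0, this
# fraction is simply raised to the power aa.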
class NormalizedAngleDistanceNbSetWeight(NbSetWeight):
SHORT_NAME = 'NormAngleDistWeight'
def __init__(self, average_type, aa, bb):
self.average_type = average_type
if self.average_type == 'geometric':
self.eval = self.gweight
elif self.average_type == 'arithmetic':
self.eval = self.aweight
else:
raise ValueError('Average type is "{}" while it should be '
'"geometric" or "arithmetic"'.format(average_type))
self.aa = aa
self.bb = bb
if self.aa == 0:
if self.bb == 1:
self.fda = self.invdist
elif self.bb == 0:
raise ValueError('Both exponents are 0.')
else:
self.fda = self.invndist
elif self.bb == 0:
if self.aa == 1:
self.fda = self.ang
else:
self.fda = self.angn
else:
if self.aa == 1:
if self.bb == 1:
self.fda = self.anginvdist
else:
self.fda = self.anginvndist
else:
if self.bb == 1:
self.fda = self.angninvdist
else:
self.fda = self.angninvndist
def __eq__(self, other):
return self.average_type == other.average_type and self.aa == other.aa and self.bb == other.bb
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"average_type": self.average_type,
"aa": self.aa,
"bb": self.bb
}
@classmethod
def from_dict(cls, dd):
return cls(average_type=dd['average_type'], aa=dd['aa'], bb=dd['bb'])
def invdist(self, nb_set):
return [1.0 / dist for dist in nb_set.normalized_distances]
def invndist(self, nb_set):
return [1.0 / dist**self.bb for dist in nb_set.normalized_distances]
def ang(self, nb_set):
return nb_set.normalized_angles
def angn(self, nb_set):
return [ang**self.aa for ang in nb_set.normalized_angles]
def anginvdist(self, nb_set):
nangles = nb_set.normalized_angles
return [nangles[ii] / dist for ii, dist in enumerate(nb_set.normalized_distances)]
def anginvndist(self, nb_set):
nangles = nb_set.normalized_angles
return [nangles[ii] / dist**self.bb for ii, dist in enumerate(nb_set.normalized_distances)]
def angninvdist(self, nb_set):
nangles = nb_set.normalized_angles
return [nangles[ii]**self.aa / dist for ii, dist in enumerate(nb_set.normalized_distances)]
def angninvndist(self, nb_set):
nangles = nb_set.normalized_angles
return [nangles[ii]**self.aa / dist**self.bb for ii, dist in enumerate(nb_set.normalized_distances)]
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
fda_list = self.fda(nb_set=nb_set)
return self.eval(fda_list=fda_list)
def gweight(self, fda_list):
return gmean(fda_list)
def aweight(self, fda_list):
return np.mean(fda_list)
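# Worked example for NormalizedAngleDistanceNbSetWeight (illustrative) : each neighbor
# contributes ang**aa / dist**bb (ang and dist being its normalized angle and distance), and the
# weight of the neighbors set is the arithmetic or geometric mean of these contributions :
#
#   weight = NormalizedAngleDistanceNbSetWeight(average_type='geometric', aa=1, bb=1)
#   # For normalized angles [1.0, 0.8] and normalized distances [1.0, 1.25], the contributions
#   # are [1.0, 0.64] and the weight is sqrt(1.0 * 0.64) = 0.8.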
def get_effective_csm(nb_set, cn_map, structure_environments, additional_info,
symmetry_measure_type, max_effective_csm, effective_csm_estimator_ratio_function):
try:
effective_csm = additional_info['effective_csms'][nb_set.isite][cn_map]
except KeyError:
site_ce_list = structure_environments.ce_list[nb_set.isite]
site_chemenv = site_ce_list[cn_map[0]][cn_map[1]]
if site_chemenv is None:
effective_csm = 100.0
else:
mingeoms = site_chemenv.minimum_geometries(symmetry_measure_type=symmetry_measure_type,
max_csm=max_effective_csm)
if len(mingeoms) == 0:
effective_csm = 100.0
else:
csms = [ce_dict['other_symmetry_measures'][symmetry_measure_type] for mp_symbol, ce_dict in mingeoms
if ce_dict['other_symmetry_measures'][symmetry_measure_type] <= max_effective_csm]
effective_csm = effective_csm_estimator_ratio_function.mean_estimator(csms)
set_info(additional_info=additional_info, field='effective_csms',
isite=nb_set.isite, cn_map=cn_map, value=effective_csm)
return effective_csm
def set_info(additional_info, field, isite, cn_map, value):
try:
additional_info[field][isite][cn_map] = value
except KeyError:
try:
additional_info[field][isite] = {cn_map: value}
except KeyError:
additional_info[field] = {isite: {cn_map: value}}
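# Example of the nested-dict behavior of set_info (illustrative) : starting from an empty
# additional_info dict,
#
#   info = {}
#   set_info(info, field='effective_csms', isite=0, cn_map=(6, 0), value=2.3)
#   # info is now {'effective_csms': {0: {(6, 0): 2.3}}}
#
# and subsequent calls for the same field and site only add or update the cn_map entry.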
class SelfCSMNbSetWeight(NbSetWeight):
SHORT_NAME = 'SelfCSMWeight'
DEFAULT_EFFECTIVE_CSM_ESTIMATOR = {'function': 'power2_inverse_decreasing',
'options': {'max_csm': 8.0}}
DEFAULT_WEIGHT_ESTIMATOR = {'function': 'power2_decreasing_exp',
'options': {'max_csm': 8.0,
'alpha': 1.0}}
DEFAULT_SYMMETRY_MEASURE_TYPE = 'csm_wcs_ctwcc'
def __init__(self, effective_csm_estimator=DEFAULT_EFFECTIVE_CSM_ESTIMATOR,
weight_estimator=DEFAULT_WEIGHT_ESTIMATOR,
symmetry_measure_type=DEFAULT_SYMMETRY_MEASURE_TYPE):
self.effective_csm_estimator = effective_csm_estimator
self.effective_csm_estimator_rf = CSMInfiniteRatioFunction.from_dict(effective_csm_estimator)
self.weight_estimator = weight_estimator
self.weight_estimator_rf = CSMFiniteRatioFunction.from_dict(weight_estimator)
self.symmetry_measure_type = symmetry_measure_type
self.max_effective_csm = self.effective_csm_estimator['options']['max_csm']
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
effective_csm = get_effective_csm(nb_set=nb_set, cn_map=cn_map,
structure_environments=structure_environments,
additional_info=additional_info,
symmetry_measure_type=self.symmetry_measure_type,
max_effective_csm=self.max_effective_csm,
effective_csm_estimator_ratio_function=self.effective_csm_estimator_rf)
weight = self.weight_estimator_rf.evaluate(effective_csm)
set_info(additional_info=additional_info, field='self_csms_weights', isite=nb_set.isite,
cn_map=cn_map, value=weight)
return weight
def __eq__(self, other):
return (self.effective_csm_estimator == other.effective_csm_estimator and
self.weight_estimator == other.weight_estimator and
self.symmetry_measure_type == other.symmetry_measure_type)
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"effective_csm_estimator": self.effective_csm_estimator,
"weight_estimator": self.weight_estimator,
"symmetry_measure_type": self.symmetry_measure_type
}
@classmethod
def from_dict(cls, dd):
return cls(effective_csm_estimator=dd['effective_csm_estimator'],
weight_estimator=dd['weight_estimator'],
symmetry_measure_type=dd['symmetry_measure_type'])
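# Illustrative sketch of SelfCSMNbSetWeight : the "effective" csm of a neighbors set is a mean
# of the continuous symmetry measures of its best-matching model environments (see
# get_effective_csm above), and the weight decreases from 1.0 towards 0.0 as this effective csm
# goes from 0.0 to max_csm (8.0 by default) :
#
#   weight = SelfCSMNbSetWeight()
#   w = weight.weight(nb_set, structure_environments=se, cn_map=(6, 0), additional_info={})
#
# where nb_set and se are assumed to be a neighbors set and its StructureEnvironments object.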
class DeltaCSMNbSetWeight(NbSetWeight):
SHORT_NAME = 'DeltaCSMWeight'
DEFAULT_EFFECTIVE_CSM_ESTIMATOR = {'function': 'power2_inverse_decreasing',
'options': {'max_csm': 8.0}}
DEFAULT_SYMMETRY_MEASURE_TYPE = 'csm_wcs_ctwcc'
DEFAULT_WEIGHT_ESTIMATOR = {'function': 'smootherstep',
'options': {'delta_csm_min': 0.5,
'delta_csm_max': 3.0}}
def __init__(self, effective_csm_estimator=DEFAULT_EFFECTIVE_CSM_ESTIMATOR,
weight_estimator=DEFAULT_WEIGHT_ESTIMATOR,
delta_cn_weight_estimators=None,
symmetry_measure_type=DEFAULT_SYMMETRY_MEASURE_TYPE):
self.effective_csm_estimator = effective_csm_estimator
self.effective_csm_estimator_rf = CSMInfiniteRatioFunction.from_dict(effective_csm_estimator)
self.weight_estimator = weight_estimator
if self.weight_estimator is not None:
self.weight_estimator_rf = DeltaCSMRatioFunction.from_dict(weight_estimator)
self.delta_cn_weight_estimators = delta_cn_weight_estimators
self.delta_cn_weight_estimators_rfs = {}
if delta_cn_weight_estimators is not None:
for delta_cn, dcn_w_estimator in delta_cn_weight_estimators.items():
self.delta_cn_weight_estimators_rfs[delta_cn] = DeltaCSMRatioFunction.from_dict(dcn_w_estimator)
self.symmetry_measure_type = symmetry_measure_type
self.max_effective_csm = self.effective_csm_estimator['options']['max_csm']
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
effcsm = get_effective_csm(nb_set=nb_set, cn_map=cn_map,
structure_environments=structure_environments,
additional_info=additional_info,
symmetry_measure_type=self.symmetry_measure_type,
max_effective_csm=self.max_effective_csm,
effective_csm_estimator_ratio_function=self.effective_csm_estimator_rf)
cn = cn_map[0]
inb_set = cn_map[1]
isite = nb_set.isite
delta_csm = None
delta_csm_cn_map2 = None
nb_set_weight = 1.0
for cn2, nb_sets in structure_environments.neighbors_sets[isite].items():
if cn2 < cn:
continue
for inb_set2, nb_set2 in enumerate(nb_sets):
                if cn == cn2 and inb_set == inb_set2:
continue
effcsm2 = get_effective_csm(nb_set=nb_set2, cn_map=(cn2, inb_set2),
structure_environments=structure_environments,
additional_info=additional_info,
symmetry_measure_type=self.symmetry_measure_type,
max_effective_csm=self.max_effective_csm,
effective_csm_estimator_ratio_function=self.effective_csm_estimator_rf)
this_delta_csm = effcsm2 - effcsm
if cn2 == cn:
if this_delta_csm < 0.0:
set_info(additional_info=additional_info, field='delta_csms', isite=isite,
cn_map=cn_map, value=this_delta_csm)
set_info(additional_info=additional_info, field='delta_csms_weights', isite=isite,
cn_map=cn_map, value=0.0)
set_info(additional_info=additional_info, field='delta_csms_cn_map2', isite=isite,
cn_map=cn_map, value=(cn2, inb_set2))
return 0.0
else:
dcn = cn2 - cn
if dcn in self.delta_cn_weight_estimators_rfs:
this_delta_csm_weight = self.delta_cn_weight_estimators_rfs[dcn].evaluate(this_delta_csm)
else:
this_delta_csm_weight = self.weight_estimator_rf.evaluate(this_delta_csm)
if this_delta_csm_weight < nb_set_weight:
delta_csm = this_delta_csm
delta_csm_cn_map2 = (cn2, inb_set2)
nb_set_weight = this_delta_csm_weight
set_info(additional_info=additional_info, field='delta_csms', isite=isite,
cn_map=cn_map, value=delta_csm)
set_info(additional_info=additional_info, field='delta_csms_weights', isite=isite,
cn_map=cn_map, value=nb_set_weight)
set_info(additional_info=additional_info, field='delta_csms_cn_map2', isite=isite,
cn_map=cn_map, value=delta_csm_cn_map2)
return nb_set_weight
def __eq__(self, other):
return (self.effective_csm_estimator == other.effective_csm_estimator and
self.weight_estimator == other.weight_estimator and
self.delta_cn_weight_estimators == other.delta_cn_weight_estimators and
self.symmetry_measure_type == other.symmetry_measure_type)
def __ne__(self, other):
return not self == other
@classmethod
def delta_cn_specifics(cls, delta_csm_mins=None, delta_csm_maxs=None, function='smootherstep',
symmetry_measure_type='csm_wcs_ctwcc',
effective_csm_estimator=DEFAULT_EFFECTIVE_CSM_ESTIMATOR):
if delta_csm_mins is None or delta_csm_maxs is None:
delta_cn_weight_estimators = {dcn: {'function': function,
'options': {'delta_csm_min': 0.25+dcn*0.25,
'delta_csm_max': 5.0+dcn*0.25}} for dcn in range(1, 13)}
else:
delta_cn_weight_estimators = {dcn: {'function': function,
'options': {'delta_csm_min': delta_csm_mins[dcn-1],
'delta_csm_max': delta_csm_maxs[dcn-1]}}
for dcn in range(1, 13)}
return cls(effective_csm_estimator=effective_csm_estimator,
weight_estimator={'function': function,
'options': {'delta_csm_min': delta_cn_weight_estimators[12]
['options']['delta_csm_min'],
'delta_csm_max': delta_cn_weight_estimators[12]
['options']['delta_csm_max']}},
delta_cn_weight_estimators=delta_cn_weight_estimators,
symmetry_measure_type=symmetry_measure_type)
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"effective_csm_estimator": self.effective_csm_estimator,
"weight_estimator": self.weight_estimator,
"delta_cn_weight_estimators": self.delta_cn_weight_estimators,
"symmetry_measure_type": self.symmetry_measure_type
}
@classmethod
def from_dict(cls, dd):
return cls(effective_csm_estimator=dd['effective_csm_estimator'],
weight_estimator=dd['weight_estimator'],
delta_cn_weight_estimators={int(dcn): dcn_estimator
for dcn, dcn_estimator in dd['delta_cn_weight_estimators'].items()}
if ('delta_cn_weight_estimators' in dd and dd['delta_cn_weight_estimators'] is not None) else None,
symmetry_measure_type=dd['symmetry_measure_type'])
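# Example usage of DeltaCSMNbSetWeight (illustrative) : the delta_cn_specifics constructor
# builds one smootherstep weight per difference in coordination numbers, with thresholds that
# increase with that difference :
#
#   delta_weight = DeltaCSMNbSetWeight.delta_cn_specifics()
#   # dcn=1  -> delta_csm_min=0.5,  delta_csm_max=5.25
#   # dcn=12 -> delta_csm_min=3.25, delta_csm_max=8.0
#
# The weight of a neighbors set goes to 0.0 when a competing set with a larger coordination
# number has a similar or better effective csm, and to 1.0 when all competitors are worse by at
# least delta_csm_max.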
class CNBiasNbSetWeight(NbSetWeight):
SHORT_NAME = 'CNBiasWeight'
def __init__(self, cn_weights, initialization_options):
self.cn_weights = cn_weights
self.initialization_options = initialization_options
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
return self.cn_weights[len(nb_set)]
def __eq__(self, other):
return (self.cn_weights == other.cn_weights and
self.initialization_options == other.initialization_options)
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"cn_weights": {str(cn): cnw for cn, cnw in self.cn_weights.items()},
"initialization_options": self.initialization_options,
}
@classmethod
def from_dict(cls, dd):
return cls(cn_weights={int(cn): cnw for cn, cnw in dd['cn_weights'].items()},
initialization_options=dd['initialization_options'])
@classmethod
def linearly_equidistant(cls, weight_cn1, weight_cn13):
initialization_options = {'type': 'linearly_equidistant',
'weight_cn1': weight_cn1,
'weight_cn13': weight_cn13
}
dw = (weight_cn13 - weight_cn1) / 12.0
cn_weights = {cn: weight_cn1 + (cn - 1) * dw for cn in range(1, 14)}
return cls(cn_weights=cn_weights, initialization_options=initialization_options)
@classmethod
def geometrically_equidistant(cls, weight_cn1, weight_cn13):
initialization_options = {'type': 'geometrically_equidistant',
'weight_cn1': weight_cn1,
'weight_cn13': weight_cn13
}
factor = np.power(float(weight_cn13) / weight_cn1, 1.0 / 12.0)
cn_weights = {cn: weight_cn1 * np.power(factor, cn - 1) for cn in range(1, 14)}
return cls(cn_weights=cn_weights, initialization_options=initialization_options)
@classmethod
def explicit(cls, cn_weights):
initialization_options = {'type': 'explicit'}
if set(cn_weights.keys()) != set(range(1, 14)):
raise ValueError('Weights should be provided for CN 1 to 13')
return cls(cn_weights=cn_weights, initialization_options=initialization_options)
@classmethod
    def from_description(cls, dd):
        if dd['type'] == 'linearly_equidistant':
            return cls.linearly_equidistant(weight_cn1=dd['weight_cn1'], weight_cn13=dd['weight_cn13'])
        elif dd['type'] == 'geometrically_equidistant':
            return cls.geometrically_equidistant(weight_cn1=dd['weight_cn1'], weight_cn13=dd['weight_cn13'])
        elif dd['type'] == 'explicit':
            return cls.explicit(cn_weights=dd['cn_weights'])
        raise ValueError('Cannot build CNBiasNbSetWeight from description of type "{}"'.format(dd['type']))
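# Example usage of CNBiasNbSetWeight (illustrative) :
#
#   bias = CNBiasNbSetWeight.linearly_equidistant(weight_cn1=1.0, weight_cn13=13.0)
#   # bias.cn_weights is {1: 1.0, 2: 2.0, ..., 13: 13.0}, i.e. larger coordination numbers are
#   # linearly favored. geometrically_equidistant uses a constant ratio between successive
#   # coordination numbers instead of a constant difference.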
class DistanceAngleAreaNbSetWeight(NbSetWeight):
SHORT_NAME = 'DistAngleAreaWeight'
AC = AdditionalConditions()
DEFAULT_SURFACE_DEFINITION = {'type': 'standard_elliptic',
'distance_bounds': {'lower': 1.2, 'upper': 1.8},
'angle_bounds': {'lower': 0.1, 'upper': 0.8}}
def __init__(self, weight_type='has_intersection', surface_definition=DEFAULT_SURFACE_DEFINITION,
nb_sets_from_hints='fallback_to_source', other_nb_sets='0_weight',
additional_condition=AC.ONLY_ACB, smoothstep_distance=None, smoothstep_angle=None):
self.weight_type = weight_type
if weight_type == 'has_intersection':
self.area_weight = self.w_area_has_intersection
elif weight_type == 'has_intersection_smoothstep':
raise NotImplementedError()
# self.area_weight = self.w_area_has_intersection_smoothstep
else:
            raise ValueError('Weight type is "{}" while it should be '
                             '"has_intersection" or "has_intersection_smoothstep"'.format(weight_type))
self.surface_definition = surface_definition
self.nb_sets_from_hints = nb_sets_from_hints
self.other_nb_sets = other_nb_sets
self.additional_condition = additional_condition
self.smoothstep_distance = smoothstep_distance
self.smoothstep_angle = smoothstep_angle
        if self.nb_sets_from_hints == 'fallback_to_source':
            if self.other_nb_sets == '0_weight':
                self.w_area_intersection_specific = self.w_area_intersection_nbsfh_fbs_onb0
            else:
                raise ValueError('Parameter other_nb_sets should be "0_weight"')
        else:
            raise ValueError('Parameter nb_sets_from_hints should be "fallback_to_source"')
lower_and_upper_functions = get_lower_and_upper_f(surface_calculation_options=surface_definition)
self.dmin = surface_definition['distance_bounds']['lower']
self.dmax = surface_definition['distance_bounds']['upper']
self.amin = surface_definition['angle_bounds']['lower']
self.amax = surface_definition['angle_bounds']['upper']
self.f_lower = lower_and_upper_functions['lower']
self.f_upper = lower_and_upper_functions['upper']
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
return self.area_weight(nb_set=nb_set, structure_environments=structure_environments,
cn_map=cn_map, additional_info=additional_info)
def w_area_has_intersection_smoothstep(self, nb_set, structure_environments,
cn_map, additional_info):
w_area = self.w_area_intersection_specific(nb_set=nb_set, structure_environments=structure_environments,
cn_map=cn_map, additional_info=additional_info)
        if w_area > 0.0:
            if self.smoothstep_distance is not None:
                # TODO: modulate the weight with a smoothstep on the distance (not yet implemented)
                w_area = w_area
            if self.smoothstep_angle is not None:
                # TODO: modulate the weight with a smoothstep on the angle (not yet implemented)
                w_area = w_area
        return w_area
def w_area_has_intersection(self, nb_set, structure_environments,
cn_map, additional_info):
return self.w_area_intersection_specific(nb_set=nb_set, structure_environments=structure_environments,
cn_map=cn_map, additional_info=additional_info)
def w_area_intersection_nbsfh_fbs_onb0(self, nb_set, structure_environments,
cn_map, additional_info):
dist_ang_sources = [src for src in nb_set.sources
if src['origin'] == 'dist_ang_ac_voronoi' and src['ac'] == self.additional_condition]
if len(dist_ang_sources) > 0:
for src in dist_ang_sources:
d1 = src['dp_dict']['min']
d2 = src['dp_dict']['next']
a1 = src['ap_dict']['next']
a2 = src['ap_dict']['max']
if self.rectangle_crosses_area(d1=d1, d2=d2, a1=a1, a2=a2):
return 1.0
return 0.0
else:
from_hints_sources = [src for src in nb_set.sources if src['origin'] == 'nb_set_hints']
if len(from_hints_sources) == 0:
return 0.0
elif len(from_hints_sources) != 1:
raise ValueError('Found multiple hints sources for nb_set')
else:
cn_map_src = from_hints_sources[0]['cn_map_source']
nb_set_src = structure_environments.neighbors_sets[nb_set.isite][cn_map_src[0]][cn_map_src[1]]
dist_ang_sources = [src for src in nb_set_src.sources
if src['origin'] == 'dist_ang_ac_voronoi' and
src['ac'] == self.additional_condition]
if len(dist_ang_sources) == 0:
return 0.0
for src in dist_ang_sources:
d1 = src['dp_dict']['min']
d2 = src['dp_dict']['next']
a1 = src['ap_dict']['next']
a2 = src['ap_dict']['max']
if self.rectangle_crosses_area(d1=d1, d2=d2, a1=a1, a2=a2):
return 1.0
return 0.0
def rectangle_crosses_area(self, d1, d2, a1, a2):
# Case 1
if d1 <= self.dmin and d2 <= self.dmin:
return False
# Case 6
if d1 >= self.dmax and d2 >= self.dmax:
return False
# Case 2
if d1 <= self.dmin and d2 <= self.dmax:
ld2 = self.f_lower(d2)
if a2 <= ld2 or a1 >= self.amax:
return False
return True
# Case 3
if d1 <= self.dmin and d2 >= self.dmax:
if a2 <= self.amin or a1 >= self.amax:
return False
return True
# Case 4
if self.dmin <= d1 <= self.dmax and self.dmin <= d2 <= self.dmax:
ld1 = self.f_lower(d1)
ld2 = self.f_lower(d2)
if a2 <= ld1 and a2 <= ld2:
return False
ud1 = self.f_upper(d1)
ud2 = self.f_upper(d2)
if a1 >= ud1 and a1 >= ud2:
return False
return True
# Case 5
if self.dmin <= d1 <= self.dmax and d2 >= self.dmax:
ud1 = self.f_upper(d1)
if a1 >= ud1 or a2 <= self.amin:
return False
return True
raise ValueError('Should not reach this point!')
def __eq__(self, other):
return (self.weight_type == other.weight_type and
self.surface_definition == other.surface_definition and
self.nb_sets_from_hints == other.nb_sets_from_hints and
self.other_nb_sets == other.other_nb_sets and
self.additional_condition == other.additional_condition
)
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"weight_type": self.weight_type,
"surface_definition": self.surface_definition,
"nb_sets_from_hints": self.nb_sets_from_hints,
"other_nb_sets": self.other_nb_sets,
"additional_condition": self.additional_condition}
@classmethod
def from_dict(cls, dd):
return cls(weight_type=dd['weight_type'], surface_definition=dd['surface_definition'],
nb_sets_from_hints=dd['nb_sets_from_hints'], other_nb_sets=dd['other_nb_sets'],
additional_condition=dd['additional_condition'])
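# Example usage of DistanceAngleAreaNbSetWeight (illustrative) : with the default
# "standard_elliptic" surface definition, a neighbors set gets a weight of 1.0 if one of its
# source rectangles in the (normalized distance, normalized angle) plane crosses the area
# delimited by the distance bounds [1.2, 1.8] and angle bounds [0.1, 0.8], and 0.0 otherwise :
#
#   area_weight = DistanceAngleAreaNbSetWeight()
#   w = area_weight.weight(nb_set, structure_environments=se, cn_map=(6, 0), additional_info={})
#
# where nb_set and se are assumed to be a neighbors set and its StructureEnvironments object.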
class DistancePlateauNbSetWeight(NbSetWeight):
SHORT_NAME = 'DistancePlateauWeight'
def __init__(self, distance_function=None, weight_function=None):
if distance_function is None:
self.distance_function = {'type': 'normalized_distance'}
else:
self.distance_function = distance_function
if weight_function is None:
self.weight_function = {'function': 'inverse_smootherstep', 'options': {'lower': 0.2, 'upper': 0.4}}
else:
self.weight_function = weight_function
self.weight_rf = RatioFunction.from_dict(self.weight_function)
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
return self.weight_rf.eval(nb_set.distance_plateau())
    def __eq__(self, other):
        return (self.__class__ == other.__class__ and
                self.distance_function == other.distance_function and
                self.weight_function == other.weight_function)
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"distance_function": self.distance_function,
"weight_function": self.weight_function
}
@classmethod
def from_dict(cls, dd):
return cls(distance_function=dd['distance_function'], weight_function=dd['weight_function'])
class AnglePlateauNbSetWeight(NbSetWeight):
SHORT_NAME = 'AnglePlateauWeight'
def __init__(self, angle_function=None, weight_function=None):
if angle_function is None:
self.angle_function = {'type': 'normalized_angle'}
else:
self.angle_function = angle_function
if weight_function is None:
self.weight_function = {'function': 'inverse_smootherstep', 'options': {'lower': 0.05, 'upper': 0.15}}
else:
self.weight_function = weight_function
self.weight_rf = RatioFunction.from_dict(self.weight_function)
def weight(self, nb_set, structure_environments, cn_map=None, additional_info=None):
return self.weight_rf.eval(nb_set.angle_plateau())
    def __eq__(self, other):
        return (self.__class__ == other.__class__ and
                self.angle_function == other.angle_function and
                self.weight_function == other.weight_function)
def __ne__(self, other):
return not self == other
def as_dict(self):
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"angle_function": self.angle_function,
"weight_function": self.weight_function
}
@classmethod
def from_dict(cls, dd):
return cls(angle_function=dd['angle_function'], weight_function=dd['weight_function'])
class MultiUnlimitedWeightsChemenvStrategy(AbstractChemenvStrategy):
"""
MultiUnlimitedWeightsChemenvStrategy
"""
STRATEGY_DESCRIPTION = ' Multi Unlimited Weights ChemenvStrategy'
DEFAULT_CE_ESTIMATOR = {'function': 'power2_inverse_power2_decreasing',
'options': {'max_csm': 8.0}}
def __init__(self, structure_environments=None,
additional_condition=AbstractChemenvStrategy.AC.ONLY_ACB,
symmetry_measure_type=AbstractChemenvStrategy.DEFAULT_SYMMETRY_MEASURE_TYPE,
nb_set_weights=None,
ce_estimator=DEFAULT_CE_ESTIMATOR):
"""
Constructor for the MultiUnlimitedWeightsChemenvStrategy.
:param structure_environments: StructureEnvironments object containing all the information on the
coordination of the sites in a structure
"""
AbstractChemenvStrategy.__init__(self, structure_environments, symmetry_measure_type=symmetry_measure_type)
self._additional_condition = additional_condition
if nb_set_weights is None:
            raise ValueError('The nb_set_weights should be provided to the MultiUnlimitedWeightsChemenvStrategy')
self.nb_set_weights = nb_set_weights
self.ordered_weights = []
for nb_set_weight in self.nb_set_weights:
self.ordered_weights.append({'weight': nb_set_weight, 'name': nb_set_weight.SHORT_NAME})
self.ce_estimator = ce_estimator
self.ce_estimator_ratio_function = CSMInfiniteRatioFunction.from_dict(self.ce_estimator)
self.ce_estimator_fractions = self.ce_estimator_ratio_function.fractions
@property
def uniquely_determines_coordination_environments(self):
return False
def get_site_coordination_environments_fractions(self, site, isite=None, dequivsite=None, dthissite=None,
mysym=None, ordered=True, min_fraction=0.0, return_maps=True,
return_strategy_dict_info=False, return_all=False):
if isite is None or dequivsite is None or dthissite is None or mysym is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
site_nb_sets = self.structure_environments.neighbors_sets[isite]
if site_nb_sets is None:
return None
cn_maps = []
for cn, nb_sets in site_nb_sets.items():
for inb_set, nb_set in enumerate(nb_sets):
                # TODO: should the additional condition be checked here?
cn_maps.append((cn, inb_set))
weights_additional_info = {'weights': {isite: {}}}
for wdict in self.ordered_weights:
cn_maps_new = []
weight = wdict['weight']
weight_name = wdict['name']
for cn_map in cn_maps:
nb_set = site_nb_sets[cn_map[0]][cn_map[1]]
w_nb_set = weight.weight(nb_set=nb_set, structure_environments=self.structure_environments,
cn_map=cn_map, additional_info=weights_additional_info)
if cn_map not in weights_additional_info['weights'][isite]:
weights_additional_info['weights'][isite][cn_map] = {}
weights_additional_info['weights'][isite][cn_map][weight_name] = w_nb_set
if w_nb_set > 0.0:
cn_maps_new.append(cn_map)
cn_maps = cn_maps_new
for cn_map, weights in weights_additional_info['weights'][isite].items():
            weights_additional_info['weights'][isite][cn_map]['Product'] = np.product(list(weights.values()))
w_nb_sets = {cn_map: weights['Product']
for cn_map, weights in weights_additional_info['weights'][isite].items()}
        w_nb_sets_total = np.sum(list(w_nb_sets.values()))
nb_sets_fractions = {cn_map: w_nb_set / w_nb_sets_total for cn_map, w_nb_set in w_nb_sets.items()}
for cn_map in weights_additional_info['weights'][isite]:
weights_additional_info['weights'][isite][cn_map]['NbSetFraction'] = nb_sets_fractions[cn_map]
ce_symbols = []
ce_dicts = []
ce_fractions = []
ce_dict_fractions = []
ce_maps = []
site_ce_list = self.structure_environments.ce_list[isite]
if return_all:
for cn_map, nb_set_fraction in nb_sets_fractions.items():
cn = cn_map[0]
inb_set = cn_map[1]
site_ce_nb_set = site_ce_list[cn][inb_set]
if site_ce_nb_set is None:
continue
mingeoms = site_ce_nb_set.minimum_geometries(symmetry_measure_type=self.symmetry_measure_type)
if len(mingeoms) > 0:
csms = [ce_dict['other_symmetry_measures'][self.symmetry_measure_type]
for ce_symbol, ce_dict in mingeoms]
fractions = self.ce_estimator_fractions(csms)
if fractions is None:
ce_symbols.append('UNCLEAR:{:d}'.format(cn))
ce_dicts.append(None)
ce_fractions.append(nb_set_fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = None
dict_fractions['Fraction'] = nb_set_fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
for ifraction, fraction in enumerate(fractions):
ce_symbols.append(mingeoms[ifraction][0])
ce_dicts.append(mingeoms[ifraction][1])
ce_fractions.append(nb_set_fraction * fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = fraction
dict_fractions['Fraction'] = nb_set_fraction * fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
ce_symbols.append('UNCLEAR:{:d}'.format(cn))
ce_dicts.append(None)
ce_fractions.append(nb_set_fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = None
dict_fractions['Fraction'] = nb_set_fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
for cn_map, nb_set_fraction in nb_sets_fractions.items():
if nb_set_fraction > 0.0:
cn = cn_map[0]
inb_set = cn_map[1]
site_ce_nb_set = site_ce_list[cn][inb_set]
mingeoms = site_ce_nb_set.minimum_geometries(symmetry_measure_type=self._symmetry_measure_type)
csms = [ce_dict['other_symmetry_measures'][self._symmetry_measure_type]
for ce_symbol, ce_dict in mingeoms]
fractions = self.ce_estimator_fractions(csms)
for ifraction, fraction in enumerate(fractions):
if fraction > 0.0:
ce_symbols.append(mingeoms[ifraction][0])
ce_dicts.append(mingeoms[ifraction][1])
ce_fractions.append(nb_set_fraction * fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = fraction
dict_fractions['Fraction'] = nb_set_fraction * fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
if ordered:
indices = np.argsort(ce_fractions)[::-1]
else:
indices = list(range(len(ce_fractions)))
fractions_info_list = [
{'ce_symbol': ce_symbols[ii], 'ce_dict': ce_dicts[ii], 'ce_fraction': ce_fractions[ii]}
for ii in indices if ce_fractions[ii] >= min_fraction]
if return_maps:
for ifinfo, ii in enumerate(indices):
if ce_fractions[ii] >= min_fraction:
fractions_info_list[ifinfo]['ce_map'] = ce_maps[ii]
if return_strategy_dict_info:
for ifinfo, ii in enumerate(indices):
if ce_fractions[ii] >= min_fraction:
fractions_info_list[ifinfo]['strategy_info'] = ce_dict_fractions[ii]
return fractions_info_list
def get_site_coordination_environment(self, site):
pass
def get_site_neighbors(self, site):
pass
def get_site_coordination_environments(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_maps=False):
if isite is None or dequivsite is None or dthissite is None or mysym is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
return [self.get_site_coordination_environment(site=site, isite=isite, dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym, return_map=return_maps)]
def __eq__(self, other):
return (self.__class__.__name__ == other.__class__.__name__ and
self._additional_condition == other._additional_condition and
self.symmetry_measure_type == other.symmetry_measure_type and
self.nb_set_weights == other.nb_set_weights and
self.ce_estimator == other.ce_estimator)
def __ne__(self, other):
return not self == other
def as_dict(self):
"""
        Bson-serializable dict representation of the MultiUnlimitedWeightsChemenvStrategy object.
        :return: Bson-serializable dict representation of the MultiUnlimitedWeightsChemenvStrategy object.
"""
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"additional_condition": self._additional_condition,
"symmetry_measure_type": self.symmetry_measure_type,
"nb_set_weights": [nb_set_weight.as_dict() for nb_set_weight in self.nb_set_weights],
"ce_estimator": self.ce_estimator,
}
@classmethod
    def from_dict(cls, d):
        """
        Reconstructs the MultiUnlimitedWeightsChemenvStrategy object from a dict representation of the
        MultiUnlimitedWeightsChemenvStrategy object created using the as_dict method.
        :param d: dict representation of the MultiUnlimitedWeightsChemenvStrategy object
        :return: MultiUnlimitedWeightsChemenvStrategy object
        """
        from monty.json import MontyDecoder
        # The nb_set_weights are serialized as dicts by as_dict and have to be decoded back to NbSetWeight objects
        return cls(additional_condition=d["additional_condition"],
                   symmetry_measure_type=d["symmetry_measure_type"],
                   nb_set_weights=[MontyDecoder().process_decoded(nb_sw) for nb_sw in d["nb_set_weights"]],
                   ce_estimator=d["ce_estimator"])
class MultiWeightsChemenvStrategy(AbstractChemenvStrategy):
"""
MultiWeightsChemenvStrategy
"""
STRATEGY_DESCRIPTION = ' Multi Weights ChemenvStrategy'
# STRATEGY_INFO_FIELDS = ['cn_map_surface_fraction', 'cn_map_surface_weight',
# 'cn_map_mean_csm', 'cn_map_csm_weight',
# 'cn_map_delta_csm', 'cn_map_delta_csms_cn_map2', 'cn_map_delta_csm_weight',
# 'cn_map_cn_weight',
# 'cn_map_fraction', 'cn_map_ce_fraction', 'ce_fraction']
DEFAULT_CE_ESTIMATOR = {'function': 'power2_inverse_power2_decreasing',
'options': {'max_csm': 8.0}}
DEFAULT_DIST_ANG_AREA_WEIGHT = {}
def __init__(self, structure_environments=None,
additional_condition=AbstractChemenvStrategy.AC.ONLY_ACB,
symmetry_measure_type=AbstractChemenvStrategy.DEFAULT_SYMMETRY_MEASURE_TYPE,
dist_ang_area_weight=None,
self_csm_weight=None,
delta_csm_weight=None,
cn_bias_weight=None,
angle_weight=None,
normalized_angle_distance_weight=None,
ce_estimator=DEFAULT_CE_ESTIMATOR
):
"""
Constructor for the MultiWeightsChemenvStrategy.
:param structure_environments: StructureEnvironments object containing all the information on the
coordination of the sites in a structure
"""
AbstractChemenvStrategy.__init__(self, structure_environments, symmetry_measure_type=symmetry_measure_type)
self._additional_condition = additional_condition
self.dist_ang_area_weight = dist_ang_area_weight
self.angle_weight = angle_weight
self.normalized_angle_distance_weight = normalized_angle_distance_weight
self.self_csm_weight = self_csm_weight
self.delta_csm_weight = delta_csm_weight
self.cn_bias_weight = cn_bias_weight
self.ordered_weights = []
if dist_ang_area_weight is not None:
self.ordered_weights.append({'weight': dist_ang_area_weight, 'name': 'DistAngArea'})
if self_csm_weight is not None:
self.ordered_weights.append({'weight': self_csm_weight, 'name': 'SelfCSM'})
if delta_csm_weight is not None:
self.ordered_weights.append({'weight': delta_csm_weight, 'name': 'DeltaCSM'})
if cn_bias_weight is not None:
self.ordered_weights.append({'weight': cn_bias_weight, 'name': 'CNBias'})
if angle_weight is not None:
self.ordered_weights.append({'weight': angle_weight, 'name': 'Angle'})
if normalized_angle_distance_weight is not None:
self.ordered_weights.append({'weight': normalized_angle_distance_weight, 'name': 'NormalizedAngDist'})
self.ce_estimator = ce_estimator
self.ce_estimator_ratio_function = CSMInfiniteRatioFunction.from_dict(self.ce_estimator)
self.ce_estimator_fractions = self.ce_estimator_ratio_function.fractions
@classmethod
def stats_article_weights_parameters(cls):
self_csm_weight = SelfCSMNbSetWeight(weight_estimator={'function': 'power2_decreasing_exp',
'options': {'max_csm': 8.0,
'alpha': 1.0}})
surface_definition = {'type': 'standard_elliptic',
'distance_bounds': {'lower': 1.15, 'upper': 2.0},
'angle_bounds': {'lower': 0.05, 'upper': 0.75}}
da_area_weight = DistanceAngleAreaNbSetWeight(weight_type='has_intersection',
surface_definition=surface_definition,
nb_sets_from_hints='fallback_to_source',
other_nb_sets='0_weight',
additional_condition=DistanceAngleAreaNbSetWeight.AC.ONLY_ACB)
symmetry_measure_type = 'csm_wcs_ctwcc'
delta_weight = DeltaCSMNbSetWeight.delta_cn_specifics()
bias_weight = None
angle_weight = None
nad_weight = None
return cls(dist_ang_area_weight=da_area_weight,
self_csm_weight=self_csm_weight,
delta_csm_weight=delta_weight,
cn_bias_weight=bias_weight,
angle_weight=angle_weight,
normalized_angle_distance_weight=nad_weight,
symmetry_measure_type=symmetry_measure_type)
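    # Example (illustrative) : the weights used in the statistics article can be obtained with
    #
    #   strategy = MultiWeightsChemenvStrategy.stats_article_weights_parameters()
    #
    # which combines a distance-angle area weight, a self-csm weight and a delta-csm weight with
    # the 'csm_wcs_ctwcc' symmetry measure type.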
@property
def uniquely_determines_coordination_environments(self):
return False
def get_site_coordination_environments_fractions(self, site, isite=None, dequivsite=None, dthissite=None,
mysym=None, ordered=True, min_fraction=0.0, return_maps=True,
return_strategy_dict_info=False, return_all=False):
if isite is None or dequivsite is None or dthissite is None or mysym is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
site_nb_sets = self.structure_environments.neighbors_sets[isite]
if site_nb_sets is None:
return None
cn_maps = []
for cn, nb_sets in site_nb_sets.items():
for inb_set, nb_set in enumerate(nb_sets):
                # TODO: should the additional condition be checked here?
cn_maps.append((cn, inb_set))
weights_additional_info = {'weights': {isite: {}}}
for wdict in self.ordered_weights:
cn_maps_new = []
weight = wdict['weight']
weight_name = wdict['name']
for cn_map in cn_maps:
nb_set = site_nb_sets[cn_map[0]][cn_map[1]]
w_nb_set = weight.weight(nb_set=nb_set, structure_environments=self.structure_environments,
cn_map=cn_map, additional_info=weights_additional_info)
if cn_map not in weights_additional_info['weights'][isite]:
weights_additional_info['weights'][isite][cn_map] = {}
weights_additional_info['weights'][isite][cn_map][weight_name] = w_nb_set
if w_nb_set > 0.0:
cn_maps_new.append(cn_map)
cn_maps = cn_maps_new
for cn_map, weights in weights_additional_info['weights'][isite].items():
            weights_additional_info['weights'][isite][cn_map]['Product'] = np.prod(list(weights.values()))
w_nb_sets = {cn_map: weights['Product']
for cn_map, weights in weights_additional_info['weights'][isite].items()}
w_nb_sets_total = np.sum(list(w_nb_sets.values()))
nb_sets_fractions = {cn_map: w_nb_set / w_nb_sets_total for cn_map, w_nb_set in w_nb_sets.items()}
for cn_map in weights_additional_info['weights'][isite]:
weights_additional_info['weights'][isite][cn_map]['NbSetFraction'] = nb_sets_fractions[cn_map]
ce_symbols = []
ce_dicts = []
ce_fractions = []
ce_dict_fractions = []
ce_maps = []
site_ce_list = self.structure_environments.ce_list[isite]
if return_all:
for cn_map, nb_set_fraction in nb_sets_fractions.items():
cn = cn_map[0]
inb_set = cn_map[1]
site_ce_nb_set = site_ce_list[cn][inb_set]
if site_ce_nb_set is None:
continue
mingeoms = site_ce_nb_set.minimum_geometries(symmetry_measure_type=self.symmetry_measure_type)
if len(mingeoms) > 0:
csms = [ce_dict['other_symmetry_measures'][self.symmetry_measure_type]
for ce_symbol, ce_dict in mingeoms]
fractions = self.ce_estimator_fractions(csms)
if fractions is None:
ce_symbols.append('UNCLEAR:{:d}'.format(cn))
ce_dicts.append(None)
ce_fractions.append(nb_set_fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = None
dict_fractions['Fraction'] = nb_set_fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
for ifraction, fraction in enumerate(fractions):
ce_symbols.append(mingeoms[ifraction][0])
ce_dicts.append(mingeoms[ifraction][1])
ce_fractions.append(nb_set_fraction * fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = fraction
dict_fractions['Fraction'] = nb_set_fraction * fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
ce_symbols.append('UNCLEAR:{:d}'.format(cn))
ce_dicts.append(None)
ce_fractions.append(nb_set_fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = None
dict_fractions['Fraction'] = nb_set_fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
else:
for cn_map, nb_set_fraction in nb_sets_fractions.items():
if nb_set_fraction > 0.0:
cn = cn_map[0]
inb_set = cn_map[1]
site_ce_nb_set = site_ce_list[cn][inb_set]
mingeoms = site_ce_nb_set.minimum_geometries(symmetry_measure_type=self._symmetry_measure_type)
csms = [ce_dict['other_symmetry_measures'][self._symmetry_measure_type]
for ce_symbol, ce_dict in mingeoms]
fractions = self.ce_estimator_fractions(csms)
for ifraction, fraction in enumerate(fractions):
if fraction > 0.0:
ce_symbols.append(mingeoms[ifraction][0])
ce_dicts.append(mingeoms[ifraction][1])
ce_fractions.append(nb_set_fraction * fraction)
all_weights = weights_additional_info['weights'][isite][cn_map]
dict_fractions = {wname: wvalue for wname, wvalue in all_weights.items()}
dict_fractions['CEFraction'] = fraction
dict_fractions['Fraction'] = nb_set_fraction * fraction
ce_dict_fractions.append(dict_fractions)
ce_maps.append(cn_map)
if ordered:
indices = np.argsort(ce_fractions)[::-1]
else:
indices = list(range(len(ce_fractions)))
fractions_info_list = [
{'ce_symbol': ce_symbols[ii], 'ce_dict': ce_dicts[ii], 'ce_fraction': ce_fractions[ii]}
for ii in indices if ce_fractions[ii] >= min_fraction]
if return_maps:
for ifinfo, ii in enumerate(indices):
if ce_fractions[ii] >= min_fraction:
fractions_info_list[ifinfo]['ce_map'] = ce_maps[ii]
if return_strategy_dict_info:
for ifinfo, ii in enumerate(indices):
if ce_fractions[ii] >= min_fraction:
fractions_info_list[ifinfo]['strategy_info'] = ce_dict_fractions[ii]
return fractions_info_list
    def get_site_coordination_environment(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
                                          return_map=False):
        pass
def get_site_neighbors(self, site):
pass
def get_site_coordination_environments(self, site, isite=None, dequivsite=None, dthissite=None, mysym=None,
return_maps=False):
if isite is None or dequivsite is None or dthissite is None or mysym is None:
[isite, dequivsite, dthissite, mysym] = self.equivalent_site_index_and_transform(site)
return [self.get_site_coordination_environment(site=site, isite=isite, dequivsite=dequivsite,
dthissite=dthissite, mysym=mysym, return_map=return_maps)]
def __eq__(self, other):
return (self.__class__.__name__ == other.__class__.__name__ and
self._additional_condition == other._additional_condition and
self.symmetry_measure_type == other.symmetry_measure_type and
self.dist_ang_area_weight == other.dist_ang_area_weight and
self.self_csm_weight == other.self_csm_weight and
self.delta_csm_weight == other.delta_csm_weight and
self.cn_bias_weight == other.cn_bias_weight and
self.angle_weight == other.angle_weight and
self.normalized_angle_distance_weight == other.normalized_angle_distance_weight and
self.ce_estimator == other.ce_estimator)
def __ne__(self, other):
return not self == other
def as_dict(self):
"""
Bson-serializable dict representation of the MultiWeightsChemenvStrategy object.
:return: Bson-serializable dict representation of the MultiWeightsChemenvStrategy object.
"""
return {"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"additional_condition": self._additional_condition,
"symmetry_measure_type": self.symmetry_measure_type,
"dist_ang_area_weight": self.dist_ang_area_weight.as_dict()
if self.dist_ang_area_weight is not None else None,
"self_csm_weight": self.self_csm_weight.as_dict()
if self.self_csm_weight is not None else None,
"delta_csm_weight": self.delta_csm_weight.as_dict()
if self.delta_csm_weight is not None else None,
"cn_bias_weight": self.cn_bias_weight.as_dict()
if self.cn_bias_weight is not None else None,
"angle_weight": self.angle_weight.as_dict()
if self.angle_weight is not None else None,
"normalized_angle_distance_weight": self.normalized_angle_distance_weight.as_dict()
if self.normalized_angle_distance_weight is not None else None,
"ce_estimator": self.ce_estimator,
}
@classmethod
def from_dict(cls, d):
"""
Reconstructs the MultiWeightsChemenvStrategy object from a dict representation of the
        MultiWeightsChemenvStrategy object created using the as_dict method.
:param d: dict representation of the MultiWeightsChemenvStrategy object
:return: MultiWeightsChemenvStrategy object
"""
if d["normalized_angle_distance_weight"] is not None:
nad_w = NormalizedAngleDistanceNbSetWeight.from_dict(d["normalized_angle_distance_weight"])
else:
nad_w = None
return cls(additional_condition=d["additional_condition"],
symmetry_measure_type=d["symmetry_measure_type"],
dist_ang_area_weight=DistanceAngleAreaNbSetWeight.from_dict(d["dist_ang_area_weight"])
if d["dist_ang_area_weight"] is not None else None,
self_csm_weight=SelfCSMNbSetWeight.from_dict(d["self_csm_weight"])
if d["self_csm_weight"] is not None else None,
delta_csm_weight=DeltaCSMNbSetWeight.from_dict(d["delta_csm_weight"])
if d["delta_csm_weight"] is not None else None,
cn_bias_weight=CNBiasNbSetWeight.from_dict(d["cn_bias_weight"])
if d["cn_bias_weight"] is not None else None,
angle_weight=AngleNbSetWeight.from_dict(d["angle_weight"])
if d["angle_weight"] is not None else None,
normalized_angle_distance_weight=nad_w,
ce_estimator=d["ce_estimator"])
|
gpetretto/pymatgen
|
pymatgen/analysis/chemenv/coordination_environments/chemenv_strategies.py
|
Python
|
mit
| 98,578
|
[
"pymatgen"
] |
3ab59c79c2febe89e24f42918a673966aec66270424486eed1bb2bc832dfd01d
|
# Copyright 2015 Allen Institute for Brain Science
# This file is part of Allen SDK.
#
# Allen SDK is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# Allen SDK is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Allen SDK. If not, see <http://www.gnu.org/licenses/>.
from allensdk.api.api import Api
import os, json
from collections import OrderedDict
class BiophysicalPerisomaticApi(Api):
_NWB_file_type = 'NWB'
_SWC_file_type = '3DNeuronReconstruction'
_MOD_file_type = 'BiophysicalModelDescription'
_FIT_file_type = 'NeuronalModelParameters'
def __init__(self, base_uri=None):
super(BiophysicalPerisomaticApi, self).__init__(base_uri)
self.cache_stimulus = True
self.ids = {}
self.sweeps = []
self.manifest = {}
def build_rma(self, neuronal_model_id, fmt='json'):
'''Construct a query to find all files related to a neuronal model.
Parameters
----------
neuronal_model_id : integer or string representation
key of experiment to retrieve.
fmt : string, optional
json (default) or xml
Returns
-------
string
RMA query url.
'''
include_associations = ''.join([
'neuronal_model_template(well_known_files(well_known_file_type)),',
'specimen',
'(ephys_result(well_known_files(well_known_file_type)),'
'neuron_reconstructions(well_known_files(well_known_file_type)),',
'ephys_sweeps),',
'well_known_files(well_known_file_type)'])
criteria_associations = ''.join([
("[id$eq%d]," % (neuronal_model_id)),
include_associations])
return ''.join([self.rma_endpoint,
'/query.',
fmt,
'?q=',
'model::NeuronalModel,',
'rma::criteria,',
criteria_associations,
',rma::include,',
include_associations])
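    # For a hypothetical neuronal_model_id of 1234, build_rma yields a URL of
    # the form:
    #   <rma_endpoint>/query.json?q=model::NeuronalModel,rma::criteria,
    #   [id$eq1234],<associations>,rma::include,<associations>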
def read_json(self, json_parsed_data):
        '''Get the lists of well_known_file ids from a response body
        containing nested neuronal model, specimen and well_known_files.
Parameters
----------
json_parsed_data : dict
Response from the Allen Institute Api RMA.
Returns
-------
        dict
            Well known file ids keyed by category ('stimulus', 'morphology', 'modfiles', 'fit').
'''
self.ids = {
'stimulus': {},
'morphology': {},
'modfiles': {},
'fit': {}
}
self.sweeps = []
if 'msg' in json_parsed_data:
for neuronal_model in json_parsed_data['msg']:
if 'well_known_files' in neuronal_model:
for well_known_file in neuronal_model['well_known_files']:
if ('id' in well_known_file and
'path' in well_known_file and
self.is_well_known_file_type(well_known_file,
BiophysicalPerisomaticApi._FIT_file_type)):
self.ids['fit'][str(well_known_file['id'])] = \
os.path.split(well_known_file['path'])[1]
if 'neuronal_model_template' in neuronal_model:
neuronal_model_template = neuronal_model['neuronal_model_template']
if 'well_known_files' in neuronal_model_template:
for well_known_file in neuronal_model_template['well_known_files']:
if ('id' in well_known_file and
'path' in well_known_file and
self.is_well_known_file_type(well_known_file,
BiophysicalPerisomaticApi._MOD_file_type)):
self.ids['modfiles'][str(well_known_file['id'])] = \
os.path.join('modfiles',
os.path.split(well_known_file['path'])[1])
if 'specimen' in neuronal_model:
specimen = neuronal_model['specimen']
if 'neuron_reconstructions' in specimen:
for neuron_reconstruction in specimen['neuron_reconstructions']:
if 'well_known_files' in neuron_reconstruction:
for well_known_file in neuron_reconstruction['well_known_files']:
if ('id' in well_known_file and
'path' in well_known_file and
self.is_well_known_file_type(well_known_file,
BiophysicalPerisomaticApi._SWC_file_type)):
self.ids['morphology'][str(well_known_file['id'])] = \
os.path.split(well_known_file['path'])[1]
if 'ephys_result' in specimen:
ephys_result = specimen['ephys_result']
if 'well_known_files' in ephys_result:
for well_known_file in ephys_result['well_known_files']:
if ('id' in well_known_file and
'path' in well_known_file and
self.is_well_known_file_type(well_known_file,
BiophysicalPerisomaticApi._NWB_file_type)):
self.ids['stimulus'][str(well_known_file['id'])] = \
"%d.nwb" % (ephys_result['id'])
self.sweeps = [sweep['sweep_number']
for sweep in specimen['ephys_sweeps']
if sweep['stimulus_name'] != 'Test']
return self.ids
def is_well_known_file_type(self, wkf, name):
'''Check if a structure has the expected name.
Parameters
----------
wkf : dict
A well-known-file structure with nested type information.
name : string
The expected type name
See Also
--------
read_json: where this helper function is used.
'''
try:
return wkf['well_known_file_type']['name'] == name
        except (KeyError, TypeError):
return False
def get_well_known_file_ids(self, neuronal_model_id):
'''Query the current RMA endpoint with a neuronal_model id
to get the corresponding well known file ids.
Returns
-------
        dict
            A dict of well known file ids keyed by file category.
'''
rma_builder_fn = self.build_rma
json_traversal_fn = self.read_json
return self.do_query(rma_builder_fn, json_traversal_fn, neuronal_model_id)
def create_manifest(self,
fit_path='',
stimulus_filename='',
swc_morphology_path='',
sweeps=[]):
        '''Generate a json configuration file with parameters for
        a biophysical experiment.
Parameters
----------
fit_path : string
filename of a json configuration file with cell parameters.
stimulus_filename : string
path to an NWB file with input currents.
swc_morphology_path : string
file in SWC format.
sweeps : array of integers
which sweeps in the stimulus file are to be used.
'''
self.manifest = OrderedDict()
self.manifest['biophys'] = [{
'model_file': [ 'manifest.json', fit_path ]
}]
self.manifest['runs'] = [{
'sweeps': sweeps
}]
self.manifest['neuron'] = [{
'hoc': [ 'stdgui.hoc', 'import3d.hoc' ]
}]
self.manifest['manifest'] = [
{
'type': 'dir',
'spec': '.',
'key': 'BASEDIR'
},
{
'type': 'dir',
'spec': 'work',
'key': 'WORKDIR',
'parent': 'BASEDIR'
},
{
'type': 'file',
'spec': swc_morphology_path,
'key': 'MORPHOLOGY'
},
{
'type': 'dir',
'spec': 'modfiles',
'key': 'MODFILE_DIR'
},
{
'type': 'file',
'format': 'NWB',
'spec': stimulus_filename,
'key': 'stimulus_path'
},
{
'parent_key': 'WORKDIR',
'type': 'file',
'format': 'NWB',
'spec': stimulus_filename,
'key': 'output'
}
]
def cache_data(self,
neuronal_model_id,
working_directory=None):
        '''Take an experiment id, query the API RMA to get well-known files,
        download the files, and store them in the working directory.
Parameters
----------
neuronal_model_id : int or string representation
found in the neuronal_model table in the api
working_directory : string
Absolute path name where the downloaded well-known files will be stored.
'''
if working_directory is None:
working_directory = self.default_working_directory
try:
os.stat(working_directory)
        except OSError:
os.mkdir(working_directory)
work_dir = os.path.join(working_directory, 'work')
try:
os.stat(work_dir)
        except OSError:
os.mkdir(work_dir)
modfile_dir = os.path.join(working_directory, 'modfiles')
try:
os.stat(modfile_dir)
        except OSError:
os.mkdir(modfile_dir)
well_known_file_id_dict = self.get_well_known_file_ids(neuronal_model_id)
for key, id_dict in well_known_file_id_dict.items():
if (not self.cache_stimulus) and (key == 'stimulus'):
continue
for well_known_id, filename in id_dict.items():
well_known_file_url = self.construct_well_known_file_download_url(well_known_id)
cached_file_path = os.path.join(working_directory, filename)
self.retrieve_file_over_http(well_known_file_url, cached_file_path)
fit_path = self.ids['fit'].values()[0]
stimulus_filename = self.ids['stimulus'].values()[0]
swc_morphology_path = self.ids['morphology'].values()[0]
sweeps = sorted(self.sweeps)
self.create_manifest(fit_path,
stimulus_filename,
swc_morphology_path,
sweeps)
manifest_path = os.path.join(working_directory, 'manifest.json')
with open(manifest_path, 'wb') as f:
f.write(json.dumps(self.manifest, indent=2))
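if __name__ == '__main__':
    # Usage sketch: 472451419 is an example neuronal model id (hedged: any
    # valid id from the API's neuronal_model table would do). This downloads
    # the model files and writes manifest.json into 'neuronal_model'.
    bp = BiophysicalPerisomaticApi()
    bp.cache_data(472451419, working_directory='neuronal_model')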
|
wvangeit/AllenSDK
|
allensdk/api/queries/biophysical_perisomatic_api.py
|
Python
|
gpl-3.0
| 12,180
|
[
"NEURON"
] |
3b0119cbbe8c9c8c8b9ed2bd895a6e2910b9b96c3f3431adc9d105146d9b3739
|
r"""
===============
Decoding (MVPA)
===============
.. contents:: Contents
:local:
:depth: 3
.. include:: ../../links.inc
Design philosophy
=================
Decoding (a.k.a. MVPA) in MNE largely follows the machine
learning API of the scikit-learn package.
Each estimator implements ``fit``, ``transform``, ``fit_transform``, and
(optionally) ``inverse_transform`` methods. For more details on this design,
visit scikit-learn_. For additional theoretical insights into the decoding
framework in MNE, see :footcite:`KingEtAl2018`.
For ease of comprehension, we will denote instantiations of the class using
the same name as the class, but in lower case instead of CamelCase.
Let's start by loading data for a simple two-class problem:
"""
# sphinx_gallery_thumbnail_number = 6
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value like 40 Hz would be used, but here we
# low-pass at 20 Hz so we can decimate more heavily and allow the example to
# run faster. The 2 Hz high-pass helps improve CSP.
raw.filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up pick list: gradiometers + EOG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=10)
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
del raw
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2]  # target: auditory vs. visual
###############################################################################
# Transformation classes
# ======================
#
# Scaler
# ^^^^^^
# The :class:`mne.decoding.Scaler` will standardize the data based on channel
# scales. In the simplest modes ``scalings=None`` or ``scalings=dict(...)``,
# each data channel type (e.g., mag, grad, eeg) is treated separately and
# scaled by a constant. This is the approach used by e.g.,
# :func:`mne.compute_covariance` to standardize channel scales.
#
# If ``scalings='mean'`` or ``scalings='median'``, each channel is scaled using
# empirical measures. Each channel is scaled independently by the mean and
# standard deviation, or median and interquartile range, respectively, across
# all epochs and time points during :class:`~mne.decoding.Scaler.fit`
# (during training). The :meth:`~mne.decoding.Scaler.transform` method is
# called to transform data (training or test set) by scaling all time points
# and epochs on a channel-by-channel basis. To perform both the ``fit`` and
# ``transform`` operations in a single call, the
# :meth:`~mne.decoding.Scaler.fit_transform` method may be used. To invert the
# transform, :meth:`~mne.decoding.Scaler.inverse_transform` can be used. For
# ``scalings='median'``, scikit-learn_ version 0.17+ is required.
#
# .. note:: Using this class is different from directly applying
# :class:`sklearn.preprocessing.StandardScaler` or
# :class:`sklearn.preprocessing.RobustScaler` offered by
# scikit-learn_. These scale each *classification feature*, e.g.
# each time point for each channel, with mean and standard
# deviation computed across epochs, whereas
# :class:`mne.decoding.Scaler` scales each *channel* using mean and
# standard deviation computed across all of its time points
# and epochs.
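# A quick sketch using the ``X``/``y`` arrays from above: ``fit_transform`` and
# ``inverse_transform`` are symmetric, so the channel scaling can be undone.
scaler = Scaler(epochs.info)
X_scaled = scaler.fit_transform(X, y)
X_restored = scaler.inverse_transform(X_scaled)  # back on the original scale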
#
# Vectorizer
# ^^^^^^^^^^
# The scikit-learn API provides functionality to chain transformers and
# estimators by using :class:`sklearn.pipeline.Pipeline`. We can construct
# decoding pipelines and perform cross-validation and grid-search. However,
# scikit-learn transformers and estimators generally expect 2D data
# transformers and estimators generally expect 2D data
# (n_samples * n_features), whereas MNE transformers typically output data
# with a higher dimensionality
# (e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer
# therefore needs to be applied between the MNE and the scikit-learn steps
# like:
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
###############################################################################
# PSDEstimator
# ^^^^^^^^^^^^
# The :class:`mne.decoding.PSDEstimator`
# computes the power spectral density (PSD) using the multitaper
# method. It takes a 3D array as input, converts it into 2D and computes the
# PSD.
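# A short sketch (assuming the ``X``/``y`` arrays from above): decoding from
# multitaper power spectra instead of raw time courses.
psd_clf = make_pipeline(
    mne.decoding.PSDEstimator(sfreq=epochs.info['sfreq'], fmin=2., fmax=20.),
    Vectorizer(),
    LogisticRegression(solver='lbfgs'))
psd_scores = cross_val_multiscore(psd_clf, X, y, cv=5, n_jobs=1)
print('PSD: %0.1f%%' % (100 * psd_scores.mean(),))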
#
# FilterEstimator
# ^^^^^^^^^^^^^^^
# The :class:`mne.decoding.FilterEstimator` filters the 3D epochs data.
#
# Spatial filters
# ===============
#
# Just like temporal filters, spatial filters provide weights to modify the
# data along the sensor dimension. They are popular in the BCI community
# because of their simplicity and ability to distinguish spatially-separated
# neural activity.
#
# Common spatial pattern
# ^^^^^^^^^^^^^^^^^^^^^^
#
# :class:`mne.decoding.CSP` is a technique to analyze multichannel data based
# on recordings from two classes :footcite:`Koles1991` (see also
# https://en.wikipedia.org/wiki/Common_spatial_pattern).
#
# Let :math:`X \in R^{C\times T}` be a segment of data with
# :math:`C` channels and :math:`T` time points. The data at a single time point
# is denoted by :math:`x(t)` such that :math:`X=[x(t), x(t+1), ..., x(t+T-1)]`.
# Common spatial pattern (CSP) finds a decomposition that projects the signal
# in the original sensor space to CSP space using the following transformation:
#
# .. math:: x_{CSP}(t) = W^{T}x(t)
# :label: csp
#
# where each column of :math:`W \in R^{C\times C}` is a spatial filter and each
# row of :math:`x_{CSP}` is a CSP component. The matrix :math:`W` is also
# called the de-mixing matrix in other contexts. Let
# :math:`\Sigma^{+} \in R^{C\times C}` and :math:`\Sigma^{-} \in R^{C\times C}`
# be the estimates of the covariance matrices of the two conditions.
# CSP analysis is given by the simultaneous diagonalization of the two
# covariance matrices
#
# .. math:: W^{T}\Sigma^{+}W = \lambda^{+}
# :label: diagonalize_p
# .. math:: W^{T}\Sigma^{-}W = \lambda^{-}
# :label: diagonalize_n
#
# where :math:`\lambda^{C}` is a diagonal matrix whose entries are the
# eigenvalues of the following generalized eigenvalue problem
#
# .. math:: \Sigma^{+}w = \lambda \Sigma^{-}w
# :label: eigen_problem
#
# Large entries in the diagonal matrix correspond to spatial filters that
# give high variance in one class but low variance in the other. Thus, the
# filter facilitates discrimination between the two classes.
#
# .. topic:: Examples
#
# * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`
# * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_csp_timefreq.py`
#
# .. note::
#
# The winning entry of the Grasp-and-lift EEG competition in Kaggle used
# the :class:`~mne.decoding.CSP` implementation in MNE and was featured as
# a `script of the week <sotw_>`_.
#
# .. _sotw: http://blog.kaggle.com/2015/08/12/july-2015-scripts-of-the-week/
#
# We can use CSP with these data with:
csp = CSP(n_components=3, norm_trace=False)
clf_csp = make_pipeline(csp, LinearModel(LogisticRegression(solver='lbfgs')))
scores = cross_val_multiscore(clf_csp, X, y, cv=5, n_jobs=1)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
###############################################################################
# Source power comodulation (SPoC)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Source Power Comodulation (:class:`mne.decoding.SPoC`)
# :footcite:`DahneEtAl2014` identifies the composition of
# orthogonal spatial filters that maximally correlate with a continuous target.
#
# SPoC can be seen as an extension of the CSP where the target is driven by a
# continuous variable rather than a discrete variable. Typical applications
# include extraction of motor patterns using EMG power or audio patterns using
# sound envelope.
#
# .. topic:: Examples
#
# * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_spoc_CMC.py`
#
# xDAWN
# ^^^^^
# :class:`mne.preprocessing.Xdawn` is a spatial filtering method designed to
# improve the signal to signal-plus-noise ratio (SSNR) of the ERP responses
# :footcite:`RivetEtAl2009`. Xdawn was originally
# designed for the P300 evoked potential by enhancing the target response with
# respect to the non-target response. The implementation in MNE-Python is a
# generalization to any type of ERP.
#
# .. topic:: Examples
#
# * :ref:`sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py`
# * :ref:`sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py`
#
# Effect-matched spatial filtering
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# The result of :class:`mne.decoding.EMS` is a spatial filter at each time
# point and a corresponding time course :footcite:`SchurgerEtAl2013`.
# Intuitively, the result gives the similarity between the filter at
# each time point and the data vector (sensors) at that time point.
#
# .. topic:: Examples
#
# * :ref:`sphx_glr_auto_examples_decoding_plot_ems_filtering.py`
#
# Patterns vs. filters
# ^^^^^^^^^^^^^^^^^^^^
#
# When interpreting the components of the CSP (or spatial filters in general),
# it is often more intuitive to think about how :math:`x(t)` is composed of
# the different CSP components :math:`x_{CSP}(t)`. In other words, we can
# rewrite Equation :eq:`csp` as follows:
#
# .. math:: x(t) = (W^{-1})^{T}x_{CSP}(t)
# :label: patterns
#
# The columns of the matrix :math:`(W^{-1})^T` are called spatial patterns.
# This is also called the mixing matrix. The example
# :ref:`sphx_glr_auto_examples_decoding_plot_linear_model_patterns.py`
# discusses the difference between patterns and filters.
#
# These can be plotted with:
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
###############################################################################
# Decoding over time
# ==================
#
# This strategy consists of fitting a multivariate predictive model on each
# time instant and evaluating its performance at the same instant on new
# epochs. The :class:`mne.decoding.SlidingEstimator` will take as input a
# pair of features :math:`X` and targets :math:`y`, where :math:`X` has
# more than 2 dimensions. For decoding over time the data :math:`X`
# is the epochs data of shape n_epochs x n_channels x n_times. As the
# last dimension of :math:`X` is the time, an estimator will be fit
# on every time instant.
#
# This approach is analogous to SlidingEstimator-based approaches in fMRI,
# where we are interested in when one can discriminate experimental
# conditions and therefore figure out when the effect of interest happens.
#
# When working with linear models as estimators, this approach boils
# down to estimating a discriminative spatial filter for each time instant.
#
# Temporal decoding
# ^^^^^^^^^^^^^^^^^
#
# We'll use logistic regression as the machine learning model for binary
# classification. We will train the classifier on all left visual vs. auditory
# trials on MEG.
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
###############################################################################
# You can retrieve the spatial filters and spatial patterns if you explicitly
# use a LinearModel
clf = make_pipeline(StandardScaler(),
LinearModel(LogisticRegression(solver='lbfgs')))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked_time_gen.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
###############################################################################
# Temporal generalization
# ^^^^^^^^^^^^^^^^^^^^^^^
#
# Temporal generalization is an extension of the decoding over time approach.
# It consists in evaluating whether the model estimated at a particular
# time instant accurately predicts any other time instant. It is analogous to
# transferring a trained model to a distinct learning problem, where the
# problems correspond to decoding the patterns of brain activity recorded at
# distinct time instants.
#
# The object to use for temporal generalization is
# :class:`mne.decoding.GeneralizingEstimator`. It expects as input :math:`X`
# and :math:`y` (similarly to :class:`~mne.decoding.SlidingEstimator`) but
# generates predictions from each model for all time instants. The class
# :class:`~mne.decoding.GeneralizingEstimator` is generic and will treat the
# last dimension as the one to be used for generalization testing. For
# convenience, here we refer to these as different tasks. If :math:`X`
# corresponds to epochs data then the last dimension is time.
#
# This runs the analysis used in :footcite:`KingEtAl2014` and further detailed
# in :footcite:`KingDehaene2014`:
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
verbose=True)
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
###############################################################################
# Plot the full (generalization) matrix:
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
###############################################################################
# Projecting sensor-space patterns to source space
# ================================================
# If you use a linear classifier (or regressor) for your data, you can also
# project these to source space. For example, using our ``evoked_time_gen``
# from before:
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
inv = mne.minimum_norm.make_inverse_operator(
evoked_time_gen.info, fwd, cov, loose=0.)
stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')
del fwd, inv
###############################################################################
# And this can be visualized using :meth:`stc.plot <mne.SourceEstimate.plot>`:
brain = stc.plot(hemi='split', views=('lat', 'med'), initial_time=0.1,
subjects_dir=subjects_dir)
###############################################################################
# Source-space decoding
# =====================
#
# Source space decoding is also possible, but because the number of features
# can be much larger than in the sensor space, univariate feature selection
# using ANOVA f-test (or some other metric) can be done to reduce the feature
# dimension. Interpreting decoding results might be easier in source space as
# compared to sensor space.
#
# .. topic:: Examples
#
# * :ref:`tut_dec_st_source`
#
# Exercise
# ========
#
# - Explore other datasets from MNE (e.g. Face dataset from SPM to predict
# Face vs. Scrambled)
#
# References
# ==========
# .. footbibliography::
|
mne-tools/mne-tools.github.io
|
0.21/_downloads/794cfa7d3963d16b6e76d03bfa14d1e0/plot_sensors_decoding.py
|
Python
|
bsd-3-clause
| 17,587
|
[
"VisIt"
] |
c5f704054fcd39f4f46f0977f573e79b46dd64d6a2aa3b7a07b47c6099140ab2
|
from pylab import *
from numpy import *
from scipy.io.netcdf import netcdf_file
from datetime import datetime, time
from funcs import *
from read_NewTable import tshck, tini_icme, tend_icme, tini_mc, tend_mc, n_icmes, MCsig
from ShiftTimes import *
def thetacond(ThetaThres, ThetaSh):
if ThetaThres<0.:
return ones(len(ThetaSh), dtype=bool)
else:
return (ThetaSh > ThetaThres)
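# e.g. thetacond(130., ThetaSh) keeps only the events whose shock angle exceeds
# 130 deg, while a negative threshold (e.g. -1) disables the filter entirely.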
day = 86400.
#fname_omni = '../../../../../../data_omni/jan1996_jan2010/binary_format/omni_1996-2010.nc'
#fname_ace = '../../../../../../../data_ace/64sec_mag-swepam/ace.1998-2014.nc'
fname_aceMulti = '../../../../../../../data_ace/1hr_multi/ace.1998-2013.nc'
fname_events = '../../../../data_317events_iii.nc'
#f_ace = netcdf_file(fname_ace, 'r')
f_aceMulti = netcdf_file(fname_aceMulti, 'r')
f_events = netcdf_file(fname_events, 'r')
print " -------> archivos leidos!"
#----------------------------------------------------------
DayInMin= 24*60.
dtmin   = 2.88          # [min] minimum window over which to average
dTmin   = 0.1*DayInMin  # [min] (*) minimum duration required for the sheaths
                        # that enter the average.
                        # (*) this should give 0.1 days for nbin=50 and dtmin=2.88.
dTday   = dTmin*60./day # [day] same as 'dTmin' but in days
print " leyendo tiempo..."
t_utc = utc_from_omni(f_aceMulti)
print " Ready."
#------------------------------------ EVENTOS
#MCsig = array(f_events.variables['MC_sig'].data) # 2,1,0: MC, rotation, irregular
#Vnsh = array(f_events.variables['wang_Vsh'].data) # shock normal velocity
#ThetaSh = array(f_events.variables['wang_theta_shock'].data) # shock normal angle
#------------------------------------
#++++++++++++++++++++ EVENT SELECTION +++++++++++++++++++++++++
#ThetaThres = 130 #-1 #130.
#ThetaCond = thetacond(ThetaThres, ThetaSh)
BETW1998_2006 = ones(n_icmes, dtype=bool)
for i in range(307, n_icmes)+range(0, 26):
    BETW1998_2006[i]=False # 'False' to exclude events
SELECC = BETW1998_2006
# select only the events with flag in 'MCwant'
#MCwant = {'flags': ('2', '2H'),
#          'alias': '2.2H'} # to "flag" the name/path of the figures
MCwant = {'flags': ('2',),
          'alias': '2'} # to "flag" the name/path of the figures
for i in range(n_icmes):
cc = MCsig[i] in MCwant['flags']
SELECC[i] &= cc
#SELECC = ((MCsig>=MCwant) & (BETW1998_2006)) #& (ThetaCond)
# exclude events with 2 MCs
EVENTS_with_2MCs= (26, 148, 259, 295)
MCmultiple = False # to "flag" the name/path of the figures
for i in EVENTS_with_2MCs:
    if not MCmultiple: SELECC[i] &= False  # (bitwise ~ on a bool is always truthy, so use 'not')
#++++++++++++++++++++ EDGE CORRECTION +++++++++++++++++++++++++++++
# IMPORTANT:
# Only valid for the "63 events" (MCflag='2', and visible in ACE)
CorrShift = False#True
if CorrShift:
ShiftCorrection(ShiftDts, tshck)
ShiftCorrection(ShiftDts, tini_icme)
ShiftCorrection(ShiftDts, tend_icme)
ShiftCorrection(ShiftDts, tini_mc)
ShiftCorrection(ShiftDts, tend_mc)
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"""B = array(f_ace.variables['Bmag'].data)
Vsw = array(f_ace.variables['Vp'].data)
Temp = array(f_ace.variables['Tp'].data)
Pcc = array(f_ace.variables['Np'].data)
rmsB = array(f_ace.variables['dBrms'].data)
alphar = array(f_ace.variables['Alpha_ratio'].data)
beta = calc_beta(Temp, Pcc, B)
rmsBoB = rmsB/B"""
O7O6 = array(f_aceMulti.variables['O7toO6'].data)
print " -------> variables leidas!"
#------------------------------------ VARIABLES
VARS = []
# variable, file name, vertical limits, ylabel
VARS += [[O7O6, 'o7o6', [0., 1.5], 'O7/O6 [1]']]
"""VARS += [[B, 'B', [5., 18.], 'B [nT]']]
VARS += [[Vsw, 'V', [380., 650.], 'Vsw [km/s]']]
VARS += [[beta, 'beta', [0.001, 5.], '$\\beta$ [1]']]
VARS += [[Pcc, 'Pcc', [2, 17.], 'proton density [#/cc]']]
VARS += [[Temp, 'Temp', [1e4, 4e5], 'Temp [K]']]
VARS += [[rmsBoB, 'rmsBoB', [0.01, 0.1], 'rms($\hat B$/|B|) [1]']]
VARS += [[alphar, 'AlphaRatio', [0.02, 0.1], 'alpha ratio [1]']]"""
nvars = len(VARS)
#---------
nbefore = 1
nafter = 1
nbin = (1 + nbefore + nafter)*50 # [1] number of bins wanted in the average profile
fgap = 0.2 # fraction of gap tolerated
# nEnough: number of events contributing good data in 80% of the window
dVARS, nEnough, Enough = avrs_and_stds([nbefore, nafter],
SELECC,
n_icmes, tini_mc, tend_mc, dTday, nbin, t_utc, VARS, fgap)
#------------------------------------
tnorm = dVARS[0][2]
#-------------------------------------------------------------
##
|
jimsrc/seatos
|
sheaths/src/forbush/rebineo_o7o6.py
|
Python
|
mit
| 4,551
|
[
"NetCDF"
] |
658c08409c63bf17c5ee65afc40caa7b122f3dae42c9a57f71bec767a536d08a
|
import tensorflow as tf
import re
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_boolean('use_fp16', False,
"""Train the model using fp16.""")
TOWER_NAME = 'tower'
NUM_CLASSES = 3
def _variable_on_cpu(name, shape, initializer):
"""Helper to create a Variable stored on CPU memory.
Args:
name: name of the variable
shape: list of ints
initializer: initializer for Variable
Returns:
Variable Tensor
"""
with tf.device('/cpu:0'):
dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
return var
def _variable_with_weight_decay(name, shape, stddev, wd):
"""Helper to create an initialized Variable with weight decay.
Note that the Variable is initialized with a truncated normal distribution.
A weight decay is added only if one is specified.
Args:
name: name of the variable
shape: list of ints
stddev: standard deviation of a truncated Gaussian
wd: add L2Loss weight decay multiplied by this float. If None, weight
decay is not added for this Variable.
Returns:
Variable Tensor
"""
dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
var = _variable_on_cpu(
name,
shape,
tf.truncated_normal_initializer(stddev=stddev, dtype=dtype))
if wd is not None:
weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
tf.add_to_collection('losses', weight_decay)
return var
def _variable_with_weight_decay_xavier(name, shape, wd):
"""Helper to create an initialized Variable with weight decay.
    Note that the Variable is initialized with a Xavier (Glorot) initializer.
    A weight decay is added only if one is specified.
    Args:
      name: name of the variable
      shape: list of ints
wd: add L2Loss weight decay multiplied by this float. If None, weight
decay is not added for this Variable.
Returns:
Variable Tensor
"""
dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
var = _variable_on_cpu(
name,
shape,
tf.contrib.layers.xavier_initializer())
if wd is not None:
weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
tf.add_to_collection('losses', weight_decay)
return var
def _activation_summary(x):
"""Helper to create summaries for activations.
Creates a summary that provides a histogram of activations.
Creates a summary that measures the sparsity of activations.
Args:
x: Tensor
Returns:
nothing
"""
# Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
# session. This helps the clarity of presentation on tensorboard.
tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
tf.summary.histogram(tensor_name + '/activations', x)
tf.summary.scalar(tensor_name + '/sparsity',
tf.nn.zero_fraction(x))
def inference(txts, dropout_keep_prob=1.0):
"""Build the cnn based sentiment prediction model.
Args:
txts: text returned from get_inputs().
Returns:
Logits.
"""
# We instantiate all variables using tf.get_variable() instead of
# tf.Variable() in order to share variables across multiple GPU training runs.
# If we only ran this model on a single GPU, we could simplify this function
# by replacing all instances of tf.get_variable() with tf.Variable().
#
sequence_length = 60
embedding_size = 400
num_filters = 64
filter_sizes = [2, 3, 4, 5]
pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
with tf.variable_scope("conv-maxpool-%s" % filter_size) as scope:
cnn_shape = [filter_size, embedding_size, 1, num_filters]
kernel = _variable_with_weight_decay('weights',
shape=cnn_shape,
stddev=0.1,
wd=None)
conv = tf.nn.conv2d(txts, kernel, [1, 1, 1, 1], padding='VALID')
biases = _variable_on_cpu('biases', [num_filters], tf.constant_initializer(0.0))
pre_activation = tf.nn.bias_add(conv, biases)
conv_out = tf.nn.relu(pre_activation, name=scope.name)
_activation_summary(conv_out)
ksize = [1, sequence_length - filter_size + 1, 1, 1]
print 'filter_size', filter_size
print 'ksize', ksize
print 'conv_out', conv_out
pooled = tf.nn.max_pool(conv_out, ksize=ksize, strides=[1, 1, 1, 1],
padding='VALID', name='pool1')
norm_pooled = tf.nn.lrn(pooled, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
name='norm1')
# pooled_outputs.append(pooled)
pooled_outputs.append(norm_pooled)
# print 'norm1', norm1
num_filters_total = num_filters * len(filter_sizes)
    h_pool = tf.concat(pooled_outputs, 3)
h_pool_flat = tf.reshape(h_pool, [-1, num_filters_total])
print 'h_pool', h_pool
print 'h_pool_flat', h_pool_flat
h_drop = tf.nn.dropout(h_pool_flat, dropout_keep_prob)
# num_filters_total = num_filters * 1
# norm_flat = tf.reshape(norm1, [-1, num_filters_total])
with tf.variable_scope('softmax_linear') as scope:
weights = _variable_with_weight_decay_xavier('weights', [num_filters_total, NUM_CLASSES],
wd=0.2)
biases = _variable_on_cpu('biases', [NUM_CLASSES],
tf.constant_initializer(0.1))
softmax_linear = tf.add(tf.matmul(h_pool_flat, weights), biases, name=scope.name)
_activation_summary(softmax_linear)
return softmax_linear
def loss(logits, labels):
"""Add L2Loss to all the trainable variables.
Add summary for "Loss" and "Loss/avg".
Args:
logits: Logits from inference().
labels: Labels from distorted_inputs or inputs(). 1-D tensor
of shape [batch_size]
Returns:
Loss tensor of type float.
"""
# Calculate the average cross entropy loss across the batch.
# labels = tf.cast(labels, tf.int64)
# labels = tf.cast(tf.argmax(labels, 1), tf.int64)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels, name='cross_entropy_per_example')
# cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='cross_entropy_per_example')
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)
golds = tf.argmax(labels, 1, name="golds")
predictions = tf.argmax(logits, 1, name="predictions")
correct_predictions = tf.equal(predictions, tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
# The total loss is defined as the cross entropy loss plus all of the weight
# decay terms (L2 loss).
return tf.add_n(tf.get_collection('losses'), name='total_loss'), accuracy
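if __name__ == '__main__':
    # Wiring sketch (hedged: the placeholder shapes are hypothetical but match
    # the constants hard-coded in inference(), i.e. sequence_length=60 and
    # embedding_size=400; the input pipeline is out of scope here).
    txts = tf.placeholder(tf.float32, [None, 60, 400, 1], name='txts')
    labels = tf.placeholder(tf.float32, [None, NUM_CLASSES], name='labels')
    logits = inference(txts, dropout_keep_prob=0.5)
    total_loss, accuracy = loss(logits, labels)
    print 'total_loss', total_loss
    print 'accuracy', accuracy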
|
bgshin/doc-classify-multi-gpu
|
src/cnntw/cnn_model.py
|
Python
|
apache-2.0
| 7,389
|
[
"Gaussian"
] |
977b6a8a1dd21c167b68ad7d6945fa4855abc7a8f0b8dbfa68b2f30abf8c3162
|
import datetime
import logging
from xml.etree import ElementTree as ET
from django.shortcuts import render
from serializable import models_to_xml, xml_to_models, delete_all_models_in_db
from models import Menu, Order, MenuItem, OrderEntry
from test_serializable import indent_xml
def index(request):
if request.POST.get('cmd') == 'Load XML':
delete_all_models_in_db([Menu,Order])
xml = ET.fromstring(request.POST['xml'])
xml_to_models(xml)
elif request.POST.get('cmd') == 'Clear XML':
delete_all_models_in_db([Menu,Order])
elif request.POST.get('cmd') == 'Default XML':
def s(o):
o.save()
return o
menu = s(Menu(name='Breakfast'))
menui1 = s(MenuItem(menu=menu, name='Spam and Eggs', price=4.00))
menui2 = s(MenuItem(menu=menu, name='Eggs and Spam', price=4.50))
menui3 = s(MenuItem(menu=menu, name='Spammity Spam', price=5.00))
menui4 = s(MenuItem(menu=menu, name='Spam' , price=3.00))
order = s(Order(customer='Brian', date=datetime.date.today()))
orderi1 = s(OrderEntry(order=order, menuitem=menui1, count=1))
orderi2 = s(OrderEntry(order=order, menuitem=menui2, count=1))
orderi3 = s(OrderEntry(order=order, menuitem=menui4, count=2))
elif request.POST.get('cmd') in (None, 'Refresh'):
pass
else:
raise Exception('Unrecognized command')
xml = indent_xml( models_to_xml([Menu, Order]) )
context = dict( xml = xml
)
return render(request, 'xmldump/xmldump.html', context)
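# URLconf sketch (hedged: a hypothetical project urls.py wiring for this view):
#
#     from django.conf.urls import url
#     from xmldump import views
#
#     urlpatterns = [url(r'^xmldump/$', views.index)]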
|
marksantesson/xmldump
|
xmldump/views.py
|
Python
|
apache-2.0
| 1,612
|
[
"Brian"
] |
9065460ca98915d802f8e28e1dc95466e4458a6947450e47c4b46a24b37432b1
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2017 Lenovo, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Module to download new image to Lenovo Switches
# Lenovo Networking
#
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: cnos_image
author: "Dave Kasberg (@dkasberg)"
short_description: Perform firmware upgrade/download from a remote server on devices running Lenovo CNOS
description:
- This module allows you to work with switch firmware images. It provides a way to download a firmware image
to a network device from a remote server using FTP, SFTP, TFTP, or SCP. The first step is to create a directory
from where the remote server can be reached. The next step is to provide the full file path of the image’s
location. Authentication details required by the remote server must be provided as well. By default, this
method makes the newly downloaded firmware image the active image, which will be used by the switch during the
next restart.
This module uses SSH to manage network device configuration.
The results of the operation will be placed in a directory named 'results'
that must be created by the user in their local directory to where the playbook is run.
    For more information about this module from Lenovo and customizing its usage for your
use cases, please visit U(http://systemx.lenovofiles.com/help/index.jsp?topic=%2Fcom.lenovo.switchmgt.ansible.doc%2Fcnos_image.html)
version_added: "2.3"
extends_documentation_fragment: cnos
options:
protocol:
description:
- This refers to the protocol used by the network device to interact with the remote server from where
to download the firmware image. The choices are FTP, SFTP, TFTP, or SCP. Any other protocols will
result in error. If this parameter is not specified, there is no default value to be used.
required: true
default: null
choices: [SFTP, SCP, FTP, TFTP]
serverip:
description:
- This specifies the IP Address of the remote server from where the software image will be downloaded.
required: true
default: null
imgpath:
description:
- This specifies the full file path of the image located on the remote server. In case the relative path
is used as the variable value, the root folder for the user of the server needs to be specified.
required: true
default: null
imgtype:
description:
- This specifies the firmware image type to be downloaded
required: true
default: null
choices: [all, boot, os, onie]
serverusername:
description:
- Specify the username for the server relating to the protocol used.
required: true
default: null
serverpassword:
description:
- Specify the password for the server relating to the protocol used.
required: false
default: null
'''
EXAMPLES = '''
Tasks : The following are examples of using the module cnos_image. These are written in the main.yml file of the tasks directory.
---
- name: Test Image transfer
cnos_image:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['username'] }}"
password: "{{ hostvars[inventory_hostname]['password'] }}"
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
enablePassword: "{{ hostvars[inventory_hostname]['enablePassword'] }}"
outputfile: "./results/test_image_{{ inventory_hostname }}_output.txt"
protocol: "sftp"
serverip: "10.241.106.118"
imgpath: "/root/cnos_images/G8272-10.1.0.112.img"
imgtype: "os"
serverusername: "root"
serverpassword: "root123"
- name: Test Image tftp
cnos_image:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['username'] }}"
password: "{{ hostvars[inventory_hostname]['password'] }}"
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
enablePassword: "{{ hostvars[inventory_hostname]['enablePassword'] }}"
outputfile: "./results/test_image_{{ inventory_hostname }}_output.txt"
protocol: "tftp"
serverip: "10.241.106.118"
imgpath: "/anil/G8272-10.2.0.34.img"
imgtype: "os"
serverusername: "root"
serverpassword: "root123"
'''
RETURN = '''
return value: |
On successful execution, the method returns a message in JSON format
  [Image file transferred to device]
Upon any failure, the method returns an error display string.
'''
import sys
import paramiko
import time
import argparse
import socket
import array
import json
import re
try:
from ansible.module_utils import cnos
HAS_LIB = True
except:
HAS_LIB = False
from ansible.module_utils.basic import AnsibleModule
from collections import defaultdict
def main():
module = AnsibleModule(
argument_spec=dict(
outputfile=dict(required=True),
host=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
enablePassword=dict(required=False, no_log=True),
deviceType=dict(required=True),
protocol=dict(required=True),
serverip=dict(required=True),
imgpath=dict(required=True),
imgtype=dict(required=True),
serverusername=dict(required=False),
serverpassword=dict(required=False, no_log=True),),
supports_check_mode=False)
username = module.params['username']
password = module.params['password']
enablePassword = module.params['enablePassword']
outputfile = module.params['outputfile']
host = module.params['host']
deviceType = module.params['deviceType']
protocol = module.params['protocol'].lower()
imgserverip = module.params['serverip']
imgpath = module.params['imgpath']
imgtype = module.params['imgtype']
imgserveruser = module.params['serverusername']
imgserverpwd = module.params['serverpassword']
output = ""
timeout = 120
tftptimeout = 600
# Create instance of SSHClient object
remote_conn_pre = paramiko.SSHClient()
# Automatically add untrusted hosts (make sure okay for security policy in your environment)
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# initiate SSH connection with the switch
remote_conn_pre.connect(host, username=username, password=password)
time.sleep(2)
# Use invoke_shell to establish an 'interactive session'
remote_conn = remote_conn_pre.invoke_shell()
time.sleep(2)
# Enable and enter configure terminal then send command
output = output + cnos.waitForDeviceResponse("\n", ">", 2, remote_conn)
output = output + cnos.enterEnableModeForDevice(enablePassword, 3, remote_conn)
# Make terminal length = 0
output = output + cnos.waitForDeviceResponse("terminal length 0\n", "#", 2, remote_conn)
transfer_status = ""
# Invoke method for image transfer from server
if(protocol == "tftp" or protocol == "ftp"):
transfer_status = cnos.doImageTransfer(protocol, tftptimeout, imgserverip, imgpath, imgtype, imgserveruser, imgserverpwd, remote_conn)
elif(protocol == "sftp" or protocol == "scp"):
transfer_status = cnos.doSecureImageTransfer(protocol, timeout, imgserverip, imgpath, imgtype, imgserveruser, imgserverpwd, remote_conn)
else:
transfer_status = "Invalid Protocol option"
output = output + "\n Image Transfer status \n" + transfer_status
# Save it into the file
file = open(outputfile, "a")
file.write(output)
file.close()
# Logic to check when changes occur or not
errorMsg = cnos.checkOutputForError(output)
if(errorMsg is None):
module.exit_json(changed=True, msg="Image file tranferred to device")
else:
module.fail_json(msg=errorMsg)
if __name__ == '__main__':
main()
|
t0mk/ansible
|
lib/ansible/modules/network/lenovo/cnos_image.py
|
Python
|
gpl-3.0
| 8,759
|
[
"VisIt"
] |
2c7b61298e7c58a8d0c32024e6d609320bae488e6c9f244fff9476391af94145
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Hive Appier Framework
# Copyright (c) 2008-2021 Hive Solutions Lda.
#
# This file is part of Hive Appier Framework.
#
# Hive Appier Framework is free software: you can redistribute it and/or modify
# it under the terms of the Apache License as published by the Apache
# Foundation, either version 2.0 of the License, or (at your option) any
# later version.
#
# Hive Appier Framework is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Apache License for more details.
#
# You should have received a copy of the Apache License along with
# Hive Appier Framework. If not, see <http://www.apache.org/licenses/>.
__author__ = "João Magalhães <joamag@hive.pt>"
""" The author(s) of the module """
__version__ = "1.0.0"
""" The version of the module """
__revision__ = "$LastChangedRevision$"
""" The revision number of the module """
__date__ = "$LastChangedDate$"
""" The last change date of the module """
__copyright__ = "Copyright (c) 2008-2021 Hive Solutions Lda."
""" The copyright for the module """
__license__ = "Apache License, Version 2.0"
""" The license for the module """
import os
import imp
import sys
import inspect
import functools
import itertools
import contextlib
import collections
import urllib #@UnusedImport
ArgSpec = collections.namedtuple(
"ArgSpec",
["args", "varargs", "keywords", "defaults"]
)
@contextlib.contextmanager
def ctx_absolute():
root = sys.path.pop(0)
try: yield
finally: sys.path.insert(0, root)
with ctx_absolute():
try: import urllib2
except ImportError: urllib2 = None
with ctx_absolute():
try: import httplib
except ImportError: httplib = None
with ctx_absolute():
try: import http
except ImportError: http = None
with ctx_absolute():
try: import urllib.error
except ImportError: pass
with ctx_absolute():
try: import urllib.request
except ImportError: pass
with ctx_absolute():
try: import http.client
except ImportError: pass
try: import HTMLParser
except ImportError: import html.parser; HTMLParser = html.parser
try: import cPickle
except ImportError: import pickle; cPickle = pickle
try: import cStringIO
except ImportError: import io; cStringIO = io
try: import StringIO as _StringIO
except ImportError: import io; _StringIO = io
try: import urlparse as _urlparse
except ImportError: import urllib.parse; _urlparse = urllib.parse
PYTHON_3 = sys.version_info[0] >= 3
""" Global variable that defines if the current Python
interpreter is at least Python 3 compliant, this is used
to take some of the conversion decision for runtime """
PYTHON_35 = sys.version_info[0] >= 3 and sys.version_info[1] >= 5
""" Global variable that defines if the current Python
interpreter is at least Python 3.5 compliant """
PYTHON_36 = sys.version_info[0] >= 3 and sys.version_info[1] >= 6
""" Global variable that defines if the current Python
interpreter is at least Python 3.6 compliant """
PYTHON_39 = sys.version_info[0] >= 3 and sys.version_info[1] >= 9
""" Global variable that defines if the current Python
interpreter is at least Python 3.9 compliant """
PYTHON_ASYNC = PYTHON_35
""" Global variable that defines if the current Python
interpreter support the async/await syntax responsible
for the easy to use async methods """
PYTHON_ASYNC_GEN = PYTHON_36
""" Global variable that defines if the current Python
interpreter support the async/await generator syntax
responsible for the async generator methods """
PYTHON_V = int("".join([str(v) for v in sys.version_info[:3]]))
""" The Python version integer describing the version of
a the interpreter as a set of three integer digits """
if PYTHON_3: LONG = int
else: LONG = long #@UndefinedVariable
if PYTHON_3: BYTES = bytes
else: BYTES = str #@UndefinedVariable
if PYTHON_3: UNICODE = str
else: UNICODE = unicode #@UndefinedVariable
if PYTHON_3: OLD_UNICODE = None
else: OLD_UNICODE = unicode #@UndefinedVariable
if PYTHON_3: STRINGS = (str,)
else: STRINGS = (str, unicode) #@UndefinedVariable
if PYTHON_3: ALL_STRINGS = (bytes, str)
else: ALL_STRINGS = (bytes, str, unicode) #@UndefinedVariable
if PYTHON_3: INTEGERS = (int,)
else: INTEGERS = (int, long) #@UndefinedVariable
# saves a series of global symbols that are going to be
# used later for some of the legacy operations
_ord = ord
_chr = chr
_str = str
_bytes = bytes
_range = range
try: _xrange = xrange #@UndefinedVariable
except Exception: _xrange = None
if PYTHON_3: Request = urllib.request.Request
else: Request = urllib2.Request
if PYTHON_3: HTTPHandler = urllib.request.HTTPHandler
else: HTTPHandler = urllib2.HTTPHandler
if PYTHON_3: HTTPError = urllib.error.HTTPError
else: HTTPError = urllib2.HTTPError
if PYTHON_3: HTTPConnection = http.client.HTTPConnection #@UndefinedVariable
else: HTTPConnection = httplib.HTTPConnection
if PYTHON_3: HTTPSConnection = http.client.HTTPSConnection #@UndefinedVariable
else: HTTPSConnection = httplib.HTTPSConnection
try: _execfile = execfile #@UndefinedVariable
except Exception: _execfile = None
try: _reduce = reduce #@UndefinedVariable
except Exception: _reduce = None
try: _reload = reload #@UndefinedVariable
except Exception: _reload = None
try: _unichr = unichr #@UndefinedVariable
except Exception: _unichr = None
def with_meta(meta, *bases):
return meta("Class", bases, {})
def eager(iterable):
if PYTHON_3: return list(iterable)
return iterable
def iteritems(associative):
if PYTHON_3: return associative.items()
return associative.iteritems()
def iterkeys(associative):
if PYTHON_3: return associative.keys()
return associative.iterkeys()
def itervalues(associative):
if PYTHON_3: return associative.values()
return associative.itervalues()
def items(associative):
if PYTHON_3: return eager(associative.items())
return associative.items()
def keys(associative):
if PYTHON_3: return eager(associative.keys())
return associative.keys()
def values(associative):
if PYTHON_3: return eager(associative.values())
return associative.values()
def xrange(start, stop = None, step = 1):
if PYTHON_3: return _range(start, stop, step) if stop else _range(start)
    return _xrange(start, stop, step) if stop else _xrange(start)
def range(start, stop = None, step = None):
    if step is None: step = 1
    if PYTHON_3: return eager(_range(start, stop, step)) if stop else eager(_range(start))
    return _range(start, stop, step) if stop else _range(start)
def ord(value):
if PYTHON_3 and type(value) == int: return value
return _ord(value)
def chr(value):
if PYTHON_3: return _bytes([value])
if type(value) in INTEGERS: return _chr(value)
return value
def chri(value):
if PYTHON_3: return value
if type(value) in INTEGERS: return _chr(value)
return value
def bytes(value, encoding = "latin-1", errors = "strict", force = False):
if not PYTHON_3 and not force: return value
    if value is None: return value
if type(value) == _bytes: return value
return value.encode(encoding, errors)
def str(value, encoding = "latin-1", errors = "strict", force = False):
if not PYTHON_3 and not force: return value
    if value is None: return value
if type(value) in STRINGS: return value
return value.decode(encoding, errors)
def u(value, encoding = "utf-8", errors = "strict", force = False):
if PYTHON_3 and not force: return value
    if value is None: return value
if type(value) == UNICODE: return value
return value.decode(encoding, errors)
def ascii(value, encoding = "utf-8", errors = "replace"):
if is_bytes(value): value = value.decode(encoding, errors)
else: value = UNICODE(value)
value = value.encode("ascii", errors)
value = str(value)
return value
def orderable(value):
if not PYTHON_3: return value
return Orderable(value)
def is_str(value):
return type(value) == _str
def is_unicode(value):
if PYTHON_3: return type(value) == _str
else: return type(value) == unicode #@UndefinedVariable
def is_bytes(value):
if PYTHON_3: return type(value) == _bytes
else: return type(value) == _str #@UndefinedVariable
def is_string(value, all = False):
target = ALL_STRINGS if all else STRINGS
return type(value) in target
def is_generator(value):
if inspect.isgenerator(value): return True
if type(value) in (itertools.chain,): return True
if hasattr(value, "_is_generator"): return True
return False
def is_async_generator(value):
if not hasattr(inspect, "isasyncgen"): return False
return inspect.isasyncgen(value)
def is_unittest(name = "unittest"):
    current_stack = inspect.stack()
    for stack_frame in current_stack:
        # the code context of a frame may be None, skip those frames
        if not stack_frame[4]: continue
        for program_line in stack_frame[4]:
            if name in program_line: return True
    return False
def execfile(path, global_vars, local_vars = None, encoding = "utf-8"):
    if local_vars is None: local_vars = global_vars
if not PYTHON_3: return _execfile(path, global_vars, local_vars)
file = open(path, "rb")
try: data = file.read()
finally: file.close()
data = data.decode(encoding)
code = compile(data, path, "exec")
exec(code, global_vars, local_vars) #@UndefinedVariable
def walk(path, visit, arg):
for root, dirs, _files in os.walk(path):
names = os.listdir(root)
visit(arg, root, names)
for dir in list(dirs):
exists = dir in names
not exists and dirs.remove(dir)
def getargspec(func):
has_full = hasattr(inspect, "getfullargspec")
if has_full: return ArgSpec(*inspect.getfullargspec(func)[:4])
else: return inspect.getargspec(func)
def reduce(*args, **kwargs):
if PYTHON_3: return functools.reduce(*args, **kwargs)
return _reduce(*args, **kwargs)
def reload(*args, **kwargs):
if PYTHON_3: return imp.reload(*args, **kwargs)
return _reload(*args, **kwargs)
def unichr(*args, **kwargs):
if PYTHON_3: return _chr(*args, **kwargs)
return _unichr(*args, **kwargs)
def urlopen(*args, **kwargs):
if PYTHON_3: return urllib.request.urlopen(*args, **kwargs)
else: return urllib2.urlopen(*args, **kwargs) #@UndefinedVariable
def build_opener(*args, **kwargs):
if PYTHON_3: return urllib.request.build_opener(*args, **kwargs)
else: return urllib2.build_opener(*args, **kwargs) #@UndefinedVariable
def urlparse(*args, **kwargs):
return _urlparse.urlparse(*args, **kwargs)
def urlunparse(*args, **kwargs):
return _urlparse.urlunparse(*args, **kwargs)
def parse_qs(*args, **kwargs):
return _urlparse.parse_qs(*args, **kwargs)
def urlencode(*args, **kwargs):
if PYTHON_3: return urllib.parse.urlencode(*args, **kwargs)
else: return urllib.urlencode(*args, **kwargs) #@UndefinedVariable
def quote(*args, **kwargs):
if PYTHON_3: return urllib.parse.quote(*args, **kwargs)
else: return urllib.quote(*args, **kwargs) #@UndefinedVariable
def quote_plus(*args, **kwargs):
if PYTHON_3: return urllib.parse.quote_plus(*args, **kwargs)
else: return urllib.quote_plus(*args, **kwargs) #@UndefinedVariable
def unquote(*args, **kwargs):
if PYTHON_3: return urllib.parse.unquote(*args, **kwargs)
else: return urllib.unquote(*args, **kwargs) #@UndefinedVariable
def unquote_plus(*args, **kwargs):
if PYTHON_3: return urllib.parse.unquote_plus(*args, **kwargs)
else: return urllib.unquote_plus(*args, **kwargs) #@UndefinedVariable
def cmp_to_key(*args, **kwargs):
if PYTHON_3: return dict(key = functools.cmp_to_key(*args, **kwargs)) #@UndefinedVariable
else: return dict(cmp = args[0])
def tobytes(self, *args, **kwargs):
if PYTHON_3: return self.tobytes(*args, **kwargs)
else: return self.tostring(*args, **kwargs)
def tostring(self, *args, **kwargs):
if PYTHON_3: return self.tobytes(*args, **kwargs)
else: return self.tostring(*args, **kwargs)
def StringIO(*args, **kwargs):
if PYTHON_3: return cStringIO.StringIO(*args, **kwargs)
else: return _StringIO.StringIO(*args, **kwargs)
def BytesIO(*args, **kwargs):
if PYTHON_3: return cStringIO.BytesIO(*args, **kwargs)
else: return cStringIO.StringIO(*args, **kwargs)
class Orderable(tuple):
"""
    Simple tuple type wrapper that provides simple
    first-element ordering that is compatible with
    both the Python 2 and Python 3+ infrastructures.
"""
def __cmp__(self, value):
return self[0].__cmp__(value[0])
def __lt__(self, value):
return self[0].__lt__(value[0])
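# Hedged usage sketch (editor addition, not part of the original module):
# exercises a few of the helpers defined above so their identical behaviour
# under Python 2 and Python 3 can be checked directly; it runs only when the
# module is executed as a script.
if __name__ == "__main__":
    data = bytes("hello world", force=True)       # always raw bytes
    text = str(data, force=True)                  # back to a text string
    for key, value in iteritems(dict(a=1, b=2)):  # lazy items() on both runtimes
        print("%s=%d" % (key, value))
    print(quote_plus("a b&c"))                    # -> a+b%26c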
| hivesolutions/appier | src/appier/legacy.py | Python | apache-2.0 | 13,083 | ["VisIt"] | 2d8b7ef3739546033b1044271f2bf12ffbb7f83ae324980ed41bcc3f5cbbcb81 |
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Macsio(CMakePackage):
"""A Multi-purpose, Application-Centric, Scalable I/O Proxy Application."""
tags = ['proxy-app', 'ecp-proxy-app']
homepage = "https://computing.llnl.gov/projects/co-design/macsio"
url = "https://github.com/LLNL/MACSio/archive/v1.1.tar.gz"
git = "https://github.com/LLNL/MACSio.git"
version('develop', branch='master')
version('1.1', sha256='a86249b0f10647c0b631773db69568388094605ec1a0af149d9e61e95e6961ec')
version('1.0', sha256='1dd0df28f9f31510329d5874c1519c745b5c6bec12e102cea3e9f4b05e5d3072')
variant('mpi', default=True, description="Build MPI plugin")
variant('silo', default=True, description="Build with SILO plugin")
# TODO: multi-level variants for hdf5
variant('hdf5', default=False, description="Build HDF5 plugin")
variant('zfp', default=False, description="Build HDF5 with ZFP compression")
variant('szip', default=False, description="Build HDF5 with SZIP compression")
variant('zlib', default=False, description="Build HDF5 with ZLIB compression")
variant('pdb', default=False, description="Build PDB plugin")
variant('exodus', default=False, description="Build EXODUS plugin")
variant('scr', default=False, description="Build with SCR support")
variant('typhonio', default=False, description="Build TYPHONIO plugin")
depends_on('json-cwx')
depends_on('mpi', when="+mpi")
depends_on('silo', when="+silo")
depends_on('hdf5+hl', when="+hdf5")
# depends_on('hdf5+szip', when="+szip")
depends_on('exodusii', when="+exodus")
# pdb is packaged with silo
depends_on('silo', when="+pdb")
depends_on('typhonio', when="+typhonio")
depends_on('scr', when="+scr")
    # macsio@1.1 has a bug with the ~mpi configuration
conflicts('~mpi', when='@1.1')
# Ref: https://github.com/LLNL/MACSio/commit/51b8c40cd9813adec5dd4dd6cee948bb9ddb7ee1
patch('cast.patch', when='@1.1')
def cmake_args(self):
spec = self.spec
cmake_args = []
if "~mpi" in spec:
cmake_args.append("-DENABLE_MPI=OFF")
if "~silo" in spec:
cmake_args.append("-DENABLE_SILO_PLUGIN=OFF")
if "+silo" in spec:
cmake_args.append("-DWITH_SILO_PREFIX={0}"
.format(spec['silo'].prefix))
if "+pdb" in spec:
# pdb is a part of silo
cmake_args.append("-DENABLE_PDB_PLUGIN=ON")
cmake_args.append("-DWITH_SILO_PREFIX={0}"
.format(spec['silo'].prefix))
if "+hdf5" in spec:
cmake_args.append("-DENABLE_HDF5_PLUGIN=ON")
cmake_args.append("-DWITH_HDF5_PREFIX={0}"
.format(spec['hdf5'].prefix))
# TODO: Multi-level variants
# ZFP not in hdf5 spack package??
# if "+zfp" in spec:
# cmake_args.append("-DENABLE_HDF5_ZFP")
# cmake_args.append("-DWITH_ZFP_PREFIX={0}"
# .format(spec['silo'].prefix))
# SZIP is an hdf5 spack variant
# if "+szip" in spec:
# cmake_args.append("-DENABLE_HDF5_SZIP")
# cmake_args.append("-DWITH_SZIP_PREFIX={0}"
# .format(spec['SZIP'].prefix))
# ZLIB is on by default, @1.1.2
# if "+zlib" in spec:
# cmake_args.append("-DENABLE_HDF5_ZLIB")
# cmake_args.append("-DWITH_ZLIB_PREFIX={0}"
# .format(spec['silo'].prefix))
if "+typhonio" in spec:
cmake_args.append("-DENABLE_TYPHONIO_PLUGIN=ON")
cmake_args.append("-DWITH_TYPHONIO_PREFIX={0}"
.format(spec['typhonio'].prefix))
if "+exodus" in spec:
cmake_args.append("-DENABLE_EXODUS_PLUGIN=ON")
cmake_args.append("-DWITH_EXODUS_PREFIX={0}"
.format(spec['exodusii'].prefix))
# exodus requires netcdf
cmake_args.append("-DWITH_NETCDF_PREFIX={0}"
.format(spec['netcdf-c'].prefix))
return cmake_args
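    # Hedged sketch (editor addition, not part of the Spack package): the
    # variant-to-CMake-flag pattern used in cmake_args() above, distilled
    # into a standalone helper; the variant names and prefixes below are
    # purely illustrative.
    #
    #   def variant_flags(variants, prefixes):
    #       args = []
    #       if 'mpi' not in variants:
    #           args.append('-DENABLE_MPI=OFF')
    #       if 'hdf5' in variants:
    #           args.append('-DENABLE_HDF5_PLUGIN=ON')
    #           args.append('-DWITH_HDF5_PREFIX={0}'.format(prefixes['hdf5']))
    #       return args
    #
    #   variant_flags({'mpi', 'hdf5'}, {'hdf5': '/opt/hdf5'})
    #   # -> ['-DENABLE_HDF5_PLUGIN=ON', '-DWITH_HDF5_PREFIX=/opt/hdf5']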
| LLNL/spack | var/spack/repos/builtin/packages/macsio/package.py | Python | lgpl-2.1 | 4,380 | ["NetCDF"] | 126efcdc5b66797cfa54b11f4c7c7b4a8446611e354b2667f05b6b245cd79f60 |
# __BEGIN_LICENSE__
#
# Copyright (C) 2010-2013 Stanford University.
# All rights reserved.
#
# __END_LICENSE__
import numpy as np
import time
from scipy.sparse.linalg.interface import LinearOperator
from lflib.lightfield import LightField
from lflib.imageio import save_image
from lflib.linear_operators import LightFieldOperator, RegularizedNormalEquationLightFieldOperator
# ----------------------------------------------------------------------------------------
# Modified Residual Norm Steepest Descent SOLVER
# ----------------------------------------------------------------------------------------
def mrnsd_reconstruction(A, b, Rtol = 1e-6, NE_Rtol = 1e-6, max_iter = 100, x0 = None):
'''
Modified Residual Norm Steepest Descent
Nonnegatively constrained steepest descent method.
Ported from the RestoreTools MATLAB package available at:
http://www.mathcs.emory.edu/~nagy/RestoreTools/
Input: A - object defining the coefficient matrix.
b - Right hand side vector.
    Optional Inputs:
options - Structure that can have:
x0 - initial guess (must be strictly positive); default is x0 = 1
max_iter - integer specifying maximum number of iterations;
default is 100
Rtol - stopping tolerance for the relative residual,
norm(b - A*x)/norm(b)
default is 1e-6
NE_Rtol - stopping tolerance for the relative residual,
norm(A.T*b - A.T*A*x)/norm(A.T*b)
default is 1e-6
Output:
x - solution
Original MATLAB code by J. Nagy, August, 2011
References:
[1] J. Nagy, Z. Strakos.
"Enforcing nonnegativity in image reconstruction algorithms"
in Mathematical Modeling, Estimation, and Imaging,
David C. Wilson, et.al., Eds., 4121 (2000), pg. 182--190.
[2] L. Kaufman.
"Maximum likelihood, least squares and penalized least squares for PET",
IEEE Trans. Med. Imag. 12 (1993) pp. 200--214.
'''
# The A operator represents a large, sparse matrix that has dimensions [ nrays x nvoxels ]
nrays = A.shape[0]
nvoxels = A.shape[1]
# Pre-compute some values for use in stopping criteria below
b_norm = np.linalg.norm(b)
trAb = A.rmatvec(b)
trAb_norm = np.linalg.norm(trAb)
# Start the optimization from the initial volume of a focal stack.
    if x0 is not None:
x = x0
else:
x = np.ones(nvoxels)
    eps = np.spacing(1)
    tau = np.sqrt(eps)
    sigsq = tau
    minx = x.min()
    # If initial guess has negative values, compensate
    if minx < 0:
        x = x - min(0, minx) + sigsq
    # Initialize the residual norm histories before iterations begin.
    Rnrm = np.zeros((max_iter+1, 1))
    Xnrm = np.zeros((max_iter+1, 1))
# Initial Iteration
r = b - A.matvec(x)
g = -(A.rmatvec(r));
xg = x * g;
gamma = np.dot(g.T, xg);
for i in range(max_iter):
tic = time.time()
x_prev = x
# STEP 1: MRNSD Update step
s = - x * g;
u = A.matvec(s);
theta = gamma / np.dot(u.T, u);
neg_ind = np.nonzero(s < 0)
zero_ratio = -x[neg_ind] / s[neg_ind]
if zero_ratio.shape[0] == 0:
alpha = theta
else:
alpha = min( theta, zero_ratio.min() );
x = x + alpha*s;
g = g + alpha * A.rmatvec(u);
xg = x * g;
gamma = np.dot(g.T, xg);
# STEP 2: Compute residuals and check stopping criteria
Rnrm[i] = np.sqrt(gamma) / b_norm
Xnrm[i] = np.linalg.norm(x - x_prev) / nvoxels
toc = time.time()
        print('\t--> [ MRNSD Iteration %d (%0.2f seconds) ] ' % (i, toc-tic))
        print('\t      Residual Norm: %0.4g (tol = %0.2e) ' % (Rnrm[i], Rtol))
        print('\t      Update Norm: %0.4g ' % (Xnrm[i]))
# stop because residual satisfies ||b-A*x|| / ||b||<= Rtol
if Rnrm[i] <= Rtol:
break
return x.astype(np.float32)
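# Hedged usage sketch (editor addition, not in the original module): exercises
# mrnsd_reconstruction() on a tiny synthetic nonnegative least-squares problem,
# assuming the lflib imports at the top of this file resolve. aslinearoperator()
# wraps a dense matrix so it exposes the matvec/rmatvec interface the solver
# expects.
def _mrnsd_demo():
    from scipy.sparse.linalg import aslinearoperator
    np.random.seed(0)
    A_dense = np.abs(np.random.rand(50, 20))
    x_true = np.abs(np.random.rand(20))
    A_op = aslinearoperator(A_dense)
    b = A_op.matvec(x_true)
    x_est = mrnsd_reconstruction(A_op, b, max_iter=25)
    rel_err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
    print('relative reconstruction error: %0.3g' % rel_err)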
# ----------------------------------------------------------------------------------------
# Weighted Modified Residual Norm Steepest Descent SOLVER
# ----------------------------------------------------------------------------------------
def wmrnsd_reconstruction(A, b,
Rtol = 1e-6, NE_Rtol = 1e-6, max_iter = 100, x0 = None,
sigmaSq = 0.0, beta = 0.0):
'''
Modified Residual Norm Steepest Descent
Nonnegatively constrained steepest descent method.
Ported from the RestoreTools MATLAB package available at:
http://www.mathcs.emory.edu/~nagy/RestoreTools/
Input: A - object defining the coefficient matrix.
b - Right hand side vector.
    Optional Inputs:
options - Structure that can have:
x0 - initial guess (must be strictly positive); default is x0 = 1
sigmaSq - the square of the standard deviation for the
white Gaussian read noise (variance)
beta - Poisson parameter for background light level
max_iter - integer specifying maximum number of iterations;
default is 100
Rtol - stopping tolerance for the relative residual,
norm(b - A*x)/norm(b)
default is 1e-6
NE_Rtol - stopping tolerance for the relative residual,
norm(A.T*b - A.T*A*x)/norm(A.T*b)
default is 1e-6
Output:
x - solution
Original MATLAB code by J. Nagy, August, 2011
References:
[1] J. Nagy, Z. Strakos.
"Enforcing nonnegativity in image reconstruction algorithms"
in Mathematical Modeling, Estimation, and Imaging,
David C. Wilson, et.al., Eds., 4121 (2000), pg. 182--190.
[2] L. Kaufman.
"Maximum likelihood, least squares and penalized least squares for PET",
IEEE Trans. Med. Imag. 12 (1993) pp. 200--214.
'''
# The A operator represents a large, sparse matrix that has dimensions [ nrays x nvoxels ]
nrays = A.shape[0]
nvoxels = A.shape[1]
# Pre-compute some values for use in stopping criteria below
b_norm = np.linalg.norm(b)
trAb = A.rmatvec(b)
trAb_norm = np.linalg.norm(trAb)
# Start the optimization from the initial volume of a focal stack.
    if x0 is not None:
x = x0
else:
x = np.ones(nvoxels)
    eps = np.spacing(1)
    tau = np.sqrt(eps)
    sigsq = tau
    minx = x.min()
    # If initial guess has negative values, compensate
    if minx < 0:
        x = x - min(0, minx) + sigsq
    # Initialize the residual norm histories before iterations begin.
    Rnrm = np.zeros((max_iter+1, 1))
    Xnrm = np.zeros((max_iter+1, 1))
    NE_Rnrm = np.zeros((max_iter+1, 1))
# Initial Iteration
c = b + sigmaSq;
b = b - beta;
r = b - A.matvec(x);
trAr = A.rmatvec(r);
wt = np.sqrt(c);
for i in range(max_iter):
tic = time.time()
x_prev = x
# STEP 1: WMRNSD Update step
v = A.rmatvec(r/c)
d = x * v;
w = A.matvec(d);
w = w/wt;
tau_uc = np.dot(d.T,v) / np.dot(w.T,w);
neg_ind = np.nonzero(d < 0)
zero_ratio = -x[neg_ind] / d[neg_ind]
if zero_ratio.shape[0] == 0:
tau = tau_uc;
else:
tau_bd = np.min( zero_ratio );
tau = min(tau_uc, tau_bd);
x = x + tau*d;
w = w * wt;
r = r - tau*w;
trAr = A.rmatvec(r);
# STEP 2: Compute residuals and check stopping criteria
Rnrm[i] = np.linalg.norm(r) / b_norm
Xnrm[i] = np.linalg.norm(x - x_prev) / nvoxels
NE_Rnrm[i] = np.linalg.norm(trAr) / trAb_norm
toc = time.time()
        print('\t--> [ WMRNSD Iteration %d (%0.2f seconds) ] ' % (i, toc-tic))
        print('\t      Residual Norm: %0.4g (tol = %0.2e) ' % (Rnrm[i], Rtol))
        print('\t      Error Norm: %0.4g (tol = %0.2e) ' % (NE_Rnrm[i], NE_Rtol))
        print('\t      Update Norm: %0.4g ' % (Xnrm[i]))
# stop because residual satisfies ||b-A*x|| / ||b||<= Rtol
if Rnrm[i] <= Rtol:
break
# stop because normal equations residual satisfies ||A'*b-A'*A*x|| / ||A'b||<= NE_Rtol
if NE_Rnrm[i] <= NE_Rtol:
break
return x.astype(np.float32)
#---------------------------------------------------------------------------------------
if __name__ == "__main__":
pass
| sophie63/FlyLFM | stanford_lfanalyze_v0.4/lflib/solvers/mrnsd.py | Python | bsd-2-clause | 8,875 | ["Gaussian"] | 2419b474d671d4385179536dd1cce2a79adefce8112ec2af5d0b28d55f29f3f6 |
import os
import sys
import itertools
import string
import sympy as sp
import numpy as np
import types
try:
import mpmath
sp.mpmath = mpmath
except ImportError:
pass
from sympy.utilities.autowrap import ufuncify
from sympy.utilities import lambdify
if sys.version_info[0]<3:
import cPickle as pickle
from urllib import urlopen, urlencode
else:
import pickle
from urllib.request import urlopen
from urllib.parse import urlencode
SETTPATH = os.path.join(os.path.dirname(__file__), "settings.txt.gz")
if not os.path.isfile(SETTPATH):
SETTPATH = SETTPATH[:-3]
decode = np.vectorize(bytes.decode)
def get_ITA_settings(sgnum):
if os.path.isfile(SETTPATH):
#settings = np.genfromtxt(SETTPATH, dtype="i2,O,O")
settings = np.genfromtxt(SETTPATH, dtype="i2,O,O", delimiter=";", autostrip=True)
ind = settings["f0"]==sgnum
#print(settings["f1"][ind])
return dict(zip(decode(settings["f1"][ind]),
decode(settings["f2"][ind])
))
else:
from lxml import etree
print("Fetching ITA settings for space group %i"%sgnum)
baseurl = "http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-getgen"
params = urlencode({'gnum': sgnum, 'what':'gp', 'settings':'ITA Settings'}).encode()
result = urlopen(baseurl, params)
result = result.read()
parser = etree.HTMLParser()
tree = etree.fromstring(result, parser, base_url = baseurl)
table = tree[1][3][6][0]
settings = dict()
for tr in table:
children = tr[1].getchildren()
if not children:
continue
elif children[0].tag=="a":
a = children[0]
trmat = tr[2][0].text
setting = ""
for i in a:
setting += i.text
                    if i.tail is not None:
setting += i.tail
if "origin" in setting:
setting = setting.split("[origin")
#print setting
setting = setting[0] + ":" + setting[1].strip("]")
setting = setting.replace(" ", "")
#url = a.attrib["href"]
#url = urllib.quote(url, safe="%/:=&?~#+!$,;'@()*[]")
#settings[setting] = urllib.basejoin(baseurl, url)
settings[setting.decode()] = trmat.decode()
return settings
def fetch_ITA_generators(sgnum, trmat=None):
"""
Retrieves all Generators of a given Space Group (sgnum) and for an
unconventional setting `trmat' if given from
http://www.cryst.ehu.es.
"""
#url = "http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-getgen"
url = "http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-trgen"
params = {'gnum': sgnum, 'what':'gp'}
    if trmat is not None:
params["trmat"] = trmat
params = urlencode(params).encode()
result = urlopen(url, params)
result = result.read()
from lxml import etree
parser = etree.HTMLParser()
tree = etree.fromstring(result, parser, base_url = url)
table = tree[1].find("center").findall("table")[1]
generators=[]
genlist = list(table)
    if table.find("tbody") is not None:
for tbody in table.findall("tbody"):
genlist.extend(list(tbody))
for tr in genlist:
if not isinstance(tr[0].text, str) or not tr[0].text.isdigit():
continue
pre = tr[4][0][0][1][0].text
generator = list(map(sp.S, pre.split()))
generators.append(sp.Matrix(generator).reshape(3,4))
return generators
def get_generators(sgnum=0, sgsym=None):
"""
Retrieves all Generators of a given Space Group Number (sgnum) a local
database OR from http://www.cryst.ehu.es and stores them into
the local database.
Inputs:
        sgnum : int
            Number of the space group according to the International
            Tables for Crystallography (ITC), Vol. A
sgsym : str
Full Hermann-Mauguin symbol according to ITC A.
In some cases, more than one setting can be chosen for the
space group. Then it is possible to specify the desired
setting by giving the full Hermann-Mauguin symbol here.
Otherwise the standard setting will be picked.
"""
if isinstance(sgnum, str):
if sgnum.isdigit():
sgnum = int(sgnum)
else:
            if sgsym is None:
sgsym = str(sgnum)
sgnum = 0
    if sgnum == 0 and sgsym is not None and os.path.isfile(SETTPATH):
#settings = np.genfromtxt(SETTPATH, dtype="i2,O,O")
settings = np.genfromtxt(SETTPATH, dtype="i2,O,O", delimiter=";", autostrip=True)
settings["f1"] = decode(settings["f1"])
settings["f2"] = decode(settings["f2"])
ind = settings["f1"] == sgsym
if not ind.sum()==1:
raise ValueError("Space group not found: %s"%sgsym)
sgnum, _, trmat = settings[ind].item()
#sgnum = settings["f0"][ind].item()
#trmat = settings["f2"]][ind]
elif isinstance(sgnum, int):
if sgnum < 1 or sgnum > 230:
raise ValueError("Space group number must be in range of 1...230")
settings = get_ITA_settings(sgnum)
#print(settings)
setnames = " ".join(settings.keys())
if len(settings)==1:
sgsym = list(settings.keys())[0]
        elif sgsym is not None:
if sgsym not in settings:
raise ValueError(
"Invalid space group symbol (sgsym) entered: %s%s"\
" Valid space group settings: %s"\
%(sgsym, os.linesep,setnames))
else:
print("Warning: Space Group #%i is ambigous:%s"\
" Possible settings: %s "%(sgnum, os.linesep, setnames))
settings_inv = dict([(v,k) for (k,v) in settings.items()])
sgsym = settings_inv["a,b,c"]
print(" Using standard setting: %s"%sgsym)
trmat = settings[sgsym]
else:
raise ValueError("Integer required for space group number (`sgnum')")
if SETTPATH.endswith(".gz"):
from gzip import open as gzopen
read = lambda fname: gzopen(fname, "rt")
else:
read = lambda fname: open(fname, "r")
generators = []
with read(SETTPATH) as fh:
while True:
line = fh.readline().split(";")
if len(line)<2:
continue
if sgsym in line[1]:
break
while True:
line = fh.readline()
if not line.startswith("#"):
break
generators.append(sp.S(line.strip("# ").split(";")[1]))
# fallback
#generators = fetch_ITA_generators(sgnum, trmat)
return generators
def gcd(*args):
if len(args) == 1:
return args[0]
L = list(args)
while len(L) > 1:
a = L[len(L) - 2]
b = L[len(L) - 1]
L = L[:len(L) - 2]
while a:
a, b = b%a, a
L.append(b)
return abs(b)
def stay_in_UC(coordinate):
if coordinate.has(sp.Symbol): return coordinate
else: return coordinate%1
@sp.vectorize(0)
def hassymb(x):
return x.has(sp.Symbol)
dictcall = lambda self, d: self.__call__(*[d.get(k, d.get(k.name, k)) for k in self.kw])
def makefunc(expr, mathmodule = "numpy", dummify=False, **kwargs):
symbols = list(expr.atoms(sp.Symbol))
symbols.sort(key=str)
func = lambdify(symbols, expr, mathmodule, dummify=dummify, **kwargs)
func.kw = symbols
func.expr = expr
func.kwstr = map(lambda x: x.name, symbols)
func.dictcall = types.MethodType(dictcall, func)
func.__doc__ = str(expr)
return func
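# Hedged usage sketch (editor addition): makefunc() turns a sympy expression
# into a numpy-backed callable whose positional arguments follow the sorted
# symbol order, while the attached dictcall() accepts a plain dict of values.
#
#   >>> x, y = sp.symbols("x y")
#   >>> f = makefunc(x**2 + sp.sin(y))
#   >>> f(2, 0.0)                      # positional order is sorted: (x, y)
#   4.0
#   >>> f.dictcall({"x": 2, "y": 0.0})
#   4.0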
class makeufunc(object):
def __init__(self, expr):
symbols = list(expr.atoms(sp.Symbol))
symbols.sort(key=lambda s: s.name)
self.func = ufuncify(symbols, expr)
self.kw = symbols
self.expr = expr
self.kwstr = map(lambda x: x.name, symbols)
self.__doc__ = "f = f(%s)"%(", ".join(self.kwstr))
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
def dictcall(self, d):
return self.func(*[d.get(k, d.get(k.name, k)) for k in self.kw])
def full_transform_old(Matrix, Tensor):
"""
Transforms the Tensor to Representation in new Basis with given Transformation Matrix.
"""
import numpy
for i in range(Tensor.ndim):
        Axes = list(range(Tensor.ndim))
Axes[0] = i
Axes[i] = 0
Tensor = numpy.tensordot(Matrix, Tensor.transpose(Axes), axes=1).transpose(Axes)
return Tensor
def full_transform2(Matrix, Tensor):
"""
Transforms the Tensor to Representation in new Basis with given Transformation Matrix.
"""
for i in range(Tensor.ndim):
Tensor = np.tensordot(Tensor, Matrix, axes=(0,0))
return Tensor
def full_transform(Matrix, Tensor):
"""
Transforms the Tensor to Representation in new Basis with given Transformation Matrix
(but using somehow the transformed matrix :/).
"""
Matrix = np.array(Matrix)
Tensor = np.array(Tensor)
dtype = np.find_common_type([],[Matrix.dtype, Tensor.dtype])
Tnew = np.zeros_like(Tensor, dtype = dtype)
for ind in itertools.product(*map(range, Tensor.shape)): # i
for inds in itertools.product(*map(range, Tensor.shape)): #j
Tnew[ind] += Tensor[inds] * Matrix[inds, ind].prod() #
#print Matrix[inds, ind], Tnew
return Tnew
def get_cell_parameters(sg, sgsym = None):
"""
Returns the general cell parameters for a lattice of given space group number sg.
"""
import sympy as sp
a, b, c, alpha, beta, gamma = sp.symbols("a, b, c, alpha, beta, gamma", real=True, positive=True, finite=True)
if sg in range(1,3):
system = "Triclinic"
elif sg in range(3,16):
if isinstance(sgsym, str):
trmat = get_ITA_settings(sg)[sgsym].split(",")
rightangles = [i for i in range(3) if "b" not in trmat[i]]
else:
rightangles = [0,2]
if 0 in rightangles: alpha=sp.S("pi/2")
else: unique = "a"
if 1 in rightangles: beta =sp.S("pi/2")
else: unique = "b"
if 2 in rightangles: gamma=sp.S("pi/2")
else: unique = "c"
system = "Monoclinic (unique axis %s)"%unique
elif sg in range(16,75):
alpha=sp.S("pi/2")
beta=sp.S("pi/2")
gamma=sp.S("pi/2")
system = "Orthorhombic"
elif sg in range(75,143):
b=a
alpha=sp.S("pi/2")
beta=sp.S("pi/2")
gamma=sp.S("pi/2")
system = "Tetragonal"
elif sg in range(143,168):
b=a
if isinstance(sgsym, str) and sgsym.endswith(":r"):
c = a
beta = alpha
gamma = alpha
system = "Trigonal (rhombohedral setting)"
else:
alpha=sp.S("pi/2")
beta=sp.S("pi/2")
gamma=sp.S("2/3*pi")
system = "Trigonal (hexagonal setting)"
elif sg in range(168,195):
b=a
alpha=sp.S("pi/2")
beta=sp.S("pi/2")
gamma=sp.S("2/3*pi")
system = "Hexagonal"
elif sg in range(195,231):
b=a
c=a
alpha=sp.S("pi/2")
beta=sp.S("pi/2")
gamma=sp.S("pi/2")
system = "Cubic"
else:
raise ValueError(
"Invalid Space Group Number. Has to be in range(1,231).")
print(system)
return a, b, c, alpha, beta, gamma, system
def get_rec_cell_parameters(a, b, c, alpha, beta, gamma):
    """
    Returns cell vectors in direct and reciprocal space as well as the
    reciprocal lattice parameters for the given cell parameters (vectors
    in the direct crystal-fixed Cartesian system).
    """
    import sympy as sp
# Metric Tensor:
G = sp.Matrix([[a**2, a*b*sp.cos(gamma), a*c*sp.cos(beta)],
[a*b*sp.cos(gamma), b**2, b*c*sp.cos(alpha)],
[a*c*sp.cos(beta), b*c*sp.cos(alpha), c**2]])
G_r = G.inv() # reciprocal Metric
G_r.simplify()
    # volume of the crystal system cell in the Cartesian system
# V = sp.sqrt(G.det())
# reciprocal cell lengths
ar = sp.sqrt(G_r[0,0])
br = sp.sqrt(G_r[1,1])
cr = sp.sqrt(G_r[2,2])
#alphar = sp.acos(G_r[1,2]/(br*cr)).simplify()
alphar = sp.acos((-sp.cos(alpha) + sp.cos(beta)*sp.cos(gamma))/(sp.Abs(sp.sin(beta))*sp.Abs(sp.sin(gamma))))
betar = sp.acos(G_r[0,2]/(ar*cr))
#gammar = sp.acos(G_r[0,1]/(ar*br)).simplify()
gammar = sp.acos((sp.cos(alpha)*sp.cos(beta) - sp.cos(gamma))/(sp.Abs(sp.sin(alpha))*sp.Abs(sp.sin(beta))))
# x parallel to a* and z parallel to a* x b* (ITC Vol B Ch. 3.3.1.1.1)
#B = sp.Matrix([[ar, br*sp.cos(gammar), cr*sp.cos(betar)],
# [0, br*sp.sin(gammar), -cr*sp.sin(betar)*sp.cos(alpha)],
# [0, 0, 1/c]])
#B_0 = sp.Matrix([[1, sp.cos(gammar), sp.cos(betar)],
# [0, sp.sin(gammar), -sp.sin(betar)*sp.cos(alpha)],
# [0, 0, 1]])
#V_0 = sp.sqrt(1 - sp.cos(alphar)**2 - sp.cos(betar)**2 - sp.cos(gammar)**2 + 2*sp.cos(alphar)*sp.cos(betar)*sp.cos(gammar))
# x parallel to a and z parallel to a x b (ITC Vol B Ch. 3.3.1.1.1)
V0 = sp.sin(alpha) * sp.sin(beta) * sp.sin(gammar)
M = sp.Matrix([[a, b*sp.cos(gamma), c*sp.cos(beta)],
[0, b*sp.sin(gamma), c*(sp.cos(alpha) - sp.cos(beta)*sp.cos(gamma))/sp.sin(gamma)],
[0, 0, c*V0/sp.sin(gamma)]])
Mi = sp.Matrix([[1/a,-1/(a*sp.tan(gamma)), (sp.cos(alpha)*sp.cos(gamma) - sp.cos(beta))/(a*V0*sp.sin(gamma))],
[0, 1/(b*sp.sin(gamma)), (sp.cos(beta)*sp.cos(gamma) - sp.cos(alpha))/(b*V0*sp.sin(gamma))],
[0, 0, sp.sin(gamma)/(c*V0)]])
return (ar, br, cr, alphar, betar, gammar, M, Mi, G, G_r)
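# Hedged usage sketch (editor addition): for a hexagonal cell (a = b,
# alpha = beta = pi/2, gamma = 2*pi/3) the values returned above reduce to
# the textbook results a* = 2/(sqrt(3)*a) (in the 1/d convention used here,
# i.e. without a 2*pi factor) and gamma* = pi/3.
#
#   >>> a, c = sp.symbols("a c", positive=True)
#   >>> res = get_rec_cell_parameters(a, a, c, sp.pi/2, sp.pi/2, 2*sp.pi/3)
#   >>> sp.simplify(res[0])            # a* -> 2*sqrt(3)/(3*a)
#   >>> sp.simplify(res[5])            # gamma* -> pi/3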
@np.vectorize
def debye_phi(v):
if v==0:
return 1.
elif v < 1e-3:
phi = v
elif v > 1e2:
phi = np.pi**2/6
else:
v = complex(v)
phi = (sp.mpmath.fp.polylog(2, np.exp(v.real)) - v**2/2. + v*np.log(1-np.exp(v)) - np.pi**2/6.)
return (phi/v).real
#debye_phi_v = np.vectorize(debye_phi)
def pvoigt(x, x0, amp, fwhm, y0=0, eta=0.5):
fwhm /= 2.
return y0 + amp * (eta / (1+((x-x0)/fwhm)**2)
+ (1-eta) * np.exp(-np.log(2)*((x-x0)/fwhm)**2))
def axangle2mat(axis, angle, is_normalized=False):
''' Rotation matrix for rotation angle `angle` around `axis`
taken and adapted from transforms3d package
Parameters
----------
axis : 3 element sequence
vector specifying axis for rotation.
angle : scalar
angle of rotation in radians.
is_normalized : bool, optional
True if `axis` is already normalized (has norm of 1). Default False.
Returns
-------
mat : array shape (3,3)
rotation matrix for specified rotation
Notes
-----
From: http://en.wikipedia.org/wiki/Rotation_matrix#Axis_and_angle
'''
x, y, z = axis
if not is_normalized:
n = sp.sqrt(x*x + y*y + z*z)
x = x/n
y = y/n
z = z/n
c = sp.cos(angle); s = sp.sin(angle); C = 1-c
xs = x*s; ys = y*s; zs = z*s
xC = x*C; yC = y*C; zC = z*C
xyC = x*yC; yzC = y*zC; zxC = z*xC
return sp.Matrix([
[ x*xC+c, xyC-zs, zxC+ys ],
[ xyC+zs, y*yC+c, yzC-xs ],
[ zxC-ys, yzC+xs, z*zC+c ]])
def angle_between(v1, v2):
""" Returns the angle in radians between vectors 'v1' and 'v2'::
>>> angle_between((1, 0, 0), (0, 1, 0))
1.5707963267948966
>>> angle_between((1, 0, 0), (1, 0, 0))
0.0
>>> angle_between((1, 0, 0), (-1, 0, 0))
3.141592653589793
"""
v1 = sp.Matrix(v1).normalized()
v2 = sp.Matrix(v2).normalized()
return sp.acos(v1.dot(v2))
def rotation_from_vectors(v1, v2):
"""
    Find the rotation matrix R that fulfils:
        R*v2 = v1
Jur van den Berg,
Calculate Rotation Matrix to align Vector A to Vector B in 3d?,
URL (version: 2016-09-01): https://math.stackexchange.com/q/476311
"""
v1 = sp.Matrix(v1).normalized()
v2 = sp.Matrix(v2).normalized()
ax = v1.cross(v2)
s = ax.norm()
c = v1.dot(v2)
if c==1:
return sp.eye(3)
if c==-1:
return -sp.eye(3)
u1, u2, u3 = ax
u_ = sp.Matrix((( 0, -u3, u2),
( u3, 0, -u1),
(-u2, u1, 0)))
R = sp.eye(3) - u_ + u_**2 * (1-c)/s**2
return R
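# Hedged usage sketch (editor addition): a quick symbolic check that the
# rotation returned by rotation_from_vectors() indeed maps v2 onto v1.
#
#   >>> R = rotation_from_vectors((1, 0, 0), (0, 1, 0))
#   >>> R * sp.Matrix([0, 1, 0])       # -> Matrix([[1], [0], [0]]), i.e. v1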
| carichte/pyasf | pyasf/functions.py | Python | gpl-3.0 | 17,291 | ["CRYSTAL"] | cc54d64918ea1e6175384040b4732dae7e256314c2f62e893ad53fdcdac1ac84 |
#!/usr/bin/env python
#-----------------------------------------------------------------------------
# Copyright (c) 2013--, biocore development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
"""
Application controller for SortMeRNA version 2.0
================================================
"""
# ----------------------------------------------------------------------------
# Copyright (c) 2014--, biocore development team
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
from os.path import split, splitext, dirname, join
from glob import glob
import re
from burrito.util import CommandLineApplication, ResultPath
from burrito.parameters import ValuedParameter, FlagParameter
from skbio.parse.sequences import parse_fasta
class IndexDB(CommandLineApplication):
""" SortMeRNA generic application controller for building databases
"""
_command = 'indexdb_rna'
_command_delimiter = ' '
_parameters = {
# Fasta reference file followed by indexed reference
# (ex. /path/to/refseqs.fasta,/path/to/refseqs.idx)
'--ref': ValuedParameter('--', Name='ref', Delimiter=' ', IsPath=True),
# Maximum number of positions to store for each unique seed
'--max_pos': ValuedParameter('--', Name='max_pos', Delimiter=' ',
IsPath=False, Value="10000"),
# tmp folder for storing unique L-mers (prior to calling CMPH
# in indexdb_rna), this tmp file is removed by indexdb_rna
# after it is not used any longer
'--tmpdir': ValuedParameter('--', Name='tmpdir', Delimiter=' ',
IsPath=True)
}
def _get_result_paths(self, data):
""" Build the dict of result filepaths
"""
# get the filepath of the indexed database (after comma)
# /path/to/refseqs.fasta,/path/to/refseqs.idx
# ^------------------^
db_name = (self.Parameters['--ref'].Value).split(',')[1]
result = {}
extensions = ['bursttrie', 'kmer', 'pos', 'stats']
for extension in extensions:
for file_path in glob("%s.%s*" % (db_name, extension)):
# this will match e.g. nr.bursttrie_0.dat, nr.bursttrie_1.dat
# and nr.stats
key = file_path.split(db_name + '.')[1]
result[key] = ResultPath(Path=file_path, IsWritten=True)
return result
def build_database_sortmerna(fasta_path,
max_pos=None,
output_dir=None,
HALT_EXEC=False):
""" Build sortmerna db from fasta_path; return db name
and list of files created
Parameters
----------
fasta_path : string
path to fasta file of sequences to build database.
max_pos : integer, optional
maximum positions to store per seed in index
[default: 10000].
output_dir : string, optional
directory where output should be written
[default: same directory as fasta_path]
HALT_EXEC : boolean, optional
halt just before running the indexdb_rna command
and print the command -- useful for debugging
[default: False].
    Returns
    -------
db_name : string
filepath to indexed database.
db_filepaths : list
output files by indexdb_rna
"""
if fasta_path is None:
raise ValueError("Error: path to fasta reference "
"sequences must exist.")
fasta_dir, fasta_filename = split(fasta_path)
if not output_dir:
output_dir = fasta_dir or '.'
# Will cd to this directory, so just pass the filename
# so the app is not confused by relative paths
fasta_path = fasta_filename
index_basename = splitext(fasta_filename)[0]
db_name = join(output_dir, index_basename)
# Instantiate the object
sdb = IndexDB(WorkingDir=output_dir, HALT_EXEC=HALT_EXEC)
# The parameter --ref STRING must follow the format where
# STRING = /path/to/ref.fasta,/path/to/ref.idx
sdb.Parameters['--ref'].on("%s,%s" % (fasta_path, db_name))
# Set temporary directory
sdb.Parameters['--tmpdir'].on(output_dir)
# Override --max_pos parameter
if max_pos is not None:
sdb.Parameters['--max_pos'].on(max_pos)
# Run indexdb_rna
app_result = sdb()
    # Return all output files (by indexdb_rna) as a list;
    # first, however, remove the StdErr and StdOut filepaths,
    # as those files will be destroyed on exit from
    # this function (IndexDB is a local instance)
db_filepaths = [v.name for k, v in app_result.items()
if k not in {'StdErr', 'StdOut'} and hasattr(v, 'name')]
return db_name, db_filepaths
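# Hedged usage sketch (editor addition): building an index requires the
# external indexdb_rna binary to be on PATH; all paths below are made up.
#
#   >>> db_name, db_files = build_database_sortmerna('/data/refs.fasta',
#   ...                                              max_pos=250,
#   ...                                              output_dir='/data/smr_idx')
#   >>> db_name
#   '/data/smr_idx/refs'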
class Sortmerna(CommandLineApplication):
""" SortMeRNA generic application controller for OTU picking
"""
_command = 'sortmerna'
_command_delimiter = ' '
_parameters = {
# Verbose (log to stdout)
'-v': FlagParameter('-', Name='v', Value=True),
# Fasta or Fastq input query sequences file
'--reads': ValuedParameter('--', Name='reads', Delimiter=' ',
IsPath=True, Value=None),
# Fasta reference file followed by indexed reference
'--ref': ValuedParameter('--', Name='ref', Delimiter=' ',
IsPath=True, Value=None),
# File path + base name for all output files
'--aligned': ValuedParameter('--', Name='aligned', Delimiter=' ',
IsPath=True, Value=None),
# Output log file with parameters used to launch sortmerna and
# statistics on final results (the log file takes on
# the basename given in --aligned and the extension '.log')
'--log': FlagParameter('--', Name='log', Value=True),
# Output Fasta or Fastq file of aligned reads (flag)
'--fastx': FlagParameter('--', Name='fastx', Value=True),
# Output BLAST alignment file, options include [0,3] where:
# 0: Blast-like pairwise alignment,
# 1: Blast tabular format,
# 2: 1 + extra column for CIGAR string,
# 3: 2 + extra column for query coverage
'--blast': ValuedParameter('--', Name='blast', Delimiter=' ',
IsPath=False, Value=None),
# Output SAM alignment file
'--sam': FlagParameter('--', Name='sam', Value=False),
# Output SQ tags in the SAM file (useful for whole-genome alignment)
'--SQ': FlagParameter('--', Name='SQ', Value=False),
# Report the best INT number of alignments
'--best': ValuedParameter('--', Name='best', Delimiter=' ',
IsPath=False, Value="1"),
# Report first INT number of alignments
'--num_alignments': ValuedParameter('--', Name='num_alignments',
Delimiter=' ', IsPath=False,
Value=None),
# Number of threads
'-a': ValuedParameter('-', Name='a', Delimiter=' ',
IsPath=False, Value="1"),
# E-value threshold
'-e': ValuedParameter('-', Name='e', Delimiter=' ',
IsPath=False, Value="1"),
# Similarity threshold
'--id': ValuedParameter('--', Name='id', Delimiter=' ',
IsPath=False, Value="0.97"),
# Query coverage threshold
'--coverage': ValuedParameter('--', Name='coverage', Delimiter=' ',
IsPath=False, Value="0.97"),
# Output Fasta/Fastq file with reads failing to pass the --id and
# --coverage thresholds for de novo clustering
'--de_novo_otu': FlagParameter('--', Name='de_novo_otu', Value=True),
# Output an OTU map
'--otu_map': FlagParameter('--', Name='otu_map', Value=True),
# Print a NULL alignment string for non-aligned reads
'--print_all_reads': FlagParameter('--', Name='print_all_reads',
Value=False)
}
_synonyms = {}
_input_handler = '_input_as_string'
_supress_stdout = False
_supress_stderr = False
def _get_result_paths(self, data):
""" Set the result paths """
result = {}
# get the file extension of the reads file (sortmerna
# internally outputs all results with this extension)
fileExtension = splitext(self.Parameters['--reads'].Value)[1]
# at this point the parameter --aligned should be set as
# sortmerna will not run without it
if self.Parameters['--aligned'].isOff():
raise ValueError("Error: the --aligned parameter must be set.")
# file base name for aligned reads
output_base = self.Parameters['--aligned'].Value
# Blast alignments
result['BlastAlignments'] =\
ResultPath(Path=output_base + '.blast',
IsWritten=self.Parameters['--blast'].isOn())
# SAM alignments
result['SAMAlignments'] =\
ResultPath(Path=output_base + '.sam',
IsWritten=self.Parameters['--sam'].isOn())
# OTU map (mandatory output)
result['OtuMap'] =\
ResultPath(Path=output_base + '_otus.txt',
IsWritten=self.Parameters['--otu_map'].isOn())
        # FASTA file of sequences in the OTU map (mandatory output)
result['FastaMatches'] =\
ResultPath(Path=output_base + fileExtension,
IsWritten=self.Parameters['--fastx'].isOn())
# FASTA file of sequences not in the OTU map (mandatory output)
result['FastaForDenovo'] =\
ResultPath(Path=output_base + '_denovo' +
fileExtension,
IsWritten=self.Parameters['--de_novo_otu'].isOn())
# Log file
result['LogFile'] =\
ResultPath(Path=output_base + '.log',
IsWritten=self.Parameters['--log'].isOn())
return result
def getHelp(self):
"""Method that points to documentation"""
help_str = ("SortMeRNA is hosted at:\n"
"http://bioinfo.lifl.fr/RNA/sortmerna/\n"
"https://github.com/biocore/sortmerna\n\n"
"The following paper should be cited if this resource is "
"used:\n\n"
"Kopylova, E., Noe L. and Touzet, H.,\n"
"SortMeRNA: fast and accurate filtering of ribosomal RNAs "
"in\n"
"metatranscriptomic data, Bioinformatics (2012) 28(24)\n"
)
return help_str
def sortmerna_ref_cluster(seq_path=None,
sortmerna_db=None,
refseqs_fp=None,
result_path=None,
tabular=False,
max_e_value=1,
similarity=0.97,
coverage=0.97,
threads=1,
best=1,
HALT_EXEC=False
):
"""Launch sortmerna OTU picker
Parameters
----------
seq_path : str
filepath to query sequences.
sortmerna_db : str
indexed reference database.
refseqs_fp : str
filepath of reference sequences.
result_path : str
filepath to output OTU map.
max_e_value : float, optional
E-value threshold [default: 1].
similarity : float, optional
similarity %id threshold [default: 0.97].
coverage : float, optional
query coverage % threshold [default: 0.97].
threads : int, optional
number of threads to use (OpenMP) [default: 1].
tabular : bool, optional
output BLAST tabular alignments [default: False].
best : int, optional
number of best alignments to output per read
[default: 1].
Returns
-------
clusters : dict of lists
OTU ids and reads mapping to them
failures : list
reads which did not align
"""
# Instantiate the object
smr = Sortmerna(HALT_EXEC=HALT_EXEC)
# Set input query sequences path
if seq_path is not None:
smr.Parameters['--reads'].on(seq_path)
else:
raise ValueError("Error: a read file is mandatory input.")
# Set the input reference sequence + indexed database path
if sortmerna_db is not None:
smr.Parameters['--ref'].on("%s,%s" % (refseqs_fp, sortmerna_db))
else:
raise ValueError("Error: an indexed database for reference set %s must"
" already exist.\nUse indexdb_rna to index the"
" database." % refseqs_fp)
if result_path is None:
raise ValueError("Error: the result path must be set.")
# Set output results path (for Blast alignments, clusters and failures)
output_dir = dirname(result_path)
if output_dir is not None:
output_file = join(output_dir, "sortmerna_otus")
smr.Parameters['--aligned'].on(output_file)
# Set E-value threshold
if max_e_value is not None:
smr.Parameters['-e'].on(max_e_value)
# Set similarity threshold
if similarity is not None:
smr.Parameters['--id'].on(similarity)
# Set query coverage threshold
if coverage is not None:
smr.Parameters['--coverage'].on(coverage)
# Set number of best alignments to output
if best is not None:
smr.Parameters['--best'].on(best)
# Set Blast tabular output
# The option --blast 3 represents an
# m8 blast tabular output + two extra
# columns containing the CIGAR string
# and the query coverage
if tabular:
smr.Parameters['--blast'].on("3")
# Set number of threads
if threads is not None:
smr.Parameters['-a'].on(threads)
# Run sortmerna
app_result = smr()
# Put clusters into a map of lists
f_otumap = app_result['OtuMap']
rows = (line.strip().split('\t') for line in f_otumap)
clusters = {r[0]: r[1:] for r in rows}
# Put failures into a list
f_failure = app_result['FastaForDenovo']
failures = [re.split('>| ', label)[0]
for label, seq in parse_fasta(f_failure)]
# remove the aligned FASTA file and failures FASTA file
# (currently these are re-constructed using pick_rep_set.py
# further in the OTU-picking pipeline)
smr_files_to_remove = [app_result['FastaForDenovo'].name,
app_result['FastaMatches'].name,
app_result['OtuMap'].name]
return clusters, failures, smr_files_to_remove
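# Hedged usage sketch (editor addition): a typical closed-reference OTU
# picking call against the database built above; the sortmerna binary must
# be on PATH and every path below is illustrative.
#
#   >>> clusters, failures, to_remove = sortmerna_ref_cluster(
#   ...     seq_path='/data/reads.fasta',
#   ...     sortmerna_db='/data/smr_idx/refs',
#   ...     refseqs_fp='/data/refs.fasta',
#   ...     result_path='/data/out/otu_map.txt',
#   ...     threads=4)
#   >>> clusters                       # dict: OTU id -> list of read labels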
def sortmerna_map(seq_path,
output_dir,
refseqs_fp,
sortmerna_db,
e_value=1,
threads=1,
best=None,
num_alignments=None,
HALT_EXEC=False,
output_sam=False,
sam_SQ_tags=False,
blast_format=3,
print_all_reads=True,
):
"""Launch sortmerna mapper
Parameters
----------
seq_path : str
filepath to reads.
output_dir : str
dirpath to sortmerna output.
refseqs_fp : str
filepath of reference sequences.
sortmerna_db : str
indexed reference database.
e_value : float, optional
E-value threshold [default: 1].
threads : int, optional
number of threads to use (OpenMP) [default: 1].
best : int, optional
number of best alignments to output per read
[default: None].
num_alignments : int, optional
number of first alignments passing E-value threshold to
output per read [default: None].
HALT_EXEC : bool, debugging parameter
If passed, will exit just before the sortmerna command
is issued and will print out the command that would
have been called to stdout [default: False].
output_sam : bool, optional
flag to set SAM output format [default: False].
sam_SQ_tags : bool, optional
add SQ field to SAM output (if output_SAM is True)
[default: False].
blast_format : int, optional
        Output Blast m8 tabular + 2 extra columns for CIGAR
        string and query coverage [default: 3].
print_all_reads : bool, optional
output NULL alignments for non-aligned reads
[default: True].
Returns
-------
dict of result paths set in _get_result_paths()
"""
if not (blast_format or output_sam):
raise ValueError("Either Blast or SAM output alignment "
"format must be chosen.")
if (best and num_alignments):
raise ValueError("Only one of --best or --num_alignments "
"options must be chosen.")
# Instantiate the object
smr = Sortmerna(HALT_EXEC=HALT_EXEC)
# Set the input reference sequence + indexed database path
smr.Parameters['--ref'].on("%s,%s" % (refseqs_fp, sortmerna_db))
# Set input query sequences path
smr.Parameters['--reads'].on(seq_path)
# Set Blast tabular output
# The option --blast 3 represents an
# m8 blast tabular output + two extra
# columns containing the CIGAR string
# and the query coverage
if blast_format:
smr.Parameters['--blast'].on(blast_format)
# Output alignments in SAM format
if output_sam:
smr.Parameters['--sam'].on()
if sam_SQ_tags:
smr.Parameters['--SQ'].on()
# Turn on NULL string alignment output
if print_all_reads:
smr.Parameters['--print_all_reads'].on()
# Set output results path (for Blast alignments and log file)
output_file = join(output_dir, "sortmerna_map")
smr.Parameters['--aligned'].on(output_file)
# Set E-value threshold
if e_value is not None:
smr.Parameters['-e'].on(e_value)
# Set number of best alignments to output per read
if best is not None:
smr.Parameters['--best'].on(best)
# Set number of first alignments passing E-value threshold
# to output per read
if num_alignments is not None:
smr.Parameters['--num_alignments'].on(num_alignments)
# Set number of threads
if threads is not None:
smr.Parameters['-a'].on(threads)
# Turn off parameters related to OTU-picking
smr.Parameters['--fastx'].off()
smr.Parameters['--otu_map'].off()
smr.Parameters['--de_novo_otu'].off()
smr.Parameters['--id'].off()
smr.Parameters['--coverage'].off()
# Run sortmerna
app_result = smr()
return app_result
| ekopylova/burrito-fillings | bfillings/sortmerna_v2.py | Python | bsd-3-clause | 19,566 | ["BLAST"] | 4b030a9a0c8c89f972d701ac2eb45f86180fe3b2b07dbee2bf3139624a75ff93 |
# ----------------------------------------------------------------------------
# Copyright (c) 2013--, scikit-bio development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
import io
def is_binary_file(file):
return isinstance(file, (io.BufferedReader, io.BufferedWriter,
io.BufferedRandom))
# Everything beyond this point will be some kind of hack needed to make
# everything work. It's not pretty and it doesn't make great sense much
# of the time. I am very sorry to the poor soul who has to read beyond.
class FlushDestructorMixin:
def __del__(self):
# By default, the destructor calls close(), which flushes and closes
# the underlying buffer. Override to only flush.
if not self.closed:
self.flush()
class SaneTextIOWrapper(FlushDestructorMixin, io.TextIOWrapper):
pass
class WrappedBufferedRandom(FlushDestructorMixin, io.BufferedRandom):
pass
class CompressedMixin(FlushDestructorMixin):
"""Act as a bridge between worlds"""
def __init__(self, before_file, *args, **kwargs):
self.streamable = kwargs.pop('streamable', True)
self._before_file = before_file
super(CompressedMixin, self).__init__(*args, **kwargs)
@property
def closed(self):
return self.raw.closed or self._before_file.closed
def close(self):
super(CompressedMixin, self).close()
# The above will not usually close before_file. We want the
# decompression to be transparent, so we don't want users to deal with
# this edge case. Instead we can just close the original now that we
# are being closed.
self._before_file.close()
class CompressedBufferedReader(CompressedMixin, io.BufferedReader):
pass
class CompressedBufferedWriter(CompressedMixin, io.BufferedWriter):
pass
class IterableStringReaderIO(io.StringIO):
def __init__(self, iterable, newline):
self._iterable = iterable
super(IterableStringReaderIO, self).__init__(''.join(iterable),
newline=newline)
class IterableStringWriterIO(IterableStringReaderIO):
def close(self):
if not self.closed:
backup = self.tell()
self.seek(0)
for line in self:
self._iterable.append(line)
self.seek(backup)
super(IterableStringWriterIO, self).close()
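# Hedged usage sketch (editor addition): IterableStringWriterIO copies the
# lines written to it back into the backing list when closed, which is how
# an in-memory "file" can round-trip through code expecting file objects.
if __name__ == '__main__':
    lines = []
    fh = IterableStringWriterIO(lines, newline=None)
    fh.write('a\nb\n')
    fh.close()
    print(lines)  # ['a\n', 'b\n']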
| anderspitman/scikit-bio | skbio/io/_fileobject.py | Python | bsd-3-clause | 2,615 | ["scikit-bio"] | 4b1d795fdde0118bd78c6bdb4f5faa1c6f29f0200c9e8dca643e415982c97c71 |
import numpy as np
from sklearn import gaussian_process
from sklearn.base import BaseEstimator
MACHINE_EPSILON = np.finfo(np.double).eps
class GPS( BaseEstimator ):
'''
    multivariate wrapper for the Gaussian process model in sklearn
'''
def __init__( self, n_outputs, regr='constant', corr='squared_exponential',
storage_mode='full', verbose=False, theta0=1e-1 ):
self.gps = [ gaussian_process.GaussianProcess( regr=regr, corr=corr,
storage_mode=storage_mode, verbose=verbose, theta0=theta0 ) for i in range( n_outputs ) ]
def fit( self, X, Y ):
assert( len( self.gps ) == Y.shape[ 1 ] )
for i in range( len( self.gps ) ):
try:
self.gps[i].fit( X, Y[ :, i ] )
except ValueError as e:
                print( 'ValueError caught for i:{0}: e:{1}'.format( i, e ) )
raise e
return self.gps
def predict( self, X ):
n_outputs = len( self.gps )
Y = np.empty( (X.shape[0], n_outputs) )
for i in range( n_outputs ):
Y[ :, i ] = self.gps[ i ].predict( X )
return Y
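# Hedged usage sketch (editor addition): GaussianProcess was removed from
# scikit-learn in 0.20, so this demo only runs on the old API this wrapper
# targets (modern code would use GaussianProcessRegressor instead).
if __name__ == '__main__':
    X = np.random.rand(30, 2)
    Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])])
    model = GPS(n_outputs=2)
    model.fit(X, Y)
    print(model.predict(X[:3]))  # one column per output dimension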
| marcino239/pilco | GPS.py | Python | gpl-2.0 | 1,033 | ["Gaussian"] | 07b9082fc740d87b5b7325dc3386798c3a94e76911d46c7986b44b085baf68b8 |
#!/usr/bin/env python
# Copyright 2018-2019 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Peng Bao <baopeng@iccas.ac.cn>
# Qiming Sun <osirpt.sun@gmail.com>
#
'''
semi-grid Coulomb and eXchange without differential density matrix
To lower the scaling of Coulomb and exchange matrix construction for large
systems, one coordinate is treated analytically and the other numerically on
a grid. The traditional two-electron integrals turn into analytical
one-electron integrals plus a numerical integration over the grid.
(see Friesner, R. A. Chem. Phys. Lett. 1985, 116, 39)
Minimizing numerical errors using an overlap fitting correction. (see
Izsak, R. et al. J. Chem. Phys. 2011, 135, 144105)
Grid screening for weighted AO value and DktXkg.
Two SCF steps: coarse grid then fine grid. There are 5 parameters that can be changed:
# threshold for Xg and Fg screening
gthrd = 1e-10
# initial and final grids level
grdlvl_i = 0
grdlvl_f = 1
# norm_ddm threshold for grids change
thrd_nddm = 0.03
# set block size to adapt memory
sblk = 200
Set mf.direct_scf = False because there are no traditional 2e integrals
'''
import ctypes
import numpy
import scipy.linalg
from pyscf import lib
from pyscf import gto
from pyscf.lib import logger
from pyscf.df.incore import aux_e2
from pyscf.gto import moleintor
from pyscf.scf import _vhf
from pyscf.dft import gen_grid
def get_jk_favork(sgx, dm, hermi=1, with_j=True, with_k=True,
direct_scf_tol=1e-13):
t0 = logger.process_clock(), logger.perf_counter()
mol = sgx.mol
grids = sgx.grids
gthrd = sgx.grids_thrd
dms = numpy.asarray(dm)
dm_shape = dms.shape
nao = dm_shape[-1]
dms = dms.reshape(-1,nao,nao)
nset = dms.shape[0]
if sgx.debug:
batch_nuc = _gen_batch_nuc(mol)
else:
batch_jk = _gen_jk_direct(mol, 's2', with_j, with_k, direct_scf_tol,
sgx._opt, sgx.pjs)
    t1 = logger.timer_debug1(mol, "sgX initialization", *t0)
sn = numpy.zeros((nao,nao))
vj = numpy.zeros_like(dms)
vk = numpy.zeros_like(dms)
ngrids = grids.coords.shape[0]
max_memory = sgx.max_memory - lib.current_memory()[0]
sblk = sgx.blockdim
blksize = min(ngrids, max(4, int(min(sblk, max_memory*1e6/8/nao**2))))
tnuc = 0, 0
for i0, i1 in lib.prange(0, ngrids, blksize):
coords = grids.coords[i0:i1]
weights = grids.weights[i0:i1,None]
ao = mol.eval_gto('GTOval', coords)
wao = ao * grids.weights[i0:i1,None]
sn += lib.dot(ao.T, wao)
fg = lib.einsum('gi,xij->xgj', wao, dms)
mask = numpy.zeros(i1-i0, dtype=bool)
for i in range(nset):
mask |= numpy.any(fg[i]>gthrd, axis=1)
mask |= numpy.any(fg[i]<-gthrd, axis=1)
if not numpy.all(mask):
ao = ao[mask]
wao = wao[mask]
fg = fg[:,mask]
coords = coords[mask]
weights = weights[mask]
if sgx.debug:
tnuc = tnuc[0] - logger.process_clock(), tnuc[1] - logger.perf_counter()
gbn = batch_nuc(mol, coords)
tnuc = tnuc[0] + logger.process_clock(), tnuc[1] + logger.perf_counter()
if with_j:
jg = numpy.einsum('gij,xij->xg', gbn, dms)
if with_k:
gv = lib.einsum('gvt,xgt->xgv', gbn, fg)
gbn = None
else:
tnuc = tnuc[0] - logger.process_clock(), tnuc[1] - logger.perf_counter()
jg, gv = batch_jk(mol, coords, dms, fg.copy(), weights)
tnuc = tnuc[0] + logger.process_clock(), tnuc[1] + logger.perf_counter()
if with_j:
xj = lib.einsum('gv,xg->xgv', ao, jg)
for i in range(nset):
vj[i] += lib.einsum('gu,gv->uv', wao, xj[i])
if with_k:
for i in range(nset):
vk[i] += lib.einsum('gu,gv->uv', ao, gv[i])
jg = gv = None
t2 = logger.timer_debug1(mol, "sgX J/K builder", *t1)
tdot = t2[0] - t1[0] - tnuc[0] , t2[1] - t1[1] - tnuc[1]
logger.debug1(sgx, '(CPU, wall) time for integrals (%.2f, %.2f); '
'for tensor contraction (%.2f, %.2f)',
tnuc[0], tnuc[1], tdot[0], tdot[1])
ovlp = mol.intor_symmetric('int1e_ovlp')
proj = scipy.linalg.solve(sn, ovlp)
if with_j:
vj = lib.einsum('pi,xpj->xij', proj, vj)
vj = (vj + vj.transpose(0,2,1))*.5
if with_k:
vk = lib.einsum('pi,xpj->xij', proj, vk)
if hermi == 1:
vk = (vk + vk.transpose(0,2,1))*.5
logger.timer(mol, "vj and vk", *t0)
return vj.reshape(dm_shape), vk.reshape(dm_shape)
def get_jk_favorj(sgx, dm, hermi=1, with_j=True, with_k=True,
direct_scf_tol=1e-13):
t0 = logger.process_clock(), logger.perf_counter()
mol = sgx.mol
grids = sgx.grids
gthrd = sgx.grids_thrd
dms = numpy.asarray(dm)
dm_shape = dms.shape
nao = dm_shape[-1]
dms = dms.reshape(-1,nao,nao)
nset = dms.shape[0]
if sgx.debug:
batch_nuc = _gen_batch_nuc(mol)
else:
batch_jk = _gen_jk_direct(mol, 's2', with_j, with_k, direct_scf_tol,
sgx._opt, sgx.pjs)
sn = numpy.zeros((nao,nao))
ngrids = grids.coords.shape[0]
max_memory = sgx.max_memory - lib.current_memory()[0]
sblk = sgx.blockdim
blksize = min(ngrids, max(4, int(min(sblk, max_memory*1e6/8/nao**2))))
for i0, i1 in lib.prange(0, ngrids, blksize):
coords = grids.coords[i0:i1]
ao = mol.eval_gto('GTOval', coords)
wao = ao * grids.weights[i0:i1,None]
sn += lib.dot(ao.T, wao)
ovlp = mol.intor_symmetric('int1e_ovlp')
proj = scipy.linalg.solve(sn, ovlp)
proj_dm = lib.einsum('ki,xij->xkj', proj, dms)
t1 = logger.timer_debug1(mol, "sgX initialization", *t0)
vj = numpy.zeros_like(dms)
vk = numpy.zeros_like(dms)
tnuc = 0, 0
for i0, i1 in lib.prange(0, ngrids, blksize):
coords = grids.coords[i0:i1]
weights = grids.weights[i0:i1,None]
ao = mol.eval_gto('GTOval', coords)
wao = ao * grids.weights[i0:i1,None]
fg = lib.einsum('gi,xij->xgj', wao, proj_dm)
mask = numpy.zeros(i1-i0, dtype=bool)
for i in range(nset):
mask |= numpy.any(fg[i]>gthrd, axis=1)
mask |= numpy.any(fg[i]<-gthrd, axis=1)
if not numpy.all(mask):
ao = ao[mask]
fg = fg[:,mask]
coords = coords[mask]
weights = weights[mask]
if with_j:
rhog = numpy.einsum('xgu,gu->xg', fg, ao)
else:
rhog = None
if sgx.debug:
tnuc = tnuc[0] - logger.process_clock(), tnuc[1] - logger.perf_counter()
gbn = batch_nuc(mol, coords)
tnuc = tnuc[0] + logger.process_clock(), tnuc[1] + logger.perf_counter()
if with_j:
jpart = numpy.einsum('guv,xg->xuv', gbn, rhog)
if with_k:
gv = lib.einsum('gtv,xgt->xgv', gbn, fg)
gbn = None
else:
tnuc = tnuc[0] - logger.process_clock(), tnuc[1] - logger.perf_counter()
if with_j: rhog = rhog.copy()
jpart, gv = batch_jk(mol, coords, rhog, fg.copy(), weights)
tnuc = tnuc[0] + logger.process_clock(), tnuc[1] + logger.perf_counter()
if with_j:
vj += jpart
if with_k:
for i in range(nset):
vk[i] += lib.einsum('gu,gv->uv', ao, gv[i])
jpart = gv = None
t2 = logger.timer_debug1(mol, "sgX J/K builder", *t1)
tdot = t2[0] - t1[0] - tnuc[0], t2[1] - t1[1] - tnuc[1]
logger.debug1(sgx, '(CPU, wall) time for integrals (%.2f, %.2f); '
'for tensor contraction (%.2f, %.2f)',
tnuc[0], tnuc[1], tdot[0], tdot[1])
for i in range(nset):
lib.hermi_triu(vj[i], inplace=True)
if with_k and hermi == 1:
vk = (vk + vk.transpose(0,2,1))*.5
logger.timer(mol, "vj and vk", *t0)
return vj.reshape(dm_shape), vk.reshape(dm_shape)
def _gen_batch_nuc(mol):
'''Coulomb integrals of the given points and orbital pairs'''
cintopt = gto.moleintor.make_cintopt(mol._atm, mol._bas, mol._env, 'int3c2e')
def batch_nuc(mol, grid_coords, out=None):
fakemol = gto.fakemol_for_charges(grid_coords)
j3c = aux_e2(mol, fakemol, intor='int3c2e', aosym='s2ij', cintopt=cintopt)
return lib.unpack_tril(j3c.T, out=out)
return batch_nuc
def _gen_jk_direct(mol, aosym, with_j, with_k, direct_scf_tol, sgxopt=None, pjs=False):
'''Contraction between sgX Coulomb integrals and density matrices
J: einsum('guv,xg->xuv', gbn, dms) if dms == rho at grid
einsum('gij,xij->xg', gbn, dms) if dms are density matrices
K: einsum('gtv,xgt->xgv', gbn, fg)
'''
if sgxopt is None:
from pyscf.sgx import sgx
sgxopt = sgx._make_opt(mol, pjs=pjs)
sgxopt.direct_scf_tol = direct_scf_tol
ncomp = 1
nao = mol.nao
cintor = _vhf._fpointer(sgxopt._intor)
fdot = _vhf._fpointer('SGXdot_nrk')
drv = _vhf.libcvhf.SGXnr_direct_drv
def jk_part(mol, grid_coords, dms, fg, weights):
atm, bas, env = mol._atm, mol._bas, mol._env
ngrids = grid_coords.shape[0]
env = numpy.append(env, grid_coords.ravel())
env[gto.NGRIDS] = ngrids
env[gto.PTR_GRIDS] = mol._env.size
if pjs:
sgxopt.set_dm(fg / numpy.sqrt(numpy.abs(weights[None,:])),
mol._atm, mol._bas, env)
ao_loc = moleintor.make_loc(bas, sgxopt._intor)
shls_slice = (0, mol.nbas, 0, mol.nbas)
fg = numpy.ascontiguousarray(fg.transpose(0,2,1))
vj = vk = None
fjk = []
dmsptr = []
vjkptr = []
if with_j:
if dms[0].ndim == 1: # the value of density at each grid
vj = numpy.zeros((len(dms),ncomp,nao,nao))[:,0]
for i, dm in enumerate(dms):
dmsptr.append(dm.ctypes.data_as(ctypes.c_void_p))
vjkptr.append(vj[i].ctypes.data_as(ctypes.c_void_p))
fjk.append(_vhf._fpointer('SGXnr'+aosym+'_ijg_g_ij'))
else:
vj = numpy.zeros((len(dms),ncomp,ngrids))[:,0]
for i, dm in enumerate(dms):
dmsptr.append(dm.ctypes.data_as(ctypes.c_void_p))
vjkptr.append(vj[i].ctypes.data_as(ctypes.c_void_p))
fjk.append(_vhf._fpointer('SGXnr'+aosym+'_ijg_ji_g'))
if with_k:
vk = numpy.zeros((len(fg),ncomp,nao,ngrids))[:,0]
for i, dm in enumerate(fg):
dmsptr.append(dm.ctypes.data_as(ctypes.c_void_p))
vjkptr.append(vk[i].ctypes.data_as(ctypes.c_void_p))
fjk.append(_vhf._fpointer('SGXnr'+aosym+'_ijg_gj_gi'))
n_dm = len(fjk)
fjk = (ctypes.c_void_p*(n_dm))(*fjk)
dmsptr = (ctypes.c_void_p*(n_dm))(*dmsptr)
vjkptr = (ctypes.c_void_p*(n_dm))(*vjkptr)
drv(cintor, fdot, fjk, dmsptr, vjkptr, n_dm, ncomp,
(ctypes.c_int*4)(*shls_slice),
ao_loc.ctypes.data_as(ctypes.c_void_p),
sgxopt._cintopt, sgxopt._this,
atm.ctypes.data_as(ctypes.c_void_p), ctypes.c_int(mol.natm),
bas.ctypes.data_as(ctypes.c_void_p), ctypes.c_int(mol.nbas),
env.ctypes.data_as(ctypes.c_void_p),
ctypes.c_int(env.shape[0]),
ctypes.c_int(2 if aosym == 's2' else 1))
if vk is not None:
vk = vk.transpose(0,2,1)
vk = numpy.ascontiguousarray(vk)
return vj, vk
return jk_part
# Preparation for get_k: build the integration grids and screen out points
# with negligible weighted AO values. Uses the default mesh grids and weights.
def get_gridss(mol, level=1, gthrd=1e-10):
Ktime = (logger.process_clock(), logger.perf_counter())
grids = gen_grid.Grids(mol)
grids.level = level
grids.build()
ngrids = grids.weights.size
mask = []
for p0, p1 in lib.prange(0, ngrids, 10000):
ao_v = mol.eval_gto('GTOval', grids.coords[p0:p1])
ao_v *= grids.weights[p0:p1,None]
wao_v0 = ao_v
mask.append(numpy.any(wao_v0>gthrd, axis=1) |
numpy.any(wao_v0<-gthrd, axis=1))
mask = numpy.hstack(mask)
grids.coords = grids.coords[mask]
grids.weights = grids.weights[mask]
logger.debug(mol, 'threshold for grids screening %g', gthrd)
logger.debug(mol, 'number of grids %d', grids.weights.size)
logger.timer_debug1(mol, "Xg screening", *Ktime)
return grids
get_jk = get_jk_favorj
if __name__ == '__main__':
from pyscf import scf
from pyscf.sgx import sgx
mol = gto.Mole()
mol.build(
verbose = 0,
atom = [["O" , (0. , 0. , 0.)],
[1 , (0. , -0.757 , 0.587)],
[1 , (0. , 0.757 , 0.587)] ],
basis = 'ccpvdz',
)
dm = scf.RHF(mol).run().make_rdm1()
vjref, vkref = scf.hf.get_jk(mol, dm)
print(numpy.einsum('ij,ji->', vjref, dm))
print(numpy.einsum('ij,ji->', vkref, dm))
sgxobj = sgx.SGX(mol)
sgxobj.grids = get_gridss(mol, 0, 1e-10)
with lib.temporary_env(sgxobj, debug=True):
vj, vk = get_jk_favork(sgxobj, dm)
print(numpy.einsum('ij,ji->', vj, dm))
print(numpy.einsum('ij,ji->', vk, dm))
print(abs(vjref-vj).max().max())
print(abs(vkref-vk).max().max())
with lib.temporary_env(sgxobj, debug=False):
vj1, vk1 = get_jk_favork(sgxobj, dm)
print(abs(vj - vj1).max())
print(abs(vk - vk1).max())
with lib.temporary_env(sgxobj, debug=True):
vj, vk = get_jk_favorj(sgxobj, dm)
print(numpy.einsum('ij,ji->', vj, dm))
print(numpy.einsum('ij,ji->', vk, dm))
print(abs(vjref-vj).max().max())
print(abs(vkref-vk).max().max())
with lib.temporary_env(sgxobj, debug=False):
vj1, vk1 = get_jk_favorj(sgxobj, dm)
print(abs(vj - vj1).max())
print(abs(vk - vk1).max())
|
sunqm/pyscf
|
pyscf/sgx/sgx_jk.py
|
Python
|
apache-2.0
| 14,480
|
[
"PySCF"
] |
042f4dfa65b32545a2df8de81d301cc370598dab30e25889416186891efb69f4
|
# Copyright 2009 by Cymon J. Cox. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Command line wrapper for the multiple alignment program MUSCLE.
"""
__docformat__ = "epytext en" #Don't just use plain text in epydoc API pages!
from Bio.Application import _Option, _Switch, AbstractCommandline
class MuscleCommandline(AbstractCommandline):
r"""Command line wrapper for the multiple alignment program MUSCLE.
http://www.drive5.com/muscle/
Example:
>>> from Bio.Align.Applications import MuscleCommandline
>>> muscle_exe = r"C:\Program Files\Alignments\muscle3.8.31_i86win32.exe"
>>> in_file = r"C:\My Documents\unaligned.fasta"
>>> out_file = r"C:\My Documents\aligned.fasta"
>>> muscle_cline = MuscleCommandline(muscle_exe, input=in_file, out=out_file)
>>> print(muscle_cline)
C:\Program Files\Alignments\muscle3.8.31_i86win32.exe -in "C:\My Documents\unaligned.fasta" -out "C:\My Documents\aligned.fasta"
You would typically run the command line with muscle_cline() or via
the Python subprocess module, as described in the Biopython tutorial.
Citations:
Edgar, Robert C. (2004), MUSCLE: multiple sequence alignment with high
accuracy and high throughput, Nucleic Acids Research 32(5), 1792-97.
Edgar, R.C. (2004) MUSCLE: a multiple sequence alignment method with
reduced time and space complexity. BMC Bioinformatics 5(1): 113.
Last checked against version: 3.7, briefly against 3.8
"""
def __init__(self, cmd="muscle", **kwargs):
CLUSTERING_ALGORITHMS = ["upgma", "upgmb", "neighborjoining"]
DISTANCE_MEASURES_ITER1 = ["kmer6_6", "kmer20_3", "kmer20_4", "kbit20_3",
"kmer4_6"]
DISTANCE_MEASURES_ITER2 = DISTANCE_MEASURES_ITER1 + \
["pctid_kimura", "pctid_log"]
OBJECTIVE_SCORES = ["sp", "ps", "dp", "xp", "spf", "spm"]
TREE_ROOT_METHODS = ["pseudo", "midlongestspan", "minavgleafdist"]
SEQUENCE_TYPES = ["protein", "nucleo", "auto"]
WEIGHTING_SCHEMES = ["none", "clustalw", "henikoff", "henikoffpb",
"gsc", "threeway"]
self.parameters = \
[
#Can't use "in" as the final alias as this is a reserved word in python:
_Option(["-in", "in", "input"],
"Input filename",
filename=True,
equate=False),
_Option(["-out", "out"],
"Output filename",
filename=True,
equate=False),
_Switch(["-diags", "diags"],
"Find diagonals (faster for similar sequences)"),
_Switch(["-profile", "profile"],
"Perform a profile alignment"),
_Option(["-in1", "in1"],
"First input filename for profile alignment",
filename=True,
equate=False),
_Option(["-in2", "in2"],
"Second input filename for a profile alignment",
filename=True,
equate=False),
#anchorspacing Integer 32 Minimum spacing between
_Option(["-anchorspacing", "anchorspacing"],
"Minimum spacing between anchor columns",
checker_function=lambda x: isinstance(x, int),
equate=False),
#center Floating point [1] Center parameter.
# Should be negative.
_Option(["-center", "center"],
"Center parameter - should be negative",
checker_function=lambda x: isinstance(x, float),
equate=False),
#cluster1 upgma upgmb Clustering method.
_Option(["-cluster1", "cluster1"],
"Clustering method used in iteration 1",
checker_function=lambda x: x in CLUSTERING_ALGORITHMS,
equate=False),
#cluster2 upgmb cluster1 is used in
# neighborjoining iteration 1 and 2,
# cluster2 in later
# iterations.
_Option(["-cluster2", "cluster2"],
"Clustering method used in iteration 2",
checker_function=lambda x: x in CLUSTERING_ALGORITHMS,
equate=False),
#diaglength Integer 24 Minimum length of
# diagonal.
_Option(["-diaglength", "diaglength"],
"Minimum length of diagonal",
checker_function=lambda x: isinstance(x, int),
equate=False),
#diagmargin Integer 5 Discard this many
# positions at ends of
# diagonal.
_Option(["-diagmargin", "diagmargin"],
"Discard this many positions at ends of diagonal",
checker_function=lambda x: isinstance(x, int),
equate=False),
#distance1 kmer6_6 Kmer6_6 (amino) or Distance measure for
# kmer20_3 Kmer4_6 (nucleo) iteration 1.
# kmer20_4
# kbit20_3
# kmer4_6
_Option(["-distance1", "distance1"],
"Distance measure for iteration 1",
checker_function=lambda x: x in DISTANCE_MEASURES_ITER1,
equate=False),
#distance2 kmer6_6 pctid_kimura Distance measure for
# kmer20_3 iterations 2, 3 ...
# kmer20_4
# kbit20_3
# pctid_kimura
# pctid_log
_Option(["-distance2", "distance2"],
"Distance measure for iteration 2",
checker_function=lambda x: x in DISTANCE_MEASURES_ITER2,
equate=False),
#gapopen Floating point [1] The gap open score.
# Must be negative.
_Option(["-gapopen", "gapopen"],
"Gap open score - negative number",
checker_function=lambda x: isinstance(x, float),
equate=False),
#hydro Integer 5 Window size for
# determining whether a
# region is hydrophobic.
_Option(["-hydro", "hydro"],
"Window size for hydrophobic region",
checker_function=lambda x: isinstance(x, int),
equate=False),
#hydrofactor Floating point 1.2 Multiplier for gap
# open/close penalties in
# hydrophobic regions.
_Option(["-hydrofactor", "hydrofactor"],
"Multiplier for gap penalties in hydrophobic regions",
checker_function=lambda x: isinstance(x, float),
equate=False),
#log File name None. Log file name (delete
# existing file).
_Option(["-log", "log"],
"Log file name",
filename=True,
equate=False),
#loga File name None. Log file name (append
# to existing file).
_Option(["-loga", "loga"],
"Log file name (append to existing file)",
filename=True,
equate=False),
#maxdiagbreak Integer 1 Maximum distance
# between two diagonals
# that allows them to
# merge into one
# diagonal.
_Option(["-maxdiagbreak", "maxdiagbreak"],
"Maximum distance between two diagonals that allows "
"them to merge into one diagonal",
checker_function=lambda x: isinstance(x, int),
equate=False),
#maxhours Floating point None. Maximum time to run in
# hours. The actual time
# may exceed the
# requested limit by a
# few minutes. Decimals
# are allowed, so 1.5
# means one hour and 30
# minutes.
_Option(["-maxhours", "maxhours"],
"Maximum time to run in hours",
checker_function=lambda x: isinstance(x, float),
equate=False),
#maxiters Integer 1, 2 ... 16 Maximum number of
# iterations.
_Option(["-maxiters", "maxiters"],
"Maximum number of iterations",
checker_function=lambda x: isinstance(x, int),
equate=False),
#maxtrees Integer 1 Maximum number of new
# trees to build in
# iteration 2.
_Option(["-maxtrees", "maxtrees"],
"Maximum number of trees to build in iteration 2",
checker_function=lambda x: isinstance(x, int),
equate=False),
#minbestcolscore Floating point [1] Minimum score a column
# must have to be an
# anchor.
_Option(["-minbestcolscore", "minbestcolscore"],
"Minimum score a column must have to be an anchor",
checker_function=lambda x: isinstance(x, float),
equate=False),
#minsmoothscore Floating point [1] Minimum smoothed score
# a column must have to
# be an anchor.
_Option(["-minsmoothscore", "minsmoothscore"],
"Minimum smoothed score a column must have to "
"be an anchor",
checker_function=lambda x: isinstance(x, float),
equate=False),
#objscore sp spm Objective score used by
# ps tree dependent
# dp refinement.
# xp sp=sum-of-pairs score.
# spf spf=sum-of-pairs score
# spm (dimer approximation)
# spm=sp for < 100 seqs,
# otherwise spf
# dp=dynamic programming
# score.
# ps=average profile-
# sequence score.
# xp=cross profile score.
_Option(["-objscore", "objscore"],
"Objective score used by tree dependent refinement",
checker_function=lambda x: x in OBJECTIVE_SCORES,
equate=False),
#root1 pseudo pseudo Method used to root
_Option(["-root1", "root1"],
"Method used to root tree in iteration 1",
checker_function=lambda x: x in TREE_ROOT_METHODS,
equate=False),
#root2 midlongestspan tree; root1 is used in
# minavgleafdist iteration 1 and 2,
# root2 in later
# iterations.
_Option(["-root2", "root2"],
"Method used to root tree in iteration 2",
checker_function=lambda x: x in TREE_ROOT_METHODS,
equate=False),
#seqtype protein auto Sequence type.
# nucleo
# auto
_Option(["-seqtype", "seqtype"],
"Sequence type",
checker_function=lambda x: x in SEQUENCE_TYPES,
equate=False),
#smoothscoreceil Floating point [1] Maximum value of column
# score for smoothing
# purposes.
_Option(["-smoothscoreceil", "smoothscoreceil"],
"Maximum value of column score for smoothing",
checker_function=lambda x: isinstance(x, float),
equate=False),
#smoothwindow Integer 7 Window used for anchor
# column smoothing.
_Option(["-smoothwindow", "smoothwindow"],
"Window used for anchor column smoothing",
checker_function=lambda x: isinstance(x, int),
equate=False),
#SUEFF Floating point value 0.1 Constant used in UPGMB
# between 0 and 1. clustering. Determines
# the relative fraction
# of average linkage
# (SUEFF) vs. nearest-
# neighbor linkage (1 -
# SUEFF).
_Option(["-sueff", "sueff"],
"Constant used in UPGMB clustering",
checker_function=lambda x: isinstance(x, float),
equate=False),
#tree1 File name None Save tree produced in
_Option(["-tree1", "tree1"],
"Save Newick tree from iteration 1",
equate=False),
#tree2 first or second
# iteration to given file
# in Newick (Phylip-
# compatible) format.
_Option(["-tree2", "tree2"],
"Save Newick tree from iteration 2",
equate=False),
#weight1 none clustalw Sequence weighting
_Option(["-weight1", "weight1"],
"Weighting scheme used in iteration 1",
checker_function=lambda x: x in WEIGHTING_SCHEMES,
equate=False),
#weight2 henikoff scheme.
# henikoffpb weight1 is used in
# gsc iterations 1 and 2.
# clustalw weight2 is used for
# threeway tree-dependent
# refinement.
# none=all sequences have
# equal weight.
# henikoff=Henikoff &
# Henikoff weighting
# scheme.
# henikoffpb=Modified
# Henikoff scheme as used
# in PSI-BLAST.
# clustalw=CLUSTALW
# method.
# threeway=Gotoh three-
# way method.
_Option(["-weight2", "weight2"],
"Weighting scheme used in iteration 2",
checker_function=lambda x: x in WEIGHTING_SCHEMES,
equate=False),
#################### FORMATS #######################################
# Multiple formats can be specified on the command line
# If -msf appears it will be used regardless of other formats
# specified. If -clw appears (and not -msf), clustalw format will be
# used regardless of other formats specified. If both -clw and
# -clwstrict are specified -clwstrict will be used regardless of
# other formats specified. If -fasta is specified and not -msf,
# -clw, or clwstrict, fasta will be used. If -fasta and -html are
# specified -fasta will be used. Only if -html is specified alone
# will html be used. I kid ye not.
#clw no Write output in CLUSTALW format (default is
# FASTA).
_Switch(["-clw", "clw"],
"Write output in CLUSTALW format (with a MUSCLE header)"),
#clwstrict no Write output in CLUSTALW format with the
# "CLUSTAL W (1.81)" header rather than the
# MUSCLE version. This is useful when a post-
# processing step is picky about the file
# header.
_Switch(["-clwstrict", "clwstrict"],
"Write output in CLUSTALW format with version 1.81 header"),
#fasta yes Write output in FASTA format. Alternatives
# include clw,
# clwstrict, msf and html.
_Switch(["-fasta", "fasta"],
"Write output in FASTA format"),
#html no Write output in HTML format (default is
# FASTA).
_Switch(["-html", "html"],
"Write output in HTML format"),
#msf no Write output in MSF format (default is
# FASTA).
_Switch(["-msf", "msf"],
"Write output in MSF format"),
#Phylip interleaved - undocumented as of 3.7
_Switch(["-phyi", "phyi"],
"Write output in PHYLIP interleaved format"),
#Phylip sequential - undocumented as of 3.7
_Switch(["-phys", "phys"],
"Write output in PHYLIP sequential format"),
################## Additional specified output files #########
_Option(["-phyiout", "phyiout"],
"Write PHYLIP interleaved output to specified filename",
filename=True,
equate=False),
_Option(["-physout", "physout"],"Write PHYLIP sequential format to specified filename",
filename=True,
equate=False),
_Option(["-htmlout", "htmlout"],"Write HTML output to specified filename",
filename=True,
equate=False),
_Option(["-clwout", "clwout"],
"Write CLUSTALW output (with MUSCLE header) to specified "
"filename",
filename=True,
equate=False),
_Option(["-clwstrictout", "clwstrictout"],
"Write CLUSTALW output (with version 1.81 header) to "
"specified filename",
filename=True,
equate=False),
_Option(["-msfout", "msfout"],
"Write MSF format output to specified filename",
filename=True,
equate=False),
_Option(["-fastaout", "fastaout"],
"Write FASTA format output to specified filename",
filename=True,
equate=False),
############## END FORMATS ###################################
#anchors yes Use anchor optimization in tree dependent
# refinement iterations.
_Switch(["-anchors", "anchors"],
"Use anchor optimisation in tree dependent "
"refinement iterations"),
#noanchors no Disable anchor optimization. Default is
# anchors.
_Switch(["-noanchors", "noanchors"],
"Do not use anchor optimisation in tree dependent "
"refinement iterations"),
#group yes Group similar sequences together in the
# output. This is the default. See also
# stable.
_Switch(["-group", "group"],
"Group similar sequences in output"),
#stable no Preserve input order of sequences in output
# file. Default is to group sequences by
# similarity (group).
_Switch(["-stable", "stable"],
"Do not group similar sequences in output (not supported in v3.8)"),
############## log-expectation profile score ######################
# One of either -le, -sp, or -sv
#
# According to the doc, spn is default and the only option for
# nucleotides: this doesn't appear to be true. -le, -sp, and -sv can
# be used and produce numerically different logs (what is going on?)
#
#spn fails on proteins
#le maybe Use log-expectation profile score (VTML240).
# Alternatives are to use sp or sv. This is
# the default for amino acid sequences.
_Switch(["-le", "le"],
"Use log-expectation profile score (VTML240)"),
#sv no Use sum-of-pairs profile score (VTML240).
# Default is le.
_Switch(["-sv", "sv"],
"Use sum-of-pairs profile score (VTML240)"),
#sp no Use sum-of-pairs protein profile score
# (PAM200). Default is le.
_Switch(["-sp", "sp"],
"Use sum-of-pairs protein profile score (PAM200)"),
#spn maybe Use sum-of-pairs nucleotide profile score
# (BLASTZ parameters). This is the only option
# for nucleotides, and is therefore the
# default.
_Switch(["-spn", "spn"],
"Use sum-of-pairs protein nucleotide profile score"),
############## END log-expectation profile score ######################
#quiet no Do not display progress messages.
_Switch(["-quiet", "quiet"],
"Use sum-of-pairs protein nucleotide profile score"),
#refine no Input file is already aligned, skip first
# two iterations and begin tree dependent
# refinement.
_Switch(["-refine", "refine"],
"Only do tree dependent refinement"),
#core yes in muscle, Do not catch exceptions.
# no in muscled.
_Switch(["-core", "core"],
"Catch exceptions"),
#nocore no in muscle, Catch exceptions and give an error message
# yes in muscled. if possible.
_Switch(["-nocore", "nocore"],
"Do not catch exceptions"),
#termgapsfull no Terminal gaps penalized with full penalty.
# [1] Not fully supported in this version.
#
#termgapshalf yes Terminal gaps penalized with half penalty.
# [1] Not fully supported in this version.
#
#termgapshalflonger no Terminal gaps penalized with half penalty if
# gap relative to
# longer sequence, otherwise with full
# penalty.
# [1] Not fully supported in this version.
#verbose no Write parameter settings and progress
# messages to log file.
_Switch(["-verbose", "verbose"],
"Write parameter settings and progress"),
#version no Write version string to stdout and exit.
_Switch(["-version", "version"],
"Write version string to stdout and exit"),
]
AbstractCommandline.__init__(self, cmd, **kwargs)
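# Illustrative usage sketch (added for exposition; the file names are
# hypothetical). As the class docstring notes, the wrapper can be invoked
# directly or its string form handed to the subprocess module:
#
#     cline = MuscleCommandline(input="unaligned.fasta", out="aligned.fasta")
#     stdout, stderr = cline()            # run MUSCLE directly
#     # or, equivalently:
#     import subprocess
#     subprocess.check_call(str(cline), shell=True)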
def _test():
"""Run the module's doctests (PRIVATE)."""
print "Runing MUSCLE doctests..."
import doctest
doctest.testmod()
print "Done"
if __name__ == "__main__":
_test()
|
bryback/quickseq
|
genescript/Bio/Align/Applications/_Muscle.py
|
Python
|
mit
| 29,170
|
[
"BLAST",
"Biopython"
] |
04fcae316723db577a394e79c296ff3eab17f4c352670e55bb3b9d4a0ea94efb
|
'''
Calculate the coverage of each genome pair from the blastx results.
Starting with the blastx output converted to NC/NC ids, we calculate
the coverage at each position in every phage genome.
For each phage, the bacterial genomes that cover the greatest number of
bases in the phage are reported as the top genomes.
'''
import sys
from phage import Phage
phage=Phage()
try:
f=sys.argv[1]
except:
sys.exit(sys.argv[0] + " <blast output file converted to NC/NC format. Probably phage.genomes.blastx>")
count={}
lens=phage.phageSequenceLengths()
bctG = set(phage.completeBacteriaIDs())
phgG = set(phage.phageIDs())
for p in phgG:
count[p]={}
sys.stderr.write("Reading " + f + "\n")
with open(f, 'r') as bin:
for l in bin:
p=l.strip().split("\t")
if p[0] not in phgG:
continue
if p[1] not in bctG:
continue
if p[1] not in count[p[0]]:
count[p[0]][p[1]]=[]
for i in range(lens[p[0]]+1):
count[p[0]][p[1]].append(0)
s = int(p[6])
e = int(p[7])
if e < s:
(s,e)=(e,s)
for i in range(s,e+1):
count[p[0]][p[1]][i]=1
sys.stderr.write("Found " + str(len(count)) + ' matches\n')
for p in count:
tot=0
genomes=[]
for b in count[p]:
c = sum(count[p][b])
if c > tot:
tot = c
genomes = [b]
elif c == tot:
genomes.append(b)
print(p + "\t" + "\t".join(genomes))
sys.stderr.write("Done")
|
linsalrob/PhageHosts
|
code/blastx_coverage.py
|
Python
|
mit
| 1,491
|
[
"BLAST"
] |
b85d41a66ebaa9ecd5adf8f3d3bad60ee50f3f2ddb85c423ef39d9f2da4367ce
|
"""Functions to plot a vega plot from Extremefill data
"""
# pylint: disable=no-value-for-parameter
import os
import uuid
import numpy as np
from skimage import measure
import xarray
# pylint: disable=redefined-builtin, no-name-in-module
from toolz.curried import pipe, juxt, valmap, concat, map, do
from scipy.interpolate import griddata
import pandas
import yaml
import vega
from IPython.display import display, publish_display_data
from .tools import tlam, enum, render_yaml, get_path, all_files, render_j2
def vega_plot_treant(treant):
"""Make a vega plot
Args:
treant: a treant
Returns:
a vega.Vega type
>>> from click.testing import CliRunner
>>> from extremefill2D.fextreme import init_sim
>>> from extremefill2D.fextreme.tools import base_path
>>> with CliRunner().isolated_filesystem() as dir_:
... assert pipe(
... os.path.join(base_path(), 'scripts', 'params.json'),
... init_sim(data_path=dir_),
... vega_plot_treant,
... lambda x: type(x) is vega.Vega)
"""
return vega_plot_treants_together([treant])
def vega_plot_treants_together(treants):
"""Make a vega plot with multiple treants
Args:
treants: a list of treants
Returns:
a vega.Vega type
"""
return vega.Vega(render_spec(treants))
def vega_plot_treants(treants):
"""Make a vega plot with side-by-side plots
Args:
treants: a list of treants
Returns:
a MultiVega instance
>>> from click.testing import CliRunner
>>> from extremefill2D.fextreme import init_sim
>>> from extremefill2D.fextreme.tools import base_path
>>> with CliRunner().isolated_filesystem() as dir_:
... assert pipe(
... os.path.join(base_path(), 'scripts', 'params.json'),
... init_sim(data_path=dir_),
... lambda x: [x, x],
... vega_plot_treants,
... lambda x: type(x) is MultiVega)
"""
return pipe(
treants,
map(lambda x: render_spec([x])),
list,
MultiVega
)
def render_spec(treants):
"""Turn a list of Extremefill treants into a sigle Vega plot
Args:
treants: a list of Extremefill treants
Returns:
a list of vega specs
"""
return pipe(
treants,
enum(lambda i, x: vega_contours(x, counter=i)),
concat,
list,
lambda x: render_yaml(os.path.join(get_path(__file__),
'templates',
'vega.yaml.j2'),
data=dict(data=x, title=treants[0].uuid[:8])),
yaml.load
)
def vega_contours(treant, counter=0):
"""
Get the contours as Vega data.
Args:
treant: a Treant object with data files
Returns:
contours formatted as Vega data
"""
return pipe(
treant,
all_files('*.nc'),
map(contours_from_datafile),
concat,
map(pandas.DataFrame),
map(lambda x: x.rename(columns={0: 'x', 1: 'y'})),
map(lambda x: x.to_dict(orient='records')),
map(map(valmap(float))),
map(list),
enum(lambda i, x: dict(name='contour_data{0}_{1}'.format(i, counter),
values=x)),
)
def contours_from_datafile(datafile):
"""Calculate the contours given a netcdf datafile
Args:
datafile: the netcdf datafile
Returns:
a list of contours
"""
return pipe(
datafile,
xarray.open_dataset,
lambda x: dict(x=x.x.values,
y=x.y.values,
z=x.distance.values,
dx=x.nominal_dx),
contours
)
def contours(data):
"""Get zero contours from x, y, z data
Args:
data: dictionary with (x, y, z, dx) keys
Returns:
a list of (N, 2) numpy arrays representing the contours
"""
def linspace_(arr, spacing):
"""Calcuate the linspace based on a spacing
"""
return pipe(
arr,
juxt(min, max),
tlam(lambda x_, y_: np.linspace(x_, y_, int((y_ - x_) / spacing)))
)
return pipe(
data,
lambda x: dict(xi=linspace_(x['x'], x['dx']),
yi=linspace_(x['y'], x['dx']),
**x),
lambda x: griddata((x['y'], x['x']),
x['z'],
(x['yi'][None, :], x['xi'][:, None]),
method='cubic'),
lambda x: measure.find_contours(x, 0.0),
map(lambda x: float(data['dx']) * x)
)
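# Illustrative usage sketch (added for exposition; the synthetic sample
# below is hypothetical): extract the zero level set of
# z = x**2 + y**2 - 0.25 from scattered points, which should recover a
# curve approximating a circle of radius 0.5:
#
#     import numpy as np
#     xs, ys = np.random.rand(2, 2000) * 2 - 1
#     data = dict(x=xs, y=ys, z=xs**2 + ys**2 - 0.25, dx=0.05)
#     curves = list(contours(data))  # list of (N, 2) arrays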
def render_html(ids):
"""Render the HTML for the IPython Vega plots.
Args:
ids: the tags for each div element
Returns:
the rendered HTML
"""
return render_j2(os.path.join(get_path(__file__),
'templates/multivega.html.j2'),
dict(ids=ids),
dict())
def html_publish_map(data):
"""Run IPython's 'publish_display_data' for each spec.
Args:
data: list of (id, spec) pairings
"""
pipe(
data,
map(lambda x: x[0]),
list,
lambda x: publish_display_data(
{'text/html': render_html(x)},
metadata={'jupyter-vega': '#{0}'.format(x[0])})
)
def js_publish(id_, inst):
"""Generate Vega JS
Args:
id_: a unique ID to tag the element
inst: a Vega instance
"""
publish_display_data(
# pylint: disable=protected-access
{'application/javascript': inst._generate_js(id_)},
metadata={'jupyter-vega': '#{0}'.format(id_)}
)
def ipython_display(specs):
"""Run publish_display_data for the JS and HTML
Args:
specs: a list of Vega specs
"""
pipe(
specs,
map(lambda x: (uuid.uuid4(), vega.Vega(x))),
list,
do(html_publish_map),
map(tlam(js_publish)),
list
)
class MultiVega(object): # pylint: disable=too-few-public-methods
"""Side-by-side vega plots
>>> from click.testing import CliRunner
>>> from extremefill2D.fextreme import init_sim
>>> from extremefill2D.fextreme.tools import base_path
>>> with CliRunner().isolated_filesystem() as dir_:
... inst = pipe(
... os.path.join(base_path(), 'scripts', 'params.json'),
... init_sim(data_path=dir_),
... lambda x: [x, x],
... vega_plot_treants,
... do(lambda x: x._ipython_display_())
... )
... inst.display()
"""
def __init__(self, specs):
self.specs = specs
def _ipython_display_(self):
ipython_display(self.specs)
def display(self):
"""Display in IPython Notebook.
"""
display(self)
|
wd15/extremefill2D
|
extremefill2D/fextreme/plot.py
|
Python
|
mit
| 6,906
|
[
"NetCDF"
] |
313ae23536821dbc70ca375f14e5e5e085b5c4aef9cba943badab6c8735cc3f7
|
"""
test_Projection.py
This file is part of ANNarchy.
Copyright (C) 2013-2016 Joseph Gussev <joseph.gussev@s2012.tu-chemnitz.de>,
Helge Uelo Dinkelbach <helge.dinkelbach@gmail.com>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
ANNarchy is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
import unittest
import numpy
from scipy import sparse
from ANNarchy import *
class test_ProjectionLIL(unittest.TestCase):
"""
Tests the functionality of the *Projection* object using a list-in-list
representation (currently the default in ANNarchy). We test:
*access to parameters
*method to get the ranks of post-synaptic neurons receiving synapses
*method to get the number of post-synaptic neurons receiving synapses
"""
@classmethod
def setUpClass(self):
"""
Compile the network for this test
"""
setup(structural_plasticity=False)
simple = Neuron(
parameters = "r=0",
)
Oja = Synapse(
parameters="""
tau = 5000.0
alpha = 8.0 : postsynaptic
""",
equations = """
dw/dt = -w
"""
)
pop1 = Population((8), neuron=simple)
pop2 = Population((4), neuron=simple)
# define a sparse matrix
weight_matrix = sparse.lil_matrix((4,8))
# HD (01.07.20): it's not possible to use slicing here,
# as it produces FutureWarnings in scipy/numpy (for version >= 1.17)
for i in range(8):
weight_matrix[1, i] = 0.2
for i in range(2,6):
weight_matrix[3, i] = 0.5
# we need to flip the matrix (see 2.9.3.2 in documentation)
self.weight_matrix = weight_matrix.T
# set the pre-defined matrix
proj = Projection(
pre = pop1,
post = pop2,
target = "exc",
synapse = Oja
)
proj.connect_from_sparse(self.weight_matrix, storage_format="lil", storage_order="post_to_pre")
self.test_net = Network()
self.test_net.add([pop1, pop2, proj])
self.test_net.compile(silent=True)
self.net_proj = self.test_net.get(proj)
def setUp(self):
"""
In our *setUp()* function we reset the network before every test.
"""
self.test_net.reset()
def test_get_w(self):
"""
Test the direct access to the synaptic weight.
"""
# test row 1 (idx 0) with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.w[0], 0.2))
# test row 3 (idx 1) with 8 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.w[1], 0.5))
def test_get_dendrite_w(self):
"""
Test the access through dendrite to the synaptic weight.
"""
# test row 1 with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.dendrite(1).w, 0.2))
# test row 3 with 4 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.dendrite(3).w, 0.5))
def test_get_tau(self):
"""
Tests the direct access to the parameter *tau* of our *Projection*.
"""
# test row 1 (idx 0) with 8 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[0], 5000.0))
# test row 3 (idx 1) with 4 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[1], 5000.0))
def test_get_tau_2(self):
"""
Tests the access to the parameter *tau* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[0], 5000.0))
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[1], 5000.0))
def test_get_alpha(self):
"""
Tests the direct access to the parameter *alpha* of our *Projection*.
"""
self.assertTrue(numpy.allclose(self.net_proj.alpha, 8.0))
def test_get_alpha_2(self):
"""
Tests the access to the parameter *alpha* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('alpha'), 8.0))
def test_get_size(self):
"""
Tests the *size* method, which returns the number of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.size, 2)
def test_get_post_ranks(self):
"""
Tests the *post_ranks* method, which returns the ranks of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.post_ranks, [1,3])
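# Illustrative note (added for exposition): in dense form, the weight
# matrix built in setUpClass() above is a 4x8 (post x pre) array:
#
#     row 1: [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
#     row 3: [0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0, 0.0]
#
# (rows 0 and 2 are empty). connect_from_sparse() takes the transposed
# (pre x post) layout, which is why the matrix is flipped before being
# passed in (see section 2.9.3.2 of the ANNarchy documentation).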
class test_ProjectionCSR(unittest.TestCase):
"""
Tests the functionality of the *Projection* object using a CSR representation,
where the first dimension encodes the post-synaptic neuron (post_to_pre). We test:
*access to parameters
*method to get the ranks of post-synaptic neurons receiving synapses
*method to get the number of post-synaptic neurons receiving synapses
"""
@classmethod
def setUpClass(self):
"""
Compile the network for this test
"""
setup(structural_plasticity=False)
simple = Neuron(
parameters = "r=0",
)
Oja = Synapse(
parameters="""
tau = 5000.0
alpha = 8.0 : postsynaptic
""",
equations = """
dw/dt = -w
"""
)
pop1 = Population((8), neuron=simple)
pop2 = Population((4), neuron=simple)
# define a sparse matrix
weight_matrix = sparse.lil_matrix((4,8))
# HD (01.07.20): it's not possible to use slicing here,
# as it produces FutureWarnings in scipy/numpy (for version >= 1.17)
for i in range(8):
weight_matrix[1, i] = 0.2
for i in range(2,6):
weight_matrix[3, i] = 0.5
# we need to flip the matrix (see 2.9.3.2 in documentation)
self.weight_matrix = weight_matrix.T
# set the pre-defined matrix
proj = Projection(
pre = pop1,
post = pop2,
target = "exc",
synapse = Oja
)
proj.connect_from_sparse(self.weight_matrix, storage_format="csr", storage_order="post_to_pre")
self.test_net = Network()
self.test_net.add([pop1, pop2, proj])
self.test_net.compile(silent=True)
self.net_proj = self.test_net.get(proj)
def setUp(self):
"""
In our *setUp()* function we reset the network before every test.
"""
self.test_net.reset()
def test_get_w(self):
"""
Test the direct access to the synaptic weight.
"""
# test row 1 (idx 0) with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.w[0], 0.2))
# test row 3 (idx 1) with 8 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.w[1], 0.5))
def test_get_dendrite_w(self):
"""
Test the access through dendrite to the synaptic weight.
"""
# test row 1 with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.dendrite(1).w, 0.2))
# test row 3 with 4 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.dendrite(3).w, 0.5))
def test_get_tau(self):
"""
Tests the direct access to the parameter *tau* of our *Projection*.
"""
# test row 1 (idx 0) with 8 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[0], 5000.0))
# test row 3 (idx 1) with 4 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[1], 5000.0))
def test_get_tau_2(self):
"""
Tests the access to the parameter *tau* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[0], 5000.0))
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[1], 5000.0))
def test_get_alpha(self):
"""
Tests the direct access to the parameter *alpha* of our *Projection*.
"""
self.assertTrue(numpy.allclose(self.net_proj.alpha, 8.0))
def test_get_alpha_2(self):
"""
Tests the access to the parameter *alpha* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('alpha'), 8.0))
def test_get_size(self):
"""
Tests the *size* method, which returns the number of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.size, 2)
def test_get_post_ranks(self):
"""
Tests the *post_ranks* method, which returns the ranks of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.post_ranks, [1,3])
class test_ProjectionCSR2(unittest.TestCase):
"""
Tests the functionality of the *Projection* object using a CSR representation,
where the first dimension encodes the pre-synaptic neuron (pre_to_post). We test:
*access to parameters
*method to get the ranks of post-synaptic neurons receiving synapses
*method to get the number of post-synaptic neurons receiving synapses
"""
@classmethod
def setUpClass(self):
"""
Compile the network for this test
"""
setup(structural_plasticity=False)
simple = Neuron(
parameters = "r=0",
)
Oja = Synapse(
parameters="""
tau = 5000.0
alpha = 8.0 : postsynaptic
""",
equations = """
dw/dt = -w
"""
)
pop1 = Population((8), neuron=simple)
pop2 = Population((4), neuron=simple)
# define a sparse matrix
weight_matrix = sparse.lil_matrix((4,8))
# HD (01.07.20): it's not possible to use slicing here,
# as it produces FutureWarnings in scipy/numpy (for version >= 1.17)
for i in range(8):
weight_matrix[1, i] = 0.2
for i in range(2,6):
weight_matrix[3, i] = 0.5
# we need to flip the matrix (see 2.9.3.2 in documentation)
self.weight_matrix = weight_matrix.T
# set the pre-defined matrix
proj = Projection(
pre = pop1,
post = pop2,
target = "exc",
synapse = Oja
)
proj.connect_from_sparse(self.weight_matrix, storage_format="csr", storage_order="pre_to_post")
self.test_net = Network()
self.test_net.add([pop1, pop2, proj])
self.test_net.compile(silent=True)
self.net_proj = self.test_net.get(proj)
def setUp(self):
"""
In our *setUp()* function we reset the network before every test.
"""
self.test_net.reset()
def test_get_w(self):
"""
Test the direct access to the synaptic weight.
"""
# test row 1 (idx 0) with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.w[0], 0.2))
# test row 3 (idx 1) with 8 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.w[1], 0.5))
def test_get_dendrite_w(self):
"""
Test the access through dendrite to the synaptic weight.
"""
# test row 1 with 8 elements should be 0.2
self.assertTrue(numpy.allclose(self.net_proj.dendrite(1).w, 0.2))
# test row 3 with 4 elements should be 0.5
self.assertTrue(numpy.allclose(self.net_proj.dendrite(3).w, 0.5))
def test_get_tau(self):
"""
Tests the direct access to the parameter *tau* of our *Projection*.
"""
# test row 1 (idx 0) with 8 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[0], 5000.0))
# test row 3 (idx 1) with 4 elements
self.assertTrue(numpy.allclose(self.net_proj.tau[1], 5000.0))
def test_get_tau_2(self):
"""
Tests the access to the parameter *tau* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[0], 5000.0))
self.assertTrue(numpy.allclose(self.net_proj.get('tau')[1], 5000.0))
def test_get_alpha(self):
"""
Tests the direct access to the parameter *alpha* of our *Projection*.
"""
self.assertTrue(numpy.allclose(self.net_proj.alpha, 8.0))
def test_get_alpha_2(self):
"""
Tests the access to the parameter *alpha* of our *Projection* with the *get()* method.
"""
self.assertTrue(numpy.allclose(self.net_proj.get('alpha'), 8.0))
def test_get_size(self):
"""
Tests the *size* method, which returns the number of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.size, 2)
def test_get_post_ranks(self):
"""
Tests the *post_ranks* method, which returns the ranks of post-synaptic neurons receiving synapses.
"""
self.assertEqual(self.net_proj.post_ranks, [1,3])
|
vitay/ANNarchy
|
tests/Unittests/test_Projection.py
|
Python
|
gpl-2.0
| 13,791
|
[
"NEURON"
] |
1d5bb7435c3367b8e1414c5c9e6b08edc55a48566217f4c7ad40c1a386aedba0
|
# Copyright (c) 2000-2013 LOGILAB S.A. (Paris, FRANCE).
# http://www.logilab.fr/ -- mailto:contact@logilab.fr
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
unittest for visitors.diadefs and extensions.diadefslib modules
"""
import os
import sys
import codecs
from os.path import join, dirname, abspath
from difflib import unified_diff
import unittest
from astroid import MANAGER
from astroid.inspector import Linker
from pylint.pyreverse.diadefslib import DefaultDiadefGenerator, DiadefsHandler
from pylint.pyreverse.writer import DotWriter
from pylint.pyreverse.utils import get_visibility
_DEFAULTS = {
'all_ancestors': None, 'show_associated': None,
'module_names': None,
'output_format': 'dot', 'diadefs_file': None, 'quiet': 0,
'show_ancestors': None, 'classes': (), 'all_associated': None,
'mode': 'PUB_ONLY', 'show_builtin': False, 'only_classnames': False
}
class Config(object):
"""config object for tests"""
def __init__(self):
for attr, value in _DEFAULTS.items():
setattr(self, attr, value)
def _file_lines(path):
# we don't care about the actual encoding, but python3 forces us to pick one
with codecs.open(path, encoding='latin1') as stream:
lines = [line.strip() for line in stream.readlines()
if (line.find('squeleton generated by ') == -1 and
not line.startswith('__revision__ = "$Id:'))]
return [line for line in lines if line]
def get_project(module, name=None):
"""return a astroid project representation"""
# flush cache
MANAGER._modules_by_name = {}
def _astroid_wrapper(func, modname):
return func(modname)
return MANAGER.project_from_files([module], _astroid_wrapper,
project_name=name)
CONFIG = Config()
class DotWriterTC(unittest.TestCase):
@classmethod
def setUpClass(cls):
project = get_project(os.path.join(os.path.dirname(__file__), 'data'))
linker = Linker(project)
handler = DiadefsHandler(CONFIG)
dd = DefaultDiadefGenerator(linker, handler).visit(project)
for diagram in dd:
diagram.extract_relationships()
writer = DotWriter(CONFIG)
writer.write(dd)
@classmethod
def tearDownClass(cls):
for fname in ('packages_No_Name.dot', 'classes_No_Name.dot',):
try:
os.remove(fname)
except OSError:
continue
def _test_same_file(self, generated_file):
expected_file = os.path.join(os.path.dirname(__file__), 'data', generated_file)
generated = _file_lines(generated_file)
expected = _file_lines(expected_file)
generated = '\n'.join(generated)
expected = '\n'.join(expected)
files = "\n *** expected : %s, generated : %s \n" % (
expected_file, generated_file)
self.assertEqual(expected, generated, '%s%s' % (
files, '\n'.join(line for line in unified_diff(
expected.splitlines(), generated.splitlines() ))) )
os.remove(generated_file)
def test_package_diagram(self):
self._test_same_file('packages_No_Name.dot')
def test_class_diagram(self):
self._test_same_file('classes_No_Name.dot')
class GetVisibilityTC(unittest.TestCase):
def test_special(self):
for name in ["__reduce_ex__", "__setattr__"]:
self.assertEqual(get_visibility(name), 'special')
def test_private(self):
for name in ["__g_", "____dsf", "__23_9"]:
got = get_visibility(name)
self.assertEqual(got, 'private',
'got %s instead of private for value %s' % (got, name))
def test_public(self):
self.assertEqual(get_visibility('simple'), 'public')
def test_protected(self):
for name in ["_","__", "___", "____", "_____", "___e__",
"_nextsimple", "_filter_it_"]:
got = get_visibility(name)
self.assertEqual(got, 'protected',
'got %s instead of protected for value %s' % (got, name))
if __name__ == '__main__':
unittest.main()
|
devs1991/test_edx_docmode
|
venv/lib/python2.7/site-packages/pylint/test/unittest_pyreverse_writer.py
|
Python
|
agpl-3.0
| 4,831
|
[
"VisIt"
] |
bef41d5caa2306047a945484e46a5eb3cadd1d44d024589082ced2613d321613
|
# Copyright (c) 2012, Cloudscaling
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import re
"""
Guidelines for writing new hacking checks
- Use only for Nova specific tests. OpenStack general tests
should be submitted to the common 'hacking' module.
- Pick numbers in the range N3xx. Find the current test with
the highest allocated number and then pick the next value.
- Keep the test method code in the source file ordered based
on the N3xx value.
- List the new rule in the top level HACKING.rst file
- Add test cases for each new rule to nova/tests/unit/test_hacking.py
"""
UNDERSCORE_IMPORT_FILES = []
session_check = re.compile(r"\w*def [a-zA-Z0-9].*[(].*session.*[)]")
cfg_re = re.compile(r".*\scfg\.")
vi_header_re = re.compile(r"^#\s+vim?:.+")
virt_file_re = re.compile(r"\./nova/(?:tests/)?virt/(\w+)/")
virt_import_re = re.compile(
r"^\s*(?:import|from) nova\.(?:tests\.)?virt\.(\w+)")
virt_config_re = re.compile(
r"CONF\.import_opt\('.*?', 'nova\.virt\.(\w+)('|.)")
author_tag_re = (re.compile("^\s*#\s*@?(a|A)uthor:"),
re.compile("^\.\.\s+moduleauthor::"))
asse_trueinst_re = re.compile(
r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, "
"(\w|\.|\'|\"|\[|\])+\)\)")
asse_equal_type_re = re.compile(
r"(.)*assertEqual\(type\((\w|\.|\'|\"|\[|\])+\), "
"(\w|\.|\'|\"|\[|\])+\)")
asse_equal_in_end_with_true_or_false_re = re.compile(r"assertEqual\("
r"(\w|[][.'\"])+ in (\w|[][.'\", ])+, (True|False)\)")
asse_equal_in_start_with_true_or_false_re = re.compile(r"assertEqual\("
r"(True|False), (\w|[][.'\"])+ in (\w|[][.'\", ])+\)")
asse_equal_end_with_none_re = re.compile(
r"assertEqual\(.*?,\s+None\)$")
asse_equal_start_with_none_re = re.compile(
r"assertEqual\(None,")
# NOTE(snikitin): Next two regexes weren't united to one for more readability.
# asse_true_false_with_in_or_not_in regex checks
# assertTrue/False(A in B) cases where B argument has no spaces
# asse_true_false_with_in_or_not_in_spaces regex checks cases
# where B argument has spaces and starts/ends with [, ', ".
# For example: [1, 2, 3], "some string", 'another string'.
# We have to separate these regexes to avoid false positive
# results. The B argument should contain spaces only if it starts
# with [, ", or '. Otherwise, checking the string
# "assertFalse(A in B and C in D)" would give a false positive,
# because the B argument would be "B and C in D".
asse_true_false_with_in_or_not_in = re.compile(r"assert(True|False)\("
r"(\w|[][.'\"])+( not)? in (\w|[][.'\",])+(, .*)?\)")
asse_true_false_with_in_or_not_in_spaces = re.compile(r"assert(True|False)"
r"\((\w|[][.'\"])+( not)? in [\[|'|\"](\w|[][.'\", ])+"
r"[\[|'|\"](, .*)?\)")
asse_raises_regexp = re.compile(r"assertRaisesRegexp\(")
conf_attribute_set_re = re.compile(r"CONF\.[a-z0-9_.]+\s*=\s*\w")
translated_log = re.compile(
r"(.)*LOG\.(audit|debug|error|info|critical|exception|warning)"
"\(\s*_\(\s*('|\")")
mutable_default_args = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])")
string_translation = re.compile(r"[^_]*_\(\s*('|\")")
underscore_import_check = re.compile(r"(.)*import _(.)*")
import_translation_for_log_or_exception = re.compile(
r"(.)*(from\snova.i18n\simport)\s_")
# We need this for cases where they have created their own _ function.
custom_underscore_check = re.compile(r"(.)*_\s*=\s*(.)*")
api_version_re = re.compile(r"@.*api_version")
dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)")
decorator_re = re.compile(r"@.*")
# TODO(dims): When other oslo libraries switch over non-namespace'd
# imports, we need to add them to the regexp below.
oslo_namespace_imports = re.compile(r"from[\s]*oslo[.]"
r"(concurrency|config|db|i18n|messaging|"
r"middleware|serialization|utils|vmware)")
oslo_namespace_imports_2 = re.compile(r"from[\s]*oslo[\s]*import[\s]*"
r"(concurrency|config|db|i18n|messaging|"
r"middleware|serialization|utils|vmware)")
oslo_namespace_imports_3 = re.compile(r"import[\s]*oslo\."
r"(concurrency|config|db|i18n|messaging|"
r"middleware|serialization|utils|vmware)")
class BaseASTChecker(ast.NodeVisitor):
"""Provides a simple framework for writing AST-based checks.
Subclasses should implement visit_* methods like any other AST visitor
implementation. When they detect an error for a particular node the
method should call ``self.add_error(offending_node)``. Details about
where in the code the error occurred will be pulled from the node
object.
Subclasses should also provide a class variable named CHECK_DESC to
be used for the human readable error message.
"""
def __init__(self, tree, filename):
"""This object is created automatically by pep8.
:param tree: an AST tree
:param filename: name of the file being analyzed
(ignored by our checks)
"""
self._tree = tree
self._errors = []
def run(self):
"""Called automatically by pep8."""
self.visit(self._tree)
return self._errors
def add_error(self, node, message=None):
"""Add an error caused by a node to the list of errors for pep8."""
message = message or self.CHECK_DESC
error = (node.lineno, node.col_offset, message, self.__class__)
self._errors.append(error)
def _check_call_names(self, call_node, names):
if isinstance(call_node, ast.Call):
if isinstance(call_node.func, ast.Name):
if call_node.func.id in names:
return True
return False
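# Illustrative sketch (added for exposition; N999 and the eval rule are
# hypothetical, not actual Nova checks): a minimal BaseASTChecker
# subclass that flags calls to the builtin eval().
#
#     class CheckForEval(BaseASTChecker):
#         CHECK_DESC = 'N999: eval() is not allowed'
#
#         def visit_Call(self, node):
#             if self._check_call_names(node, ['eval']):
#                 self.add_error(node)
#             super(CheckForEval, self).generic_visit(node)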
def import_no_db_in_virt(logical_line, filename):
"""Check for db calls from nova/virt
As of grizzly-2 all the database calls have been removed from
nova/virt, and we want to keep it that way.
N307
"""
if "nova/virt" in filename and not filename.endswith("fake.py"):
if logical_line.startswith("from nova import db"):
yield (0, "N307: nova.db import not allowed in nova/virt/*")
def no_db_session_in_public_api(logical_line, filename):
if "db/api.py" in filename:
if session_check.match(logical_line):
yield (0, "N309: public db api methods may not accept session")
def use_timeutils_utcnow(logical_line, filename):
# tools are OK to use the standard datetime module
if "/tools/" in filename:
return
msg = "N310: timeutils.utcnow() must be used instead of datetime.%s()"
datetime_funcs = ['now', 'utcnow']
for f in datetime_funcs:
pos = logical_line.find('datetime.%s' % f)
if pos != -1:
yield (pos, msg % f)
def _get_virt_name(regex, data):
m = regex.match(data)
if m is None:
return None
driver = m.group(1)
# Ignore things we mis-detect as virt drivers in the regex
if driver in ["test_virt_drivers", "driver", "firewall",
"disk", "api", "imagecache", "cpu", "hardware"]:
return None
# TODO(berrange): remove once bugs 1261826 and 126182 are
# fixed, or baremetal driver is removed, which is first.
if driver == "baremetal":
return None
return driver
def import_no_virt_driver_import_deps(physical_line, filename):
"""Check virt drivers' modules aren't imported by other drivers
Modules under each virt driver's directory are
considered private to that virt driver. Other drivers
in Nova must not access those drivers. Any code that
is to be shared should be refactored into a common
module
N311
"""
thisdriver = _get_virt_name(virt_file_re, filename)
thatdriver = _get_virt_name(virt_import_re, physical_line)
if (thatdriver is not None and
thisdriver is not None and
thisdriver != thatdriver):
return (0, "N311: importing code from other virt drivers forbidden")
def import_no_virt_driver_config_deps(physical_line, filename):
"""Check virt drivers' config vars aren't used by other drivers
Modules under each virt driver's directory are
considered private to that virt driver. Other drivers
in Nova must not use their config vars. Any config vars
that are to be shared should be moved into a common module
N312
"""
thisdriver = _get_virt_name(virt_file_re, filename)
thatdriver = _get_virt_name(virt_config_re, physical_line)
if (thatdriver is not None and
thisdriver is not None and
thisdriver != thatdriver):
return (0, "N312: using config vars from other virt drivers forbidden")
def capital_cfg_help(logical_line, tokens):
msg = "N313: capitalize help string"
if cfg_re.match(logical_line):
for t in range(len(tokens)):
if tokens[t][1] == "help":
txt = tokens[t + 2][1]
if len(txt) > 1 and txt[1].islower():
yield(0, msg)
def no_vi_headers(physical_line, line_number, lines):
"""Check for vi editor configuration in source files.
By default vi modelines can only appear in the first or
last 5 lines of a source file.
N314
"""
# NOTE(gilliard): line_number is 1-indexed
if line_number <= 5 or line_number > len(lines) - 5:
if vi_header_re.match(physical_line):
return 0, "N314: Don't put vi configuration in source files"
def assert_true_instance(logical_line):
"""Check for assertTrue(isinstance(a, b)) sentences
N316
"""
if asse_trueinst_re.match(logical_line):
yield (0, "N316: assertTrue(isinstance(a, b)) sentences not allowed")
def assert_equal_type(logical_line):
"""Check for assertEqual(type(A), B) sentences
N317
"""
if asse_equal_type_re.match(logical_line):
yield (0, "N317: assertEqual(type(A), B) sentences not allowed")
def assert_equal_none(logical_line):
"""Check for assertEqual(A, None) or assertEqual(None, A) sentences
N318
"""
res = (asse_equal_start_with_none_re.search(logical_line) or
asse_equal_end_with_none_re.search(logical_line))
if res:
yield (0, "N318: assertEqual(A, None) or assertEqual(None, A) "
"sentences not allowed")
def no_translate_logs(logical_line):
"""Check for 'LOG.*(_('
Starting with the Pike series, OpenStack no longer supports log
translation. We shouldn't translate logs.
- This check assumes that 'LOG' is a logger.
- Use filename so we can start enforcing this in specific folders
instead of needing to do so all at once.
C312
"""
if translated_log.match(logical_line):
yield(0, "C312: Log messages should not be translated!")
def no_import_translation_in_tests(logical_line, filename):
"""Check for 'from nova.i18n import _'
N337
"""
if 'nova/tests/' in filename:
res = import_translation_for_log_or_exception.match(logical_line)
if res:
yield(0, "N337 Don't import translation in tests")
def no_setting_conf_directly_in_tests(logical_line, filename):
"""Check for setting CONF.* attributes directly in tests
The value can leak out of tests affecting how subsequent tests run.
Using self.flags(option=value) is the preferred method to temporarily
set config options in tests.
N320
"""
if 'nova/tests/' in filename:
res = conf_attribute_set_re.match(logical_line)
if res:
yield (0, "N320: Setting CONF.* attributes directly in tests is "
"forbidden. Use self.flags(option=value) instead")
def no_mutable_default_args(logical_line):
msg = "N322: Method's default argument shouldn't be mutable!"
if mutable_default_args.match(logical_line):
yield (0, msg)
def check_explicit_underscore_import(logical_line, filename):
"""Check for explicit import of the _ function
We need to ensure that any files that are using the _() function
to translate logs are explicitly importing the _ function. We
can't trust unit test to catch whether the import has been
added so we need to check for it here.
"""
# Build a list of the files that have _ imported. No further
# checking needed once it is found.
if filename in UNDERSCORE_IMPORT_FILES:
pass
elif (underscore_import_check.match(logical_line) or
custom_underscore_check.match(logical_line)):
UNDERSCORE_IMPORT_FILES.append(filename)
elif string_translation.match(logical_line):
yield(0, "N323: Found use of _() without explicit import of _ !")
def use_jsonutils(logical_line, filename):
    # the code below that path is not meant to be executed from within the
    # tree where the jsonutils module is present, so don't enforce its
    # usage for this subdirectory
if "plugins/xenserver" in filename:
return
# tools are OK to use the standard json module
if "/tools/" in filename:
return
msg = "N324: jsonutils.%(fun)s must be used instead of json.%(fun)s"
if "json." in logical_line:
json_funcs = ['dumps(', 'dump(', 'loads(', 'load(']
for f in json_funcs:
pos = logical_line.find('json.%s' % f)
if pos != -1:
yield (pos, msg % {'fun': f[:-1]})
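# e.g. (illustrative): a line containing "json.dumps(data)" is reported
# at the offset of "json.dumps" with
#   N324: jsonutils.dumps must be used instead of json.dumps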
def check_api_version_decorator(logical_line, previous_logical, blank_before,
filename):
msg = ("N332: the api_version decorator must be the first decorator"
" on a method.")
if blank_before == 0 and re.match(api_version_re, logical_line) \
and re.match(decorator_re, previous_logical):
yield(0, msg)
class CheckForStrUnicodeExc(BaseASTChecker):
"""Checks for the use of str() or unicode() on an exception.
This currently only handles the case where str() or unicode()
is used in the scope of an exception handler. If the exception
is passed into a function, returned from an assertRaises, or
used on an exception created in the same scope, this does not
catch it.
"""
CHECK_DESC = ('N325 str() and unicode() cannot be used on an '
'exception. Remove or use six.text_type()')
def __init__(self, tree, filename):
super(CheckForStrUnicodeExc, self).__init__(tree, filename)
self.name = []
self.already_checked = []
def visit_TryExcept(self, node):
for handler in node.handlers:
if handler.name:
self.name.append(handler.name.id)
super(CheckForStrUnicodeExc, self).generic_visit(node)
self.name = self.name[:-1]
else:
super(CheckForStrUnicodeExc, self).generic_visit(node)
def visit_Call(self, node):
if self._check_call_names(node, ['str', 'unicode']):
if node not in self.already_checked:
self.already_checked.append(node)
if isinstance(node.args[0], ast.Name):
if node.args[0].id in self.name:
self.add_error(node.args[0])
super(CheckForStrUnicodeExc, self).generic_visit(node)
class CheckForTransAdd(BaseASTChecker):
"""Checks for the use of concatenation on a translated string.
Translations should not be concatenated with other strings, but
should instead include the string being added to the translated
string to give the translators the most information.
"""
CHECK_DESC = ('N326 Translated messages cannot be concatenated. '
'String should be included in translated message.')
TRANS_FUNC = ['_', '_LI', '_LW', '_LE', '_LC']
def visit_BinOp(self, node):
if isinstance(node.op, ast.Add):
if self._check_call_names(node.left, self.TRANS_FUNC):
self.add_error(node.left)
elif self._check_call_names(node.right, self.TRANS_FUNC):
self.add_error(node.right)
super(CheckForTransAdd, self).generic_visit(node)
def check_oslo_namespace_imports(logical_line, blank_before, filename):
if re.match(oslo_namespace_imports, logical_line):
msg = ("N333: '%s' must be used instead of '%s'.") % (
logical_line.replace('oslo.', 'oslo_'),
logical_line)
yield(0, msg)
match = re.match(oslo_namespace_imports_2, logical_line)
if match:
msg = ("N333: 'module %s should not be imported "
"from oslo namespace.") % match.group(1)
yield(0, msg)
match = re.match(oslo_namespace_imports_3, logical_line)
if match:
msg = ("N333: 'module %s should not be imported "
"from oslo namespace.") % match.group(1)
yield(0, msg)
def assert_true_or_false_with_in(logical_line):
"""Check for assertTrue/False(A in B), assertTrue/False(A not in B),
assertTrue/False(A in B, message) or assertTrue/False(A not in B, message)
sentences.
N334
"""
res = (asse_true_false_with_in_or_not_in.search(logical_line) or
asse_true_false_with_in_or_not_in_spaces.search(logical_line))
if res:
yield (0, "N334: Use assertIn/NotIn(A, B) rather than "
"assertTrue/False(A in/not in B) when checking collection "
"contents.")
def assert_raises_regexp(logical_line):
"""Check for usage of deprecated assertRaisesRegexp
N335
"""
res = asse_raises_regexp.search(logical_line)
if res:
yield (0, "N335: assertRaisesRegex must be used instead "
"of assertRaisesRegexp")
def dict_constructor_with_list_copy(logical_line):
msg = ("N336: Must use a dict comprehension instead of a dict constructor"
" with a sequence of key-value pairs."
)
if dict_constructor_with_list_copy_re.match(logical_line):
yield (0, msg)
def assert_equal_in(logical_line):
"""Check for assertEqual(A in B, True), assertEqual(True, A in B),
assertEqual(A in B, False) or assertEqual(False, A in B) sentences
N338
"""
res = (asse_equal_in_start_with_true_or_false_re.search(logical_line) or
asse_equal_in_end_with_true_or_false_re.search(logical_line))
if res:
yield (0, "N338: Use assertIn/NotIn(A, B) rather than "
"assertEqual(A in B, True/False) when checking collection "
"contents.")
def factory(register):
register(import_no_db_in_virt)
register(no_db_session_in_public_api)
register(use_timeutils_utcnow)
register(import_no_virt_driver_import_deps)
register(import_no_virt_driver_config_deps)
register(capital_cfg_help)
register(no_vi_headers)
register(no_import_translation_in_tests)
register(assert_true_instance)
register(assert_equal_type)
register(assert_equal_none)
register(assert_raises_regexp)
register(no_translate_logs)
register(no_setting_conf_directly_in_tests)
register(no_mutable_default_args)
register(check_explicit_underscore_import)
register(use_jsonutils)
register(check_api_version_decorator)
register(CheckForStrUnicodeExc)
register(CheckForTransAdd)
register(check_oslo_namespace_imports)
register(assert_true_or_false_with_in)
register(dict_constructor_with_list_copy)
register(assert_equal_in)
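# The factory above is the hacking plugin entry point; in OpenStack
# projects it is typically wired up in tox.ini (assumed configuration,
# not part of this module):
#   [hacking]
#   local-check-factory = compute_hyperv.hacking.checks.factory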
|
stackforge/compute-hyperv
|
compute_hyperv/hacking/checks.py
|
Python
|
apache-2.0
| 20,313
|
[
"VisIt"
] |
8820b620909add14abaa1391b458de69ec2110f2abab0acf0f9007ea0cb525bf
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2006 Donald N. Allingham
# Copyright (C) 2008 Brian G. Matherly
# Copyright (C) 2010 Jakim Friant
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Written by Alex Roitman
"Rebuild reference map tables"
#-------------------------------------------------------------------------
#
# python modules
#
#-------------------------------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
#------------------------------------------------------------------------
#
# Set up logging
#
#------------------------------------------------------------------------
import logging
log = logging.getLogger(".RebuildRefMap")
#-------------------------------------------------------------------------
#
# gtk modules
#
#-------------------------------------------------------------------------
#-------------------------------------------------------------------------
#
# Gramps modules
#
#-------------------------------------------------------------------------
from gramps.gui.plug import tool
from gramps.gui.dialog import OkDialog
from gramps.gen.updatecallback import UpdateCallback
#-------------------------------------------------------------------------
#
# runTool
#
#-------------------------------------------------------------------------
class RebuildRefMap(tool.Tool, UpdateCallback):
def __init__(self, dbstate, user, options_class, name, callback=None):
uistate = user.uistate
tool.Tool.__init__(self, dbstate, options_class, name)
if self.db.readonly:
return
self.db.disable_signals()
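        # Signals are disabled while the reference maps are rebuilt so the
        # UI is not flooded with update notifications; they are re-enabled
        # at the end of this method.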
if uistate:
self.callback = uistate.pulse_progressbar
uistate.set_busy_cursor(True)
uistate.progress.show()
uistate.push_message(dbstate, _("Rebuilding reference maps..."))
else:
self.callback = None
print(_("Rebuilding reference maps..."))
UpdateCallback.__init__(self, self.callback)
self.set_total(6)
self.db.reindex_reference_map(self.update)
self.reset()
if uistate:
uistate.set_busy_cursor(False)
uistate.progress.hide()
OkDialog(_("Reference maps rebuilt"),
_('All reference maps have been rebuilt.'),
parent=uistate.window)
else:
print(_("All reference maps have been rebuilt."))
self.db.enable_signals()
#------------------------------------------------------------------------
#
#
#
#------------------------------------------------------------------------
class RebuildRefMapOptions(tool.ToolOptions):
"""
Defines options and provides handling interface.
"""
    def __init__(self, name, person_id=None):
        tool.ToolOptions.__init__(self, name, person_id)
|
sam-m888/gramps
|
gramps/plugins/tool/rebuildrefmap.py
|
Python
|
gpl-2.0
| 3,591
|
[
"Brian"
] |
f3af1a9fa6148a660ee942ce4eee3e97693f7b85bea756f252ab07a342565e8e
|
# ======================================================================
#
# Cosmograil: cosmograil.tools.sextractor
#
# sextractor module.
#
# Author: Laurent Le Guillou <laurentl@ster.kuleuven.ac.be>
#
# $Id: sextractor.py,v 1.2 2005/07/06 21:40:43 hack Exp $
#
# ======================================================================
#
# "sextractor": wrapper around SExtractor.
#
# ======================================================================
#
# $Log: sextractor.py,v $
# Revision 1.2 2005/07/06 21:40:43 hack
# Tweakshifts version 0.5.0 (WJH):
# - added support for SExtractor PSET and user-supplied SExtractor config file
# - added 'nbright' parameter for selecting only 'nbright' objects for matching
# - redefined 'ascend' to 'fluxunits' of 'counts/cps/mag'
# - fixed bug in countSExtractorObjects()reported by Andy
# - turned off overwriting of output WCS file
#
# Revision 1.15 2005/06/29 13:07:41 hack
# Added Python interface to SExtractor to STSDAS$Python for use with 'tweakshifts'. WJH
# Added 3 more parameters to config
#
# Revision 1.14 2005/02/14 19:27:31 laurentl
# Added write facilities to rdb module.
#
# Revision 1.13 2005/02/14 17:47:02 laurentl
# Added iterator interface
#
# Revision 1.12 2005/02/14 17:16:30 laurentl
# clean now removes the NNW config file too.
#
# Revision 1.2 2005/02/14 17:13:49 laurentl
# *** empty log message ***
#
# Revision 1.1 2005/02/14 11:34:10 laurentl
# quality monitor now uses SExtractor wrapper.
#
# Revision 1.10 2005/02/11 14:40:35 laurentl
# minor changes
#
# Revision 1.9 2005/02/11 14:32:44 laurentl
# Fixed bugs in setup()
#
# Revision 1.8 2005/02/11 13:50:08 laurentl
# Fixed bugs in setup()
#
# Revision 1.7 2005/02/10 20:15:14 laurentl
# Improved SExtractor wrapper.
#
# Revision 1.6 2005/02/10 17:46:35 laurentl
# Greatly improved the SExtractor wrapper.
#
# Revision 1.5 2005/02/09 23:32:50 laurentl
# Implemented SExtractor wrapper
#
# Revision 1.4 2005/02/04 05:00:09 laurentl
# *** empty log message ***
#
# Revision 1.3 2005/01/06 13:37:11 laurentl
# *** empty log message ***
#
#
# ======================================================================
"""
A wrapper for SExtractor
A wrapper for SExtractor, the Source Extractor.
by Laurent Le Guillou
version: 1.15 - last modified: 2005-07-06
This wrapper allows you to configure SExtractor, run it and get
back its outputs without the need to edit SExtractor
configuration files. By default, configuration files are created
on-the-fly, and SExtractor is run silently via Python.
Tested on SExtractor versions 2.2.1 and 2.3.2.
Example of use:
-----------------------------------------------------------------
import sextractor
# Create a SExtractor instance
sex = sextractor.SExtractor()
# Modify the SExtractor configuration
sex.config['GAIN'] = 0.938
sex.config['PIXEL_SCALE'] = .19
sex.config['VERBOSE_TYPE'] = "FULL"
sex.config['CHECKIMAGE_TYPE'] = "BACKGROUND"
# Add a parameter to the parameter list
sex.config['PARAMETERS_LIST'].append('FLUX_BEST')
# Launch SExtractor on a FITS file
sex.run("nf260002.fits")
# Read the resulting catalog [first method, whole catalog at once]
catalog = sex.catalog()
for star in catalog:
print star['FLUX_BEST'], star['FLAGS']
if (star['FLAGS'] & sextractor.BLENDED):
print "This star is BLENDED"
# Read the resulting catalog [second method, whole catalog at once]
catalog_name = sex.config['CATALOG_NAME']
catalog_f = sextractor.open(catalog_name)
catalog = catalog_f.readlines()
for star in catalog:
print star['FLUX_BEST'], star['FLAGS']
if (star['FLAGS'] & sextractor.BLENDED):
print "This star is BLENDED"
catalog_f.close()
# Read the resulting catalog [third method, star by star]
catalog_name = sex.config['CATALOG_NAME']
catalog_f = sextractor.open(catalog_name)
star = catalog_f.readline()
while star:
print star['FLUX_BEST'], star['FLAGS']
if (star['FLAGS'] & sextractor.BLENDED):
print "This star is BLENDED"
star = catalog_f.readline()
catalog_f.close()
# Removing the configuration files, the catalog and
# the check image
sex.clean(config=True, catalog=True, check=True)
-----------------------------------------------------------------
"""
# ======================================================================
import __builtin__
import os
import subprocess
import re
import copy
from sexcatalog import *
# ======================================================================
__version__ = "1.15.0 (2005-07-06)"
# ======================================================================
class SExtractorException(Exception):
pass
# ======================================================================
nnw_config = \
"""NNW
# Neural Network Weights for the SExtractor star/galaxy classifier (V1.3)
# inputs: 9 for profile parameters + 1 for seeing.
# outputs: ``Stellarity index'' (0.0 to 1.0)
# Seeing FWHM range: from 0.025 to 5.5'' (images must have 1.5 < FWHM < 5 pixels)
# Optimized for Moffat profiles with 2<= beta <= 4.
3 10 10 1
-1.56604e+00 -2.48265e+00 -1.44564e+00 -1.24675e+00 -9.44913e-01 -5.22453e-01 4.61342e-02 8.31957e-01 2.15505e+00 2.64769e-01
3.03477e+00 2.69561e+00 3.16188e+00 3.34497e+00 3.51885e+00 3.65570e+00 3.74856e+00 3.84541e+00 4.22811e+00 3.27734e+00
-3.22480e-01 -2.12804e+00 6.50750e-01 -1.11242e+00 -1.40683e+00 -1.55944e+00 -1.84558e+00 -1.18946e-01 5.52395e-01 -4.36564e-01 -5.30052e+00
4.62594e-01 -3.29127e+00 1.10950e+00 -6.01857e-01 1.29492e-01 1.42290e+00 2.90741e+00 2.44058e+00 -9.19118e-01 8.42851e-01 -4.69824e+00
-2.57424e+00 8.96469e-01 8.34775e-01 2.18845e+00 2.46526e+00 8.60878e-02 -6.88080e-01 -1.33623e-02 9.30403e-02 1.64942e+00 -1.01231e+00
4.81041e+00 1.53747e+00 -1.12216e+00 -3.16008e+00 -1.67404e+00 -1.75767e+00 -1.29310e+00 5.59549e-01 8.08468e-01 -1.01592e-02 -7.54052e+00
1.01933e+01 -2.09484e+01 -1.07426e+00 9.87912e-01 6.05210e-01 -6.04535e-02 -5.87826e-01 -7.94117e-01 -4.89190e-01 -8.12710e-02 -2.07067e+01
-5.31793e+00 7.94240e+00 -4.64165e+00 -4.37436e+00 -1.55417e+00 7.54368e-01 1.09608e+00 1.45967e+00 1.62946e+00 -1.01301e+00 1.13514e-01
2.20336e-01 1.70056e+00 -5.20105e-01 -4.28330e-01 1.57258e-03 -3.36502e-01 -8.18568e-02 -7.16163e+00 8.23195e+00 -1.71561e-02 -1.13749e+01
3.75075e+00 7.25399e+00 -1.75325e+00 -2.68814e+00 -3.71128e+00 -4.62933e+00 -2.13747e+00 -1.89186e-01 1.29122e+00 -7.49380e-01 6.71712e-01
-8.41923e-01 4.64997e+00 5.65808e-01 -3.08277e-01 -1.01687e+00 1.73127e-01 -8.92130e-01 1.89044e+00 -2.75543e-01 -7.72828e-01 5.36745e-01
-3.65598e+00 7.56997e+00 -3.76373e+00 -1.74542e+00 -1.37540e-01 -5.55400e-01 -1.59195e-01 1.27910e-01 1.91906e+00 1.42119e+00 -4.35502e+00
-1.70059e+00 -3.65695e+00 1.22367e+00 -5.74367e-01 -3.29571e+00 2.46316e+00 5.22353e+00 2.42038e+00 1.22919e+00 -9.22250e-01 -2.32028e+00
0.00000e+00
1.00000e+00
"""
# ======================================================================
class SExtractor:
"""
A wrapper class to transparently use SExtractor.
"""
_SE_config = {
"CATALOG_NAME":
{"comment": "name of the output catalog",
"value": "py-sextractor.cat"},
"CATALOG_TYPE":
{"comment":
'"NONE","ASCII_HEAD","ASCII","FITS_1.0" or "FITS_LDAC"',
"value": "ASCII_HEAD"},
"PARAMETERS_NAME":
{"comment": "name of the file containing catalog contents",
"value": "py-sextractor.param"},
"DETECT_TYPE":
{"comment": '"CCD" or "PHOTO"',
"value": "CCD"},
"FLAG_IMAGE":
{"comment": "filename for an input FLAG-image",
"value": "flag.fits"},
"DETECT_MINAREA":
{"comment": "minimum number of pixels above threshold",
"value": 5},
"DETECT_THRESH":
{"comment": "<sigmas> or <threshold>,<ZP> in mag.arcsec-2",
"value": 1.5},
"ANALYSIS_THRESH":
{"comment": "<sigmas> or <threshold>,<ZP> in mag.arcsec-2",
"value": 1.5},
"FILTER":
{"comment": 'apply filter for detection ("Y" or "N")',
"value": 'Y'},
"FILTER_NAME":
{"comment": "name of the file containing the filter",
"value": "py-sextractor.conv"},
"DEBLEND_NTHRESH":
{"comment": "Number of deblending sub-thresholds",
"value": 32},
"DEBLEND_MINCONT":
{"comment": "Minimum contrast parameter for deblending",
"value": 0.005},
"CLEAN":
{"comment": "Clean spurious detections (Y or N)",
"value": 'Y'},
"CLEAN_PARAM":
{"comment": "Cleaning efficiency",
"value": 1.0},
"MASK_TYPE":
{"comment": 'type of detection MASKing: can be one of "NONE", "BLANK" or "CORRECT"',
"value": "CORRECT"},
"PHOT_APERTURES":
{"comment": "MAG_APER aperture diameter(s) in pixels",
"value": 5},
"PHOT_AUTOPARAMS":
{"comment": 'MAG_AUTO parameters: <Kron_fact>,<min_radius>',
"value": [2.5, 3.5]},
"SATUR_LEVEL":
{"comment": "level (in ADUs) at which arises saturation",
"value": 50000.0},
"MAG_ZEROPOINT":
{"comment": "magnitude zero-point",
"value": 0.0},
"MAG_GAMMA":
{"comment": "gamma of emulsion (for photographic scans)",
"value": 4.0},
"GAIN":
{"comment": "detector gain in e-/ADU",
"value": 0.0},
"PIXEL_SCALE":
{"comment": "size of pixel in arcsec (0=use FITS WCS info)",
"value": 1.0},
"SEEING_FWHM":
{"comment": "stellar FWHM in arcsec",
"value": 1.2},
"STARNNW_NAME":
{"comment": "Neural-Network_Weight table filename",
"value": "py-sextractor.nnw"},
"BACK_SIZE":
{"comment": "Background mesh: <size> or <width>,<height>",
"value": 64},
"BACK_TYPE":
{"comment": "Type of background to subtract: MANUAL or AUTO generated",
"value": 'MANUAL'},
"BACK_VALUE":
{"comment": "User-supplied constant value to be subtracted as sky",
"value": "0.0,0.0"},
"BACK_FILTERSIZE":
{"comment": "Background filter: <size> or <width>,<height>",
"value": 3},
"BACKPHOTO_TYPE":
{"comment": 'can be "GLOBAL" or "LOCAL"',
"value": "GLOBAL"},
"BACKPHOTO_THICK":
{"comment": "Thickness in pixels of the background local annulus",
"value": 24},
"CHECKIMAGE_TYPE":
{"comment": 'can be one of "NONE", "BACKGROUND", "MINIBACKGROUND", "-BACKGROUND", "OBJECTS", "-OBJECTS", "SEGMENTATION", "APERTURES", or "FILTERED"',
"value": "NONE"},
"CHECKIMAGE_NAME":
{"comment": "Filename for the check-image",
"value": "check.fits"},
"MEMORY_OBJSTACK":
{"comment": "number of objects in stack",
"value": 3000},
"MEMORY_PIXSTACK":
{"comment": "number of pixels in stack",
"value": 300000},
"MEMORY_BUFSIZE":
{"comment": "number of lines in buffer",
"value": 1024},
"VERBOSE_TYPE":
{"comment": 'can be "QUIET", "NORMAL" or "FULL"',
"value": "QUIET"},
        # -- Extra keys (will not be saved in the main configuration file)
"PARAMETERS_LIST":
{"comment": '[Extra key] catalog contents (to put in PARAMETERS_NAME)',
"value": ["NUMBER", "FLUX_BEST", "FLUXERR_BEST",
"X_IMAGE", "Y_IMAGE", "FLAGS", "FWHM_IMAGE"]},
"CONFIG_FILE":
{"comment": '[Extra key] name of the main configuration file',
"value": "py-sextractor.sex"},
"FILTER_MASK":
{"comment": 'Array to put in the FILTER_MASK file',
"value": [[1, 2, 1],
[2, 4, 2],
[1, 2, 1]]}
}
# -- Special config. keys that should not go into the config. file.
_SE_config_special_keys = ["PARAMETERS_LIST", "CONFIG_FILE", "FILTER_MASK"]
# -- Dictionary of all possible parameters (from sexcatalog.py module)
_SE_parameters = SExtractorfile._SE_keys
def __init__(self):
"""
SExtractor class constructor.
"""
self.config = (
dict([(k, copy.deepcopy(SExtractor._SE_config[k]["value"]))
for k in SExtractor._SE_config.keys()]))
# print self.config
self.program = None
self.version = None
def setup(self, path=None):
"""
Look for SExtractor program ('sextractor', or 'sex').
If a full path is provided, only this path is checked.
        Raise a SExtractorException if it fails.
        Return the program name and version if it succeeds.
"""
# -- Finding sextractor program and its version
# first look for 'sextractor', then 'sex'
candidates = ['sextractor', 'sex']
if (path):
candidates = [path]
selected = None
for candidate in candidates:
try:
p = subprocess.Popen(candidate, shell=True,
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, close_fds=True)
(_out_err, _in) = (p.stdout, p.stdin)
versionline = _out_err.read()
if (versionline.find("SExtractor") != -1):
selected = candidate
break
except IOError:
continue
if not(selected):
raise SExtractorException, \
"""
Cannot find SExtractor program. Check your PATH,
or provide the SExtractor program path in the constructor.
"""
_program = selected
# print versionline
_version_match = re.search("[Vv]ersion ([0-9\.])+", versionline)
if not _version_match:
raise SExtractorException, \
"Cannot determine SExtractor version."
_version = _version_match.group()[8:]
if not _version:
raise SExtractorException, \
"Cannot determine SExtractor version."
# print "Use " + self.program + " [" + self.version + "]"
return _program, _version
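    # Illustrative use of setup() (hypothetical binary path):
    #   sex = SExtractor()
    #   program, version = sex.setup()                # search 'sextractor'/'sex'
    #   program, version = sex.setup('/usr/bin/sex')  # check only this binary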
def update_config(self):
"""
Update the configuration files according to the current
in-memory SExtractor configuration.
"""
# -- Write filter configuration file
# First check the filter itself
filter = self.config['FILTER_MASK']
rows = len(filter)
        cols = len(filter[0])  # May raise IndexError/TypeError, OK
filter_f = __builtin__.open(self.config['FILTER_NAME'], 'w')
filter_f.write("CONV NORM\n")
filter_f.write("# %dx%d Generated from sextractor.py module.\n" %
(rows, cols))
for row in filter:
filter_f.write(" ".join(map(repr, row)))
filter_f.write("\n")
filter_f.close()
# -- Write parameter list file
parameters_f = __builtin__.open(self.config['PARAMETERS_NAME'], 'w')
for parameter in self.config['PARAMETERS_LIST']:
print >>parameters_f, parameter
parameters_f.close()
# -- Write NNW configuration file
nnw_f = __builtin__.open(self.config['STARNNW_NAME'], 'w')
nnw_f.write(nnw_config)
nnw_f.close()
# -- Write main configuration file
main_f = __builtin__.open(self.config['CONFIG_FILE'], 'w')
for key in self.config.keys():
if (key in SExtractor._SE_config_special_keys):
continue
if (key == "PHOT_AUTOPARAMS"): # tuple instead of a single value
value = " ".join(map(str, self.config[key]))
else:
value = str(self.config[key])
print >>main_f, ("%-16s %-16s # %s" %
(key, value, SExtractor._SE_config[key]['comment']))
main_f.close()
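    # After update_config() four files exist on disk: the convolution
    # filter (FILTER_NAME), the parameter list (PARAMETERS_NAME), the NNW
    # table (STARNNW_NAME) and the main configuration file (CONFIG_FILE);
    # clean() below removes them again.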
def run(self, file, updateconfig=True, clean=False, path=None):
"""
Run SExtractor.
If updateconfig is True (default), the configuration
files will be updated before running SExtractor.
If clean is True (default: False), configuration files
(if any) will be deleted after SExtractor terminates.
"""
if updateconfig:
self.update_config()
# Try to find SExtractor program
# This will raise an exception if it failed
self.program, self.version = self.setup(path)
commandline = (
self.program + " -c " + self.config['CONFIG_FILE'] + " " + file)
# print commandline
rcode = os.system(commandline)
if (rcode):
raise SExtractorException, \
"SExtractor command [%s] failed." % commandline
if clean:
self.clean()
def catalog(self):
"""
Read the output catalog produced by the last SExtractor run.
Output is a list of dictionaries, with a dictionary for
each star: {'param1': value, 'param2': value, ...}.
"""
output_f = SExtractorfile(self.config['CATALOG_NAME'], 'r')
c = output_f.read()
output_f.close()
return c
def clean(self, config=True, catalog=False, check=False):
"""
Remove the generated SExtractor files (if any).
If config is True, remove generated configuration files.
If catalog is True, remove the output catalog.
If check is True, remove output check image.
"""
try:
if (config):
os.unlink(self.config['FILTER_NAME'])
os.unlink(self.config['PARAMETERS_NAME'])
os.unlink(self.config['STARNNW_NAME'])
os.unlink(self.config['CONFIG_FILE'])
if (catalog):
os.unlink(self.config['CATALOG_NAME'])
if (check):
os.unlink(self.config['CHECKIMAGE_NAME'])
except OSError:
pass
# ======================================================================
|
wschoenell/chimera
|
src/chimera/util/sextractor.py
|
Python
|
gpl-2.0
| 18,469
|
[
"Galaxy"
] |
99e2480cf2b2550c4356a4610512002fcfd542c42e275bdd21600e3fce049acf
|
#!/usr/bin/env ipython
from pylab import *
import numpy as np
from scipy.io.netcdf import netcdf_file
import console_colors as ccl
import os
MCwant = '2'
nbefore = 2
nafter = 4
WangFlag = '90.0' #'NaN'
fgap = 0.2
#v_lo = 550.0 #550.0 #450.0 #100.0
#v_hi = 3000.0 #3000.0 #550.0 #450.0
prexShift = 'wShiftCorr'
#+++++++++++++++ filters
FILTER = {}
FILTER['vsw_filter'] = True #False
FILTER['B_filter'] = False
FILTER['filter_dR.icme'] = False
LO, HI = 550.0, 3000.0 #100.0, 450.0 #450.0, 550.0 #550.0, 3000.0
# NOTE: these directories should already exist
dir_suffix = '/_test_Vmc_'
DIR_FIGS = '../../plots/MCflag%s/%s' % (MCwant, prexShift) + dir_suffix
DIR_ASCII = '../../ascii/MCflag%s/%s' % (MCwant, prexShift) + dir_suffix
#os.system('mkdir -p %s %s' % (DIR_FIGS, DIR_ASCII)) # create them if they don't exist!
print ccl.On + " ---> reading data from: " + DIR_ASCII + ccl.W
#FNAMEs = 'MCflag%s_%dbefore.%dafter_Wang%s_fgap%1.1f_vlo.%3.1f.vhi.%4.1f' % (MCwant, nbefore, nafter, WangFlag, fgap, v_lo, v_hi)
FNAMEs = 'MCflag%s_%dbefore.%dafter_fgap%1.1f' % (MCwant, nbefore, nafter, fgap)
FNAMEs += '_Wang%s' % (WangFlag)
if FILTER['vsw_filter']: FNAMEs += '_vlo.%03.1f.vhi.%04.1f' % (LO, HI)
if FILTER['B_filter']: FNAMEs += '_Blo.%2.2f.Bhi.%2.2f' % (LO, HI)
if FILTER['filter_dR.icme']: FNAMEs += '_dRlo.%2.2f.dRhi.%2.2f' % (LO, HI)
# _stuff_MCflag2_2before.4after_fgap0.2_Wang90.0_
#FNAME_FIGS = '%s/_hist_%s' % (DIR_FIGS, FNAMEs)
fname_inp = '%s/_stuff_%s.nc' % (DIR_ASCII, FNAMEs)
f_in = netcdf_file(fname_inp, 'r')
Pcc = f_in.variables['Pcc'].data
dt_sh_Pcc = f_in.variables['dt_sheath_Pcc'].data
Vsh = f_in.variables['V'].data
id_Pcc = set(f_in.variables['IDs_Pcc'].data)
id_Vsh = set(f_in.variables['IDs_V'].data)
ids = id_Pcc.intersection(id_Vsh)
#---------------------------------- compute the surface densities
nbins = 10
n = len(ids)
var_sh = np.zeros(n)
var_mc = np.zeros(n)
var_co = np.zeros(n)
#var = Pcc*dt_sh*Vsh
#for id, i in zip(ids, range(n)):
i=0
for ID_Pcc, i_Pcc in zip(id_Pcc, range(len(id_Pcc))):
for ID_Vsh, i_Vsh in zip(id_Vsh, range(len(id_Vsh))):
ok = (ID_Pcc==ID_Vsh) and (ID_Vsh in ids)
if ok:
var_sh[i] = Pcc[i_Pcc]*dt_sh_Pcc[i_Pcc]*Vsh[i_Vsh]
i+=1
var_sh *= 86400.*1e5
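# Unit conversion (assuming Pcc is in cm^-3, dt in days and Vsh in km/s):
# 86400 s/day * 1e5 cm/km turns Pcc*dt*Vsh into a surface density in
# 1/cm^2, matching the x-axis label used below.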
#---------------------------------- begin: figure
XRANGE = [0., 1.4e14]
fig = figure(1, figsize=(6,4))
ax = fig.add_subplot(111)
h, x = np.histogram(var_sh, bins=nbins, range=XRANGE, normed=True)
x = .5*(x[:-1] + x[1:])
dx = x[1]-x[0]
h *= dx*100.
avr = np.mean(var_sh)
med = np.median(var_sh)
LABEL = 'N: %d' % n
TIT1 = 'normalized histogram (area=100%)'
TIT2 = '%4.1f km/s < Vmc < %4.1f km/s' % (LO, HI)
TITLE = TIT1+'\n'+TIT2
ax.plot(x, h, 'o-', label=LABEL)
ax.axvline(x=avr, c='blue', alpha=.4, lw=4, label="mean")
ax.axvline(x=med, c='black', alpha=.4, lw=4, label="median")
ax.legend()
ax.grid()
ax.set_title(TITLE)
ax.set_xlabel('surface density at sheath $\sigma_{sh}$ [$1/cm^2$]')
ax.set_ylabel('[%]')
ax.set_xlim(XRANGE)
ax.set_ylim(0., 55.)
# generate the figure
fname_fig = '%s/_hist_.sh_%s.png' % (DIR_FIGS, FNAMEs)
savefig(fname_fig, format='png', dpi=135, bbox_inches='tight')
close(fig)
print ccl.Rn + " -------> genere: %s" % fname_fig + ccl.W
#---------------------------------- end: figura
#---------------------------------- begin: ascii
fname_txt = '%s/_hist_.sh_%s.txt' % (DIR_ASCII, FNAMEs)
data_out = np.array([x, h]).T
np.savetxt(fname_txt, data_out, fmt='%5.3g')
print ccl.Rn + " -------> genere: %s" % fname_txt + ccl.W
#---------------------------------- end: ascii
#
del Pcc, dt_sh_Pcc, Vsh # to avoid the RuntimeWarning from netcdf_file()
#pause(3)
f_in.close()
##
|
jimsrc/seatos
|
sheaths/src/surf.dens_for.paper/src/h_group_Vmc.py
|
Python
|
mit
| 3,903
|
[
"NetCDF"
] |
4fb98c1b93a96b7acb1427c04c05191bbd1997a4acdbde1f153190bf3f7a8347
|
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import re
from jinja2.compiler import generate
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleUndefinedVariable
from ansible.module_utils.six import text_type
from ansible.module_utils._text import to_native, to_text
from ansible.playbook.attribute import FieldAttribute
from ansible.utils.display import Display
display = Display()
DEFINED_REGEX = re.compile(r'(hostvars\[.+\]|[\w_]+)\s+(not\s+is|is|is\s+not)\s+(defined|undefined)')
LOOKUP_REGEX = re.compile(r'lookup\s*\(')
VALID_VAR_REGEX = re.compile("^[_A-Za-z][_a-zA-Z0-9]*$")
class Conditional:
'''
This is a mix-in class, to be used with Base to allow the object
to be run conditionally when a condition is met or skipped.
'''
_when = FieldAttribute(isa='list', default=list, extend=True, prepend=True)
def __init__(self, loader=None):
# when used directly, this class needs a loader, but we want to
# make sure we don't trample on the existing one if this class
# is used as a mix-in with a playbook base class
if not hasattr(self, '_loader'):
if loader is None:
raise AnsibleError("a loader must be specified when using Conditional() directly")
else:
self._loader = loader
super(Conditional, self).__init__()
def _validate_when(self, attr, name, value):
if not isinstance(value, list):
setattr(self, name, [value])
def extract_defined_undefined(self, conditional):
results = []
cond = conditional
m = DEFINED_REGEX.search(cond)
while m:
results.append(m.groups())
cond = cond[m.end():]
m = DEFINED_REGEX.search(cond)
return results
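    # Illustrative example (hypothetical conditional string):
    #   extract_defined_undefined("foo is defined and bar is not defined")
    #   -> [('foo', 'is', 'defined'), ('bar', 'is not', 'defined')]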
def evaluate_conditional(self, templar, all_vars):
'''
Loops through the conditionals set on this object, returning
False if any of them evaluate as such.
'''
# since this is a mix-in, it may not have an underlying datastructure
# associated with it, so we pull it out now in case we need it for
# error reporting below
ds = None
if hasattr(self, '_ds'):
ds = getattr(self, '_ds')
result = True
try:
for conditional in self.when:
# do evaluation
if conditional is None or conditional == '':
res = True
elif isinstance(conditional, bool):
res = conditional
else:
res = self._check_conditional(conditional, templar, all_vars)
# only update if still true, preserve false
if result:
result = res
display.debug("Evaluated conditional (%s): %s" % (conditional, res))
if not result:
break
except Exception as e:
raise AnsibleError("The conditional check '%s' failed. The error was: %s" % (to_native(conditional), to_native(e)), obj=ds)
return result
def _check_conditional(self, conditional, templar, all_vars):
'''
This method does the low-level evaluation of each conditional
set on this object, using jinja2 to wrap the conditionals for
evaluation.
'''
original = conditional
if templar.is_template(conditional):
display.warning('conditional statements should not include jinja2 '
'templating delimiters such as {{ }} or {%% %%}. '
'Found: %s' % conditional)
# make sure the templar is using the variables specified with this method
templar.available_variables = all_vars
try:
# if the conditional is "unsafe", disable lookups
disable_lookups = hasattr(conditional, '__UNSAFE__')
conditional = templar.template(conditional, disable_lookups=disable_lookups)
if not isinstance(conditional, text_type) or conditional == "":
return conditional
# update the lookups flag, as the string returned above may now be unsafe
# and we don't want future templating calls to do unsafe things
disable_lookups |= hasattr(conditional, '__UNSAFE__')
# First, we do some low-level jinja2 parsing involving the AST format of the
# statement to ensure we don't do anything unsafe (using the disable_lookup flag above)
class CleansingNodeVisitor(ast.NodeVisitor):
def generic_visit(self, node, inside_call=False, inside_yield=False):
if isinstance(node, ast.Call):
inside_call = True
elif isinstance(node, ast.Yield):
inside_yield = True
elif isinstance(node, ast.Str):
if disable_lookups:
if inside_call and node.s.startswith("__"):
# calling things with a dunder is generally bad at this point...
raise AnsibleError(
"Invalid access found in the conditional: '%s'" % conditional
)
elif inside_yield:
# we're inside a yield, so recursively parse and traverse the AST
# of the result to catch forbidden syntax from executing
parsed = ast.parse(node.s, mode='exec')
cnv = CleansingNodeVisitor()
cnv.visit(parsed)
# iterate over all child nodes
for child_node in ast.iter_child_nodes(node):
self.generic_visit(
child_node,
inside_call=inside_call,
inside_yield=inside_yield
)
try:
res = templar.environment.parse(conditional, None, None)
res = generate(res, templar.environment, None, None)
parsed = ast.parse(res, mode='exec')
cnv = CleansingNodeVisitor()
cnv.visit(parsed)
except Exception as e:
raise AnsibleError("Invalid conditional detected: %s" % to_native(e))
# and finally we generate and template the presented string and look at the resulting string
# NOTE The spaces around True and False are intentional to short-circuit literal_eval for
# jinja2_native=False and avoid its expensive calls.
presented = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % conditional
val = templar.template(presented, disable_lookups=disable_lookups).strip()
if val == "True":
return True
elif val == "False":
return False
else:
raise AnsibleError("unable to evaluate conditional: %s" % original)
except (AnsibleUndefinedVariable, UndefinedError) as e:
# the templating failed, meaning most likely a variable was undefined. If we happened
# to be looking for an undefined variable, return True, otherwise fail
try:
# first we extract the variable name from the error message
var_name = re.compile(r"'(hostvars\[.+\]|[\w_]+)' is undefined").search(str(e)).groups()[0]
# next we extract all defined/undefined tests from the conditional string
def_undef = self.extract_defined_undefined(conditional)
# then we loop through these, comparing the error variable name against
# each def/undef test we found above. If there is a match, we determine
# whether the logic/state mean the variable should exist or not and return
# the corresponding True/False
for (du_var, logic, state) in def_undef:
# when we compare the var names, normalize quotes because something
# like hostvars['foo'] may be tested against hostvars["foo"]
if var_name.replace("'", '"') == du_var.replace("'", '"'):
# the should exist is a xor test between a negation in the logic portion
# against the state (defined or undefined)
should_exist = ('not' in logic) != (state == 'defined')
if should_exist:
return False
else:
return True
# as nothing above matched the failed var name, re-raise here to
# trigger the AnsibleUndefinedVariable exception again below
raise
except Exception:
raise AnsibleUndefinedVariable("error while evaluating conditional (%s): %s" % (original, e))
|
ansible/ansible
|
lib/ansible/playbook/conditional.py
|
Python
|
gpl-3.0
| 9,983
|
[
"VisIt"
] |
20c9d059974c716365a56b339a70b0b289fee73a6ede43cc85277c4e291ffb35
|
from setuptools import find_packages, setup
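# Read the package version from analyticord/__init__.py by scanning for a
# line like "__version__ = '1.2.3'"; fall back to '0.0.1' if none is found.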
with open('analyticord/__init__.py', 'r') as f:
for line in f:
if line.startswith('__version__'):
version = line.strip().split('=')[1].strip(' \'"')
break
else:
version = '0.0.1'
with open('README.md', 'rb') as f:
readme = f.read().decode('utf-8')
REQUIRES = ["aiohttp"]
setup(
name='analyticord',
version=version,
description='',
long_description=readme,
author='Ben Simms',
author_email='ben@bensimms.moe',
maintainer='Ben Simms',
maintainer_email='ben@bensimms.moe',
url='https://github.com/nitros12/analyticord',
license='MIT',
keywords=[
'',
],
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
],
install_requires=REQUIRES,
#tests_require=['coverage', 'pytest'],
extras_require={
"docs": [
"sphinx-autodoc-typehints >= 1.2.1",
"sphinxcontrib-asyncio"
]},
packages=find_packages(),
)
|
Analyticord/module-python
|
setup.py
|
Python
|
mit
| 1,330
|
[
"MOE"
] |
b1046e90224ea9a98bd3d9959432fe27375191b4ff3af1e36a4e2121e857d6af
|
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
import os
import unittest
from monty.serialization import dumpfn, loadfn
from pymatgen.core.structure import Molecule
from pymatgen.io.qchem.outputs import QCOutput, check_for_structure_changes
from pymatgen.util.testing import PymatgenTest
try:
from openbabel import openbabel
openbabel # reference openbabel so it's not unused import
have_babel = True
except ImportError:
have_babel = False
__author__ = "Samuel Blau, Brandon Wood, Shyam Dwaraknath, Evan Spotte-Smith"
__copyright__ = "Copyright 2018, The Materials Project"
__version__ = "0.1"
single_job_dict = loadfn(os.path.join(os.path.dirname(__file__), "single_job.json"))
multi_job_dict = loadfn(os.path.join(os.path.dirname(__file__), "multi_job.json"))
property_list = {
"errors",
"multiple_outputs",
"completion",
"unrestricted",
"using_GEN_SCFMAN",
"final_energy",
"S2",
"optimization",
"energy_trajectory",
"opt_constraint",
"frequency_job",
"charge",
"multiplicity",
"species",
"initial_geometry",
"initial_molecule",
"SCF",
"Mulliken",
"optimized_geometry",
"optimized_zmat",
"molecule_from_optimized_geometry",
"last_geometry",
"molecule_from_last_geometry",
"geometries",
"gradients",
"frequency_mode_vectors",
"walltime",
"cputime",
"point_group",
"frequencies",
"IR_intens",
"IR_active",
"g_electrostatic",
"g_cavitation",
"g_dispersion",
"g_repulsion",
"total_contribution_pcm",
"ZPE",
"trans_enthalpy",
"vib_enthalpy",
"rot_enthalpy",
"gas_constant",
"trans_entropy",
"vib_entropy",
"rot_entropy",
"total_entropy",
"total_enthalpy",
"warnings",
"SCF_energy_in_the_final_basis_set",
"Total_energy_in_the_final_basis_set",
"solvent_method",
"solvent_data",
"using_dft_d3",
"single_point_job",
"force_job",
"pcm_gradients",
"CDS_gradients",
"RESP",
"trans_dip",
"transition_state",
"scan_job",
"optimized_geometries",
"molecules_from_optimized_geometries",
"scan_energies",
"scan_constraint_sets",
"hf_scf_energy",
"mp2_energy",
"ccsd_correlation_energy",
"ccsd_total_energy",
"ccsd(t)_correlation_energy",
"ccsd(t)_total_energy",
}
if have_babel:
property_list.add("structure_change")
single_job_out_names = {
"unable_to_determine_lambda_in_geom_opt.qcout",
"thiophene_wfs_5_carboxyl.qcout",
"hf.qcout",
"hf_opt_failed.qcout",
"no_reading.qcout",
"exit_code_134.qcout",
"negative_eigen.qcout",
"insufficient_memory.qcout",
"freq_seg_too_small.qcout",
"crowd_gradient_number.qcout",
"quinoxaline_anion.qcout",
"tfsi_nbo.qcout",
"crowd_nbo_charges.qcout",
"h2o_aimd.qcout",
"quinoxaline_anion.qcout",
"crowd_gradient_number.qcout",
"bsse.qcout",
"thiophene_wfs_5_carboxyl.qcout",
"time_nan_values.qcout",
"pt_dft_180.0.qcout",
"qchem_energies/hf-rimp2.qcout",
"qchem_energies/hf_b3lyp.qcout",
"qchem_energies/hf_ccsd(t).qcout",
"qchem_energies/hf_cosmo.qcout",
"qchem_energies/hf_hf.qcout",
"qchem_energies/hf_lxygjos.qcout",
"qchem_energies/hf_mosmp2.qcout",
"qchem_energies/hf_mp2.qcout",
"qchem_energies/hf_qcisd(t).qcout",
"qchem_energies/hf_riccsd(t).qcout",
"qchem_energies/hf_tpssh.qcout",
"qchem_energies/hf_xyg3.qcout",
"qchem_energies/hf_xygjos.qcout",
"qchem_energies/hf_wb97xd_gen_scfman.qcout",
"new_qchem_files/pt_n2_n_wb_180.0.qcout",
"new_qchem_files/pt_n2_trip_wb_90.0.qcout",
"new_qchem_files/pt_n2_gs_rimp2_pvqz_90.0.qcout",
"new_qchem_files/VC_solv_eps10.2.qcout",
"crazy_scf_values.qcout",
"new_qchem_files/N2.qcout",
"new_qchem_files/julian.qcout.gz",
"new_qchem_files/Frequency_no_equal.qout",
"new_qchem_files/gdm.qout",
"new_qchem_files/DinfH.qout",
"new_qchem_files/mpi_error.qout",
"new_qchem_files/molecule_read_error.qout",
"new_qchem_files/basis_not_supported.qout",
"new_qchem_files/lebdevpts.qout",
"new_qchem_files/Optimization_no_equal.qout",
"new_qchem_files/2068.qout",
"new_qchem_files/2620.qout",
"new_qchem_files/1746.qout",
"new_qchem_files/1570.qout",
"new_qchem_files/1570_2.qout",
"new_qchem_files/single_point.qout",
"new_qchem_files/roothaan_diis_gdm.qout",
"new_qchem_files/pes_scan_single_variable.qout",
"new_qchem_files/pes_scan_double_variable.qout",
"new_qchem_files/ts.out",
"new_qchem_files/ccsd.qout",
"new_qchem_files/ccsdt.qout",
}
multi_job_out_names = {
"not_enough_total_memory.qcout",
"new_qchem_files/VC_solv_eps10.qcout",
"new_qchem_files/MECLi_solv_eps10.qcout",
"pcm_solvent_deprecated.qcout",
"qchem43_batch_job.qcout",
"ferrocenium_1pos.qcout",
"CdBr2.qcout",
"killed.qcout",
"aux_mpi_time_mol.qcout",
"new_qchem_files/VCLi_solv_eps10.qcout",
}
class TestQCOutput(PymatgenTest):
@staticmethod
def generate_single_job_dict():
"""
Used to generate test dictionary for single jobs.
"""
single_job_dict = {}
for file in single_job_out_names:
single_job_dict[file] = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", file)).data
dumpfn(single_job_dict, "single_job.json")
@staticmethod
def generate_multi_job_dict():
"""
Used to generate test dictionary for multiple jobs.
"""
multi_job_dict = {}
for file in multi_job_out_names:
outputs = QCOutput.multiple_outputs_from_file(
QCOutput, os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", file), keep_sub_files=False
)
data = []
for sub_output in outputs:
data.append(sub_output.data)
multi_job_dict[file] = data
dumpfn(multi_job_dict, "multi_job.json")
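    # The two generators above are meant to be run manually to refresh the
    # reference JSON files, e.g. (illustrative):
    #   TestQCOutput.generate_single_job_dict()
    #   TestQCOutput.generate_multi_job_dict()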
def _test_property(self, key, single_outs, multi_outs):
for name, outdata in single_outs.items():
try:
self.assertEqual(outdata.get(key), single_job_dict[name].get(key))
except ValueError:
self.assertArrayEqual(outdata.get(key), single_job_dict[name].get(key))
for name, outputs in multi_outs.items():
for ii, sub_output in enumerate(outputs):
try:
self.assertEqual(sub_output.data.get(key), multi_job_dict[name][ii].get(key))
except ValueError:
self.assertArrayEqual(sub_output.data.get(key), multi_job_dict[name][ii].get(key))
def test_all(self):
self.maxDiff = None
single_outs = {}
for file in single_job_out_names:
single_outs[file] = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", file)).data
multi_outs = {}
for file in multi_job_out_names:
multi_outs[file] = QCOutput.multiple_outputs_from_file(
QCOutput, os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", file), keep_sub_files=False
)
for key in property_list:
print("Testing ", key)
self._test_property(key, single_outs, multi_outs)
    @unittest.skipIf(not have_babel, "OpenBabel not installed.")
def test_structural_change(self):
t1 = Molecule.from_file(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "structural_change", "t1.xyz"))
t2 = Molecule.from_file(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "structural_change", "t2.xyz"))
t3 = Molecule.from_file(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "structural_change", "t3.xyz"))
thio_1 = Molecule.from_file(
os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "structural_change", "thiophene1.xyz")
)
thio_2 = Molecule.from_file(
os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "structural_change", "thiophene2.xyz")
)
frag_1 = Molecule.from_file(
os.path.join(
PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "test_structure_change", "frag_1.xyz"
)
)
frag_2 = Molecule.from_file(
os.path.join(
PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "test_structure_change", "frag_2.xyz"
)
)
self.assertEqual(check_for_structure_changes(t1, t1), "no_change")
self.assertEqual(check_for_structure_changes(t2, t3), "no_change")
self.assertEqual(check_for_structure_changes(t1, t2), "fewer_bonds")
self.assertEqual(check_for_structure_changes(t2, t1), "more_bonds")
self.assertEqual(check_for_structure_changes(thio_1, thio_2), "unconnected_fragments")
self.assertEqual(check_for_structure_changes(frag_1, frag_2), "bond_change")
def test_NBO_parsing(self):
data = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo.qout")).data
self.assertEqual(len(data["nbo_data"]["natural_populations"]), 3)
self.assertEqual(len(data["nbo_data"]["hybridization_character"]), 4)
self.assertEqual(len(data["nbo_data"]["perturbation_energy"]), 2)
self.assertEqual(data["nbo_data"]["natural_populations"][0]["Density"][5], -0.08624)
self.assertEqual(data["nbo_data"]["hybridization_character"][-1]["atom 2 pol coeff"][35], "-0.7059")
next_to_last = list(data["nbo_data"]["perturbation_energy"][-1]["fock matrix element"].keys())[-2]
self.assertEqual(data["nbo_data"]["perturbation_energy"][-1]["fock matrix element"][next_to_last], 0.071)
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor type"][0], "RY*")
def test_NBO7_parsing(self):
data = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo7_1.qout")).data
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["perturbation energy"][9], 15.73)
self.assertEqual(len(data["nbo_data"]["perturbation_energy"][0]["donor bond index"].keys()), 84)
self.assertEqual(len(data["nbo_data"]["perturbation_energy"][1]["donor bond index"].keys()), 29)
data = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo7_2.qout")).data
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["perturbation energy"][13], 32.93)
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor type"][13], "LV")
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor type"][12], "RY")
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor atom 1 symbol"][12], "Mg")
data = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo7_3.qout")).data
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["perturbation energy"][13], 34.54)
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor type"][13], "BD*")
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor atom 1 symbol"][13], "B")
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor atom 2 symbol"][13], "Mg")
self.assertEqual(data["nbo_data"]["perturbation_energy"][0]["acceptor atom 2 number"][13], 3)
def test_NBO5_vs_NBO7_hybridization_character(self):
data5 = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo5_1.qout")).data
data7 = QCOutput(os.path.join(PymatgenTest.TEST_FILES_DIR, "molecules", "new_qchem_files", "nbo7_1.qout")).data
self.assertEqual(
len(data5["nbo_data"]["hybridization_character"]), len(data7["nbo_data"]["hybridization_character"])
)
self.assertEqual(
data5["nbo_data"]["hybridization_character"][3]["atom 2 pol coeff"][9],
data7["nbo_data"]["hybridization_character"][3]["atom 2 pol coeff"][9],
)
self.assertEqual(
data5["nbo_data"]["hybridization_character"][0]["s"][0],
data7["nbo_data"]["hybridization_character"][0]["s"][0],
)
self.assertEqual(data5["nbo_data"]["hybridization_character"][1]["bond index"][7], "149")
self.assertEqual(data7["nbo_data"]["hybridization_character"][1]["bond index"][7], "21")
if __name__ == "__main__":
unittest.main()
|
materialsproject/pymatgen
|
pymatgen/io/qchem/tests/test_outputs.py
|
Python
|
mit
| 12,614
|
[
"pymatgen"
] |
9efd620b5d8703b7d792b73b140af35fc74758e0199c6c348686007374481944
|
#!/usr/bin/env python
# Copyright 2012, 2013 The GalSim developers:
# https://github.com/GalSim-developers
#
# This file is part of GalSim: The modular galaxy image simulation toolkit.
#
# GalSim is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# GalSim is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GalSim. If not, see <http://www.gnu.org/licenses/>
#
__applicationName__ = "doxypy"
__blurb__ = """
doxypy is an input filter for Doxygen. It preprocesses python
files so that docstrings of classes and functions are reformatted
into Doxygen-conform documentation blocks.
"""
__doc__ = __blurb__ + \
"""
In order to make Doxygen preprocess files through doxypy, simply
add the following lines to your Doxyfile:
FILTER_SOURCE_FILES = YES
INPUT_FILTER = "python /path/to/doxypy.py"
"""
__version__ = "0.4.2"
__date__ = "14th October 2009"
__website__ = "http://code.foosel.org/doxypy"
__author__ = (
"Philippe 'demod' Neumann (doxypy at demod dot org)",
"Gina 'foosel' Haeussge (gina at foosel dot net)"
)
__licenseName__ = "GPL v2"
__license__ = """This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
import sys
import re
from optparse import OptionParser, OptionGroup
class FSM(object):
"""Implements a finite state machine.
Transitions are given as 4-tuples, consisting of an origin state, a target
state, a condition for the transition (given as a reference to a function
which gets called with a given piece of input) and a pointer to a function
to be called upon the execution of the given transition.
"""
"""
@var transitions holds the transitions
@var current_state holds the current state
@var current_input holds the current input
@var current_transition hold the currently active transition
"""
def __init__(self, start_state=None, transitions=[]):
self.transitions = transitions
self.current_state = start_state
self.current_input = None
self.current_transition = None
def setStartState(self, state):
self.current_state = state
def addTransition(self, from_state, to_state, condition, callback):
self.transitions.append([from_state, to_state, condition, callback])
def makeTransition(self, input):
"""Makes a transition based on the given input.
@param input input to parse by the FSM
"""
for transition in self.transitions:
[from_state, to_state, condition, callback] = transition
if from_state == self.current_state:
match = condition(input)
if match:
self.current_state = to_state
self.current_input = input
self.current_transition = transition
if options.debug:
print >>sys.stderr, "# FSM: executing (%s -> %s) for line '%s'" % (from_state, to_state, input)
callback(match)
return
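# Minimal illustration of the FSM protocol (hypothetical states and
# callback, assuming options.debug is defined as in this module):
#   fsm = FSM("START", [])
#   fsm.addTransition("START", "DONE", lambda line: line == "x", callback)
#   fsm.makeTransition("x")   # current_state becomes "DONE"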
class Doxypy(object):
def __init__(self):
string_prefixes = "[uU]?[rR]?"
self.start_single_comment_re = re.compile("^\s*%s(''')" % string_prefixes)
self.end_single_comment_re = re.compile("(''')\s*$")
self.start_double_comment_re = re.compile("^\s*%s(\"\"\")" % string_prefixes)
self.end_double_comment_re = re.compile("(\"\"\")\s*$")
self.single_comment_re = re.compile("^\s*%s(''').*(''')\s*$" % string_prefixes)
self.double_comment_re = re.compile("^\s*%s(\"\"\").*(\"\"\")\s*$" % string_prefixes)
self.defclass_re = re.compile("^(\s*)(def .+:|class .+:)")
self.empty_re = re.compile("^\s*$")
self.hashline_re = re.compile("^\s*#.*$")
self.importline_re = re.compile("^\s*(import |from .+ import)")
self.multiline_defclass_start_re = re.compile("^(\s*)(def|class)(\s.*)?$")
self.multiline_defclass_end_re = re.compile(":\s*$")
## Transition list format
# ["FROM", "TO", condition, action]
transitions = [
### FILEHEAD
# single line comments
["FILEHEAD", "FILEHEAD", self.single_comment_re.search, self.appendCommentLine],
["FILEHEAD", "FILEHEAD", self.double_comment_re.search, self.appendCommentLine],
# multiline comments
["FILEHEAD", "FILEHEAD_COMMENT_SINGLE", self.start_single_comment_re.search, self.appendCommentLine],
["FILEHEAD_COMMENT_SINGLE", "FILEHEAD", self.end_single_comment_re.search, self.appendCommentLine],
["FILEHEAD_COMMENT_SINGLE", "FILEHEAD_COMMENT_SINGLE", self.catchall, self.appendCommentLine],
["FILEHEAD", "FILEHEAD_COMMENT_DOUBLE", self.start_double_comment_re.search, self.appendCommentLine],
["FILEHEAD_COMMENT_DOUBLE", "FILEHEAD", self.end_double_comment_re.search, self.appendCommentLine],
["FILEHEAD_COMMENT_DOUBLE", "FILEHEAD_COMMENT_DOUBLE", self.catchall, self.appendCommentLine],
# other lines
["FILEHEAD", "FILEHEAD", self.empty_re.search, self.appendFileheadLine],
["FILEHEAD", "FILEHEAD", self.hashline_re.search, self.appendFileheadLine],
["FILEHEAD", "FILEHEAD", self.importline_re.search, self.appendFileheadLine],
["FILEHEAD", "DEFCLASS", self.defclass_re.search, self.resetCommentSearch],
["FILEHEAD", "DEFCLASS_MULTI", self.multiline_defclass_start_re.search, self.resetCommentSearch],
["FILEHEAD", "DEFCLASS_BODY", self.catchall, self.appendFileheadLine],
### DEFCLASS
# single line comments
["DEFCLASS", "DEFCLASS_BODY", self.single_comment_re.search, self.appendCommentLine],
["DEFCLASS", "DEFCLASS_BODY", self.double_comment_re.search, self.appendCommentLine],
# multiline comments
["DEFCLASS", "COMMENT_SINGLE", self.start_single_comment_re.search, self.appendCommentLine],
["COMMENT_SINGLE", "DEFCLASS_BODY", self.end_single_comment_re.search, self.appendCommentLine],
["COMMENT_SINGLE", "COMMENT_SINGLE", self.catchall, self.appendCommentLine],
["DEFCLASS", "COMMENT_DOUBLE", self.start_double_comment_re.search, self.appendCommentLine],
["COMMENT_DOUBLE", "DEFCLASS_BODY", self.end_double_comment_re.search, self.appendCommentLine],
["COMMENT_DOUBLE", "COMMENT_DOUBLE", self.catchall, self.appendCommentLine],
# other lines
["DEFCLASS", "DEFCLASS", self.empty_re.search, self.appendDefclassLine],
["DEFCLASS", "DEFCLASS", self.defclass_re.search, self.resetCommentSearch],
["DEFCLASS", "DEFCLASS_MULTI", self.multiline_defclass_start_re.search, self.resetCommentSearch],
["DEFCLASS", "DEFCLASS_BODY", self.catchall, self.stopCommentSearch],
### DEFCLASS_BODY
["DEFCLASS_BODY", "DEFCLASS", self.defclass_re.search, self.startCommentSearch],
["DEFCLASS_BODY", "DEFCLASS_MULTI", self.multiline_defclass_start_re.search, self.startCommentSearch],
["DEFCLASS_BODY", "DEFCLASS_BODY", self.catchall, self.appendNormalLine],
### DEFCLASS_MULTI
["DEFCLASS_MULTI", "DEFCLASS", self.multiline_defclass_end_re.search, self.appendDefclassLine],
["DEFCLASS_MULTI", "DEFCLASS_MULTI", self.catchall, self.appendDefclassLine],
]
self.fsm = FSM("FILEHEAD", transitions)
self.outstream = sys.stdout
self.output = []
self.comment = []
self.filehead = []
self.defclass = []
self.indent = ""
def __closeComment(self):
"""Appends any open comment block and triggering block to the output."""
if options.autobrief:
if len(self.comment) == 1 \
or (len(self.comment) > 2 and self.comment[1].strip() == ''):
self.comment[0] = self.__docstringSummaryToBrief(self.comment[0])
if self.comment:
block = self.makeCommentBlock()
self.output.extend(block)
if self.defclass:
self.output.extend(self.defclass)
def __docstringSummaryToBrief(self, line):
"""Adds \\brief to the docstrings summary line.
A \\brief is prepended, provided no other doxygen command is at the
start of the line.
"""
stripped = line.strip()
if stripped and not stripped[0] in ('@', '\\'):
return "\\brief " + line
else:
return line
def __flushBuffer(self):
"""Flushes the current outputbuffer to the outstream."""
if self.output:
try:
if options.debug:
print >>sys.stderr, "# OUTPUT: ", self.output
print >>self.outstream, "\n".join(self.output)
self.outstream.flush()
except IOError:
# Fix for FS#33. Catches "broken pipe" when doxygen closes
# stdout prematurely upon usage of INPUT_FILTER, INLINE_SOURCES
# and FILTER_SOURCE_FILES.
pass
self.output = []
def catchall(self, input):
"""The catchall-condition, always returns true."""
return True
def resetCommentSearch(self, match):
"""Restarts a new comment search for a different triggering line.
Closes the current commentblock and starts a new comment search.
"""
if options.debug:
print >>sys.stderr, "# CALLBACK: resetCommentSearch"
self.__closeComment()
self.startCommentSearch(match)
def startCommentSearch(self, match):
"""Starts a new comment search.
Saves the triggering line, resets the current comment and saves
the current indentation.
"""
if options.debug:
print >>sys.stderr, "# CALLBACK: startCommentSearch"
self.defclass = [self.fsm.current_input]
self.comment = []
self.indent = match.group(1)
def stopCommentSearch(self, match):
"""Stops a comment search.
Closes the current commentblock, resets the triggering line and
appends the current line to the output.
"""
if options.debug:
print >>sys.stderr, "# CALLBACK: stopCommentSearch"
self.__closeComment()
self.defclass = []
self.output.append(self.fsm.current_input)
def appendFileheadLine(self, match):
"""Appends a line in the FILEHEAD state.
Closes the open comment block, resets it and appends the current line.
"""
if options.debug:
print >>sys.stderr, "# CALLBACK: appendFileheadLine"
self.__closeComment()
self.comment = []
self.output.append(self.fsm.current_input)
def appendCommentLine(self, match):
"""Appends a comment line.
        The comment delimiter is removed from multiline starts and ends
        as well as from single-line comments.
"""
if options.debug:
print >>sys.stderr, "# CALLBACK: appendCommentLine"
(from_state, to_state, condition, callback) = self.fsm.current_transition
# single line comment
if (from_state == "DEFCLASS" and to_state == "DEFCLASS_BODY") \
or (from_state == "FILEHEAD" and to_state == "FILEHEAD"):
# remove comment delimiter from begin and end of the line
activeCommentDelim = match.group(1)
line = self.fsm.current_input
self.comment.append(line[line.find(activeCommentDelim)+len(activeCommentDelim):line.rfind(activeCommentDelim)])
if (to_state == "DEFCLASS_BODY"):
self.__closeComment()
self.defclass = []
# multiline start
elif from_state == "DEFCLASS" or from_state == "FILEHEAD":
# remove comment delimiter from begin of the line
activeCommentDelim = match.group(1)
line = self.fsm.current_input
self.comment.append(line[line.find(activeCommentDelim)+len(activeCommentDelim):])
# multiline end
elif to_state == "DEFCLASS_BODY" or to_state == "FILEHEAD":
# remove comment delimiter from end of the line
activeCommentDelim = match.group(1)
line = self.fsm.current_input
self.comment.append(line[0:line.rfind(activeCommentDelim)])
if (to_state == "DEFCLASS_BODY"):
self.__closeComment()
self.defclass = []
# in multiline comment
else:
# just append the comment line
self.comment.append(self.fsm.current_input)
def appendNormalLine(self, match):
"""Appends a line to the output."""
if options.debug:
print >>sys.stderr, "# CALLBACK: appendNormalLine"
self.output.append(self.fsm.current_input)
def appendDefclassLine(self, match):
"""Appends a line to the triggering block."""
if options.debug:
print >>sys.stderr, "# CALLBACK: appendDefclassLine"
self.defclass.append(self.fsm.current_input)
def makeCommentBlock(self):
"""Indents the current comment block with respect to the current
indentation level.
@returns a list of indented comment lines
"""
doxyStart = "##"
commentLines = self.comment
commentLines = map(lambda x: "%s# %s" % (self.indent, x), commentLines)
l = [self.indent + doxyStart]
l.extend(commentLines)
return l
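    # Example (sketch): with self.indent == "    " and
    # self.comment == ["Summary.", "", "Details."], makeCommentBlock()
    # returns ["    ##", "    # Summary.", "    # ", "    # Details."]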
def parse(self, input):
"""Parses a python file given as input string and returns the doxygen-
compatible representation.
@param input the python code to parse
@returns the modified python code
"""
lines = input.split("\n")
for line in lines:
self.fsm.makeTransition(line)
if self.fsm.current_state == "DEFCLASS":
self.__closeComment()
return "\n".join(self.output)
def parseFile(self, filename):
"""Parses a python file given as input string and returns the doxygen-
compatible representation.
@param input the python code to parse
@returns the modified python code
"""
f = open(filename, 'r')
for line in f:
self.parseLine(line.rstrip('\r\n'))
if self.fsm.current_state == "DEFCLASS":
self.__closeComment()
self.__flushBuffer()
f.close()
def parseLine(self, line):
"""Parse one line of python and flush the resulting output to the
outstream.
@param line the python code line to parse
"""
self.fsm.makeTransition(line)
self.__flushBuffer()
def optParse():
"""Parses commandline options."""
parser = OptionParser(prog=__applicationName__, version="%prog " + __version__)
parser.set_usage("%prog [options] filename")
parser.add_option("--autobrief",
action="store_true", dest="autobrief",
help="use the docstring summary line as \\brief description"
)
parser.add_option("--debug",
action="store_true", dest="debug",
help="enable debug output on stderr"
)
## parse options
global options
    (options, args) = parser.parse_args()
    if not args:
        print >>sys.stderr, "No filename given."
        sys.exit(-1)
    return args[0]
def main():
"""Starts the parser on the file given by the filename as the first
argument on the commandline.
"""
filename = optParse()
fsm = Doxypy()
fsm.parseFile(filename)
if __name__ == "__main__":
main()
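# Example usage (sketch; the flags are those defined in optParse above):
#   python doxypy.py --autobrief module.py
# To hook the filter into doxygen, point INPUT_FILTER at this script
# (cf. the FS#33 note in __flushBuffer), e.g. in the Doxyfile:
#   INPUT_FILTER = "python doxypy.py"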
|
mardom/GalSim
|
doc/doxypy.py
|
Python
|
gpl-3.0
| 14,788
|
[
"Galaxy"
] |
a3beeea788b9b27c488dbb6174d72b6cc9c15b54a34a4f34c71d718056472fdd
|
#!/usr/bin/env python
# Author:
# Contact: grubert@users.sf.net
# Copyright: This module has been placed in the public domain.
"""
man.py
======
This module provides a simple command line interface that uses the
man page writer to produce man pages from reStructuredText source.
"""
import locale
try:
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
pass
from docutils.core import publish_cmdline, default_description
# $Id: manpage.py 5645 2014-09-21 08:25:13Z grubert $
# Author: Engelbert Gruber <grubert@users.sourceforge.net>
# Copyright: This module is put into the public domain.
"""
Simple man page writer for reStructuredText.
Man pages (short for "manual pages") contain system documentation on unix-like
systems. The pages are grouped in numbered sections:
1 executable programs and shell commands
2 system calls
3 library functions
4 special files
5 file formats
6 games
7 miscellaneous
8 system administration
Man pages are written in *troff*, a text-file formatting system.
See http://www.tldp.org/HOWTO/Man-Page for a start.
Man pages have no subsections, only parts.
Standard parts
NAME ,
SYNOPSIS ,
DESCRIPTION ,
OPTIONS ,
FILES ,
SEE ALSO ,
BUGS ,
and
AUTHOR .
A unix-like system keeps an index of the DESCRIPTIONs, which is accessible
via the commands whatis or apropos.
"""
# NOTE: the macros only work at line start, so follow the rule:
# start new lines in visit_ functions.
__docformat__ = 'reStructuredText'
import sys
import os
import time
import re
from types import ListType
import docutils
from docutils import nodes, utils, writers, languages
FIELD_LIST_INDENT = 7
DEFINITION_LIST_INDENT = 7
OPTION_LIST_INDENT = 7
BLOCKQUOTE_INDENT = 3.5
# Define two macros so man/roff can calculate the
# indent/unindent margins by itself
MACRO_DEF = (r"""
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
""")
class Writer(writers.Writer):
    supported = ('manpage',)
"""Formats this writer supports."""
output = None
"""Final translated form of `document`."""
def __init__(self):
writers.Writer.__init__(self)
self.translator_class = Translator
def translate(self):
visitor = self.translator_class(self.document)
self.document.walkabout(visitor)
self.output = visitor.astext()
class Table:
def __init__(self):
self._rows = []
self._options = ['center', ]
self._tab_char = '\t'
self._coldefs = []
def new_row(self):
self._rows.append([])
def append_cell(self, cell_lines):
"""cell_lines is an array of lines"""
self._rows[-1].append(cell_lines)
if len(self._coldefs) < len(self._rows[-1]):
self._coldefs.append('l')
def astext(self):
text = '.TS\n'
text += ' '.join(self._options) + ';\n'
text += '|%s|.\n' % ('|'.join(self._coldefs))
for row in self._rows:
# row = array of cells. cell = array of lines.
# line above
text += '_\n'
max_lns_in_cell = 0
for cell in row:
max_lns_in_cell = max(len(cell), max_lns_in_cell)
for ln_cnt in range(max_lns_in_cell):
line = []
for cell in row:
if len(cell) > ln_cnt:
line.append(cell[ln_cnt])
else:
line.append(" ")
text += self._tab_char.join(line) + '\n'
text += '_\n'
text += '.TE\n'
return text
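# Illustrative sketch (not part of the writer): how the visitor below drives
# Table. The cell contents are invented; astext() emits a tbl block
# delimited by ".TS"/".TE".
def _table_demo():
    table = Table()
    table.new_row()
    table.append_cell(["name"])         # a cell is a list of lines
    table.append_cell(["description"])
    return table.astext()
    # -> '.TS\ncenter;\n|l|l|.\n_\nname\tdescription\n_\n.TE\n'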
class Translator(nodes.NodeVisitor):
""""""
words_and_spaces = re.compile(r'\S+| +|\n')
document_start = """Man page generated from reStructeredText."""
def __init__(self, document):
nodes.NodeVisitor.__init__(self, document)
self.settings = settings = document.settings
lcode = settings.language_code
self.language = languages.get_language(lcode)
self.head = []
self.body = []
self.foot = []
self.section_level = 0
self.context = []
self.topic_class = ''
self.colspecs = []
self.compact_p = 1
self.compact_simple = None
# the list style "*" bullet or "#" numbered
self._list_char = []
        # writing the header (.TH and .SH NAME) is postponed until after
        # the docinfo.
self._docinfo = {
"title" : "", "subtitle" : "",
"manual_section" : "", "manual_group" : "",
"author" : "",
"date" : "",
"copyright" : "",
"version" : "",
}
self._in_docinfo = None
self._active_table = None
self._in_entry = None
self.header_written = 0
self.authors = []
self.section_level = 0
self._indent = [0]
# central definition of simple processing rules
# what to output on : visit, depart
self.defs = {
'indent' : ('.INDENT %.1f\n', '.UNINDENT\n'),
'definition' : ('', ''),
'definition_list' : ('', '.TP 0\n'),
'definition_list_item' : ('\n.TP', ''),
#field_list
#field
'field_name' : ('\n.TP\n.B ', '\n'),
'field_body' : ('', '.RE\n', ),
'literal' : ('\\fB', '\\fP'),
'literal_block' : ('\n.nf\n', '\n.fi\n'),
#option_list
'option_list_item' : ('\n.TP', ''),
#option_group, option
'description' : ('\n', ''),
'reference' : (r'\fI\%', r'\fP'),
#'target' : (r'\fI\%', r'\fP'),
'emphasis': ('\\fI', '\\fP'),
'strong' : ('\\fB', '\\fP'),
'term' : ('\n.B ', '\n'),
'title_reference' : ('\\fI', '\\fP'),
'problematic' : ('\n.nf\n', '\n.fi\n'),
}
    # TODO: don't emit the newline before a dot-command here, but ensure
    # it is there.
def comment_begin(self, text):
"""Return commented version of the passed text WITHOUT end of line/comment."""
prefix = '\n.\\" '
return prefix+prefix.join(text.split('\n'))
def comment(self, text):
"""Return commented version of the passed text."""
return self.comment_begin(text)+'\n'
def astext(self):
"""Return the final formatted document as a string."""
if not self.header_written:
# ensure we get a ".TH" as viewers require it.
self.head.append(self.header())
return ''.join(self.head + self.body + self.foot)
def visit_Text(self, node):
        text = node.astext().replace('-', r'\-')
text = text.replace("'","\\'")
self.body.append(text)
def depart_Text(self, node):
pass
def list_start(self, node):
class enum_char:
enum_style = {
'arabic' : (3,1),
'loweralpha' : (3,'a'),
'upperalpha' : (3,'A'),
'lowerroman' : (5,'i'),
'upperroman' : (5,'I'),
'bullet' : (2,'\\(bu'),
'emdash' : (2,'\\(em'),
}
def __init__(self, style):
if style == 'arabic':
if node.has_key('start'):
start = node['start']
else:
start = 1
self._style = (
len(str(len(node.children)))+2,
start )
# BUG: fix start for alpha
else:
self._style = self.enum_style[style]
self._cnt = -1
def next(self):
self._cnt += 1
# BUG add prefix postfix
try:
return "%d." % (self._style[1] + self._cnt)
except:
if self._style[1][0] == '\\':
return self._style[1]
                # BUG: romans don't work
                # BUG: alpha covers only a...z
return "%c." % (ord(self._style[1])+self._cnt)
def get_width(self):
return self._style[0]
def __repr__(self):
return 'enum_style%r' % list(self._style)
if node.has_key('enumtype'):
self._list_char.append(enum_char(node['enumtype']))
else:
self._list_char.append(enum_char('bullet'))
if len(self._list_char) > 1:
# indent nested lists
# BUG indentation depends on indentation of parent list.
self.indent(self._list_char[-2].get_width())
else:
self.indent(self._list_char[-1].get_width())
def list_end(self):
self.dedent()
self._list_char.pop()
def header(self):
tmpl = (".TH %(title)s %(manual_section)s"
" \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
".SH NAME\n"
"%(title)s \- %(subtitle)s\n")
return tmpl % self._docinfo
def append_header(self):
"""append header with .TH and .SH NAME"""
# TODO before everything
# .TH title section date source manual
if self.header_written:
return
self.body.append(self.header())
self.body.append(MACRO_DEF)
self.header_written = 1
def visit_address(self, node):
raise NotImplementedError, node.astext()
self.visit_docinfo_item(node, 'address', meta=None)
def depart_address(self, node):
self.depart_docinfo_item()
def visit_admonition(self, node, name):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'div', CLASS=name))
self.body.append('<p class="admonition-title">'
+ self.language.labels[name] + '</p>\n')
def depart_admonition(self):
        raise NotImplementedError
self.body.append('</div>\n')
def visit_attention(self, node):
self.visit_admonition(node, 'attention')
def depart_attention(self, node):
self.depart_admonition()
def visit_author(self, node):
self._docinfo['author'] = node.astext()
raise nodes.SkipNode
def depart_author(self, node):
pass
def visit_authors(self, node):
self.body.append(self.comment('visit_authors'))
def depart_authors(self, node):
self.body.append(self.comment('depart_authors'))
def visit_block_quote(self, node):
#self.body.append(self.comment('visit_block_quote'))
        # BUG/HACK: indent() always uses the _last_ indentation,
        # thus we need two of them.
        self.indent(BLOCKQUOTE_INDENT)
self.indent(0)
def depart_block_quote(self, node):
#self.body.append(self.comment('depart_block_quote'))
self.dedent()
self.dedent()
def visit_bullet_list(self, node):
self.list_start(node)
def depart_bullet_list(self, node):
self.list_end()
def visit_caption(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'p', '', CLASS='caption'))
def depart_caption(self, node):
raise NotImplementedError, node.astext()
self.body.append('</p>\n')
def visit_caution(self, node):
self.visit_admonition(node, 'caution')
def depart_caution(self, node):
self.depart_admonition()
def visit_citation(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'table', CLASS='citation',
frame="void", rules="none"))
self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
'<col />\n'
'<tbody valign="top">\n'
'<tr>')
self.footnote_backrefs(node)
def depart_citation(self, node):
raise NotImplementedError, node.astext()
self.body.append('</td></tr>\n'
'</tbody>\n</table>\n')
def visit_citation_reference(self, node):
raise NotImplementedError, node.astext()
href = ''
if node.has_key('refid'):
href = '#' + node['refid']
elif node.has_key('refname'):
href = '#' + self.document.nameids[node['refname']]
self.body.append(self.starttag(node, 'a', '[', href=href,
CLASS='citation-reference'))
def depart_citation_reference(self, node):
raise NotImplementedError, node.astext()
self.body.append(']</a>')
def visit_classifier(self, node):
raise NotImplementedError, node.astext()
self.body.append(' <span class="classifier-delimiter">:</span> ')
self.body.append(self.starttag(node, 'span', '', CLASS='classifier'))
def depart_classifier(self, node):
raise NotImplementedError, node.astext()
self.body.append('</span>')
def visit_colspec(self, node):
self.colspecs.append(node)
def depart_colspec(self, node):
pass
def write_colspecs(self):
self.body.append("%s.\n" % ('L '*len(self.colspecs)))
def visit_comment(self, node,
sub=re.compile('-(?=-)').sub):
self.body.append(self.comment(node.astext()))
raise nodes.SkipNode
def visit_contact(self, node):
self.visit_docinfo_item(node, 'contact')
def depart_contact(self, node):
self.depart_docinfo_item()
def visit_copyright(self, node):
self._docinfo['copyright'] = node.astext()
raise nodes.SkipNode
def visit_danger(self, node):
self.visit_admonition(node, 'danger')
def depart_danger(self, node):
self.depart_admonition()
def visit_date(self, node):
self._docinfo['date'] = node.astext()
raise nodes.SkipNode
def visit_decoration(self, node):
pass
def depart_decoration(self, node):
pass
def visit_definition(self, node):
self.body.append(self.defs['definition'][0])
def depart_definition(self, node):
self.body.append(self.defs['definition'][1])
def visit_definition_list(self, node):
self.indent(DEFINITION_LIST_INDENT)
def depart_definition_list(self, node):
self.dedent()
def visit_definition_list_item(self, node):
self.body.append(self.defs['definition_list_item'][0])
def depart_definition_list_item(self, node):
self.body.append(self.defs['definition_list_item'][1])
def visit_description(self, node):
self.body.append(self.defs['description'][0])
def depart_description(self, node):
self.body.append(self.defs['description'][1])
def visit_docinfo(self, node):
self._in_docinfo = 1
def depart_docinfo(self, node):
self._in_docinfo = None
# TODO nothing should be written before this
self.append_header()
def visit_docinfo_item(self, node, name):
self.body.append(self.comment('%s: ' % self.language.labels[name]))
        if not len(node):
return
if isinstance(node[0], nodes.Element):
node[0].set_class('first')
        if isinstance(node[-1], nodes.Element):
node[-1].set_class('last')
def depart_docinfo_item(self):
pass
def visit_doctest_block(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'pre', CLASS='doctest-block'))
def depart_doctest_block(self, node):
raise NotImplementedError, node.astext()
self.body.append('\n</pre>\n')
def visit_document(self, node):
self.body.append(self.comment(self.document_start).lstrip())
        # writing the header is postponed
self.header_written = 0
def depart_document(self, node):
if self._docinfo['author']:
self.body.append('\n.SH AUTHOR\n%s\n'
% self._docinfo['author'])
if self._docinfo['copyright']:
self.body.append('\n.SH COPYRIGHT\n%s\n'
% self._docinfo['copyright'])
self.body.append(
self.comment(
'Generated by docutils manpage writer on %s.\n'
% (time.strftime('%Y-%m-%d %H:%M')) ) )
def visit_emphasis(self, node):
self.body.append(self.defs['emphasis'][0])
def depart_emphasis(self, node):
self.body.append(self.defs['emphasis'][1])
def visit_entry(self, node):
        # BUG: entries have to be on one line, separated by tabs; force it.
self.context.append(len(self.body))
self._in_entry = 1
def depart_entry(self, node):
start = self.context.pop()
self._active_table.append_cell(self.body[start:])
del self.body[start:]
self._in_entry = 0
def visit_enumerated_list(self, node):
self.list_start(node)
def depart_enumerated_list(self, node):
self.list_end()
def visit_error(self, node):
self.visit_admonition(node, 'error')
def depart_error(self, node):
self.depart_admonition()
def visit_field(self, node):
#self.body.append(self.comment('visit_field'))
pass
def depart_field(self, node):
#self.body.append(self.comment('depart_field'))
pass
def visit_field_body(self, node):
#self.body.append(self.comment('visit_field_body'))
if self._in_docinfo:
self._docinfo[
self._field_name.lower().replace(" ","_")] = node.astext()
raise nodes.SkipNode
def depart_field_body(self, node):
pass
def visit_field_list(self, node):
self.indent(FIELD_LIST_INDENT)
def depart_field_list(self, node):
self.dedent('depart_field_list')
def visit_field_name(self, node):
if self._in_docinfo:
self._field_name = node.astext()
raise nodes.SkipNode
else:
self.body.append(self.defs['field_name'][0])
def depart_field_name(self, node):
self.body.append(self.defs['field_name'][1])
def visit_figure(self, node):
raise NotImplementedError, node.astext()
def depart_figure(self, node):
raise NotImplementedError, node.astext()
def visit_footer(self, node):
raise NotImplementedError, node.astext()
def depart_footer(self, node):
raise NotImplementedError, node.astext()
start = self.context.pop()
footer = (['<hr class="footer"/>\n',
self.starttag(node, 'div', CLASS='footer')]
+ self.body[start:] + ['</div>\n'])
self.body_suffix[:0] = footer
del self.body[start:]
def visit_footnote(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'table', CLASS='footnote',
frame="void", rules="none"))
self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
'<tbody valign="top">\n'
'<tr>')
self.footnote_backrefs(node)
def footnote_backrefs(self, node):
raise NotImplementedError, node.astext()
if self.settings.footnote_backlinks and node.hasattr('backrefs'):
backrefs = node['backrefs']
if len(backrefs) == 1:
self.context.append('')
self.context.append('<a class="fn-backref" href="#%s" '
'name="%s">' % (backrefs[0], node['id']))
else:
i = 1
backlinks = []
for backref in backrefs:
backlinks.append('<a class="fn-backref" href="#%s">%s</a>'
% (backref, i))
i += 1
self.context.append('<em>(%s)</em> ' % ', '.join(backlinks))
self.context.append('<a name="%s">' % node['id'])
else:
self.context.append('')
self.context.append('<a name="%s">' % node['id'])
def depart_footnote(self, node):
raise NotImplementedError, node.astext()
self.body.append('</td></tr>\n'
'</tbody>\n</table>\n')
def visit_footnote_reference(self, node):
raise NotImplementedError, node.astext()
href = ''
if node.has_key('refid'):
href = '#' + node['refid']
elif node.has_key('refname'):
href = '#' + self.document.nameids[node['refname']]
format = self.settings.footnote_references
if format == 'brackets':
suffix = '['
self.context.append(']')
elif format == 'superscript':
suffix = '<sup>'
self.context.append('</sup>')
else: # shouldn't happen
suffix = '???'
            self.context.append('???')
self.body.append(self.starttag(node, 'a', suffix, href=href,
CLASS='footnote-reference'))
def depart_footnote_reference(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.context.pop() + '</a>')
def visit_generated(self, node):
pass
def depart_generated(self, node):
pass
def visit_header(self, node):
raise NotImplementedError, node.astext()
self.context.append(len(self.body))
def depart_header(self, node):
raise NotImplementedError, node.astext()
start = self.context.pop()
self.body_prefix.append(self.starttag(node, 'div', CLASS='header'))
self.body_prefix.extend(self.body[start:])
self.body_prefix.append('<hr />\n</div>\n')
del self.body[start:]
def visit_hint(self, node):
self.visit_admonition(node, 'hint')
def depart_hint(self, node):
self.depart_admonition()
def visit_image(self, node):
raise NotImplementedError, node.astext()
atts = node.attributes.copy()
atts['src'] = atts['uri']
del atts['uri']
if not atts.has_key('alt'):
atts['alt'] = atts['src']
if isinstance(node.parent, nodes.TextElement):
self.context.append('')
else:
self.body.append('<p>')
self.context.append('</p>\n')
self.body.append(self.emptytag(node, 'img', '', **atts))
def depart_image(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.context.pop())
def visit_important(self, node):
self.visit_admonition(node, 'important')
def depart_important(self, node):
self.depart_admonition()
def visit_label(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'td', '%s[' % self.context.pop(),
CLASS='label'))
def depart_label(self, node):
raise NotImplementedError, node.astext()
self.body.append(']</a></td><td>%s' % self.context.pop())
def visit_legend(self, node):
raise NotImplementedError, node.astext()
self.body.append(self.starttag(node, 'div', CLASS='legend'))
def depart_legend(self, node):
raise NotImplementedError, node.astext()
self.body.append('</div>\n')
def visit_line_block(self, node):
self.body.append('\n')
def depart_line_block(self, node):
self.body.append('\n')
def visit_line(self, node):
pass
def depart_line(self, node):
self.body.append('\n.br\n')
def visit_list_item(self, node):
        # man 7 man recommends using ".IP" instead of ".TP"
self.body.append('\n.IP %s %d\n' % (
self._list_char[-1].next(),
self._list_char[-1].get_width(),) )
def depart_list_item(self, node):
pass
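    # Example (sketch): for a bullet list, visit_list_item emits
    #   .IP \(bu 2
    # while list_start()/list_end() above bracket the whole list with the
    # .INDENT/.UNINDENT macros defined in MACRO_DEF.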
def visit_literal(self, node):
self.body.append(self.defs['literal'][0])
def depart_literal(self, node):
self.body.append(self.defs['literal'][1])
def visit_literal_block(self, node):
self.body.append(self.defs['literal_block'][0])
def depart_literal_block(self, node):
self.body.append(self.defs['literal_block'][1])
def visit_meta(self, node):
raise NotImplementedError, node.astext()
self.head.append(self.emptytag(node, 'meta', **node.attributes))
def depart_meta(self, node):
pass
def visit_note(self, node):
self.visit_admonition(node, 'note')
def depart_note(self, node):
self.depart_admonition()
def indent(self, by=0.5):
# if we are in a section ".SH" there already is a .RS
#self.body.append('\n[[debug: listchar: %r]]\n' % map(repr, self._list_char))
#self.body.append('\n[[debug: indent %r]]\n' % self._indent)
step = self._indent[-1]
self._indent.append(by)
self.body.append(self.defs['indent'][0] % step)
def dedent(self, name=''):
#self.body.append('\n[[debug: dedent %s %r]]\n' % (name, self._indent))
self._indent.pop()
self.body.append(self.defs['indent'][1])
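    # Note (sketch): indent() emits ".INDENT" with the width pushed by the
    # *previous* call, which is why visit_block_quote above calls
    # indent(BLOCKQUOTE_INDENT) followed by indent(0) -- the second call is
    # the one that actually writes the 3.5en margin.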
def visit_option_list(self, node):
self.indent(OPTION_LIST_INDENT)
def depart_option_list(self, node):
self.dedent()
def visit_option_list_item(self, node):
# one item of the list
self.body.append(self.defs['option_list_item'][0])
def depart_option_list_item(self, node):
self.body.append(self.defs['option_list_item'][1])
def visit_option_group(self, node):
        # as one option can have several forms, it is a group
        # options without parameter: bold only, .B, -v
        # options with parameter: bold italic, .BI, -f file
        # we do not know yet whether .B or .BI
self.context.append('.B') # blind guess
self.context.append(len(self.body)) # to be able to insert later
self.context.append(0) # option counter
def depart_option_group(self, node):
self.context.pop() # the counter
start_position = self.context.pop()
text = self.body[start_position:]
del self.body[start_position:]
self.body.append('\n%s%s' % (self.context.pop(), ''.join(text)))
def visit_option(self, node):
# each form of the option will be presented separately
if self.context[-1]>0:
            self.body.append(', ')
if self.context[-3] == '.BI':
self.body.append('\\')
self.body.append(' ')
def depart_option(self, node):
self.context[-1] += 1
def visit_option_string(self, node):
# do not know if .B or .BI
pass
def depart_option_string(self, node):
pass
def visit_option_argument(self, node):
self.context[-3] = '.BI' # bold/italic alternate
if node['delimiter'] != ' ':
            self.body.append('\\fB%s ' % node['delimiter'])
elif self.body[len(self.body)-1].endswith('='):
# a blank only means no blank in output, just changing font
self.body.append(' ')
else:
# backslash blank blank
self.body.append('\\ ')
def depart_option_argument(self, node):
pass
def visit_organization(self, node):
raise NotImplementedError, node.astext()
self.visit_docinfo_item(node, 'organization')
def depart_organization(self, node):
raise NotImplementedError, node.astext()
self.depart_docinfo_item()
def visit_paragraph(self, node):
        # BUG: every paragraph but the first in a list must be indented
        # TODO: .PP or new line
return
def depart_paragraph(self, node):
# TODO .PP or an empty line
if not self._in_entry:
self.body.append('\n\n')
def visit_problematic(self, node):
self.body.append(self.defs['problematic'][0])
def depart_problematic(self, node):
self.body.append(self.defs['problematic'][1])
def visit_raw(self, node):
if node.get('format') == 'manpage':
self.body.append(node.astext())
# Keep non-manpage raw text out of output:
raise nodes.SkipNode
def visit_reference(self, node):
"""E.g. link or email address."""
self.body.append(self.defs['reference'][0])
def depart_reference(self, node):
self.body.append(self.defs['reference'][1])
def visit_revision(self, node):
self.visit_docinfo_item(node, 'revision')
def depart_revision(self, node):
self.depart_docinfo_item()
def visit_row(self, node):
self._active_table.new_row()
def depart_row(self, node):
pass
def visit_section(self, node):
self.section_level += 1
def depart_section(self, node):
self.section_level -= 1
def visit_status(self, node):
raise NotImplementedError, node.astext()
self.visit_docinfo_item(node, 'status', meta=None)
def depart_status(self, node):
self.depart_docinfo_item()
def visit_strong(self, node):
        self.body.append(self.defs['strong'][0])
def depart_strong(self, node):
self.body.append(self.defs['strong'][1])
def visit_substitution_definition(self, node):
"""Internal only."""
raise nodes.SkipNode
def visit_substitution_reference(self, node):
self.unimplemented_visit(node)
def visit_subtitle(self, node):
self._docinfo["subtitle"] = node.astext()
raise nodes.SkipNode
def visit_system_message(self, node):
# TODO add report_level
#if node['level'] < self.document.reporter['writer'].report_level:
# Level is too low to display:
# raise nodes.SkipNode
        self.body.append('\n.SH system-message\n')
attr = {}
backref_text = ''
if node.hasattr('id'):
attr['name'] = node['id']
if node.hasattr('line'):
line = ', line %s' % node['line']
else:
line = ''
        self.body.append('System Message: %s/%s (%s%s)\n'
                % (node['type'], node['level'], node['source'], line))
def depart_system_message(self, node):
self.body.append('\n')
def visit_table(self, node):
self._active_table = Table()
def depart_table(self, node):
self.body.append(self._active_table.astext())
self._active_table = None
def visit_target(self, node):
self.body.append(self.comment('visit_target'))
#self.body.append(self.defs['target'][0])
#self.body.append(node['refuri'])
def depart_target(self, node):
self.body.append(self.comment('depart_target'))
#self.body.append(self.defs['target'][1])
def visit_tbody(self, node):
pass
def depart_tbody(self, node):
pass
def visit_term(self, node):
self.body.append(self.defs['term'][0])
def depart_term(self, node):
self.body.append(self.defs['term'][1])
def visit_tgroup(self, node):
pass
def depart_tgroup(self, node):
pass
def visit_thead(self, node):
raise NotImplementedError, node.astext()
self.write_colspecs()
self.body.append(self.context.pop()) # '</colgroup>\n'
# There may or may not be a <thead>; this is for <tbody> to use:
self.context.append('')
self.body.append(self.starttag(node, 'thead', valign='bottom'))
def depart_thead(self, node):
raise NotImplementedError, node.astext()
self.body.append('</thead>\n')
def visit_tip(self, node):
self.visit_admonition(node, 'tip')
def depart_tip(self, node):
self.depart_admonition()
def visit_title(self, node):
if isinstance(node.parent, nodes.topic):
self.body.append(self.comment('topic-title'))
elif isinstance(node.parent, nodes.sidebar):
self.body.append(self.comment('sidebar-title'))
elif isinstance(node.parent, nodes.admonition):
self.body.append(self.comment('admonition-title'))
elif self.section_level == 0:
# document title for .TH
self._docinfo['title'] = node.astext()
raise nodes.SkipNode
elif self.section_level == 1:
self.body.append('\n.SH ')
else:
self.body.append('\n.SS ')
def depart_title(self, node):
self.body.append('\n')
def visit_title_reference(self, node):
"""inline citation reference"""
self.body.append(self.defs['title_reference'][0])
def depart_title_reference(self, node):
self.body.append(self.defs['title_reference'][1])
def visit_topic(self, node):
self.body.append(self.comment('topic: '+node.astext()))
raise nodes.SkipNode
##self.topic_class = node.get('class')
def depart_topic(self, node):
##self.topic_class = ''
pass
def visit_transition(self, node):
# .PP Begin a new paragraph and reset prevailing indent.
# .sp N leaves N lines of blank space.
# .ce centers the next line
self.body.append('\n.sp\n.ce\n----\n')
def depart_transition(self, node):
self.body.append('\n.ce 0\n.sp\n')
def visit_version(self, node):
self._docinfo["version"] = node.astext()
raise nodes.SkipNode
def visit_warning(self, node):
self.visit_admonition(node, 'warning')
def depart_warning(self, node):
self.depart_admonition()
def unimplemented_visit(self, node):
raise NotImplementedError('visiting unimplemented node type: %s'
% node.__class__.__name__)
# vim: set et ts=4 ai :
description = ("Generates plain man. " + default_description)
publish_cmdline(writer=Writer(), description=description)
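# Example invocation (sketch; publish_cmdline provides the standard docutils
# command-line front end, so source and destination are positional):
#   python rst2man.py document.txt document.1
#   man ./document.1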
|
intuivo/nautilus-skype
|
rst2man.py
|
Python
|
gpl-3.0
| 34,803
|
[
"VisIt"
] |
e746db7c594ad2e305a59f14ea45d82fb58acbb8e259ae6a8cb41b5666bc7c6b
|
# this program corresponds to special.py
### Means test is not done yet
# E Means test is giving error (E)
# F Means test is failing (F)
# EF Means test is giving error and Failing
#! Means test is segfaulting
# 8 Means test runs forever
### test_besselpoly
### test_mathieu_a
### test_mathieu_even_coef
### test_mathieu_odd_coef
### test_modfresnelp
### test_modfresnelm
# test_pbdv_seq
### test_pbvv_seq
### test_sph_harm
import itertools
import platform
import sys
import numpy as np
from numpy import (array, isnan, r_, arange, finfo, pi, sin, cos, tan, exp,
log, zeros, sqrt, asarray, inf, nan_to_num, real, arctan, float_)
import pytest
from pytest import raises as assert_raises
from numpy.testing import (assert_equal, assert_almost_equal,
assert_array_equal, assert_array_almost_equal, assert_approx_equal,
assert_, assert_allclose, assert_array_almost_equal_nulp,
suppress_warnings)
from scipy import special
import scipy.special._ufuncs as cephes
from scipy.special import ellipk
from scipy.special._testutils import with_special_errors, \
assert_func_equal, FuncData
import math
class TestCephes:
def test_airy(self):
cephes.airy(0)
def test_airye(self):
cephes.airye(0)
def test_binom(self):
n = np.array([0.264, 4, 5.2, 17])
k = np.array([2, 0.4, 7, 3.3])
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
rknown = np.array([[-0.097152, 0.9263051596159367, 0.01858423645695389,
-0.007581020651518199],[6, 2.0214389119675666, 0, 2.9827344527963846],
[10.92, 2.22993515861399, -0.00585728, 10.468891352063146],
[136, 3.5252179590758828, 19448, 1024.5526916174495]])
assert_func_equal(cephes.binom, rknown.ravel(), nk, rtol=1e-13)
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.arange(-7, 30), 1000*np.random.rand(30) - 500]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_2(self):
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.logspace(1, 300, 20)]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_exact(self):
@np.vectorize
def binom_int(n, k):
n = int(n)
k = int(k)
num = int(1)
den = int(1)
for i in range(1, k+1):
num *= i + n - k
den *= i
return float(num/den)
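        # (binom_int above evaluates C(n, k) = prod_{i=1..k} (n-k+i)/i
        # exactly in integer arithmetic before the final float cast.)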
np.random.seed(1234)
n = np.arange(1, 15)
k = np.arange(0, 15)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
nk = nk[nk[:,0] >= nk[:,1]]
assert_func_equal(cephes.binom,
binom_int(nk[:,0], nk[:,1]),
nk,
atol=0, rtol=0)
def test_binom_nooverflow_8346(self):
        # Test that binom(n, k) doesn't overflow prematurely.
dataset = [
(1000, 500, 2.70288240945436551e+299),
(1002, 501, 1.08007396880791225e+300),
(1004, 502, 4.31599279169058121e+300),
(1006, 503, 1.72468101616263781e+301),
(1008, 504, 6.89188009236419153e+301),
(1010, 505, 2.75402257948335448e+302),
(1012, 506, 1.10052048531923757e+303),
(1014, 507, 4.39774063758732849e+303),
(1016, 508, 1.75736486108312519e+304),
(1018, 509, 7.02255427788423734e+304),
(1020, 510, 2.80626776829962255e+305),
(1022, 511, 1.12140876377061240e+306),
(1024, 512, 4.48125455209897109e+306),
(1026, 513, 1.79075474304149900e+307),
(1028, 514, 7.15605105487789676e+307)
]
dataset = np.asarray(dataset)
FuncData(cephes.binom, dataset, (0, 1), 2, rtol=1e-12).check()
def test_bdtr(self):
assert_equal(cephes.bdtr(1,1,0.5),1.0)
def test_bdtri(self):
assert_equal(cephes.bdtri(1,3,0.5),0.5)
def test_bdtrc(self):
assert_equal(cephes.bdtrc(1,3,0.5),0.5)
def test_bdtrin(self):
assert_equal(cephes.bdtrin(1,0,1),5.0)
def test_bdtrik(self):
cephes.bdtrik(1,3,0.5)
def test_bei(self):
assert_equal(cephes.bei(0),0.0)
def test_beip(self):
assert_equal(cephes.beip(0),0.0)
def test_ber(self):
assert_equal(cephes.ber(0),1.0)
def test_berp(self):
assert_equal(cephes.berp(0),0.0)
def test_besselpoly(self):
assert_equal(cephes.besselpoly(0,0,0),1.0)
def test_beta(self):
assert_equal(cephes.beta(1,1),1.0)
assert_allclose(cephes.beta(-100.3, 1e-200), cephes.gamma(1e-200))
assert_allclose(cephes.beta(0.0342, 171), 24.070498359873497,
rtol=1e-13, atol=0)
def test_betainc(self):
assert_equal(cephes.betainc(1,1,1),1.0)
assert_allclose(cephes.betainc(0.0342, 171, 1e-10), 0.55269916901806648)
def test_betaln(self):
assert_equal(cephes.betaln(1,1),0.0)
assert_allclose(cephes.betaln(-100.3, 1e-200), cephes.gammaln(1e-200))
assert_allclose(cephes.betaln(0.0342, 170), 3.1811881124242447,
rtol=1e-14, atol=0)
def test_betaincinv(self):
assert_equal(cephes.betaincinv(1,1,1),1.0)
assert_allclose(cephes.betaincinv(0.0342, 171, 0.25),
8.4231316935498957e-21, rtol=3e-12, atol=0)
def test_beta_inf(self):
assert_(np.isinf(special.beta(-1, 2)))
def test_btdtr(self):
assert_equal(cephes.btdtr(1,1,1),1.0)
def test_btdtri(self):
assert_equal(cephes.btdtri(1,1,1),1.0)
def test_btdtria(self):
assert_equal(cephes.btdtria(1,1,1),5.0)
def test_btdtrib(self):
assert_equal(cephes.btdtrib(1,1,1),5.0)
def test_cbrt(self):
assert_approx_equal(cephes.cbrt(1),1.0)
def test_chdtr(self):
assert_equal(cephes.chdtr(1,0),0.0)
def test_chdtrc(self):
assert_equal(cephes.chdtrc(1,0),1.0)
def test_chdtri(self):
assert_equal(cephes.chdtri(1,1),0.0)
def test_chdtriv(self):
assert_equal(cephes.chdtriv(0,0),5.0)
def test_chndtr(self):
assert_equal(cephes.chndtr(0,1,0),0.0)
# Each row holds (x, nu, lam, expected_value)
# These values were computed using Wolfram Alpha with
# CDF[NoncentralChiSquareDistribution[nu, lam], x]
values = np.array([
[25.00, 20.0, 400, 4.1210655112396197139e-57],
[25.00, 8.00, 250, 2.3988026526832425878e-29],
[0.001, 8.00, 40., 5.3761806201366039084e-24],
[0.010, 8.00, 40., 5.45396231055999457039e-20],
[20.00, 2.00, 107, 1.39390743555819597802e-9],
[22.50, 2.00, 107, 7.11803307138105870671e-9],
[25.00, 2.00, 107, 3.11041244829864897313e-8],
[3.000, 2.00, 1.0, 0.62064365321954362734],
[350.0, 300., 10., 0.93880128006276407710],
[100.0, 13.5, 10., 0.99999999650104210949],
[700.0, 20.0, 400, 0.99999999925680650105],
[150.0, 13.5, 10., 0.99999999999999983046],
[160.0, 13.5, 10., 0.99999999999999999518], # 1.0
])
cdf = cephes.chndtr(values[:, 0], values[:, 1], values[:, 2])
assert_allclose(cdf, values[:, 3], rtol=1e-12)
assert_almost_equal(cephes.chndtr(np.inf, np.inf, 0), 2.0)
assert_almost_equal(cephes.chndtr(2, 1, np.inf), 0.0)
assert_(np.isnan(cephes.chndtr(np.nan, 1, 2)))
assert_(np.isnan(cephes.chndtr(5, np.nan, 2)))
assert_(np.isnan(cephes.chndtr(5, 1, np.nan)))
def test_chndtridf(self):
assert_equal(cephes.chndtridf(0,0,1),5.0)
def test_chndtrinc(self):
assert_equal(cephes.chndtrinc(0,1,0),5.0)
def test_chndtrix(self):
assert_equal(cephes.chndtrix(0,1,0),0.0)
def test_cosdg(self):
assert_equal(cephes.cosdg(0),1.0)
def test_cosm1(self):
assert_equal(cephes.cosm1(0),0.0)
def test_cotdg(self):
assert_almost_equal(cephes.cotdg(45),1.0)
def test_dawsn(self):
assert_equal(cephes.dawsn(0),0.0)
assert_allclose(cephes.dawsn(1.23), 0.50053727749081767)
def test_diric(self):
# Test behavior near multiples of 2pi. Regression test for issue
# described in gh-4001.
n_odd = [1, 5, 25]
x = np.array(2*np.pi + 5e-5).astype(np.float32)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=7)
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
x = np.array(2*np.pi + 1e-15).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
if hasattr(np, 'float128'):
# No float128 available in 32-bit numpy
x = np.array(2*np.pi + 1e-12).astype(np.float128)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=19)
n_even = [2, 4, 24]
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_even), -1.0, decimal=15)
# Test at some values not near a multiple of pi
x = np.arange(0.2*np.pi, 1.0*np.pi, 0.2*np.pi)
octave_result = [0.872677996249965, 0.539344662916632,
0.127322003750035, -0.206011329583298]
assert_almost_equal(special.diric(x, 3), octave_result, decimal=15)
def test_diric_broadcasting(self):
x = np.arange(5)
n = np.array([1, 3, 7])
assert_(special.diric(x[:, np.newaxis], n).shape == (x.size, n.size))
def test_ellipe(self):
assert_equal(cephes.ellipe(1),1.0)
def test_ellipeinc(self):
assert_equal(cephes.ellipeinc(0,1),0.0)
def test_ellipj(self):
cephes.ellipj(0,1)
def test_ellipk(self):
assert_allclose(ellipk(0), pi/2)
def test_ellipkinc(self):
assert_equal(cephes.ellipkinc(0,0),0.0)
def test_erf(self):
assert_equal(cephes.erf(0), 0.0)
def test_erf_symmetry(self):
x = 5.905732037710919
assert_equal(cephes.erf(x) + cephes.erf(-x), 0.0)
def test_erfc(self):
assert_equal(cephes.erfc(0), 1.0)
def test_exp10(self):
assert_approx_equal(cephes.exp10(2),100.0)
def test_exp2(self):
assert_equal(cephes.exp2(2),4.0)
def test_expm1(self):
assert_equal(cephes.expm1(0),0.0)
assert_equal(cephes.expm1(np.inf), np.inf)
assert_equal(cephes.expm1(-np.inf), -1)
assert_equal(cephes.expm1(np.nan), np.nan)
def test_expm1_complex(self):
expm1 = cephes.expm1
assert_equal(expm1(0 + 0j), 0 + 0j)
assert_equal(expm1(complex(np.inf, 0)), complex(np.inf, 0))
assert_equal(expm1(complex(np.inf, 1)), complex(np.inf, np.inf))
assert_equal(expm1(complex(np.inf, 2)), complex(-np.inf, np.inf))
assert_equal(expm1(complex(np.inf, 4)), complex(-np.inf, -np.inf))
assert_equal(expm1(complex(np.inf, 5)), complex(np.inf, -np.inf))
assert_equal(expm1(complex(1, np.inf)), complex(np.nan, np.nan))
assert_equal(expm1(complex(0, np.inf)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.inf, np.inf)), complex(np.inf, np.nan))
assert_equal(expm1(complex(-np.inf, np.inf)), complex(-1, 0))
assert_equal(expm1(complex(-np.inf, np.nan)), complex(-1, 0))
assert_equal(expm1(complex(np.inf, np.nan)), complex(np.inf, np.nan))
assert_equal(expm1(complex(0, np.nan)), complex(np.nan, np.nan))
assert_equal(expm1(complex(1, np.nan)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.nan, 1)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.nan, np.nan)), complex(np.nan, np.nan))
    @pytest.mark.xfail(reason='The real part of expm1(z) is bad at these points')
def test_expm1_complex_hard(self):
# The real part of this function is difficult to evaluate when
# z.real = -log(cos(z.imag)).
y = np.array([0.1, 0.2, 0.3, 5, 11, 20])
x = -np.log(np.cos(y))
z = x + 1j*y
# evaluate using mpmath.expm1 with dps=1000
expected = np.array([-5.5507901846769623e-17+0.10033467208545054j,
2.4289354732893695e-18+0.20271003550867248j,
4.5235500262585768e-17+0.30933624960962319j,
7.8234305217489006e-17-3.3805150062465863j,
-1.3685191953697676e-16-225.95084645419513j,
8.7175620481291045e-17+2.2371609442247422j])
found = cephes.expm1(z)
# this passes.
assert_array_almost_equal_nulp(found.imag, expected.imag, 3)
# this fails.
assert_array_almost_equal_nulp(found.real, expected.real, 20)
def test_fdtr(self):
assert_equal(cephes.fdtr(1, 1, 0), 0.0)
# Computed using Wolfram Alpha: CDF[FRatioDistribution[1e-6, 5], 10]
assert_allclose(cephes.fdtr(1e-6, 5, 10), 0.9999940790193488,
rtol=1e-12)
def test_fdtrc(self):
assert_equal(cephes.fdtrc(1, 1, 0), 1.0)
# Computed using Wolfram Alpha:
# 1 - CDF[FRatioDistribution[2, 1/10], 1e10]
assert_allclose(cephes.fdtrc(2, 0.1, 1e10), 0.27223784621293512,
rtol=1e-12)
def test_fdtri(self):
assert_allclose(cephes.fdtri(1, 1, [0.499, 0.501]),
array([0.9937365, 1.00630298]), rtol=1e-6)
# From Wolfram Alpha:
# CDF[FRatioDistribution[1/10, 1], 3] = 0.8756751669632105666874...
p = 0.8756751669632105666874
assert_allclose(cephes.fdtri(0.1, 1, p), 3, rtol=1e-12)
@pytest.mark.xfail(reason='Returns nan on i686.')
def test_fdtri_mysterious_failure(self):
assert_allclose(cephes.fdtri(1, 1, 0.5), 1)
def test_fdtridfd(self):
assert_equal(cephes.fdtridfd(1,0,0),5.0)
def test_fresnel(self):
assert_equal(cephes.fresnel(0),(0.0,0.0))
def test_gamma(self):
assert_equal(cephes.gamma(5),24.0)
def test_gammainccinv(self):
assert_equal(cephes.gammainccinv(5,1),0.0)
def test_gammaln(self):
cephes.gammaln(10)
def test_gammasgn(self):
vals = np.array([-4, -3.5, -2.3, 1, 4.2], np.float64)
assert_array_equal(cephes.gammasgn(vals), np.sign(cephes.rgamma(vals)))
def test_gdtr(self):
assert_equal(cephes.gdtr(1,1,0),0.0)
def test_gdtr_inf(self):
assert_equal(cephes.gdtr(1,1,np.inf),1.0)
def test_gdtrc(self):
assert_equal(cephes.gdtrc(1,1,0),1.0)
def test_gdtria(self):
assert_equal(cephes.gdtria(0,1,1),0.0)
def test_gdtrib(self):
cephes.gdtrib(1,0,1)
# assert_equal(cephes.gdtrib(1,0,1),5.0)
def test_gdtrix(self):
cephes.gdtrix(1,1,.1)
def test_hankel1(self):
cephes.hankel1(1,1)
def test_hankel1e(self):
cephes.hankel1e(1,1)
def test_hankel2(self):
cephes.hankel2(1,1)
def test_hankel2e(self):
cephes.hankel2e(1,1)
def test_hyp1f1(self):
assert_approx_equal(cephes.hyp1f1(1,1,1), exp(1.0))
assert_approx_equal(cephes.hyp1f1(3,4,-6), 0.026056422099537251095)
cephes.hyp1f1(1,1,1)
def test_hyp2f1(self):
assert_equal(cephes.hyp2f1(1,1,1,0),1.0)
def test_i0(self):
assert_equal(cephes.i0(0),1.0)
def test_i0e(self):
assert_equal(cephes.i0e(0),1.0)
def test_i1(self):
assert_equal(cephes.i1(0),0.0)
def test_i1e(self):
assert_equal(cephes.i1e(0),0.0)
def test_it2i0k0(self):
cephes.it2i0k0(1)
def test_it2j0y0(self):
cephes.it2j0y0(1)
def test_it2struve0(self):
cephes.it2struve0(1)
def test_itairy(self):
cephes.itairy(1)
def test_iti0k0(self):
assert_equal(cephes.iti0k0(0),(0.0,0.0))
def test_itj0y0(self):
assert_equal(cephes.itj0y0(0),(0.0,0.0))
def test_itmodstruve0(self):
assert_equal(cephes.itmodstruve0(0),0.0)
def test_itstruve0(self):
assert_equal(cephes.itstruve0(0),0.0)
def test_iv(self):
assert_equal(cephes.iv(1,0),0.0)
def _check_ive(self):
assert_equal(cephes.ive(1,0),0.0)
def test_j0(self):
assert_equal(cephes.j0(0),1.0)
def test_j1(self):
assert_equal(cephes.j1(0),0.0)
def test_jn(self):
assert_equal(cephes.jn(0,0),1.0)
def test_jv(self):
assert_equal(cephes.jv(0,0),1.0)
def _check_jve(self):
assert_equal(cephes.jve(0,0),1.0)
def test_k0(self):
cephes.k0(2)
def test_k0e(self):
cephes.k0e(2)
def test_k1(self):
cephes.k1(2)
def test_k1e(self):
cephes.k1e(2)
def test_kei(self):
cephes.kei(2)
def test_keip(self):
assert_equal(cephes.keip(0),0.0)
def test_ker(self):
cephes.ker(2)
def test_kerp(self):
cephes.kerp(2)
def _check_kelvin(self):
cephes.kelvin(2)
def test_kn(self):
cephes.kn(1,1)
def test_kolmogi(self):
assert_equal(cephes.kolmogi(1),0.0)
assert_(np.isnan(cephes.kolmogi(np.nan)))
def test_kolmogorov(self):
assert_equal(cephes.kolmogorov(0), 1.0)
def test_kolmogp(self):
assert_equal(cephes._kolmogp(0), -0.0)
def test_kolmogc(self):
assert_equal(cephes._kolmogc(0), 0.0)
def test_kolmogci(self):
assert_equal(cephes._kolmogci(0), 0.0)
assert_(np.isnan(cephes._kolmogci(np.nan)))
def _check_kv(self):
cephes.kv(1,1)
def _check_kve(self):
cephes.kve(1,1)
def test_log1p(self):
log1p = cephes.log1p
assert_equal(log1p(0), 0.0)
assert_equal(log1p(-1), -np.inf)
assert_equal(log1p(-2), np.nan)
assert_equal(log1p(np.inf), np.inf)
def test_log1p_complex(self):
log1p = cephes.log1p
c = complex
assert_equal(log1p(0 + 0j), 0 + 0j)
assert_equal(log1p(c(-1, 0)), c(-np.inf, 0))
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "invalid value encountered in multiply")
assert_allclose(log1p(c(1, np.inf)), c(np.inf, np.pi/2))
assert_equal(log1p(c(1, np.nan)), c(np.nan, np.nan))
assert_allclose(log1p(c(-np.inf, 1)), c(np.inf, np.pi))
assert_equal(log1p(c(np.inf, 1)), c(np.inf, 0))
assert_allclose(log1p(c(-np.inf, np.inf)), c(np.inf, 3*np.pi/4))
assert_allclose(log1p(c(np.inf, np.inf)), c(np.inf, np.pi/4))
assert_equal(log1p(c(np.inf, np.nan)), c(np.inf, np.nan))
assert_equal(log1p(c(-np.inf, np.nan)), c(np.inf, np.nan))
assert_equal(log1p(c(np.nan, np.inf)), c(np.inf, np.nan))
assert_equal(log1p(c(np.nan, 1)), c(np.nan, np.nan))
assert_equal(log1p(c(np.nan, np.nan)), c(np.nan, np.nan))
def test_lpmv(self):
assert_equal(cephes.lpmv(0,0,1),1.0)
def test_mathieu_a(self):
assert_equal(cephes.mathieu_a(1,0),1.0)
def test_mathieu_b(self):
assert_equal(cephes.mathieu_b(1,0),1.0)
def test_mathieu_cem(self):
assert_equal(cephes.mathieu_cem(1,0,0),(1.0,0.0))
# Test AMS 20.2.27
@np.vectorize
def ce_smallq(m, q, z):
z *= np.pi/180
if m == 0:
return 2**(-0.5) * (1 - .5*q*cos(2*z)) # + O(q^2)
elif m == 1:
return cos(z) - q/8 * cos(3*z) # + O(q^2)
elif m == 2:
return cos(2*z) - q*(cos(4*z)/12 - 1/4) # + O(q^2)
else:
return cos(m*z) - q*(cos((m+2)*z)/(4*(m+1)) - cos((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(0, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_cem(m[:,None], q[None,:], 0.123)[0],
ce_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_sem(self):
assert_equal(cephes.mathieu_sem(1,0,0),(0.0,1.0))
# Test AMS 20.2.27
@np.vectorize
def se_smallq(m, q, z):
z *= np.pi/180
if m == 1:
return sin(z) - q/8 * sin(3*z) # + O(q^2)
elif m == 2:
return sin(2*z) - q*sin(4*z)/12 # + O(q^2)
else:
return sin(m*z) - q*(sin((m+2)*z)/(4*(m+1)) - sin((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(1, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_sem(m[:,None], q[None,:], 0.123)[0],
se_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_modcem1(self):
assert_equal(cephes.mathieu_modcem1(1,0,0),(0.0,0.0))
def test_mathieu_modcem2(self):
cephes.mathieu_modcem2(1,1,1)
# Test reflection relation AMS 20.6.19
m = np.arange(0, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modcem2(m, q, -z)[0]
fr = -cephes.mathieu_modcem2(m, q, 0)[0] / cephes.mathieu_modcem1(m, q, 0)[0]
y2 = -cephes.mathieu_modcem2(m, q, z)[0] - 2*fr*cephes.mathieu_modcem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_modsem1(self):
assert_equal(cephes.mathieu_modsem1(1,0,0),(0.0,0.0))
def test_mathieu_modsem2(self):
cephes.mathieu_modsem2(1,1,1)
# Test reflection relation AMS 20.6.20
m = np.arange(1, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modsem2(m, q, -z)[0]
fr = cephes.mathieu_modsem2(m, q, 0)[1] / cephes.mathieu_modsem1(m, q, 0)[1]
y2 = cephes.mathieu_modsem2(m, q, z)[0] - 2*fr*cephes.mathieu_modsem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_overflow(self):
# Check that these return NaNs instead of causing a SEGV
assert_equal(cephes.mathieu_cem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_cem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem2(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem2(10000, 1.5, 1.3), (np.nan, np.nan))
def test_mathieu_ticket_1847(self):
# Regression test --- this call had some out-of-bounds access
# and could return nan occasionally
for k in range(60):
v = cephes.mathieu_modsem2(2, 100, -1)
# Values from ACM TOMS 804 (derivate by numerical differentiation)
assert_allclose(v[0], 0.1431742913063671074347, rtol=1e-10)
assert_allclose(v[1], 0.9017807375832909144719, rtol=1e-4)
def test_modfresnelm(self):
cephes.modfresnelm(0)
def test_modfresnelp(self):
cephes.modfresnelp(0)
def _check_modstruve(self):
assert_equal(cephes.modstruve(1,0),0.0)
def test_nbdtr(self):
assert_equal(cephes.nbdtr(1,1,1),1.0)
def test_nbdtrc(self):
assert_equal(cephes.nbdtrc(1,1,1),0.0)
def test_nbdtri(self):
assert_equal(cephes.nbdtri(1,1,1),1.0)
def __check_nbdtrik(self):
cephes.nbdtrik(1,.4,.5)
def test_nbdtrin(self):
assert_equal(cephes.nbdtrin(1,0,0),5.0)
def test_ncfdtr(self):
assert_equal(cephes.ncfdtr(1,1,1,0),0.0)
def test_ncfdtri(self):
assert_equal(cephes.ncfdtri(1, 1, 1, 0), 0.0)
f = [0.5, 1, 1.5]
p = cephes.ncfdtr(2, 3, 1.5, f)
assert_allclose(cephes.ncfdtri(2, 3, 1.5, p), f)
def test_ncfdtridfd(self):
dfd = [1, 2, 3]
p = cephes.ncfdtr(2, dfd, 0.25, 15)
assert_allclose(cephes.ncfdtridfd(2, p, 0.25, 15), dfd)
def test_ncfdtridfn(self):
dfn = [0.1, 1, 2, 3, 1e4]
p = cephes.ncfdtr(dfn, 2, 0.25, 15)
assert_allclose(cephes.ncfdtridfn(p, 2, 0.25, 15), dfn, rtol=1e-5)
def test_ncfdtrinc(self):
nc = [0.5, 1.5, 2.0]
p = cephes.ncfdtr(2, 3, nc, 15)
assert_allclose(cephes.ncfdtrinc(2, 3, p, 15), nc)
def test_nctdtr(self):
assert_equal(cephes.nctdtr(1,0,0),0.5)
assert_equal(cephes.nctdtr(9, 65536, 45), 0.0)
assert_approx_equal(cephes.nctdtr(np.inf, 1., 1.), 0.5, 5)
assert_(np.isnan(cephes.nctdtr(2., np.inf, 10.)))
assert_approx_equal(cephes.nctdtr(2., 1., np.inf), 1.)
assert_(np.isnan(cephes.nctdtr(np.nan, 1., 1.)))
assert_(np.isnan(cephes.nctdtr(2., np.nan, 1.)))
assert_(np.isnan(cephes.nctdtr(2., 1., np.nan)))
def __check_nctdtridf(self):
cephes.nctdtridf(1,0.5,0)
def test_nctdtrinc(self):
cephes.nctdtrinc(1,0,0)
def test_nctdtrit(self):
cephes.nctdtrit(.1,0.2,.5)
def test_nrdtrimn(self):
assert_approx_equal(cephes.nrdtrimn(0.5,1,1),1.0)
def test_nrdtrisd(self):
assert_allclose(cephes.nrdtrisd(0.5,0.5,0.5), 0.0,
atol=0, rtol=0)
def test_obl_ang1(self):
cephes.obl_ang1(1,1,1,0)
def test_obl_ang1_cv(self):
result = cephes.obl_ang1_cv(1,1,1,1,0)
assert_almost_equal(result[0],1.0)
assert_almost_equal(result[1],0.0)
def _check_obl_cv(self):
assert_equal(cephes.obl_cv(1,1,0),2.0)
def test_obl_rad1(self):
cephes.obl_rad1(1,1,1,0)
def test_obl_rad1_cv(self):
cephes.obl_rad1_cv(1,1,1,1,0)
def test_obl_rad2(self):
cephes.obl_rad2(1,1,1,0)
def test_obl_rad2_cv(self):
cephes.obl_rad2_cv(1,1,1,1,0)
def test_pbdv(self):
assert_equal(cephes.pbdv(1,0),(0.0,1.0))
def test_pbvv(self):
cephes.pbvv(1,0)
def test_pbwa(self):
cephes.pbwa(1,0)
def test_pdtr(self):
val = cephes.pdtr(0, 1)
assert_almost_equal(val, np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtr([0, 1, 2], 0)
assert_array_equal(val, [1, 1, 1])
def test_pdtrc(self):
val = cephes.pdtrc(0, 1)
assert_almost_equal(val, 1 - np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtrc([0, 1, 2], 0.0)
assert_array_equal(val, [0, 0, 0])
def test_pdtri(self):
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "floating point number truncated to an integer")
cephes.pdtri(0.5,0.5)
def test_pdtrik(self):
k = cephes.pdtrik(0.5, 1)
assert_almost_equal(cephes.gammaincc(k + 1, 1), 0.5)
# Edge case: m = 0 or very small.
k = cephes.pdtrik([[0], [0.25], [0.95]], [0, 1e-20, 1e-6])
assert_array_equal(k, np.zeros((3, 3)))
def test_pro_ang1(self):
cephes.pro_ang1(1,1,1,0)
def test_pro_ang1_cv(self):
assert_array_almost_equal(cephes.pro_ang1_cv(1,1,1,1,0),
array((1.0,0.0)))
def _check_pro_cv(self):
assert_equal(cephes.pro_cv(1,1,0),2.0)
def test_pro_rad1(self):
cephes.pro_rad1(1,1,1,0.1)
def test_pro_rad1_cv(self):
cephes.pro_rad1_cv(1,1,1,1,0)
def test_pro_rad2(self):
cephes.pro_rad2(1,1,1,0)
def test_pro_rad2_cv(self):
cephes.pro_rad2_cv(1,1,1,1,0)
def test_psi(self):
cephes.psi(1)
def test_radian(self):
assert_equal(cephes.radian(0,0,0),0)
def test_rgamma(self):
assert_equal(cephes.rgamma(1),1.0)
def test_round(self):
assert_equal(cephes.round(3.4),3.0)
assert_equal(cephes.round(-3.4),-3.0)
assert_equal(cephes.round(3.6),4.0)
assert_equal(cephes.round(-3.6),-4.0)
assert_equal(cephes.round(3.5),4.0)
assert_equal(cephes.round(-3.5),-4.0)
def test_shichi(self):
cephes.shichi(1)
def test_sici(self):
cephes.sici(1)
s, c = cephes.sici(np.inf)
assert_almost_equal(s, np.pi * 0.5)
assert_almost_equal(c, 0)
s, c = cephes.sici(-np.inf)
assert_almost_equal(s, -np.pi * 0.5)
assert_(np.isnan(c), "cosine integral(-inf) is not nan")
def test_sindg(self):
assert_equal(cephes.sindg(90),1.0)
def test_smirnov(self):
assert_equal(cephes.smirnov(1,.1),0.9)
assert_(np.isnan(cephes.smirnov(1,np.nan)))
def test_smirnovp(self):
assert_equal(cephes._smirnovp(1, .1), -1)
assert_equal(cephes._smirnovp(2, 0.75), -2*(0.25)**(2-1))
assert_equal(cephes._smirnovp(3, 0.75), -3*(0.25)**(3-1))
assert_(np.isnan(cephes._smirnovp(1, np.nan)))
def test_smirnovc(self):
assert_equal(cephes._smirnovc(1,.1),0.1)
assert_(np.isnan(cephes._smirnovc(1,np.nan)))
x10 = np.linspace(0, 1, 11, endpoint=True)
assert_almost_equal(cephes._smirnovc(3, x10), 1-cephes.smirnov(3, x10))
x4 = np.linspace(0, 1, 5, endpoint=True)
assert_almost_equal(cephes._smirnovc(4, x4), 1-cephes.smirnov(4, x4))
def test_smirnovi(self):
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.4)),0.4)
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.6)),0.6)
assert_(np.isnan(cephes.smirnovi(1,np.nan)))
def test_smirnovci(self):
assert_almost_equal(cephes._smirnovc(1,cephes._smirnovci(1,0.4)),0.4)
assert_almost_equal(cephes._smirnovc(1,cephes._smirnovci(1,0.6)),0.6)
assert_(np.isnan(cephes._smirnovci(1,np.nan)))
def test_spence(self):
assert_equal(cephes.spence(1),0.0)
def test_stdtr(self):
assert_equal(cephes.stdtr(1,0),0.5)
assert_almost_equal(cephes.stdtr(1,1), 0.75)
assert_almost_equal(cephes.stdtr(1,2), 0.852416382349)
def test_stdtridf(self):
cephes.stdtridf(0.7,1)
def test_stdtrit(self):
cephes.stdtrit(1,0.7)
def test_struve(self):
assert_equal(cephes.struve(0,0),0.0)
def test_tandg(self):
assert_equal(cephes.tandg(45),1.0)
def test_tklmbda(self):
assert_almost_equal(cephes.tklmbda(1,1),1.0)
def test_y0(self):
cephes.y0(1)
def test_y1(self):
cephes.y1(1)
def test_yn(self):
cephes.yn(1,1)
def test_yv(self):
cephes.yv(1,1)
def _check_yve(self):
cephes.yve(1,1)
def test_wofz(self):
z = [complex(624.2,-0.26123), complex(-0.4,3.), complex(0.6,2.),
complex(-1.,1.), complex(-1.,-9.), complex(-1.,9.),
complex(-0.0000000234545,1.1234), complex(-3.,5.1),
complex(-53,30.1), complex(0.0,0.12345),
complex(11,1), complex(-22,-2), complex(9,-28),
complex(21,-33), complex(1e5,1e5), complex(1e14,1e14)
]
w = [
complex(-3.78270245518980507452677445620103199303131110e-7,
0.000903861276433172057331093754199933411710053155),
complex(0.1764906227004816847297495349730234591778719532788,
-0.02146550539468457616788719893991501311573031095617),
complex(0.2410250715772692146133539023007113781272362309451,
0.06087579663428089745895459735240964093522265589350),
complex(0.30474420525691259245713884106959496013413834051768,
-0.20821893820283162728743734725471561394145872072738),
complex(7.317131068972378096865595229600561710140617977e34,
8.321873499714402777186848353320412813066170427e34),
complex(0.0615698507236323685519612934241429530190806818395,
-0.00676005783716575013073036218018565206070072304635),
complex(0.3960793007699874918961319170187598400134746631,
-5.593152259116644920546186222529802777409274656e-9),
complex(0.08217199226739447943295069917990417630675021771804,
-0.04701291087643609891018366143118110965272615832184),
complex(0.00457246000350281640952328010227885008541748668738,
-0.00804900791411691821818731763401840373998654987934),
complex(0.8746342859608052666092782112565360755791467973338452,
0.),
complex(0.00468190164965444174367477874864366058339647648741,
0.0510735563901306197993676329845149741675029197050),
complex(-0.0023193175200187620902125853834909543869428763219,
-0.025460054739731556004902057663500272721780776336),
complex(9.11463368405637174660562096516414499772662584e304,
3.97101807145263333769664875189354358563218932e305),
complex(-4.4927207857715598976165541011143706155432296e281,
-2.8019591213423077494444700357168707775769028e281),
complex(2.820947917809305132678577516325951485807107151e-6,
2.820947917668257736791638444590253942253354058e-6),
complex(2.82094791773878143474039725787438662716372268e-15,
2.82094791773878143474039725773333923127678361e-15)
]
assert_func_equal(cephes.wofz, w, z, rtol=1e-13)
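# Minimal sketch, assuming the module-level imports used throughout this file
# (np, special, assert_allclose): the Faddeeva function satisfies
# w(z) = exp(-z**2) * erfc(-1j*z), giving an independent cross-check of wofz
# against the complex-capable erfc.  Illustrative only; not collected by pytest.
def _sketch_wofz_vs_erfc():
    z = np.array([0.3 + 0.4j, 1.0 + 2.0j, -0.5 + 1.5j])
    # The tolerance here is a judgment call, not a documented guarantee.
    assert_allclose(special.wofz(z), np.exp(-z**2) * special.erfc(-1j*z),
                    rtol=1e-12)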
class TestAiry:
def test_airy(self):
        # Test the Airy functions to 8 decimal places of accuracy
x = special.airy(.99)
assert_array_almost_equal(x,array([0.13689066,-0.16050153,1.19815925,0.92046818]),8)
x = special.airy(.41)
assert_array_almost_equal(x,array([0.25238916,-.23480512,0.80686202,0.51053919]),8)
x = special.airy(-.36)
assert_array_almost_equal(x,array([0.44508477,-0.23186773,0.44939534,0.48105354]),8)
def test_airye(self):
a = special.airye(0.01)
b = special.airy(0.01)
b1 = [None]*4
for n in range(2):
b1[n] = b[n]*exp(2.0/3.0*0.01*sqrt(0.01))
for n in range(2,4):
b1[n] = b[n]*exp(-abs(real(2.0/3.0*0.01*sqrt(0.01))))
assert_array_almost_equal(a,b1,6)
def test_bi_zeros(self):
bi = special.bi_zeros(2)
bia = (array([-1.17371322, -3.2710930]),
array([-2.29443968, -4.07315509]),
array([-0.45494438, 0.39652284]),
array([0.60195789, -0.76031014]))
assert_array_almost_equal(bi,bia,4)
bi = special.bi_zeros(5)
assert_array_almost_equal(bi[0],array([-1.173713222709127,
-3.271093302836352,
-4.830737841662016,
-6.169852128310251,
-7.376762079367764]),11)
assert_array_almost_equal(bi[1],array([-2.294439682614122,
-4.073155089071828,
-5.512395729663599,
-6.781294445990305,
-7.940178689168587]),10)
assert_array_almost_equal(bi[2],array([-0.454944383639657,
0.396522836094465,
-0.367969161486959,
0.349499116831805,
-0.336026240133662]),11)
assert_array_almost_equal(bi[3],array([0.601957887976239,
-0.760310141492801,
0.836991012619261,
-0.88947990142654,
0.929983638568022]),10)
def test_ai_zeros(self):
ai = special.ai_zeros(1)
assert_array_almost_equal(ai,(array([-2.33810741]),
array([-1.01879297]),
array([0.5357]),
array([0.7012])),4)
def test_ai_zeros_big(self):
z, zp, ai_zpx, aip_zx = special.ai_zeros(50000)
ai_z, aip_z, _, _ = special.airy(z)
ai_zp, aip_zp, _, _ = special.airy(zp)
ai_envelope = 1/abs(z)**(1./4)
aip_envelope = abs(zp)**(1./4)
# Check values
assert_allclose(ai_zpx, ai_zp, rtol=1e-10)
assert_allclose(aip_zx, aip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(ai_z/ai_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(aip_zp/aip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.1
assert_allclose(z[:6],
[-2.3381074105, -4.0879494441, -5.5205598281,
-6.7867080901, -7.9441335871, -9.0226508533], rtol=1e-10)
assert_allclose(zp[:6],
[-1.0187929716, -3.2481975822, -4.8200992112,
-6.1633073556, -7.3721772550, -8.4884867340], rtol=1e-10)
def test_bi_zeros_big(self):
z, zp, bi_zpx, bip_zx = special.bi_zeros(50000)
_, _, bi_z, bip_z = special.airy(z)
_, _, bi_zp, bip_zp = special.airy(zp)
bi_envelope = 1/abs(z)**(1./4)
bip_envelope = abs(zp)**(1./4)
# Check values
assert_allclose(bi_zpx, bi_zp, rtol=1e-10)
assert_allclose(bip_zx, bip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(bi_z/bi_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(bip_zp/bip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.2
assert_allclose(z[:6],
[-1.1737132227, -3.2710933028, -4.8307378417,
-6.1698521283, -7.3767620794, -8.4919488465], rtol=1e-10)
assert_allclose(zp[:6],
[-2.2944396826, -4.0731550891, -5.5123957297,
-6.7812944460, -7.9401786892, -9.0195833588], rtol=1e-10)
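# Minimal sketch (same module-level imports assumed): Ai and Bi satisfy the
# Wronskian identity Ai(x)*Bi'(x) - Ai'(x)*Bi(x) = 1/pi (DLMF 9.2.7), which
# exercises all four outputs of special.airy at once.  Illustrative only.
def _sketch_airy_wronskian():
    x = np.linspace(-5, 5, 11)
    ai, aip, bi, bip = special.airy(x)
    # Mild cancellation for larger |x|, so the tolerance is kept loose-ish.
    assert_allclose(ai*bip - aip*bi, 1/np.pi, rtol=1e-9)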
class TestAssocLaguerre:
def test_assoc_laguerre(self):
a1 = special.genlaguerre(11,1)
a2 = special.assoc_laguerre(.2,11,1)
assert_array_almost_equal(a2,a1(.2),8)
a2 = special.assoc_laguerre(1,11,1)
assert_array_almost_equal(a2,a1(1),8)
class TestBesselpoly:
def test_besselpoly(self):
pass
class TestKelvin:
def test_bei(self):
mbei = special.bei(2)
assert_almost_equal(mbei, 0.9722916273066613,5) # this may not be exact
def test_beip(self):
mbeip = special.beip(2)
assert_almost_equal(mbeip,0.91701361338403631,5) # this may not be exact
def test_ber(self):
mber = special.ber(2)
assert_almost_equal(mber,0.75173418271380821,5) # this may not be exact
def test_berp(self):
mberp = special.berp(2)
assert_almost_equal(mberp,-0.49306712470943909,5) # this may not be exact
def test_bei_zeros(self):
# Abramowitz & Stegun, Table 9.12
bi = special.bei_zeros(5)
assert_array_almost_equal(bi,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
def test_beip_zeros(self):
bip = special.beip_zeros(5)
assert_array_almost_equal(bip,array([3.772673304934953,
8.280987849760042,
12.742147523633703,
17.193431752512542,
21.641143941167325]),8)
def test_ber_zeros(self):
ber = special.ber_zeros(5)
assert_array_almost_equal(ber,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
def test_berp_zeros(self):
brp = special.berp_zeros(5)
assert_array_almost_equal(brp,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
def test_kelvin(self):
mkelv = special.kelvin(2)
assert_array_almost_equal(mkelv,(special.ber(2) + special.bei(2)*1j,
special.ker(2) + special.kei(2)*1j,
special.berp(2) + special.beip(2)*1j,
special.kerp(2) + special.keip(2)*1j),8)
def test_kei(self):
mkei = special.kei(2)
assert_almost_equal(mkei,-0.20240006776470432,5)
def test_keip(self):
mkeip = special.keip(2)
assert_almost_equal(mkeip,0.21980790991960536,5)
def test_ker(self):
mker = special.ker(2)
assert_almost_equal(mker,-0.041664513991509472,5)
def test_kerp(self):
mkerp = special.kerp(2)
assert_almost_equal(mkerp,-0.10660096588105264,5)
def test_kei_zeros(self):
kei = special.kei_zeros(5)
assert_array_almost_equal(kei,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
def test_keip_zeros(self):
keip = special.keip_zeros(5)
assert_array_almost_equal(keip,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
# numbers come from 9.9 of A&S pg. 381
def test_kelvin_zeros(self):
tmp = special.kelvin_zeros(5)
berz,beiz,kerz,keiz,berpz,beipz,kerpz,keipz = tmp
assert_array_almost_equal(berz,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
assert_array_almost_equal(beiz,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
assert_array_almost_equal(kerz,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44382]),4)
assert_array_almost_equal(keiz,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
assert_array_almost_equal(berpz,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
assert_array_almost_equal(beipz,array([3.77267,
# table from 1927 had 3.77320
# but this is more accurate
8.28099,
12.74215,
17.19343,
21.64114]),4)
assert_array_almost_equal(kerpz,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
assert_array_almost_equal(keipz,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
def test_ker_zeros(self):
ker = special.ker_zeros(5)
assert_array_almost_equal(ker,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44381]),4)
def test_kerp_zeros(self):
kerp = special.kerp_zeros(5)
assert_array_almost_equal(kerp,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
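# Minimal sketch (module-level np/special/assert_allclose assumed): at order
# zero the Kelvin functions are the real and imaginary parts of a Bessel
# function on a rotated ray, ber(x) + 1j*bei(x) = J_0(x*exp(3j*pi/4))
# (DLMF 10.61.1), tying the Kelvin routines above back to special.jv.
# Illustrative only; not collected by pytest.
def _sketch_kelvin_vs_jv():
    x = np.linspace(0.5, 5.0, 10)
    lhs = special.ber(x) + 1j*special.bei(x)
    assert_allclose(lhs, special.jv(0, x*np.exp(3j*np.pi/4)), rtol=1e-10)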
class TestBernoulli:
def test_bernoulli(self):
brn = special.bernoulli(5)
assert_array_almost_equal(brn,array([1.0000,
-0.5000,
0.1667,
0.0000,
-0.0333,
0.0000]),4)
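# Minimal sketch (assumes math is imported at module level, as the factorial
# tests below already rely on): the even Bernoulli numbers obey
# B_{2k} = (-1)**(k+1) * 2*(2k)! * zeta(2k) / (2*pi)**(2k), so
# special.bernoulli can be cross-checked against special.zeta.
# Illustrative only; not collected by pytest.
def _sketch_bernoulli_vs_zeta():
    b = special.bernoulli(8)
    for k in range(1, 5):
        expected = ((-1)**(k + 1) * 2 * math.factorial(2*k)
                    * special.zeta(2*k) / (2*np.pi)**(2*k))
        assert_allclose(b[2*k], expected, rtol=1e-10)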
class TestBeta:
def test_beta(self):
bet = special.beta(2,4)
betg = (special.gamma(2)*special.gamma(4))/special.gamma(6)
assert_almost_equal(bet,betg,8)
def test_betaln(self):
betln = special.betaln(2,4)
bet = log(abs(special.beta(2,4)))
assert_almost_equal(betln,bet,8)
def test_betainc(self):
btinc = special.betainc(1,1,.2)
assert_almost_equal(btinc,0.2,8)
def test_betaincinv(self):
y = special.betaincinv(2,4,.5)
comp = special.betainc(2,4,y)
assert_almost_equal(comp,.5,5)
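# Minimal sketch (module-level special/assert_allclose assumed): the
# regularized incomplete beta function satisfies the reflection identity
# betainc(a, b, x) = 1 - betainc(b, a, 1 - x), which complements the inverse
# round-trip above.  Illustrative only; not collected by pytest.
def _sketch_betainc_reflection():
    a, b, x = 2.0, 4.0, 0.3
    assert_allclose(special.betainc(a, b, x),
                    1 - special.betainc(b, a, 1 - x), rtol=1e-12)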
class TestCombinatorics:
def test_comb(self):
assert_array_almost_equal(special.comb([10, 10], [3, 4]), [120., 210.])
assert_almost_equal(special.comb(10, 3), 120.)
assert_equal(special.comb(10, 3, exact=True), 120)
assert_equal(special.comb(10, 3, exact=True, repetition=True), 220)
assert_allclose([special.comb(20, k, exact=True) for k in range(21)],
special.comb(20, list(range(21))), atol=1e-15)
ii = np.iinfo(int).max + 1
assert_equal(special.comb(ii, ii-1, exact=True), ii)
expected = 100891344545564193334812497256
assert_equal(special.comb(100, 50, exact=True), expected)
def test_comb_with_np_int64(self):
n = 70
k = 30
np_n = np.int64(n)
np_k = np.int64(k)
assert_equal(special.comb(np_n, np_k, exact=True),
special.comb(n, k, exact=True))
def test_comb_zeros(self):
assert_equal(special.comb(2, 3, exact=True), 0)
assert_equal(special.comb(-1, 3, exact=True), 0)
assert_equal(special.comb(2, -1, exact=True), 0)
assert_equal(special.comb(2, -1, exact=False), 0)
assert_array_almost_equal(special.comb([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 120.])
def test_perm(self):
assert_array_almost_equal(special.perm([10, 10], [3, 4]), [720., 5040.])
assert_almost_equal(special.perm(10, 3), 720.)
assert_equal(special.perm(10, 3, exact=True), 720)
def test_perm_zeros(self):
assert_equal(special.perm(2, 3, exact=True), 0)
assert_equal(special.perm(-1, 3, exact=True), 0)
assert_equal(special.perm(2, -1, exact=True), 0)
assert_equal(special.perm(2, -1, exact=False), 0)
assert_array_almost_equal(special.perm([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 720.])
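# Minimal sketch (module-level special/assert_equal assumed): Pascal's rule
# comb(n, k) = comb(n-1, k-1) + comb(n-1, k) should hold exactly in exact
# mode; a small sweep complements the edge cases above.  Illustrative only;
# not collected by pytest.
def _sketch_comb_pascal():
    for n in range(2, 12):
        for k in range(1, n):
            assert_equal(special.comb(n, k, exact=True),
                         special.comb(n - 1, k - 1, exact=True)
                         + special.comb(n - 1, k, exact=True))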
class TestTrigonometric:
def test_cbrt(self):
cb = special.cbrt(27)
cbrl = 27**(1.0/3.0)
assert_approx_equal(cb,cbrl)
def test_cbrtmore(self):
cb1 = special.cbrt(27.9)
cbrl1 = 27.9**(1.0/3.0)
assert_almost_equal(cb1,cbrl1,8)
def test_cosdg(self):
cdg = special.cosdg(90)
cdgrl = cos(pi/2.0)
assert_almost_equal(cdg,cdgrl,8)
def test_cosdgmore(self):
cdgm = special.cosdg(30)
cdgmrl = cos(pi/6.0)
assert_almost_equal(cdgm,cdgmrl,8)
def test_cosm1(self):
cs = (special.cosm1(0),special.cosm1(.3),special.cosm1(pi/10))
csrl = (cos(0)-1,cos(.3)-1,cos(pi/10)-1)
assert_array_almost_equal(cs,csrl,8)
def test_cotdg(self):
ct = special.cotdg(30)
ctrl = tan(pi/6.0)**(-1)
assert_almost_equal(ct,ctrl,8)
def test_cotdgmore(self):
ct1 = special.cotdg(45)
ctrl1 = tan(pi/4.0)**(-1)
assert_almost_equal(ct1,ctrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.cotdg(45), 1.0, 14)
assert_almost_equal(special.cotdg(-45), -1.0, 14)
assert_almost_equal(special.cotdg(90), 0.0, 14)
assert_almost_equal(special.cotdg(-90), 0.0, 14)
assert_almost_equal(special.cotdg(135), -1.0, 14)
assert_almost_equal(special.cotdg(-135), 1.0, 14)
assert_almost_equal(special.cotdg(225), 1.0, 14)
assert_almost_equal(special.cotdg(-225), -1.0, 14)
assert_almost_equal(special.cotdg(270), 0.0, 14)
assert_almost_equal(special.cotdg(-270), 0.0, 14)
assert_almost_equal(special.cotdg(315), -1.0, 14)
assert_almost_equal(special.cotdg(-315), 1.0, 14)
assert_almost_equal(special.cotdg(765), 1.0, 14)
def test_sinc(self):
# the sinc implementation and more extensive sinc tests are in numpy
assert_array_equal(special.sinc([0]), 1)
assert_equal(special.sinc(0.0), 1.0)
def test_sindg(self):
sn = special.sindg(90)
assert_equal(sn,1.0)
def test_sindgmore(self):
snm = special.sindg(30)
snmrl = sin(pi/6.0)
assert_almost_equal(snm,snmrl,8)
snm1 = special.sindg(45)
snmrl1 = sin(pi/4.0)
assert_almost_equal(snm1,snmrl1,8)
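# Minimal sketch (module-level np/special/assert_allclose assumed): away from
# the special angles handled exactly above, the degree-argument routines
# should still satisfy sindg(x)**2 + cosdg(x)**2 == 1 to near double
# precision; the tolerance is a judgment call.  Illustrative only.
def _sketch_sindg_cosdg_pythagorean():
    x = np.linspace(-360, 360, 25)
    assert_allclose(special.sindg(x)**2 + special.cosdg(x)**2, 1.0, rtol=1e-13)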
class TestTandg:
def test_tandg(self):
tn = special.tandg(30)
tnrl = tan(pi/6.0)
assert_almost_equal(tn,tnrl,8)
def test_tandgmore(self):
tnm = special.tandg(45)
tnmrl = tan(pi/4.0)
assert_almost_equal(tnm,tnmrl,8)
tnm1 = special.tandg(60)
tnmrl1 = tan(pi/3.0)
assert_almost_equal(tnm1,tnmrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.tandg(0), 0.0, 14)
assert_almost_equal(special.tandg(45), 1.0, 14)
assert_almost_equal(special.tandg(-45), -1.0, 14)
assert_almost_equal(special.tandg(135), -1.0, 14)
assert_almost_equal(special.tandg(-135), 1.0, 14)
assert_almost_equal(special.tandg(180), 0.0, 14)
assert_almost_equal(special.tandg(-180), 0.0, 14)
assert_almost_equal(special.tandg(225), 1.0, 14)
assert_almost_equal(special.tandg(-225), -1.0, 14)
assert_almost_equal(special.tandg(315), -1.0, 14)
assert_almost_equal(special.tandg(-315), 1.0, 14)
class TestEllip:
def test_ellipj_nan(self):
"""Regression test for #912."""
special.ellipj(0.5, np.nan)
def test_ellipj(self):
el = special.ellipj(0.2,0)
rel = [sin(0.2),cos(0.2),1.0,0.20]
assert_array_almost_equal(el,rel,13)
def test_ellipk(self):
elk = special.ellipk(.2)
assert_almost_equal(elk,1.659623598610528,11)
assert_equal(special.ellipkm1(0.0), np.inf)
assert_equal(special.ellipkm1(1.0), pi/2)
assert_equal(special.ellipkm1(np.inf), 0.0)
assert_equal(special.ellipkm1(np.nan), np.nan)
assert_equal(special.ellipkm1(-1), np.nan)
assert_allclose(special.ellipk(-10), 0.7908718902387385)
def test_ellipkinc(self):
elkinc = special.ellipkinc(pi/2,.2)
elk = special.ellipk(0.2)
assert_almost_equal(elkinc,elk,15)
alpha = 20*pi/180
phi = 45*pi/180
m = sin(alpha)**2
elkinc = special.ellipkinc(phi,m)
assert_almost_equal(elkinc,0.79398143,8)
# From pg. 614 of A & S
assert_equal(special.ellipkinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipkinc(pi/2, 1.0), np.inf)
assert_equal(special.ellipkinc(pi/2, -np.inf), 0.0)
assert_equal(special.ellipkinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipkinc(pi/2, 2), np.nan)
assert_equal(special.ellipkinc(0, 0.5), 0.0)
assert_equal(special.ellipkinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipkinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipkinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipkinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipkinc(0.38974112035318718, 1), 0.4, rtol=1e-14)
assert_allclose(special.ellipkinc(1.5707, -10), 0.79084284661724946)
def test_ellipkinc_2(self):
# Regression test for gh-3550
# ellipkinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipkinc(phi, mvals)
assert_array_almost_equal_nulp(f, np.full_like(f, 1.0259330100195334), 1)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipkinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, np.full_like(f1, 5.1296650500976675), 2)
def test_ellipkinc_singular(self):
# ellipkinc(phi, 1) has closed form and is finite only for phi in (-pi/2, pi/2)
xlog = np.logspace(-300, -17, 25)
xlin = np.linspace(1e-17, 0.1, 25)
xlin2 = np.linspace(0.1, pi/2, 25, endpoint=False)
        assert_allclose(special.ellipkinc(xlog, 1), np.arcsinh(np.tan(xlog)), rtol=1e-14)
        assert_allclose(special.ellipkinc(xlin, 1), np.arcsinh(np.tan(xlin)), rtol=1e-14)
        assert_allclose(special.ellipkinc(xlin2, 1), np.arcsinh(np.tan(xlin2)), rtol=1e-14)
        assert_equal(special.ellipkinc(np.pi/2, 1), np.inf)
        assert_allclose(special.ellipkinc(-xlog, 1), np.arcsinh(np.tan(-xlog)), rtol=1e-14)
        assert_allclose(special.ellipkinc(-xlin, 1), np.arcsinh(np.tan(-xlin)), rtol=1e-14)
        assert_allclose(special.ellipkinc(-xlin2, 1), np.arcsinh(np.tan(-xlin2)), rtol=1e-14)
assert_equal(special.ellipkinc(-np.pi/2, 1), np.inf)
def test_ellipe(self):
ele = special.ellipe(.2)
assert_almost_equal(ele,1.4890350580958529,8)
assert_equal(special.ellipe(0.0), pi/2)
assert_equal(special.ellipe(1.0), 1.0)
assert_equal(special.ellipe(-np.inf), np.inf)
assert_equal(special.ellipe(np.nan), np.nan)
assert_equal(special.ellipe(2), np.nan)
assert_allclose(special.ellipe(-10), 3.6391380384177689)
def test_ellipeinc(self):
eleinc = special.ellipeinc(pi/2,.2)
ele = special.ellipe(0.2)
assert_almost_equal(eleinc,ele,14)
# pg 617 of A & S
alpha, phi = 52*pi/180,35*pi/180
m = sin(alpha)**2
eleinc = special.ellipeinc(phi,m)
assert_almost_equal(eleinc, 0.58823065, 8)
assert_equal(special.ellipeinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipeinc(pi/2, 1.0), 1.0)
assert_equal(special.ellipeinc(pi/2, -np.inf), np.inf)
assert_equal(special.ellipeinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipeinc(pi/2, 2), np.nan)
assert_equal(special.ellipeinc(0, 0.5), 0.0)
assert_equal(special.ellipeinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipeinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipeinc(np.inf, -np.inf), np.inf)
assert_equal(special.ellipeinc(-np.inf, -np.inf), -np.inf)
assert_equal(special.ellipeinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipeinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipeinc(1.5707, -10), 3.6388185585822876)
def test_ellipeinc_2(self):
# Regression test for gh-3550
# ellipeinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipeinc(phi, mvals)
assert_array_almost_equal_nulp(f, np.full_like(f, 0.84442884574781019), 2)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipeinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, np.full_like(f1, 3.3471442287390509), 4)
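# Minimal sketch (module-level np/special/assert_allclose assumed): Legendre's
# relation E(m)*K(1-m) + E(1-m)*K(m) - K(m)*K(1-m) = pi/2 couples the
# complete integrals ellipk and ellipe exercised above.  Illustrative only;
# not collected by pytest.
def _sketch_legendre_relation():
    for m in (0.1, 0.3, 0.5, 0.7, 0.9):
        lhs = (special.ellipe(m)*special.ellipk(1 - m)
               + special.ellipe(1 - m)*special.ellipk(m)
               - special.ellipk(m)*special.ellipk(1 - m))
        assert_allclose(lhs, np.pi/2, rtol=1e-12)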
class TestErf:
def test_erf(self):
er = special.erf(.25)
assert_almost_equal(er,0.2763263902,8)
def test_erf_zeros(self):
erz = special.erf_zeros(5)
erzr = array([1.45061616+1.88094300j,
2.24465928+2.61657514j,
2.83974105+3.17562810j,
3.33546074+3.64617438j,
3.76900557+4.06069723j])
assert_array_almost_equal(erz,erzr,4)
def _check_variant_func(self, func, other_func, rtol, atol=0):
np.random.seed(1234)
n = 10000
x = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
y = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
z = x + 1j*y
with np.errstate(all='ignore'):
w = other_func(z)
w_real = other_func(x).real
mask = np.isfinite(w)
w = w[mask]
z = z[mask]
mask = np.isfinite(w_real)
w_real = w_real[mask]
x = x[mask]
# test both real and complex variants
assert_func_equal(func, w, z, rtol=rtol, atol=atol)
assert_func_equal(func, w_real, x, rtol=rtol, atol=atol)
def test_erfc_consistent(self):
self._check_variant_func(
cephes.erfc,
lambda z: 1 - cephes.erf(z),
rtol=1e-12,
atol=1e-14 # <- the test function loses precision
)
def test_erfcx_consistent(self):
self._check_variant_func(
cephes.erfcx,
lambda z: np.exp(z*z) * cephes.erfc(z),
rtol=1e-12
)
def test_erfi_consistent(self):
self._check_variant_func(
cephes.erfi,
lambda z: -1j * cephes.erf(1j*z),
rtol=1e-12
)
def test_dawsn_consistent(self):
self._check_variant_func(
cephes.dawsn,
lambda z: sqrt(pi)/2 * np.exp(-z*z) * cephes.erfi(z),
rtol=1e-12
)
def test_erf_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -1, 1]
assert_allclose(special.erf(vals), expected, rtol=1e-15)
def test_erfc_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, 2, 0]
assert_allclose(special.erfc(vals), expected, rtol=1e-15)
def test_erfcx_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, np.inf, 0]
assert_allclose(special.erfcx(vals), expected, rtol=1e-15)
def test_erfi_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -np.inf, np.inf]
assert_allclose(special.erfi(vals), expected, rtol=1e-15)
def test_dawsn_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -0.0, 0.0]
assert_allclose(special.dawsn(vals), expected, rtol=1e-15)
def test_wofz_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan + np.nan * 1.j, 0.-0.j, 0.+0.j]
assert_allclose(special.wofz(vals), expected, rtol=1e-15)
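# Minimal sketch (module-level np/special/assert_allclose assumed): for
# x >= 0 the error function is a regularized lower incomplete gamma function,
# erf(x) = gammainc(0.5, x**2), linking the erf family above to the gamma
# family.  Illustrative only; not collected by pytest.
def _sketch_erf_vs_gammainc():
    x = np.linspace(0.0, 3.0, 13)
    assert_allclose(special.erf(x), special.gammainc(0.5, x**2), rtol=1e-12)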
class TestEuler:
def test_euler(self):
eu0 = special.euler(0)
eu1 = special.euler(1)
        eu2 = special.euler(2)  # just checking that it doesn't segfault
assert_allclose(eu0, [1], rtol=1e-15)
assert_allclose(eu1, [1, 0], rtol=1e-15)
assert_allclose(eu2, [1, 0, -1], rtol=1e-15)
eu24 = special.euler(24)
mathworld = [1,1,5,61,1385,50521,2702765,199360981,
19391512145,2404879675441,
370371188237525,69348874393137901,
15514534163557086905]
correct = zeros((25,),'d')
for k in range(0,13):
if (k % 2):
correct[2*k] = -float(mathworld[k])
else:
correct[2*k] = float(mathworld[k])
with np.errstate(all='ignore'):
err = nan_to_num((eu24-correct)/correct)
errmax = max(err)
assert_almost_equal(errmax, 0.0, 14)
class TestExp:
def test_exp2(self):
ex = special.exp2(2)
exrl = 2**2
assert_equal(ex,exrl)
def test_exp2more(self):
exm = special.exp2(2.5)
exmrl = 2**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_exp10(self):
ex = special.exp10(2)
exrl = 10**2
assert_approx_equal(ex,exrl)
def test_exp10more(self):
exm = special.exp10(2.5)
exmrl = 10**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_expm1(self):
ex = (special.expm1(2),special.expm1(3),special.expm1(4))
exrl = (exp(2)-1,exp(3)-1,exp(4)-1)
assert_array_almost_equal(ex,exrl,8)
def test_expm1more(self):
ex1 = (special.expm1(2),special.expm1(2.1),special.expm1(2.2))
exrl1 = (exp(2)-1,exp(2.1)-1,exp(2.2)-1)
assert_array_almost_equal(ex1,exrl1,8)
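# Minimal sketch (module-level special/assert_allclose assumed): expm1 exists
# to dodge the cancellation in exp(x) - 1 for tiny x, where the naive form
# retains only a few significant digits; the accurate value matches the
# Taylor series x + x**2/2 essentially exactly.  Illustrative only.
def _sketch_expm1_small_x():
    x = 1e-12
    assert_allclose(special.expm1(x), x + x*x/2, rtol=1e-15)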
class TestFactorialFunctions:
def test_factorial(self):
# Some known values, float math
assert_array_almost_equal(special.factorial(0), 1)
assert_array_almost_equal(special.factorial(1), 1)
assert_array_almost_equal(special.factorial(2), 2)
assert_array_almost_equal([6., 24., 120.],
special.factorial([3, 4, 5], exact=False))
assert_array_almost_equal(special.factorial([[5, 3], [4, 3]]),
[[120, 6], [24, 6]])
# Some known values, integer math
assert_equal(special.factorial(0, exact=True), 1)
assert_equal(special.factorial(1, exact=True), 1)
assert_equal(special.factorial(2, exact=True), 2)
assert_equal(special.factorial(5, exact=True), 120)
assert_equal(special.factorial(15, exact=True), 1307674368000)
# ndarray shape is maintained
assert_equal(special.factorial([7, 4, 15, 10], exact=True),
[5040, 24, 1307674368000, 3628800])
assert_equal(special.factorial([[5, 3], [4, 3]], True),
[[120, 6], [24, 6]])
# object arrays
assert_equal(special.factorial(np.arange(-3, 22), True),
special.factorial(np.arange(-3, 22), False))
# int64 array
assert_equal(special.factorial(np.arange(-3, 15), True),
special.factorial(np.arange(-3, 15), False))
# int32 array
assert_equal(special.factorial(np.arange(-3, 5), True),
special.factorial(np.arange(-3, 5), False))
# Consistent output for n < 0
for exact in (True, False):
assert_array_equal(0, special.factorial(-3, exact))
assert_array_equal([1, 2, 0, 0],
special.factorial([1, 2, -5, -4], exact))
for n in range(0, 22):
# Compare all with math.factorial
correct = math.factorial(n)
assert_array_equal(correct, special.factorial(n, True))
assert_array_equal(correct, special.factorial([n], True)[0])
assert_allclose(float(correct), special.factorial(n, False))
assert_allclose(float(correct), special.factorial([n], False)[0])
# Compare exact=True vs False, scalar vs array
assert_array_equal(special.factorial(n, True),
special.factorial(n, False))
assert_array_equal(special.factorial([n], True),
special.factorial([n], False))
@pytest.mark.parametrize('x, exact', [
(1, True),
(1, False),
(np.array(1), True),
(np.array(1), False),
])
def test_factorial_0d_return_type(self, x, exact):
assert np.isscalar(special.factorial(x, exact=exact))
def test_factorial2(self):
assert_array_almost_equal([105., 384., 945.],
special.factorial2([7, 8, 9], exact=False))
assert_equal(special.factorial2(7, exact=True), 105)
def test_factorialk(self):
assert_equal(special.factorialk(5, 1, exact=True), 120)
assert_equal(special.factorialk(5, 3, exact=True), 10)
@pytest.mark.parametrize('x, exact', [
(np.nan, True),
(np.nan, False),
(np.array([np.nan]), True),
(np.array([np.nan]), False),
])
def test_nan_inputs(self, x, exact):
result = special.factorial(x, exact=exact)
assert_(np.isnan(result))
# GH-13122: special.factorial() argument should be an array of integers.
    # On Python 3.10, math.factorial() rejects floats.
# On Python 3.9, a DeprecationWarning is emitted.
# A numpy array casts all integers to float if the array contains a
# single NaN.
@pytest.mark.skipif(sys.version_info >= (3, 10),
reason="Python 3.10+ math.factorial() requires int")
def test_mixed_nan_inputs(self):
x = np.array([np.nan, 1, 2, 3, np.nan])
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, "Using factorial\\(\\) with floats is deprecated")
result = special.factorial(x, exact=True)
assert_equal(np.array([np.nan, 1, 2, 6, np.nan]), result)
result = special.factorial(x, exact=False)
assert_equal(np.array([np.nan, 1, 2, 6, np.nan]), result)
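# Minimal sketch (module-level np/special/assert_allclose assumed): in
# floating point, factorial(n) for nonnegative integer n is just
# gamma(n + 1), which cross-checks special.factorial against special.gamma.
# Illustrative only; not collected by pytest.
def _sketch_factorial_vs_gamma():
    n = np.arange(0, 20)
    assert_allclose(special.factorial(n), special.gamma(n + 1), rtol=1e-13)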
class TestFresnel:
def test_fresnel(self):
frs = array(special.fresnel(.5))
assert_array_almost_equal(frs,array([0.064732432859999287, 0.49234422587144644]),8)
def test_fresnel_inf1(self):
frs = special.fresnel(np.inf)
assert_equal(frs, (0.5, 0.5))
def test_fresnel_inf2(self):
frs = special.fresnel(-np.inf)
assert_equal(frs, (-0.5, -0.5))
# values from pg 329 Table 7.11 of A & S
# slightly corrected in 4th decimal place
def test_fresnel_zeros(self):
szo, czo = special.fresnel_zeros(5)
assert_array_almost_equal(szo,
array([2.0093+0.2885j,
2.8335+0.2443j,
3.4675+0.2185j,
4.0026+0.2009j,
4.4742+0.1877j]),3)
assert_array_almost_equal(czo,
array([1.7437+0.3057j,
2.6515+0.2529j,
3.3204+0.2240j,
3.8757+0.2047j,
4.3611+0.1907j]),3)
vals1 = special.fresnel(szo)[0]
vals2 = special.fresnel(czo)[1]
assert_array_almost_equal(vals1,0,14)
assert_array_almost_equal(vals2,0,14)
def test_fresnelc_zeros(self):
szo, czo = special.fresnel_zeros(6)
frc = special.fresnelc_zeros(6)
assert_array_almost_equal(frc,czo,12)
def test_fresnels_zeros(self):
szo, czo = special.fresnel_zeros(5)
frs = special.fresnels_zeros(5)
assert_array_almost_equal(frs,szo,12)
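# Minimal sketch (module-level np/special/assert_allclose assumed): both
# Fresnel integrals are odd, S(-x) = -S(x) and C(-x) = -C(x), complementing
# the +/- infinity limits above.  Illustrative only; not collected by pytest.
def _sketch_fresnel_odd():
    x = np.linspace(0.1, 3.0, 7)
    sp, cp = special.fresnel(x)
    sm, cm = special.fresnel(-x)
    assert_allclose(sm, -sp, rtol=1e-14)
    assert_allclose(cm, -cp, rtol=1e-14)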
class TestGamma:
def test_gamma(self):
gam = special.gamma(5)
assert_equal(gam,24.0)
def test_gammaln(self):
gamln = special.gammaln(3)
lngam = log(special.gamma(3))
assert_almost_equal(gamln,lngam,8)
def test_gammainccinv(self):
gccinv = special.gammainccinv(.5,.5)
gcinv = special.gammaincinv(.5,.5)
assert_almost_equal(gccinv,gcinv,8)
@with_special_errors
def test_gammaincinv(self):
y = special.gammaincinv(.4,.4)
x = special.gammainc(.4,y)
assert_almost_equal(x,0.4,1)
y = special.gammainc(10, 0.05)
x = special.gammaincinv(10, 2.5715803516000736e-20)
assert_almost_equal(0.05, x, decimal=10)
assert_almost_equal(y, 2.5715803516000736e-20, decimal=10)
x = special.gammaincinv(50, 8.20754777388471303050299243573393e-18)
assert_almost_equal(11.0, x, decimal=10)
@with_special_errors
def test_975(self):
# Regression test for ticket #975 -- switch point in algorithm
# check that things work OK at the point, immediately next floats
# around it, and a bit further away
pts = [0.25,
np.nextafter(0.25, 0), 0.25 - 1e-12,
np.nextafter(0.25, 1), 0.25 + 1e-12]
for xp in pts:
y = special.gammaincinv(.4, xp)
x = special.gammainc(0.4, y)
assert_allclose(x, xp, rtol=1e-12)
def test_rgamma(self):
rgam = special.rgamma(8)
rlgam = 1/special.gamma(8)
assert_almost_equal(rgam,rlgam,8)
def test_infinity(self):
assert_(np.isinf(special.gamma(-1)))
assert_equal(special.rgamma(-1), 0)
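# Minimal sketch (module-level np/special/assert_allclose assumed): Euler's
# reflection formula gamma(x)*gamma(1 - x) = pi/sin(pi*x) for non-integer x
# rounds out the gamma checks above.  Illustrative only; not collected by
# pytest.
def _sketch_gamma_reflection():
    for x in (0.1, 0.25, 0.5, 0.75, 1.5):
        assert_allclose(special.gamma(x)*special.gamma(1 - x),
                        np.pi/np.sin(np.pi*x), rtol=1e-12)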
class TestHankel:
def test_negv1(self):
assert_almost_equal(special.hankel1(-3,2), -special.hankel1(3,2), 14)
def test_hankel1(self):
hank1 = special.hankel1(1,.1)
hankrl = (special.jv(1,.1) + special.yv(1,.1)*1j)
assert_almost_equal(hank1,hankrl,8)
def test_negv1e(self):
assert_almost_equal(special.hankel1e(-3,2), -special.hankel1e(3,2), 14)
def test_hankel1e(self):
hank1e = special.hankel1e(1,.1)
hankrle = special.hankel1(1,.1)*exp(-.1j)
assert_almost_equal(hank1e,hankrle,8)
def test_negv2(self):
assert_almost_equal(special.hankel2(-3,2), -special.hankel2(3,2), 14)
def test_hankel2(self):
hank2 = special.hankel2(1,.1)
hankrl2 = (special.jv(1,.1) - special.yv(1,.1)*1j)
assert_almost_equal(hank2,hankrl2,8)
def test_neg2e(self):
assert_almost_equal(special.hankel2e(-3,2), -special.hankel2e(3,2), 14)
def test_hankl2e(self):
hank2e = special.hankel2e(1,.1)
hankrl2e = special.hankel2e(1,.1)
assert_almost_equal(hank2e,hankrl2e,8)
class TestHyper:
def test_h1vp(self):
h1 = special.h1vp(1,.1)
h1real = (special.jvp(1,.1) + special.yvp(1,.1)*1j)
assert_almost_equal(h1,h1real,8)
def test_h2vp(self):
h2 = special.h2vp(1,.1)
h2real = (special.jvp(1,.1) - special.yvp(1,.1)*1j)
assert_almost_equal(h2,h2real,8)
def test_hyp0f1(self):
# scalar input
assert_allclose(special.hyp0f1(2.5, 0.5), 1.21482702689997, rtol=1e-12)
assert_allclose(special.hyp0f1(2.5, 0), 1.0, rtol=1e-15)
# float input, expected values match mpmath
x = special.hyp0f1(3.0, [-1.5, -1, 0, 1, 1.5])
expected = np.array([0.58493659229143, 0.70566805723127, 1.0,
1.37789689539747, 1.60373685288480])
assert_allclose(x, expected, rtol=1e-12)
# complex input
x = special.hyp0f1(3.0, np.array([-1.5, -1, 0, 1, 1.5]) + 0.j)
assert_allclose(x, expected.astype(complex), rtol=1e-12)
# test broadcasting
x1 = [0.5, 1.5, 2.5]
x2 = [0, 1, 0.5]
x = special.hyp0f1(x1, x2)
expected = [1.0, 1.8134302039235093, 1.21482702689997]
assert_allclose(x, expected, rtol=1e-12)
x = special.hyp0f1(np.row_stack([x1] * 2), x2)
assert_allclose(x, np.row_stack([expected] * 2), rtol=1e-12)
assert_raises(ValueError, special.hyp0f1,
np.row_stack([x1] * 3), [0, 1])
def test_hyp0f1_gh5764(self):
# Just checks the point that failed; there's a more systematic
# test in test_mpmath
res = special.hyp0f1(0.8, 0.5 + 0.5*1J)
# The expected value was generated using mpmath
assert_almost_equal(res, 1.6139719776441115 + 1J*0.80893054061790665)
def test_hyp1f1(self):
hyp1 = special.hyp1f1(.1,.1,.3)
assert_almost_equal(hyp1, 1.3498588075760032,7)
# test contributed by Moritz Deger (2008-05-29)
# https://github.com/scipy/scipy/issues/1186 (Trac #659)
# reference data obtained from mathematica [ a, b, x, m(a,b,x)]:
# produced with test_hyp1f1.nb
ref_data = array([[-8.38132975e+00, -1.28436461e+01, -2.91081397e+01, 1.04178330e+04],
[2.91076882e+00, -6.35234333e+00, -1.27083993e+01, 6.68132725e+00],
[-1.42938258e+01, 1.80869131e-01, 1.90038728e+01, 1.01385897e+05],
[5.84069088e+00, 1.33187908e+01, 2.91290106e+01, 1.59469411e+08],
[-2.70433202e+01, -1.16274873e+01, -2.89582384e+01, 1.39900152e+24],
[4.26344966e+00, -2.32701773e+01, 1.91635759e+01, 6.13816915e+21],
[1.20514340e+01, -3.40260240e+00, 7.26832235e+00, 1.17696112e+13],
[2.77372955e+01, -1.99424687e+00, 3.61332246e+00, 3.07419615e+13],
[1.50310939e+01, -2.91198675e+01, -1.53581080e+01, -3.79166033e+02],
[1.43995827e+01, 9.84311196e+00, 1.93204553e+01, 2.55836264e+10],
[-4.08759686e+00, 1.34437025e+01, -1.42072843e+01, 1.70778449e+01],
[8.05595738e+00, -1.31019838e+01, 1.52180721e+01, 3.06233294e+21],
[1.81815804e+01, -1.42908793e+01, 9.57868793e+00, -2.84771348e+20],
[-2.49671396e+01, 1.25082843e+01, -1.71562286e+01, 2.36290426e+07],
[2.67277673e+01, 1.70315414e+01, 6.12701450e+00, 7.77917232e+03],
[2.49565476e+01, 2.91694684e+01, 6.29622660e+00, 2.35300027e+02],
[6.11924542e+00, -1.59943768e+00, 9.57009289e+00, 1.32906326e+11],
[-1.47863653e+01, 2.41691301e+01, -1.89981821e+01, 2.73064953e+03],
[2.24070483e+01, -2.93647433e+00, 8.19281432e+00, -6.42000372e+17],
[8.04042600e-01, 1.82710085e+01, -1.97814534e+01, 5.48372441e-01],
[1.39590390e+01, 1.97318686e+01, 2.37606635e+00, 5.51923681e+00],
[-4.66640483e+00, -2.00237930e+01, 7.40365095e+00, 4.50310752e+00],
[2.76821999e+01, -6.36563968e+00, 1.11533984e+01, -9.28725179e+23],
[-2.56764457e+01, 1.24544906e+00, 1.06407572e+01, 1.25922076e+01],
[3.20447808e+00, 1.30874383e+01, 2.26098014e+01, 2.03202059e+04],
[-1.24809647e+01, 4.15137113e+00, -2.92265700e+01, 2.39621411e+08],
[2.14778108e+01, -2.35162960e+00, -1.13758664e+01, 4.46882152e-01],
[-9.85469168e+00, -3.28157680e+00, 1.67447548e+01, -1.07342390e+07],
[1.08122310e+01, -2.47353236e+01, -1.15622349e+01, -2.91733796e+03],
[-2.67933347e+01, -3.39100709e+00, 2.56006986e+01, -5.29275382e+09],
[-8.60066776e+00, -8.02200924e+00, 1.07231926e+01, 1.33548320e+06],
[-1.01724238e-01, -1.18479709e+01, -2.55407104e+01, 1.55436570e+00],
[-3.93356771e+00, 2.11106818e+01, -2.57598485e+01, 2.13467840e+01],
[3.74750503e+00, 1.55687633e+01, -2.92841720e+01, 1.43873509e-02],
[6.99726781e+00, 2.69855571e+01, -1.63707771e+01, 3.08098673e-02],
[-2.31996011e+01, 3.47631054e+00, 9.75119815e-01, 1.79971073e-02],
[2.38951044e+01, -2.91460190e+01, -2.50774708e+00, 9.56934814e+00],
[1.52730825e+01, 5.77062507e+00, 1.21922003e+01, 1.32345307e+09],
[1.74673917e+01, 1.89723426e+01, 4.94903250e+00, 9.90859484e+01],
[1.88971241e+01, 2.86255413e+01, 5.52360109e-01, 1.44165360e+00],
[1.02002319e+01, -1.66855152e+01, -2.55426235e+01, 6.56481554e+02],
[-1.79474153e+01, 1.22210200e+01, -1.84058212e+01, 8.24041812e+05],
[-1.36147103e+01, 1.32365492e+00, -7.22375200e+00, 9.92446491e+05],
[7.57407832e+00, 2.59738234e+01, -1.34139168e+01, 3.64037761e-02],
[2.21110169e+00, 1.28012666e+01, 1.62529102e+01, 1.33433085e+02],
[-2.64297569e+01, -1.63176658e+01, -1.11642006e+01, -2.44797251e+13],
[-2.46622944e+01, -3.02147372e+00, 8.29159315e+00, -3.21799070e+05],
[-1.37215095e+01, -1.96680183e+01, 2.91940118e+01, 3.21457520e+12],
[-5.45566105e+00, 2.81292086e+01, 1.72548215e-01, 9.66973000e-01],
[-1.55751298e+00, -8.65703373e+00, 2.68622026e+01, -3.17190834e+16],
[2.45393609e+01, -2.70571903e+01, 1.96815505e+01, 1.80708004e+37],
[5.77482829e+00, 1.53203143e+01, 2.50534322e+01, 1.14304242e+06],
[-1.02626819e+01, 2.36887658e+01, -2.32152102e+01, 7.28965646e+02],
[-1.30833446e+00, -1.28310210e+01, 1.87275544e+01, -9.33487904e+12],
[5.83024676e+00, -1.49279672e+01, 2.44957538e+01, -7.61083070e+27],
[-2.03130747e+01, 2.59641715e+01, -2.06174328e+01, 4.54744859e+04],
[1.97684551e+01, -2.21410519e+01, -2.26728740e+01, 3.53113026e+06],
[2.73673444e+01, 2.64491725e+01, 1.57599882e+01, 1.07385118e+07],
[5.73287971e+00, 1.21111904e+01, 1.33080171e+01, 2.63220467e+03],
[-2.82751072e+01, 2.08605881e+01, 9.09838900e+00, -6.60957033e-07],
[1.87270691e+01, -1.74437016e+01, 1.52413599e+01, 6.59572851e+27],
[6.60681457e+00, -2.69449855e+00, 9.78972047e+00, -2.38587870e+12],
[1.20895561e+01, -2.51355765e+01, 2.30096101e+01, 7.58739886e+32],
[-2.44682278e+01, 2.10673441e+01, -1.36705538e+01, 4.54213550e+04],
[-4.50665152e+00, 3.72292059e+00, -4.83403707e+00, 2.68938214e+01],
[-7.46540049e+00, -1.08422222e+01, -1.72203805e+01, -2.09402162e+02],
[-2.00307551e+01, -7.50604431e+00, -2.78640020e+01, 4.15985444e+19],
[1.99890876e+01, 2.20677419e+01, -2.51301778e+01, 1.23840297e-09],
[2.03183823e+01, -7.66942559e+00, 2.10340070e+01, 1.46285095e+31],
[-2.90315825e+00, -2.55785967e+01, -9.58779316e+00, 2.65714264e-01],
[2.73960829e+01, -1.80097203e+01, -2.03070131e+00, 2.52908999e+02],
[-2.11708058e+01, -2.70304032e+01, 2.48257944e+01, 3.09027527e+08],
[2.21959758e+01, 4.00258675e+00, -1.62853977e+01, -9.16280090e-09],
[1.61661840e+01, -2.26845150e+01, 2.17226940e+01, -8.24774394e+33],
[-3.35030306e+00, 1.32670581e+00, 9.39711214e+00, -1.47303163e+01],
[7.23720726e+00, -2.29763909e+01, 2.34709682e+01, -9.20711735e+29],
[2.71013568e+01, 1.61951087e+01, -7.11388906e-01, 2.98750911e-01],
[8.40057933e+00, -7.49665220e+00, 2.95587388e+01, 6.59465635e+29],
[-1.51603423e+01, 1.94032322e+01, -7.60044357e+00, 1.05186941e+02],
[-8.83788031e+00, -2.72018313e+01, 1.88269907e+00, 1.81687019e+00],
[-1.87283712e+01, 5.87479570e+00, -1.91210203e+01, 2.52235612e+08],
[-5.61338513e-01, 2.69490237e+01, 1.16660111e-01, 9.97567783e-01],
[-5.44354025e+00, -1.26721408e+01, -4.66831036e+00, 1.06660735e-01],
[-2.18846497e+00, 2.33299566e+01, 9.62564397e+00, 3.03842061e-01],
[6.65661299e+00, -2.39048713e+01, 1.04191807e+01, 4.73700451e+13],
[-2.57298921e+01, -2.60811296e+01, 2.74398110e+01, -5.32566307e+11],
[-1.11431826e+01, -1.59420160e+01, -1.84880553e+01, -1.01514747e+02],
[6.50301931e+00, 2.59859051e+01, -2.33270137e+01, 1.22760500e-02],
[-1.94987891e+01, -2.62123262e+01, 3.90323225e+00, 1.71658894e+01],
[7.26164601e+00, -1.41469402e+01, 2.81499763e+01, -2.50068329e+31],
[-1.52424040e+01, 2.99719005e+01, -2.85753678e+01, 1.31906693e+04],
[5.24149291e+00, -1.72807223e+01, 2.22129493e+01, 2.50748475e+25],
[3.63207230e-01, -9.54120862e-02, -2.83874044e+01, 9.43854939e-01],
[-2.11326457e+00, -1.25707023e+01, 1.17172130e+00, 1.20812698e+00],
[2.48513582e+00, 1.03652647e+01, -1.84625148e+01, 6.47910997e-02],
[2.65395942e+01, 2.74794672e+01, 1.29413428e+01, 2.89306132e+05],
[-9.49445460e+00, 1.59930921e+01, -1.49596331e+01, 3.27574841e+02],
[-5.89173945e+00, 9.96742426e+00, 2.60318889e+01, -3.15842908e-01],
[-1.15387239e+01, -2.21433107e+01, -2.17686413e+01, 1.56724718e-01],
[-5.30592244e+00, -2.42752190e+01, 1.29734035e+00, 1.31985534e+00]])
for a,b,c,expected in ref_data:
result = special.hyp1f1(a,b,c)
assert_(abs(expected - result)/expected < 1e-4)
def test_hyp1f1_gh2957(self):
hyp1 = special.hyp1f1(0.5, 1.5, -709.7827128933)
hyp2 = special.hyp1f1(0.5, 1.5, -709.7827128934)
assert_almost_equal(hyp1, hyp2, 12)
def test_hyp1f1_gh2282(self):
hyp = special.hyp1f1(0.5, 1.5, -1000)
assert_almost_equal(hyp, 0.028024956081989643, 12)
def test_hyp2f1(self):
# a collection of special cases taken from AMS 55
values = [[0.5, 1, 1.5, 0.2**2, 0.5/0.2*log((1+0.2)/(1-0.2))],
[0.5, 1, 1.5, -0.2**2, 1./0.2*arctan(0.2)],
[1, 1, 2, 0.2, -1/0.2*log(1-0.2)],
[3, 3.5, 1.5, 0.2**2,
0.5/0.2/(-5)*((1+0.2)**(-5)-(1-0.2)**(-5))],
[-3, 3, 0.5, sin(0.2)**2, cos(2*3*0.2)],
[3, 4, 8, 1, special.gamma(8)*special.gamma(8-4-3)/special.gamma(8-3)/special.gamma(8-4)],
[3, 2, 3-2+1, -1, 1./2**3*sqrt(pi) *
special.gamma(1+3-2)/special.gamma(1+0.5*3-2)/special.gamma(0.5+0.5*3)],
[5, 2, 5-2+1, -1, 1./2**5*sqrt(pi) *
special.gamma(1+5-2)/special.gamma(1+0.5*5-2)/special.gamma(0.5+0.5*5)],
[4, 0.5+4, 1.5-2*4, -1./3, (8./9)**(-2*4)*special.gamma(4./3) *
special.gamma(1.5-2*4)/special.gamma(3./2)/special.gamma(4./3-2*4)],
# and some others
# ticket #424
[1.5, -0.5, 1.0, -10.0, 4.1300097765277476484],
# negative integer a or b, with c-a-b integer and x > 0.9
[-2,3,1,0.95,0.715],
[2,-3,1,0.95,-0.007],
[-6,3,1,0.95,0.0000810625],
[2,-5,1,0.95,-0.000029375],
# huge negative integers
(10, -900, 10.5, 0.99, 1.91853705796607664803709475658e-24),
(10, -900, -10.5, 0.99, 3.54279200040355710199058559155e-18),
]
for i, (a, b, c, x, v) in enumerate(values):
cv = special.hyp2f1(a, b, c, x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_hyperu(self):
val1 = special.hyperu(1,0.1,100)
assert_almost_equal(val1,0.0098153,7)
a,b = [0.3,0.6,1.2,-2.7],[1.5,3.2,-0.4,-3.2]
a,b = asarray(a), asarray(b)
z = 0.5
hypu = special.hyperu(a,b,z)
hprl = (pi/sin(pi*b))*(special.hyp1f1(a,b,z) /
(special.gamma(1+a-b)*special.gamma(b)) -
z**(1-b)*special.hyp1f1(1+a-b,2-b,z)
/ (special.gamma(a)*special.gamma(2-b)))
assert_array_almost_equal(hypu,hprl,12)
def test_hyperu_gh2287(self):
assert_almost_equal(special.hyperu(1, 1.5, 20.2),
0.048360918656699191, 12)
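# Minimal sketch (module-level np/special/assert_allclose assumed): Kummer's
# transformation hyp1f1(a, b, x) = exp(x) * hyp1f1(b - a, b, -x) is a
# classical identity for the confluent hypergeometric function exercised
# above.  Illustrative only; not collected by pytest.
def _sketch_hyp1f1_kummer():
    a, b = 0.3, 1.7
    for x in (-5.0, -1.0, 0.5, 2.0):
        assert_allclose(special.hyp1f1(a, b, x),
                        np.exp(x)*special.hyp1f1(b - a, b, -x), rtol=1e-10)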
class TestBessel:
def test_itj0y0(self):
it0 = array(special.itj0y0(.2))
assert_array_almost_equal(it0,array([0.19933433254006822, -0.34570883800412566]),8)
def test_it2j0y0(self):
it2 = array(special.it2j0y0(.2))
assert_array_almost_equal(it2,array([0.0049937546274601858, -0.43423067011231614]),8)
def test_negv_iv(self):
assert_equal(special.iv(3,2), special.iv(-3,2))
def test_j0(self):
oz = special.j0(.1)
ozr = special.jn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_j1(self):
o1 = special.j1(.1)
o1r = special.jn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_jn(self):
jnnr = special.jn(1,.2)
assert_almost_equal(jnnr,0.099500832639235995,8)
def test_negv_jv(self):
assert_almost_equal(special.jv(-3,2), -special.jv(3,2), 14)
def test_jv(self):
values = [[0, 0.1, 0.99750156206604002],
[2./3, 1e-8, 0.3239028506761532e-5],
[2./3, 1e-10, 0.1503423854873779e-6],
[3.1, 1e-10, 0.1711956265409013e-32],
[2./3, 4.0, -0.2325440850267039],
]
for i, (v, x, y) in enumerate(values):
yc = special.jv(v, x)
assert_almost_equal(yc, y, 8, err_msg='test #%d' % i)
def test_negv_jve(self):
assert_almost_equal(special.jve(-3,2), -special.jve(3,2), 14)
def test_jve(self):
jvexp = special.jve(1,.2)
assert_almost_equal(jvexp,0.099500832639235995,8)
jvexp1 = special.jve(1,.2+1j)
z = .2+1j
jvexpr = special.jv(1,z)*exp(-abs(z.imag))
assert_almost_equal(jvexp1,jvexpr,8)
def test_jn_zeros(self):
jn0 = special.jn_zeros(0,5)
jn1 = special.jn_zeros(1,5)
assert_array_almost_equal(jn0,array([2.4048255577,
5.5200781103,
8.6537279129,
11.7915344391,
14.9309177086]),4)
assert_array_almost_equal(jn1,array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),4)
jn102 = special.jn_zeros(102,5)
assert_allclose(jn102, array([110.89174935992040343,
117.83464175788308398,
123.70194191713507279,
129.02417238949092824,
134.00114761868422559]), rtol=1e-13)
jn301 = special.jn_zeros(301,5)
assert_allclose(jn301, array([313.59097866698830153,
323.21549776096288280,
331.22338738656748796,
338.39676338872084500,
345.03284233056064157]), rtol=1e-13)
def test_jn_zeros_slow(self):
jn0 = special.jn_zeros(0, 300)
assert_allclose(jn0[260-1], 816.02884495068867280, rtol=1e-13)
assert_allclose(jn0[280-1], 878.86068707124422606, rtol=1e-13)
assert_allclose(jn0[300-1], 941.69253065317954064, rtol=1e-13)
jn10 = special.jn_zeros(10, 300)
assert_allclose(jn10[260-1], 831.67668514305631151, rtol=1e-13)
assert_allclose(jn10[280-1], 894.51275095371316931, rtol=1e-13)
assert_allclose(jn10[300-1], 957.34826370866539775, rtol=1e-13)
jn3010 = special.jn_zeros(3010,5)
assert_allclose(jn3010, array([3036.86590780927,
3057.06598526482,
3073.66360690272,
3088.37736494778,
3101.86438139042]), rtol=1e-8)
def test_jnjnp_zeros(self):
jn = special.jn
def jnp(n, x):
return (jn(n-1,x) - jn(n+1,x))/2
for nt in range(1, 30):
z, n, m, t = special.jnjnp_zeros(nt)
for zz, nn, tt in zip(z, n, t):
if tt == 0:
assert_allclose(jn(nn, zz), 0, atol=1e-6)
elif tt == 1:
assert_allclose(jnp(nn, zz), 0, atol=1e-6)
else:
raise AssertionError("Invalid t return for nt=%d" % nt)
def test_jnp_zeros(self):
jnp = special.jnp_zeros(1,5)
assert_array_almost_equal(jnp, array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),4)
jnp = special.jnp_zeros(443,5)
assert_allclose(special.jvp(443, jnp), 0, atol=1e-15)
def test_jnyn_zeros(self):
jnz = special.jnyn_zeros(1,5)
assert_array_almost_equal(jnz,(array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),
array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),
array([2.19714,
5.42968,
8.59601,
11.74915,
14.89744]),
array([3.68302,
6.94150,
10.12340,
13.28576,
16.44006])),5)
def test_jvp(self):
jvprim = special.jvp(2,2)
jv0 = (special.jv(1,2)-special.jv(3,2))/2
assert_almost_equal(jvprim,jv0,10)
def test_k0(self):
ozk = special.k0(.1)
ozkr = special.kv(0,.1)
assert_almost_equal(ozk,ozkr,8)
def test_k0e(self):
ozke = special.k0e(.1)
ozker = special.kve(0,.1)
assert_almost_equal(ozke,ozker,8)
def test_k1(self):
o1k = special.k1(.1)
o1kr = special.kv(1,.1)
assert_almost_equal(o1k,o1kr,8)
def test_k1e(self):
o1ke = special.k1e(.1)
o1ker = special.kve(1,.1)
assert_almost_equal(o1ke,o1ker,8)
def test_jacobi(self):
a = 5*np.random.random() - 1
b = 5*np.random.random() - 1
P0 = special.jacobi(0,a,b)
P1 = special.jacobi(1,a,b)
P2 = special.jacobi(2,a,b)
P3 = special.jacobi(3,a,b)
assert_array_almost_equal(P0.c,[1],13)
assert_array_almost_equal(P1.c,array([a+b+2,a-b])/2.0,13)
cp = [(a+b+3)*(a+b+4), 4*(a+b+3)*(a+2), 4*(a+1)*(a+2)]
p2c = [cp[0],cp[1]-2*cp[0],cp[2]-cp[1]+cp[0]]
assert_array_almost_equal(P2.c,array(p2c)/8.0,13)
cp = [(a+b+4)*(a+b+5)*(a+b+6),6*(a+b+4)*(a+b+5)*(a+3),
12*(a+b+4)*(a+2)*(a+3),8*(a+1)*(a+2)*(a+3)]
p3c = [cp[0],cp[1]-3*cp[0],cp[2]-2*cp[1]+3*cp[0],cp[3]-cp[2]+cp[1]-cp[0]]
assert_array_almost_equal(P3.c,array(p3c)/48.0,13)
def test_kn(self):
kn1 = special.kn(0,.2)
assert_almost_equal(kn1,1.7527038555281462,8)
def test_negv_kv(self):
assert_equal(special.kv(3.0, 2.2), special.kv(-3.0, 2.2))
def test_kv0(self):
kv0 = special.kv(0,.2)
assert_almost_equal(kv0, 1.7527038555281462, 10)
def test_kv1(self):
kv1 = special.kv(1,0.2)
assert_almost_equal(kv1, 4.775972543220472, 10)
def test_kv2(self):
kv2 = special.kv(2,0.2)
assert_almost_equal(kv2, 49.51242928773287, 10)
def test_kn_largeorder(self):
assert_allclose(special.kn(32, 1), 1.7516596664574289e+43)
def test_kv_largearg(self):
assert_equal(special.kv(0, 1e19), 0)
def test_negv_kve(self):
assert_equal(special.kve(3.0, 2.2), special.kve(-3.0, 2.2))
def test_kve(self):
kve1 = special.kve(0,.2)
kv1 = special.kv(0,.2)*exp(.2)
assert_almost_equal(kve1,kv1,8)
z = .2+1j
kve2 = special.kve(0,z)
kv2 = special.kv(0,z)*exp(z)
assert_almost_equal(kve2,kv2,8)
def test_kvp_v0n1(self):
z = 2.2
assert_almost_equal(-special.kv(1,z), special.kvp(0,z, n=1), 10)
def test_kvp_n1(self):
v = 3.
z = 2.2
xc = -special.kv(v+1,z) + v/z*special.kv(v,z)
x = special.kvp(v,z, n=1)
        assert_almost_equal(xc, x, 10)  # first-derivative recurrence: K_v'(z) = -K_{v+1}(z) + (v/z)*K_v(z)
def test_kvp_n2(self):
v = 3.
z = 2.2
xc = (z**2+v**2-v)/z**2 * special.kv(v,z) + special.kv(v+1,z)/z
x = special.kvp(v, z, n=2)
assert_almost_equal(xc, x, 10)
def test_y0(self):
oz = special.y0(.1)
ozr = special.yn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_y1(self):
o1 = special.y1(.1)
o1r = special.yn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_y0_zeros(self):
yo,ypo = special.y0_zeros(2)
zo,zpo = special.y0_zeros(2,complex=1)
all = r_[yo,zo]
allval = r_[ypo,zpo]
assert_array_almost_equal(abs(special.yv(0.0,all)),0.0,11)
assert_array_almost_equal(abs(special.yv(1,all)-allval),0.0,11)
def test_y1_zeros(self):
y1 = special.y1_zeros(1)
assert_array_almost_equal(y1,(array([2.19714]),array([0.52079])),5)
def test_y1p_zeros(self):
y1p = special.y1p_zeros(1,complex=1)
assert_array_almost_equal(y1p,(array([0.5768+0.904j]), array([-0.7635+0.5892j])),3)
def test_yn_zeros(self):
an = special.yn_zeros(4,2)
assert_array_almost_equal(an,array([5.64515, 9.36162]),5)
an = special.yn_zeros(443,5)
assert_allclose(an, [450.13573091578090314, 463.05692376675001542,
472.80651546418663566, 481.27353184725625838,
488.98055964441374646], rtol=1e-15)
def test_ynp_zeros(self):
ao = special.ynp_zeros(0,2)
assert_array_almost_equal(ao,array([2.19714133, 5.42968104]),6)
ao = special.ynp_zeros(43,5)
assert_allclose(special.yvp(43, ao), 0, atol=1e-15)
ao = special.ynp_zeros(443,5)
assert_allclose(special.yvp(443, ao), 0, atol=1e-9)
def test_ynp_zeros_large_order(self):
ao = special.ynp_zeros(443,5)
assert_allclose(special.yvp(443, ao), 0, atol=1e-14)
def test_yn(self):
yn2n = special.yn(1,.2)
assert_almost_equal(yn2n,-3.3238249881118471,8)
def test_negv_yv(self):
assert_almost_equal(special.yv(-3,2), -special.yv(3,2), 14)
def test_yv(self):
yv2 = special.yv(1,.2)
assert_almost_equal(yv2,-3.3238249881118471,8)
def test_negv_yve(self):
assert_almost_equal(special.yve(-3,2), -special.yve(3,2), 14)
def test_yve(self):
yve2 = special.yve(1,.2)
assert_almost_equal(yve2,-3.3238249881118471,8)
yve2r = special.yv(1,.2+1j)*exp(-1)
yve22 = special.yve(1,.2+1j)
assert_almost_equal(yve22,yve2r,8)
def test_yvp(self):
yvpr = (special.yv(1,.2) - special.yv(3,.2))/2.0
yvp1 = special.yvp(2,.2)
assert_array_almost_equal(yvp1,yvpr,10)
def _cephes_vs_amos_points(self):
"""Yield points at which to compare Cephes implementation to AMOS"""
# check several points, including large-amplitude ones
v = [-120, -100.3, -20., -10., -1., -.5, 0., 1., 12.49, 120., 301]
z = [-1300, -11, -10, -1, 1., 10., 200.5, 401., 600.5, 700.6, 1300,
10003]
yield from itertools.product(v, z)
# check half-integers; these are problematic points at least
# for cephes/iv
yield from itertools.product(0.5 + arange(-60, 60), [3.5])
def check_cephes_vs_amos(self, f1, f2, rtol=1e-11, atol=0, skip=None):
for v, z in self._cephes_vs_amos_points():
if skip is not None and skip(v, z):
continue
c1, c2, c3 = f1(v, z), f1(v,z+0j), f2(int(v), z)
if np.isinf(c1):
assert_(np.abs(c2) >= 1e300, (v, z))
elif np.isnan(c1):
assert_(c2.imag != 0, (v, z))
else:
assert_allclose(c1, c2, err_msg=(v, z), rtol=rtol, atol=atol)
if v == int(v):
assert_allclose(c3, c2, err_msg=(v, z),
rtol=rtol, atol=atol)
@pytest.mark.xfail(platform.machine() == 'ppc64le',
reason="fails on ppc64le")
def test_jv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.jv, special.jn, rtol=1e-10, atol=1e-305)
@pytest.mark.xfail(platform.machine() == 'ppc64le',
reason="fails on ppc64le")
def test_yv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305)
def test_yv_cephes_vs_amos_only_small_orders(self):
skipper = lambda v, z: (abs(v) > 50)
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305, skip=skipper)
def test_iv_cephes_vs_amos(self):
with np.errstate(all='ignore'):
self.check_cephes_vs_amos(special.iv, special.iv, rtol=5e-9, atol=1e-305)
@pytest.mark.slow
def test_iv_cephes_vs_amos_mass_test(self):
N = 1000000
np.random.seed(1)
v = np.random.pareto(0.5, N) * (-1)**np.random.randint(2, size=N)
x = np.random.pareto(0.2, N) * (-1)**np.random.randint(2, size=N)
imsk = (np.random.randint(8, size=N) == 0)
v[imsk] = v[imsk].astype(int)
with np.errstate(all='ignore'):
c1 = special.iv(v, x)
c2 = special.iv(v, x+0j)
# deal with differences in the inf and zero cutoffs
c1[abs(c1) > 1e300] = np.inf
c2[abs(c2) > 1e300] = np.inf
c1[abs(c1) < 1e-300] = 0
c2[abs(c2) < 1e-300] = 0
dc = abs(c1/c2 - 1)
dc[np.isnan(dc)] = 0
k = np.argmax(dc)
# Most error apparently comes from AMOS and not our implementation;
# there are some problems near integer orders there
assert_(dc[k] < 2e-7, (v[k], x[k], special.iv(v[k], x[k]), special.iv(v[k], x[k]+0j)))
def test_kv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.kv, special.kn, rtol=1e-9, atol=1e-305)
self.check_cephes_vs_amos(special.kv, special.kv, rtol=1e-9, atol=1e-305)
def test_ticket_623(self):
assert_allclose(special.jv(3, 4), 0.43017147387562193)
assert_allclose(special.jv(301, 1300), 0.0183487151115275)
assert_allclose(special.jv(301, 1296.0682), -0.0224174325312048)
def test_ticket_853(self):
"""Negative-order Bessels"""
# cephes
assert_allclose(special.jv(-1, 1), -0.4400505857449335)
assert_allclose(special.jv(-2, 1), 0.1149034849319005)
assert_allclose(special.yv(-1, 1), 0.7812128213002887)
assert_allclose(special.yv(-2, 1), -1.650682606816255)
assert_allclose(special.iv(-1, 1), 0.5651591039924851)
assert_allclose(special.iv(-2, 1), 0.1357476697670383)
assert_allclose(special.kv(-1, 1), 0.6019072301972347)
assert_allclose(special.kv(-2, 1), 1.624838898635178)
assert_allclose(special.jv(-0.5, 1), 0.43109886801837607952)
assert_allclose(special.yv(-0.5, 1), 0.6713967071418031)
assert_allclose(special.iv(-0.5, 1), 1.231200214592967)
assert_allclose(special.kv(-0.5, 1), 0.4610685044478945)
# amos
assert_allclose(special.jv(-1, 1+0j), -0.4400505857449335)
assert_allclose(special.jv(-2, 1+0j), 0.1149034849319005)
assert_allclose(special.yv(-1, 1+0j), 0.7812128213002887)
assert_allclose(special.yv(-2, 1+0j), -1.650682606816255)
assert_allclose(special.iv(-1, 1+0j), 0.5651591039924851)
assert_allclose(special.iv(-2, 1+0j), 0.1357476697670383)
assert_allclose(special.kv(-1, 1+0j), 0.6019072301972347)
assert_allclose(special.kv(-2, 1+0j), 1.624838898635178)
assert_allclose(special.jv(-0.5, 1+0j), 0.43109886801837607952)
assert_allclose(special.jv(-0.5, 1+1j), 0.2628946385649065-0.827050182040562j)
assert_allclose(special.yv(-0.5, 1+0j), 0.6713967071418031)
assert_allclose(special.yv(-0.5, 1+1j), 0.967901282890131+0.0602046062142816j)
assert_allclose(special.iv(-0.5, 1+0j), 1.231200214592967)
assert_allclose(special.iv(-0.5, 1+1j), 0.77070737376928+0.39891821043561j)
assert_allclose(special.kv(-0.5, 1+0j), 0.4610685044478945)
assert_allclose(special.kv(-0.5, 1+1j), 0.06868578341999-0.38157825981268j)
assert_allclose(special.jve(-0.5,1+0.3j), special.jv(-0.5, 1+0.3j)*exp(-0.3))
assert_allclose(special.yve(-0.5,1+0.3j), special.yv(-0.5, 1+0.3j)*exp(-0.3))
assert_allclose(special.ive(-0.5,0.3+1j), special.iv(-0.5, 0.3+1j)*exp(-0.3))
assert_allclose(special.kve(-0.5,0.3+1j), special.kv(-0.5, 0.3+1j)*exp(0.3+1j))
assert_allclose(special.hankel1(-0.5, 1+1j), special.jv(-0.5, 1+1j) + 1j*special.yv(-0.5,1+1j))
assert_allclose(special.hankel2(-0.5, 1+1j), special.jv(-0.5, 1+1j) - 1j*special.yv(-0.5,1+1j))
def test_ticket_854(self):
"""Real-valued Bessel domains"""
assert_(isnan(special.jv(0.5, -1)))
assert_(isnan(special.iv(0.5, -1)))
assert_(isnan(special.yv(0.5, -1)))
assert_(isnan(special.yv(1, -1)))
assert_(isnan(special.kv(0.5, -1)))
assert_(isnan(special.kv(1, -1)))
assert_(isnan(special.jve(0.5, -1)))
assert_(isnan(special.ive(0.5, -1)))
assert_(isnan(special.yve(0.5, -1)))
assert_(isnan(special.yve(1, -1)))
assert_(isnan(special.kve(0.5, -1)))
assert_(isnan(special.kve(1, -1)))
assert_(isnan(special.airye(-1)[0:2]).all(), special.airye(-1))
assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1))
def test_gh_7909(self):
assert_(special.kv(1.5, 0) == np.inf)
assert_(special.kve(1.5, 0) == np.inf)
def test_ticket_503(self):
"""Real-valued Bessel I overflow"""
assert_allclose(special.iv(1, 700), 1.528500390233901e302)
assert_allclose(special.iv(1000, 1120), 1.301564549405821e301)
def test_iv_hyperg_poles(self):
assert_allclose(special.iv(-0.5, 1), 1.231200214592967)
def iv_series(self, v, z, n=200):
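# This helper evaluates the ascending series
#     I_v(z) = sum_{k>=0} (z/2)**(v+2k) / (k! * Gamma(v+k+1))
# in log space to avoid premature overflow; `err` bounds the rounding
# error of the partial sums plus the truncation remainder.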
k = arange(0, n).astype(float_)
r = (v+2*k)*log(.5*z) - special.gammaln(k+1) - special.gammaln(v+k+1)
r[isnan(r)] = inf
r = exp(r)
err = abs(r).max() * finfo(float_).eps * n + abs(r[-1])*10
return r.sum(), err
def test_i0_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(0, z)
assert_allclose(special.i0(z), value, atol=err, err_msg=z)
def test_i1_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(1, z)
assert_allclose(special.i1(z), value, atol=err, err_msg=z)
def test_iv_series(self):
for v in [-20., -10., -1., 0., 1., 12.49, 120.]:
for z in [1., 10., 200.5, -1+2j]:
value, err = self.iv_series(v, z)
assert_allclose(special.iv(v, z), value, atol=err, err_msg=(v, z))
def test_i0(self):
values = [[0.0, 1.0],
[1e-10, 1.0],
[0.1, 0.9071009258],
[0.5, 0.6450352706],
[1.0, 0.4657596077],
[2.5, 0.2700464416],
[5.0, 0.1835408126],
[20.0, 0.0897803119],
]
for i, (x, v) in enumerate(values):
cv = special.i0(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i0e(self):
oize = special.i0e(.1)
oizer = special.ive(0,.1)
assert_almost_equal(oize,oizer,8)
def test_i1(self):
values = [[0.0, 0.0],
[1e-10, 0.4999999999500000e-10],
[0.1, 0.0452984468],
[0.5, 0.1564208032],
[1.0, 0.2079104154],
[5.0, 0.1639722669],
[20.0, 0.0875062222],
]
for i, (x, v) in enumerate(values):
cv = special.i1(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i1e(self):
oi1e = special.i1e(.1)
oi1er = special.ive(1,.1)
assert_almost_equal(oi1e,oi1er,8)
def test_iti0k0(self):
iti0 = array(special.iti0k0(5))
assert_array_almost_equal(iti0,array([31.848667776169801, 1.5673873907283657]),5)
def test_it2i0k0(self):
it2k = special.it2i0k0(.1)
assert_array_almost_equal(it2k,array([0.0012503906973464409, 3.3309450354686687]),6)
def test_iv(self):
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(iv1,0.90710092578230106,10)
def test_negv_ive(self):
assert_equal(special.ive(3,2), special.ive(-3,2))
def test_ive(self):
ive1 = special.ive(0,.1)
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(ive1,iv1,10)
def test_ivp0(self):
assert_almost_equal(special.iv(1,2), special.ivp(0,2), 10)
def test_ivp(self):
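# Checks the derivative recurrence I_v'(x) = (I_{v-1}(x) + I_{v+1}(x)) / 2 at v = 1.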
y = (special.iv(0,2) + special.iv(2,2))/2
x = special.ivp(1,2)
assert_almost_equal(x,y,10)
class TestLaguerre:
def test_laguerre(self):
lag0 = special.laguerre(0)
lag1 = special.laguerre(1)
lag2 = special.laguerre(2)
lag3 = special.laguerre(3)
lag4 = special.laguerre(4)
lag5 = special.laguerre(5)
assert_array_almost_equal(lag0.c,[1],13)
assert_array_almost_equal(lag1.c,[-1,1],13)
assert_array_almost_equal(lag2.c,array([1,-4,2])/2.0,13)
assert_array_almost_equal(lag3.c,array([-1,9,-18,6])/6.0,13)
assert_array_almost_equal(lag4.c,array([1,-16,72,-96,24])/24.0,13)
assert_array_almost_equal(lag5.c,array([-1,25,-200,600,-600,120])/120.0,13)
def test_genlaguerre(self):
k = 5*np.random.random() - 0.9
lag0 = special.genlaguerre(0,k)
lag1 = special.genlaguerre(1,k)
lag2 = special.genlaguerre(2,k)
lag3 = special.genlaguerre(3,k)
assert_equal(lag0.c,[1])
assert_equal(lag1.c,[-1,k+1])
assert_almost_equal(lag2.c,array([1,-2*(k+2),(k+1.)*(k+2.)])/2.0)
assert_almost_equal(lag3.c,array([-1,3*(k+3),-3*(k+2)*(k+3),(k+1)*(k+2)*(k+3)])/6.0)
# Base polynomials come from Abramowitz and Stegun
class TestLegendre:
def test_legendre(self):
leg0 = special.legendre(0)
leg1 = special.legendre(1)
leg2 = special.legendre(2)
leg3 = special.legendre(3)
leg4 = special.legendre(4)
leg5 = special.legendre(5)
assert_equal(leg0.c, [1])
assert_equal(leg1.c, [1,0])
assert_almost_equal(leg2.c, array([3,0,-1])/2.0, decimal=13)
assert_almost_equal(leg3.c, array([5,0,-3,0])/2.0)
assert_almost_equal(leg4.c, array([35,0,-30,0,3])/8.0)
assert_almost_equal(leg5.c, array([63,0,-70,0,15,0])/8.0)
class TestLambda:
def test_lmbda(self):
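# lmbda(v, x) returns (Lambda_0..v(x), Lambda'_0..v(x)), where
# Lambda_v(x) = Gamma(v+1) * (2/x)**v * J_v(x); the reference tuple below
# spells this out for orders 0 and 1.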
lam = special.lmbda(1,.1)
lamr = (array([special.jn(0,.1), 2*special.jn(1,.1)/.1]),
array([special.jvp(0,.1), -2*special.jv(1,.1)/.01 + 2*special.jvp(1,.1)/.1]))
assert_array_almost_equal(lam,lamr,8)
class TestLog1p:
def test_log1p(self):
l1p = (special.log1p(10), special.log1p(11), special.log1p(12))
l1prl = (log(11), log(12), log(13))
assert_array_almost_equal(l1p,l1prl,8)
def test_log1pmore(self):
l1pm = (special.log1p(1), special.log1p(1.1), special.log1p(1.2))
l1pmrl = (log(2),log(2.1),log(2.2))
assert_array_almost_equal(l1pm,l1pmrl,8)
class TestLegendreFunctions:
def test_clpmn(self):
z = 0.5+0.3j
clp = special.clpmn(2, 2, z, 3)
assert_array_almost_equal(clp,
(array([[1.0000, z, 0.5*(3*z*z-1)],
[0.0000, sqrt(z*z-1), 3*z*sqrt(z*z-1)],
[0.0000, 0.0000, 3*(z*z-1)]]),
array([[0.0000, 1.0000, 3*z],
[0.0000, z/sqrt(z*z-1), 3*(2*z*z-1)/sqrt(z*z-1)],
[0.0000, 0.0000, 6*z]])),
7)
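# Branch-cut conventions (as exercised by the next two tests): type=2 puts
# the cut on the real axis for |x| > 1, so approaching x in (-1, 1) from
# either side recovers the real Ferrers function lpmv; type=3 cuts on
# (-1, 1) instead, and the two sides pick up conjugate phases
# exp(-+0.5j*m*pi).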
def test_clpmn_close_to_real_2(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 2)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 2)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x),
special.lpmv(m, n, x)]),
7)
def test_clpmn_close_to_real_3(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 3)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 3)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x)*np.exp(-0.5j*m*np.pi),
special.lpmv(m, n, x)*np.exp(0.5j*m*np.pi)]),
7)
def test_clpmn_across_unit_circle(self):
eps = 1e-7
m = 1
n = 1
x = 1j
for type in [2, 3]:
assert_almost_equal(special.clpmn(m, n, x+1j*eps, type)[0][m, n],
special.clpmn(m, n, x-1j*eps, type)[0][m, n], 6)
def test_inf(self):
for z in (1, -1):
for n in range(4):
for m in range(1, n):
lp = special.clpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
lp = special.lpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
def test_deriv_clpmn(self):
# data inside and outside of the unit circle
zvals = [0.5+0.5j, -0.5+0.5j, -0.5-0.5j, 0.5-0.5j,
1+1j, -1+1j, -1-1j, 1-1j]
m = 2
n = 3
for type in [2, 3]:
for z in zvals:
for h in [1e-3, 1e-3j]:
approx_derivative = (special.clpmn(m, n, z+0.5*h, type)[0]
- special.clpmn(m, n, z-0.5*h, type)[0])/h
assert_allclose(special.clpmn(m, n, z, type)[1],
approx_derivative,
rtol=1e-4)
def test_lpmn(self):
lp = special.lpmn(0,2,.5)
assert_array_almost_equal(lp,(array([[1.00000,
0.50000,
-0.12500]]),
array([[0.00000,
1.00000,
1.50000]])),4)
def test_lpn(self):
lpnf = special.lpn(2,.5)
assert_array_almost_equal(lpnf,(array([1.00000,
0.50000,
-0.12500]),
array([0.00000,
1.00000,
1.50000])),4)
def test_lpmv(self):
lp = special.lpmv(0,2,.5)
assert_almost_equal(lp,-0.125,7)
lp = special.lpmv(0,40,.001)
assert_almost_equal(lp,0.1252678976534484,7)
# XXX: this is outside the domain of the current implementation,
# so ensure it returns a NaN rather than a wrong answer.
with np.errstate(all='ignore'):
lp = special.lpmv(-1,-1,.001)
assert_(lp != 0 or np.isnan(lp))
def test_lqmn(self):
lqmnf = special.lqmn(0,2,.5)
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqmnf[0][0],lqf[0],4)
assert_array_almost_equal(lqmnf[1][0],lqf[1],4)
def test_lqmn_gt1(self):
"""algorithm for real arguments changes at 1.0001
test against analytical result for m=2, n=1
"""
x0 = 1.0001
delta = 0.00002
for x in (x0-delta, x0+delta):
lq = special.lqmn(2, 1, x)[0][-1, -1]
expected = 2/(x*x-1)
assert_almost_equal(lq, expected)
def test_lqmn_shape(self):
a, b = special.lqmn(4, 4, 1.1)
assert_equal(a.shape, (5, 5))
assert_equal(b.shape, (5, 5))
a, b = special.lqmn(4, 0, 1.1)
assert_equal(a.shape, (5, 1))
assert_equal(b.shape, (5, 1))
def test_lqn(self):
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqf,(array([0.5493, -0.7253, -0.8187]),
array([1.3333, 1.216, -0.8427])),4)
class TestMathieu:
def test_mathieu_a(self):
pass
def test_mathieu_even_coef(self):
special.mathieu_even_coef(2,5)
# Q not defined; broken, and cannot figure out the proper reporting order
def test_mathieu_odd_coef(self):
# same problem as above
pass
class TestFresnelIntegral:
def test_modfresnelp(self):
pass
def test_modfresnelm(self):
pass
class TestOblCvSeq:
def test_obl_cv_seq(self):
obl = special.obl_cv_seq(0,3,1)
assert_array_almost_equal(obl,array([-0.348602,
1.393206,
5.486800,
11.492120]),5)
class TestParabolicCylinder:
def test_pbdn_seq(self):
pb = special.pbdn_seq(1,.1)
assert_array_almost_equal(pb,(array([0.9975,
0.0998]),
array([-0.0499,
0.9925])),4)
def test_pbdv(self):
special.pbdv(1,.2)
1/2*(.2)*special.pbdv(1,.2)[0] - special.pbdv(0,.2)[0]
def test_pbdv_seq(self):
pbn = special.pbdn_seq(1,.1)
pbv = special.pbdv_seq(1,.1)
assert_array_almost_equal(pbv,(real(pbn[0]),real(pbn[1])),4)
def test_pbdv_points(self):
# simple case
eta = np.linspace(-10, 10, 5)
z = 2**(eta/2)*np.sqrt(np.pi)/special.gamma(.5-.5*eta)
assert_allclose(special.pbdv(eta, 0.)[0], z, rtol=1e-14, atol=1e-14)
# some points
assert_allclose(special.pbdv(10.34, 20.44)[0], 1.3731383034455e-32, rtol=1e-12)
assert_allclose(special.pbdv(-9.53, 3.44)[0], 3.166735001119246e-8, rtol=1e-12)
def test_pbdv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbdv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbdv(eta, x + eps)[0] - special.pbdv(eta, x - eps)[0]) / eps / 2.
assert_allclose(p[1], dp, rtol=1e-6, atol=1e-6)
def test_pbvv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbvv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbvv(eta, x + eps)[0] - special.pbvv(eta, x - eps)[0]) / eps / 2.
assert_allclose(p[1], dp, rtol=1e-6, atol=1e-6)
class TestPolygamma:
# from Table 6.2 (pg. 271) of A&S
def test_polygamma(self):
poly2 = special.polygamma(2,1)
poly3 = special.polygamma(3,1)
assert_almost_equal(poly2,-2.4041138063,10)
assert_almost_equal(poly3,6.4939394023,10)
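# These values follow from polygamma(n, 1) = (-1)**(n+1) * n! * zeta(n+1):
# psi''(1) = -2*zeta(3) and psi'''(1) = 6*zeta(4) = pi**4/15.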
# Test polygamma(0, x) == psi(x)
x = [2, 3, 1.1e14]
assert_almost_equal(special.polygamma(0, x), special.psi(x))
# Test broadcasting
n = [0, 1, 2]
x = [0.5, 1.5, 2.5]
expected = [-1.9635100260214238, 0.93480220054467933,
-0.23620405164172739]
assert_almost_equal(special.polygamma(n, x), expected)
expected = np.row_stack([expected]*2)
assert_almost_equal(special.polygamma(n, np.row_stack([x]*2)),
expected)
assert_almost_equal(special.polygamma(np.row_stack([n]*2), x),
expected)
class TestProCvSeq:
def test_pro_cv_seq(self):
prol = special.pro_cv_seq(0,3,1)
assert_array_almost_equal(prol,array([0.319000,
2.593084,
6.533471,
12.514462]),5)
class TestPsi:
def test_psi(self):
ps = special.psi(1)
assert_almost_equal(ps,-0.57721566490153287,8)
class TestRadian:
def test_radian(self):
rad = special.radian(90,0,0)
assert_almost_equal(rad,pi/2.0,5)
def test_radianmore(self):
rad1 = special.radian(90,1,60)
assert_almost_equal(rad1,pi/2+0.0005816135199345904,5)
class TestRiccati:
def test_riccati_jn(self):
N, x = 2, 0.2
S = np.empty((N, N))
for n in range(N):
j = special.spherical_jn(n, x)
jp = special.spherical_jn(n, x, derivative=True)
S[0,n] = x*j
S[1,n] = x*jp + j
assert_array_almost_equal(S, special.riccati_jn(n, x), 8)
def test_riccati_yn(self):
N, x = 2, 0.2
C = np.empty((N, N))
for n in range(N):
y = special.spherical_yn(n, x)
yp = special.spherical_yn(n, x, derivative=True)
C[0,n] = x*y
C[1,n] = x*yp + y
assert_array_almost_equal(C, special.riccati_yn(n, x), 8)
class TestRound:
def test_round(self):
rnd = list(map(int,(special.round(10.1),special.round(10.4),special.round(10.5),special.round(10.6))))
# Note: According to the documentation, scipy.special.round is
# supposed to round to the nearest even number if the fractional
# part is exactly 0.5. On some platforms, this does not appear
# to work and thus this test may fail. However, this unit test is
# correctly written.
rndrl = (10,10,10,11)
assert_array_equal(rnd,rndrl)
def test_sph_harm():
# Tests derived from tables in
# https://en.wikipedia.org/wiki/Table_of_spherical_harmonics
sh = special.sph_harm
pi = np.pi
exp = np.exp
sqrt = np.sqrt
sin = np.sin
cos = np.cos
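# Note scipy's convention: sph_harm(m, n, theta, phi) takes the order m
# first, with theta the azimuthal and phi the polar angle, i.e. the angles
# are swapped relative to the common physics convention.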
assert_array_almost_equal(sh(0,0,0,0),
0.5/sqrt(pi))
assert_array_almost_equal(sh(-2,2,0.,pi/4),
0.25*sqrt(15./(2.*pi)) *
(sin(pi/4))**2.)
assert_array_almost_equal(sh(-2,2,0.,pi/2),
0.25*sqrt(15./(2.*pi)))
assert_array_almost_equal(sh(2,2,pi,pi/2),
0.25*sqrt(15/(2.*pi)) *
exp(0+2.*pi*1j)*sin(pi/2.)**2.)
assert_array_almost_equal(sh(2,4,pi/4.,pi/3.),
(3./8.)*sqrt(5./(2.*pi)) *
exp(0+2.*pi/4.*1j) *
sin(pi/3.)**2. *
(7.*cos(pi/3.)**2.-1))
assert_array_almost_equal(sh(4,4,pi/8.,pi/6.),
(3./16.)*sqrt(35./(2.*pi)) *
exp(0+4.*pi/8.*1j)*sin(pi/6.)**4.)
def test_sph_harm_ufunc_loop_selection():
# see https://github.com/scipy/scipy/issues/4895
dt = np.dtype(np.complex128)
assert_equal(special.sph_harm(0, 0, 0, 0).dtype, dt)
assert_equal(special.sph_harm([0], 0, 0, 0).dtype, dt)
assert_equal(special.sph_harm(0, [0], 0, 0).dtype, dt)
assert_equal(special.sph_harm(0, 0, [0], 0).dtype, dt)
assert_equal(special.sph_harm(0, 0, 0, [0]).dtype, dt)
assert_equal(special.sph_harm([0], [0], [0], [0]).dtype, dt)
class TestStruve:
def _series(self, v, z, n=100):
"""Compute Struve function & error estimate from its power series."""
k = arange(0, n)
r = (-1)**k * (.5*z)**(2*k+v+1)/special.gamma(k+1.5)/special.gamma(k+v+1.5)
err = abs(r).max() * finfo(float_).eps * n
return r.sum(), err
def test_vs_series(self):
"""Check Struve function versus its power series"""
for v in [-20, -10, -7.99, -3.4, -1, 0, 1, 3.4, 12.49, 16]:
for z in [1, 10, 19, 21, 30]:
value, err = self._series(v, z)
assert_allclose(special.struve(v, z), value, rtol=0, atol=err, err_msg=(v, z))
def test_some_values(self):
assert_allclose(special.struve(-7.99, 21), 0.0467547614113, rtol=1e-7)
assert_allclose(special.struve(-8.01, 21), 0.0398716951023, rtol=1e-8)
assert_allclose(special.struve(-3.0, 200), 0.0142134427432, rtol=1e-12)
assert_allclose(special.struve(-8.0, -41), 0.0192469727846, rtol=1e-11)
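# For integer v, the reflection formula H_v(-x) = (-1)**(v+1) * H_v(x)
# fixes the signs checked below.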
assert_equal(special.struve(-12, -41), -special.struve(-12, 41))
assert_equal(special.struve(+12, -41), -special.struve(+12, 41))
assert_equal(special.struve(-11, -41), +special.struve(-11, 41))
assert_equal(special.struve(+11, -41), +special.struve(+11, 41))
assert_(isnan(special.struve(-7.1, -1)))
assert_(isnan(special.struve(-10.1, -1)))
def test_regression_679(self):
"""Regression test for #679"""
assert_allclose(special.struve(-1.0, 20 - 1e-8), special.struve(-1.0, 20 + 1e-8))
assert_allclose(special.struve(-2.0, 20 - 1e-8), special.struve(-2.0, 20 + 1e-8))
assert_allclose(special.struve(-4.3, 20 - 1e-8), special.struve(-4.3, 20 + 1e-8))
def test_chi2_smalldf():
assert_almost_equal(special.chdtr(0.6,3), 0.957890536704110)
def test_ch2_inf():
assert_equal(special.chdtr(0.7,np.inf), 1.0)
def test_chi2c_smalldf():
assert_almost_equal(special.chdtrc(0.6,3), 1-0.957890536704110)
def test_chi2_inv_smalldf():
assert_almost_equal(special.chdtri(0.6,1-0.957890536704110), 3)
def test_agm_simple():
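# agm(a, b) iterates a, b <- (a + b)/2, sqrt(a*b) to the common limit of
# the two sequences; Gauss's constant is 1/agm(1, sqrt(2)).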
rtol = 1e-13
# Gauss's constant
assert_allclose(1/special.agm(1, np.sqrt(2)), 0.834626841674073186,
rtol=rtol)
# These values were computed using Wolfram Alpha, with the
# function ArithmeticGeometricMean[a, b].
agm13 = 1.863616783244897
agm15 = 2.604008190530940
agm35 = 3.936235503649555
assert_allclose(special.agm([[1], [3]], [1, 3, 5]),
[[1, agm13, agm15],
[agm13, 3, agm35]], rtol=rtol)
# Computed by the iteration formula using mpmath,
# with mpmath.mp.prec = 1000:
agm12 = 1.4567910310469068
assert_allclose(special.agm(1, 2), agm12, rtol=rtol)
assert_allclose(special.agm(2, 1), agm12, rtol=rtol)
assert_allclose(special.agm(-1, -2), -agm12, rtol=rtol)
assert_allclose(special.agm(24, 6), 13.458171481725614, rtol=rtol)
assert_allclose(special.agm(13, 123456789.5), 11111458.498599306,
rtol=rtol)
assert_allclose(special.agm(1e30, 1), 2.229223055945383e+28, rtol=rtol)
assert_allclose(special.agm(1e-22, 1), 0.030182566420169886, rtol=rtol)
assert_allclose(special.agm(1e150, 1e180), 2.229223055945383e+178,
rtol=rtol)
assert_allclose(special.agm(1e180, 1e-150), 2.0634722510162677e+177,
rtol=rtol)
assert_allclose(special.agm(1e-150, 1e-170), 3.3112619670463756e-152,
rtol=rtol)
fi = np.finfo(1.0)
assert_allclose(special.agm(fi.tiny, fi.max), 1.9892072050015473e+305,
rtol=rtol)
assert_allclose(special.agm(0.75*fi.max, fi.max), 1.564904312298045e+308,
rtol=rtol)
assert_allclose(special.agm(fi.tiny, 3*fi.tiny), 4.1466849866735005e-308,
rtol=rtol)
# zero, nan and inf cases.
assert_equal(special.agm(0, 0), 0)
assert_equal(special.agm(99, 0), 0)
assert_equal(special.agm(-1, 10), np.nan)
assert_equal(special.agm(0, np.inf), np.nan)
assert_equal(special.agm(np.inf, 0), np.nan)
assert_equal(special.agm(0, -np.inf), np.nan)
assert_equal(special.agm(-np.inf, 0), np.nan)
assert_equal(special.agm(np.inf, -np.inf), np.nan)
assert_equal(special.agm(-np.inf, np.inf), np.nan)
assert_equal(special.agm(1, np.nan), np.nan)
assert_equal(special.agm(np.nan, -1), np.nan)
assert_equal(special.agm(1, np.inf), np.inf)
assert_equal(special.agm(np.inf, 1), np.inf)
assert_equal(special.agm(-1, -np.inf), -np.inf)
assert_equal(special.agm(-np.inf, -1), -np.inf)
def test_legacy():
# Legacy behavior: truncating arguments to integers
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "floating point number truncated to an integer")
assert_equal(special.expn(1, 0.3), special.expn(1.8, 0.3))
assert_equal(special.nbdtrc(1, 2, 0.3), special.nbdtrc(1.8, 2.8, 0.3))
assert_equal(special.nbdtr(1, 2, 0.3), special.nbdtr(1.8, 2.8, 0.3))
assert_equal(special.nbdtri(1, 2, 0.3), special.nbdtri(1.8, 2.8, 0.3))
assert_equal(special.pdtri(1, 0.3), special.pdtri(1.8, 0.3))
assert_equal(special.kn(1, 0.3), special.kn(1.8, 0.3))
assert_equal(special.yn(1, 0.3), special.yn(1.8, 0.3))
assert_equal(special.smirnov(1, 0.3), special.smirnov(1.8, 0.3))
assert_equal(special.smirnovi(1, 0.3), special.smirnovi(1.8, 0.3))
@with_special_errors
def test_error_raising():
assert_raises(special.SpecialFunctionError, special.iv, 1, 1e99j)
def test_xlogy():
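# xlogy(x, y) = x*log(y) with the convention xlogy(0, y) = 0 for non-NaN y
# (the limiting value used in entropy-style expressions); xfunc mirrors that.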
def xfunc(x, y):
with np.errstate(invalid='ignore'):
if x == 0 and not np.isnan(y):
return x
else:
return x*np.log(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0)], dtype=float)
z2 = np.r_[z1, [(0, 1j), (1, 1j)]]
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlogy, w1, z1, rtol=1e-13, atol=1e-13)
w2 = np.vectorize(xfunc)(z2[:,0], z2[:,1])
assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13)
def test_xlog1py():
def xfunc(x, y):
with np.errstate(invalid='ignore'):
if x == 0 and not np.isnan(y):
return x
else:
return x * np.log1p(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0),
(1, 1e-30)], dtype=float)
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlog1py, w1, z1, rtol=1e-13, atol=1e-13)
def test_entr():
def xfunc(x):
if x < 0:
return -np.inf
else:
return -special.xlogy(x, x)
values = (0, 0.5, 1.0, np.inf)
signs = [-1, 1]
arr = []
for sgn, v in itertools.product(signs, values):
arr.append(sgn * v)
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z)
assert_func_equal(special.entr, w, z, rtol=1e-13, atol=1e-13)
def test_kl_div():
def xfunc(x, y):
if x < 0 or y < 0 or (y == 0 and x != 0):
# extension of natural domain to preserve convexity
return np.inf
elif np.isposinf(x) or np.isposinf(y):
# limits within the natural domain
return np.inf
elif x == 0:
return y
else:
return special.xlogy(x, x/y) - x + y
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.kl_div, w, z, rtol=1e-13, atol=1e-13)
def test_rel_entr():
def xfunc(x, y):
if x > 0 and y > 0:
return special.xlogy(x, x/y)
elif x == 0 and y >= 0:
return 0
else:
return np.inf
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.rel_entr, w, z, rtol=1e-13, atol=1e-13)
def test_huber():
assert_equal(special.huber(-1, 1.5), np.inf)
assert_allclose(special.huber(2, 1.5), 0.5 * np.square(1.5))
assert_allclose(special.huber(2, 2.5), 2 * (2.5 - 0.5 * 2))
def xfunc(delta, r):
if delta < 0:
return np.inf
elif np.abs(r) < delta:
return 0.5 * np.square(r)
else:
return delta * (np.abs(r) - 0.5 * delta)
z = np.random.randn(10, 2)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.huber, w, z, rtol=1e-13, atol=1e-13)
def test_pseudo_huber():
def xfunc(delta, r):
if delta < 0:
return np.inf
elif (not delta) or (not r):
return 0
else:
return delta**2 * (np.sqrt(1 + (r/delta)**2) - 1)
z = np.array(np.random.randn(10, 2).tolist() + [[0, 0.5], [0.5, 0]])
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.pseudo_huber, w, z, rtol=1e-13, atol=1e-13)
|
WarrenWeckesser/scipy
|
scipy/special/tests/test_basic.py
|
Python
|
bsd-3-clause
| 132,780
|
[
"Elk"
] |
c6169fa82fe4bb1bfb89c0ce91747631e8a3a2752bb5a98fb2f671ee51ece360
|
"""Unittest for the input/output facilities class."""
# built-in modules
import unittest
import tempfile
import os
# third-party modules
import scipy
# path changes
# own modules
from medpy.io import load, save
from medpy.core.logger import Logger
# information
__author__ = "Oskar Maier"
__version__ = "r0.2.2, 2012-05-25"
__email__ = "oskar.maier@googlemail.com"
__status__ = "Release"
__description__ = "Input/output facilities unittest."
# code
class TestIOFacilities(unittest.TestCase):
####
# Comprehensive list of image format endings
####
# The most important image formats for medical image processing
__important = ['.nii', '.nii.gz', '.hdr', '.img', '.img.gz', '.dcm', '.dicom', '.mhd', '.nrrd', '.mha']
# list of image formats ITK is theoretically able to load
__itk = ['.analyze', # failed saving
'.hdr',
'.img',
'.bmp',
'.dcm',
'.gdcm', # failed saving
'.dicom',
'.4x', # failed saving
'.5x', # failed saving
'.ge', # failed saving
'.ge4', # failed saving
'.ge4x', # failed saving
'.ge5', # failed saving
'.ge5x', # failed saving
'.gipl',
'.h5',
'.hdf5',
'.he5',
'.ipl', # failed saving
'.jpg',
'.jpeg',
'.lsm',
'.mha',
'.mhd',
'.pic',
'.png',
'.raw', # failed saving
'.vision', # failed saving
'.siemens', # failed saving
'.spr',
'.sdt', # failed saving
'.stimulate', # failed saving
'.tif',
'.tiff',
'.vtk',
'.bio', # failed saving
'.biorad', # failed saving
'.brains', # failed saving
'.brains2', # failed saving
'.brains2mask', # failed saving
'.bruker', # failed saving
'.bruker2d', # failed saving
'.bruker2dseq', # failed saving
'.mnc', # failed saving
'.mnc2', # failed saving
'.minc', # failed saving
'.minc2', # failed saving
'.nii',
'.nifti', # failed saving
'.nhdr',
'.nrrd',
'.philips', # failed saving
'.philipsreq', # failed saving
'.rec', # failed saving
'.par', # failed saving
'.recpar', # failed saving
'.vox', # failed saving
'.voxbo', # failed saving
'.voxbocub'] # failed saving
##########
# Combinations to avoid due to technical problems, dim->file ending pairs
#########
__avoid = {} # e.g. {4: ('.dcm', '.dicom')}
def test_SaveLoad(self):
"""
The basic essence of this test is to check whether any one image format in any
one dimension can be saved and read, as this is the only base requirement for
using medpy.
Additionally checks the basic expected behaviour of the load and save
functionality.
Since this alone usually does not reveal much, this implementation also allows
setting a switch (verbose) which causes the test to print a comprehensive
overview of which image formats, with how many dimensions and which pixel data
types, can be read and written.
"""
####
# VERBOSE SETTINGS
# The following are three variables that can be used to print some nicely
# formatted additional output. When one of them is set to True, this unittest
# should be run stand-alone.
####
# Print a list of supported image types, dimensions and pixel data types
supported = True
# Print a list of image types that were tested but are not supported
notsupported = False
# Print a list of image type, dimensions and pixel data types configurations,
# that seem to work but failed the consistency tests. These should be handled
# with special care, as they might be the source of errors.
inconsistent = False
####
# OTHER SETTINGS
####
# debug settings
logger = Logger.getInstance()
#logger.setLevel(logging.DEBUG)
# run test either for most important formats or for all
#__suffixes = self.__important # (choice 1)
__suffixes = self.__important + self.__itk # (choice 2)
# dimensions and dtypes to check
__suffixes = list(set(__suffixes))
__ndims = [1, 2, 3, 4, 5]
__dtypes = [scipy.bool_,
scipy.int8, scipy.int16, scipy.int32, scipy.int64,
scipy.uint8, scipy.uint16, scipy.uint32, scipy.uint64,
scipy.float32, scipy.float64,
scipy.complex64, scipy.complex128]
# prepare struct to save settings that passed the test
valid_types = dict.fromkeys(__suffixes)
for k1 in valid_types:
valid_types[k1] = dict.fromkeys(__ndims)
for k2 in valid_types[k1]:
valid_types[k1][k2] = []
# prepare struct to save settings that did not
unsupported_type = dict.fromkeys(__suffixes)
for k1 in unsupported_type:
unsupported_type[k1] = dict.fromkeys(__ndims)
for k2 in unsupported_type[k1]:
unsupported_type[k1][k2] = dict.fromkeys(__dtypes)
# prepare struct to save settings that did not pass the data integrity test
invalid_types = dict.fromkeys(__suffixes)
for k1 in invalid_types:
invalid_types[k1] = dict.fromkeys(__ndims)
for k2 in invalid_types[k1]:
invalid_types[k1][k2] = dict.fromkeys(__dtypes)
# create artificial images, save them, load them again and compare them
path = tempfile.mkdtemp()
try:
for ndim in __ndims:
logger.debug('Testing for dimension {}...'.format(ndim))
arr_base = scipy.random.randint(0, 10, list(range(10, ndim + 10)))
for dtype in __dtypes:
arr_save = arr_base.astype(dtype)
for suffix in __suffixes:
# do not run test, if in avoid array
if ndim in self.__avoid and suffix in self.__avoid[ndim]:
unsupported_type[suffix][ndim][dtype] = "Test skipped, as combination in the tests __avoid array."
continue
image = '{}/img{}'.format(path, suffix)
try:
# attempt to save the image
save(arr_save, image)
self.assertTrue(os.path.exists(image), 'Image of type {} with shape={}/dtype={} has been saved without exception, but the file does not exist.'.format(suffix, arr_save.shape, dtype))
# attempt to load the image
arr_load, header = load(image)
self.assertTrue(header, 'Image of type {} with shape={}/dtype={} has been loaded without exception, but no header has been supplied (got: {})'.format(suffix, arr_save.shape, dtype, header))
# check for data consistency
msg = self.__diff(arr_save, arr_load)
if msg:
invalid_types[suffix][ndim][dtype] = msg
#elif list == type(valid_types[suffix][ndim]):
else:
valid_types[suffix][ndim].append(dtype)
# remove image
if os.path.exists(image): os.remove(image)
except Exception as e: # clean up
try:
unsupported_type[suffix][ndim][dtype] = str(e.args)
except Exception as _:
unsupported_type[suffix][ndim][dtype] = e.message
if os.path.exists(image): os.remove(image)
except Exception:
if not os.listdir(path): os.rmdir(path)
else: logger.debug('Could not delete temporary directory {}. It is not empty.'.format(path))
raise
if supported:
print('\nsave() and load() support (at least) the following image configurations:')
print('type\tndim\tdtypes')
for suffix in valid_types:
for ndim, dtypes in list(valid_types[suffix].items()):
if list == type(dtypes) and not 0 == len(dtypes):
print(('{}\t{}D\t{}'.format(suffix, ndim, [str(x).split('.')[-1][:-2] for x in dtypes])))
if notsupported:
print('\nthe following configurations are not supported:')
print('type\tndim\tdtype\t\terror')
for suffix in unsupported_type:
for ndim in unsupported_type[suffix]:
for dtype, msg in list(unsupported_type[suffix][ndim].items()):
if msg:
print(('{}\t{}D\t{}\t\t{}'.format(suffix, ndim, str(dtype).split('.')[-1][:-2], msg)))
if inconsistent:
print('\nthe following configurations show inconsistent saving and loading behaviour:')
print('type\tndim\tdtype\t\terror')
for suffix in invalid_types:
for ndim in invalid_types[suffix]:
for dtype, msg in list(invalid_types[suffix][ndim].items()):
if msg:
print(('{}\t{}D\t{}\t\t{}'.format(suffix, ndim, str(dtype).split('.')[-1][:-2], msg)))
def __diff(self, arr1, arr2):
"""
Returns an error message if the two supplied arrays differ, otherwise false.
"""
if not arr1.ndim == arr2.ndim:
return 'ndim differs ({} to {})'.format(arr1.ndim, arr2.ndim)
elif not self.__is_lossless(arr1.dtype.type, arr2.dtype.type):
return 'loss of data due to conversion from {} to {}'.format(arr1.dtype.type, arr2.dtype.type)
elif not arr1.shape == arr2.shape:
return 'shapes differ ({} to {})'.format(arr1.shape, arr2.shape)
elif not (arr1 == arr2).all():
return 'contents differ'
else: return False
def __is_lossless(self, _from, _to):
"""
Returns True if a data conversion from dtype _from to _to is lossless, otherwise
False.
"""
__int_order = [scipy.int8, scipy.int16, scipy.int32, scipy.int64]
__uint_order = [scipy.uint8, scipy.int16, scipy.uint16, scipy.int32, scipy.uint32, scipy.int64, scipy.uint64]
__float_order = [scipy.float32, scipy.float64, scipy.float128]
__complex_order = [scipy.complex64, scipy.complex128, scipy.complex256]
__bool_order = [scipy.bool_, scipy.int8, scipy.uint8, scipy.int16, scipy.uint16, scipy.int32, scipy.uint32, scipy.int64, scipy.uint64]
__orders = [__int_order, __uint_order, __float_order, __complex_order, __bool_order]
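# Note: the chains above deliberately interleave signed types into the
# unsigned chain (e.g. uint8 -> int16 is lossless, since int16 holds every
# uint8 value). Order matters: __int_order is checked first, so a lossy
# int16 -> uint16 conversion is rejected there before the interleaved
# __uint_order chain, which also contains int16, could report it as valid.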
for order in __orders:
if _from in order:
if _to in order[order.index(_from):]: return True
else: return False
return False
if __name__ == '__main__':
unittest.main()
|
loli/medpy
|
tests/io_/loadsave.py
|
Python
|
gpl-3.0
| 11,899
|
[
"VTK"
] |
2fee0024b34183b0b038aaff4d41fb17153ae47997ac1f48da34acadcadc2679
|
#!/usr/bin/python
"""
# Created on Aug 12, 2016
#
# @author: Gaurav Rastogi (grastogi@avinetworks.com) GitHub ID: grastogi23
#
# module_check: supported
#
# Copyright: (c) 2016 Gaurav Rastogi, <grastogi@avinetworks.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
"""
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_gslbservice_patch_member
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Avi API Module
description:
- This module can be used for calling any resources defined in Avi REST API. U(https://avinetworks.com/)
- This module is useful for invoking HTTP Patch methods and accessing resources that do not have a REST object associated with them.
version_added: 2.5
requirements: [ avisdk ]
options:
data:
description:
- HTTP body of GSLB Service Member in YAML or JSON format.
params:
description:
- Query parameters passed to the HTTP API.
name:
description:
- Name of the GSLB Service
required: true
state:
description:
- The state that should be applied to the member. Member is
- identified using field member.ip.addr.
default: present
choices: ["absent","present"]
extends_documentation_fragment:
- avi
'''
EXAMPLES = '''
- name: Patch GSLB Service to add a new member and group
avi_gslbservice_patch_member:
controller: "{{ controller }}"
username: "{{ username }}"
password: "{{ password }}"
name: gs-3
api_version: 17.2.1
data:
group:
name: newfoo
priority: 60
members:
- enabled: true
ip:
addr: 10.30.10.66
type: V4
ratio: 3
- name: Patch GSLB Service to delete an existing member
avi_gslbservice_patch_member:
controller: "{{ controller }}"
username: "{{ username }}"
password: "{{ password }}"
name: gs-3
state: absent
api_version: 17.2.1
data:
group:
name: newfoo
members:
- enabled: true
ip:
addr: 10.30.10.68
type: V4
ratio: 3
- name: Update priority of GSLB Service Pool
avi_gslbservice_patch_member:
controller: ""
username: ""
password: ""
name: gs-3
state: present
api_version: 17.2.1
data:
group:
name: newfoo
priority: 42
'''
RETURN = '''
obj:
description: Avi REST resource
returned: success, changed
type: dict
'''
import json
import time
from ansible.module_utils.basic import AnsibleModule
from copy import deepcopy
HAS_AVI = True
try:
from ansible.module_utils.network.avi.avi import (
avi_common_argument_spec, HAS_AVI)
from avi.sdk.avi_api import ApiSession
from avi.sdk.utils.ansible_utils import (
avi_obj_cmp, cleanup_absent_fields, ansible_return,
AviCheckModeResponse, AviCredentials)
except ImportError:
HAS_AVI = False
def delete_member(module, check_mode, api, tenant, tenant_uuid,
existing_obj, data, api_version):
members = data.get('group', {}).get('members', [])
patched_member_ids = set([m['ip']['addr'] for m in members if 'fqdn' not in m])
patched_member_fqdns = set([m['fqdn'] for m in members if 'fqdn' in m])
changed = False
rsp = None
if existing_obj and (patched_member_ids or patched_member_fqdns):
groups = [group for group in existing_obj.get('groups', [])
if group['name'] == data['group']['name']]
if groups:
changed = any(
m['ip']['addr'] in patched_member_ids
for m in groups[0].get('members', []) if 'fqdn' not in m)
changed = changed or any(
m['fqdn'] in patched_member_fqdns
for m in groups[0].get('members', []) if 'fqdn' in m)
if check_mode or not changed:
return changed, rsp
# should not come here if not found
group = groups[0]
new_members = []
for m in group.get('members', []):
if 'fqdn' in m:
if m['fqdn'] not in patched_member_fqdns:
new_members.append(m)
elif 'ip' in m:
if m['ip']['addr'] not in patched_member_ids:
new_members.append(m)
group['members'] = new_members
if not group['members']:
# Delete this group from the existing objects if it is empty.
# Controller also does not allow empty group.
existing_obj['groups'] = [
grp for grp in existing_obj.get('groups', []) if
grp['name'] != data['group']['name']]
# remove the members that are part of the list
# update the object
# added api version for AVI api call.
rsp = api.put('gslbservice/%s' % existing_obj['uuid'], data=existing_obj,
tenant=tenant, tenant_uuid=tenant_uuid, api_version=api_version)
return changed, rsp
def add_member(module, check_mode, api, tenant, tenant_uuid,
existing_obj, data, name, api_version):
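# Flow sketch: if the service does not exist, create it with the patched
# group; otherwise append the group if it is new, or merge the group's
# non-member fields and upsert its members, matching on fqdn when present
# and on ip.addr otherwise.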
rsp = None
if not existing_obj:
# create the object
changed = True
if check_mode:
rsp = AviCheckModeResponse(obj=None)
else:
# creates group with single member
req = {'name': name,
'groups': [data['group']]
}
# added api version for AVI api call.
rsp = api.post('gslbservice', data=req, tenant=tenant,
tenant_uuid=tenant_uuid, api_version=api_version)
else:
# found GSLB object
req = deepcopy(existing_obj)
if 'groups' not in req:
req['groups'] = []
groups = [group for group in req['groups']
if group['name'] == data['group']['name']]
if not groups:
# did not find the group
req['groups'].append(data['group'])
else:
# just update the existing group with members
group = groups[0]
group_info_wo_members = deepcopy(data['group'])
group_info_wo_members.pop('members', None)
group.update(group_info_wo_members)
if 'members' not in group:
group['members'] = []
new_members = []
for patch_member in data['group'].get('members', []):
found = False
for m in group['members']:
if 'fqdn' in patch_member and m.get('fqdn', '') == patch_member['fqdn']:
found = True
break
elif m['ip']['addr'] == patch_member['ip']['addr']:
found = True
break
if not found:
new_members.append(patch_member)
else:
m.update(patch_member)
# add any new members
group['members'].extend(new_members)
cleanup_absent_fields(req)
changed = not avi_obj_cmp(req, existing_obj)
if changed and not check_mode:
obj_path = '%s/%s' % ('gslbservice', existing_obj['uuid'])
# added api version for AVI api call.
rsp = api.put(obj_path, data=req, tenant=tenant,
tenant_uuid=tenant_uuid, api_version=api_version)
return changed, rsp
def main():
argument_specs = dict(
params=dict(type='dict'),
data=dict(type='dict'),
name=dict(type='str', required=True),
state=dict(default='present',
choices=['absent', 'present'])
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(argument_spec=argument_specs)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
api_creds = AviCredentials()
api_creds.update_from_ansible_module(module)
api = ApiSession.get_session(
api_creds.controller, api_creds.username, password=api_creds.password,
timeout=api_creds.timeout, tenant=api_creds.tenant,
tenant_uuid=api_creds.tenant_uuid, token=api_creds.token,
port=api_creds.port)
tenant = api_creds.tenant
tenant_uuid = api_creds.tenant_uuid
params = module.params.get('params', None)
data = module.params.get('data', None)
gparams = deepcopy(params) if params else {}
gparams.update({'include_refs': '', 'include_name': ''})
name = module.params.get('name', '')
state = module.params['state']
# Get the api version from module.
api_version = api_creds.api_version
"""
state: present
1. Check if the GSLB service is present
2. If not then create the GSLB service with the member
3. Check if the group exists
4. if not then create the group with the member
5. Check if the member is present
if not then add the member
state: absent
1. check if GSLB service is present if not then exit
2. check if group is present. if not then exit
3. check if member is present. if present then remove it.
"""
obj_type = 'gslbservice'
# Added api version to call
existing_obj = api.get_object_by_name(
obj_type, name, tenant=tenant, tenant_uuid=tenant_uuid,
params={'include_refs': '', 'include_name': ''}, api_version=api_version)
check_mode = module.check_mode
if state == 'absent':
# Added api version to call
changed, rsp = delete_member(module, check_mode, api, tenant,
tenant_uuid, existing_obj, data, api_version)
else:
# Added api version to call
changed, rsp = add_member(module, check_mode, api, tenant, tenant_uuid,
existing_obj, data, name, api_version)
if check_mode or not changed:
return module.exit_json(changed=changed, obj=existing_obj)
return ansible_return(module, rsp, changed, req=data)
if __name__ == '__main__':
main()
|
drmrd/ansible
|
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py
|
Python
|
gpl-3.0
| 10,405
|
[
"VisIt"
] |
1a685c0e1a8148931a45fd315192129743f69c892808d759d421b7fa058c86c0
|
'''
Copyright 2017, Fujitsu Network Communications, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
""" Selenium keywords for Generic Browser Actions """
import os
from urlparse import urlparse
from Framework.ClassUtils.WSelenium.browser_mgmt import BrowserManagement
from Actions.SeleniumActions.verify_actions import verify_actions
from Actions.SeleniumActions.elementlocator_actions import elementlocator_actions
import Framework.Utils as Utils
from Framework.Utils import selenium_Utils
from Framework.Utils import data_Utils
from Framework.Utils import xml_Utils
from Framework.Utils.testcase_Utils import pNote, pSubStep
from Framework.ClassUtils.json_utils_class import JsonUtils
from Framework.Utils.rest_Utils import remove_invalid_req_args
class browser_actions(object):
"""This is a class that deals with all 'browser' related functionality like
opening and closing a browser, maximizing a browser window, navigating to
a URL, resizing a browser window."""
def __init__(self, *args, **kwargs):
"""This is a constructor for the browser_actions class"""
self.resultfile = Utils.config_Utils.resultfile
self.datafile = Utils.config_Utils.datafile
self.logsdir = Utils.config_Utils.logsdir
self.filename = Utils.config_Utils.filename
self.logfile = Utils.config_Utils.logfile
self.jsonobj = JsonUtils()
# Browser object is the Selenium Utils for all the browser related operations
self.browser_object = BrowserManagement()
self.verify_obj = verify_actions()
self.elementlocator_obj = elementlocator_actions()
def browser_launch(self, system_name, browser_name="all", type="firefox",
url=None, ip=None, remote=None, element_config_file=None,
element_tag=None, headless_mode=None):
"""
The Keyword would launch a browser and Navigate to the url, if provided by the user.
--------------------------------------------------------------------------------------
This keyword does not validate the url provided by the user. Please use
navigate_to_url_with_verification instead of providing a url with this keyword if you
need to verify the navigation result.
--------------------------------------------------------------------------------------
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute is provided the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. ip = Specify this tag as a direct child of the <system> tag
This tag would contain information about the IP of the
remote machine on which you want your testcase to run
Eg: <ip>167.125.0.1</ip>
3. remote = Specify this tag as a direct child of the <system> tag
This tag when set to set, would use the IP above and
start up a browser on that machine. If this tag is set
to 'no', a browser would launch on your machine
Eg: <remote>yes</remote>
4. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
5. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
6. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
7. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/slenium_config.json
</element_config_file>
8. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information to
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
9. headless_mode = Run selenium test in headless mode
Used in system with no GUI component
The next 5 arguments are added for Selenium 3 with Firefox
Please use them inside the browser tag in system data file
binary = The absolute path of the browser executable
Eg: <binary>../../firefox/firefox</binary>
gecko_path = The absolute path of the geckodriver
geckodriver is mandatory if using Firefox version 47 or above
This also requires Selenium 3.5 or above
For more information please visit:
https://github.com/mozilla/geckodriver#selenium
Eg: <gecko_path>../../../geckodriver</gecko_path>
gecko_log = The absolute path for the geckodriver log to be saved
This file only gets generated if Firefox is launched with geckodriver
and a failure/error occurs
Default is the testcase log directory
proxy_ip = This <proxy_ip> tag refers to the ip of the proxy
server. When a proxy is required this tag has to set
Eg: <proxy_ip>xx.xxx.xx.xx</proxy_ip>
proxy_port = This <proxy_port> tag refers to the port of the
proxy server. When a proxy is required for
remote connection this tag has to set.
Eg: <proxy_port>yyyy</proxy_port>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. ip(str) = IP of the remote machine
6. remote(str) = 'yes' or 'no' to indicate whether you want to
connect to the given aboveIP
7. element_config_file (str) = location of the element configuration
file that contains all element
locators
8. element_tag (str) = particular element in the json file which
contains relevant information to that element
9. headless_mode(str) = Enable headless_mode
:Returns:
1. status(bool)= True / False.
2. output_dict(dict) = dictionary containing information about the
browser
"""
arguments = locals()
arguments.pop('self')
status = True
output_dict = {}
wdesc = "Opens browser instances"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
# Get optional argument from system data file if it is not specified in keyword arg
optional_arg_keys = ["ip", "remote", "headless_mode"]
optional_args = {}
for arg in optional_arg_keys:
if arguments.get(arg, None) is None:
optional_args[arg] = data_Utils.getSystemData(self.datafile, system_name, arg)
optional_args["webdriver_remote_url"] = optional_args["ip"]\
if str(optional_args["remote"]).strip().lower() == "yes" else False
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system", "name", system_name)
browser_list = []
# Create a list of browser
if system.findall("browser") is not None:
browser_list.extend(system.findall("browser"))
if system.find("browsers") is not None:
browser_list.extend(system.find("browsers").findall("browser"))
if not browser_list:
"No browser found in system: {}, please check datafile".format(system_name)
status = False
# Headless mode operation
enable_headless = Utils.data_Utils.get_object_from_datarepository("wt_enable_headless")
if str(optional_args["headless_mode"]).strip().lower() in ["yes", "y"] or enable_headless:
status = selenium_Utils.create_display()
if not status:
browser_list = []
else:
output_dict[system_name+"_headless"] = True
output_dict["headless_display"] = True
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
browser_optional_arg_keys = {"binary": None, "gecko_path": None, "proxy_ip": None,
"proxy_port": None, "gecko_log": None}
# Adding browser_optional_arg_keys to arguments to get corresponding values from datafile.
arguments.update(browser_optional_arg_keys)
browser_details = selenium_Utils.\
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
# Call utils to launch correct type of browser
# Need to pass the binary, gecko_path, proxy_ip, proxy_port, gecko_log
# if specified in the datafile
browser_optional_args = {}
for arg in browser_optional_arg_keys:
if browser_details.get(arg) is not None:
browser_optional_args[arg] = browser_details.get(arg)
browser_inst = self.browser_object.open_browser(
browser_details["type"], **browser_optional_args)
if browser_inst:
browser_fullname = "{0}_{1}".format(system_name,
browser_details["browser_name"])
output_dict[browser_fullname] = browser_inst
url = browser_details["url"]
if url is not None:
urlschema = urlparse(url)
if urlschema.scheme:
result = self.browser_object.go_to(url, browser_inst)
else:
result = False
pNote("Protocol scheme in your URL: \'{0}\' is missing, protocol could"
"be http/ftp/file".format(url), "error")
else:
result = True
else:
pNote("could not open browser on system={0}, name={1}".format
(system_name, browser_details["browser_name"]), "error")
result = False
status = status and result
else:
pNote("Cannot load correct browser detail in system {}, please check datafile".\
format(system_name))
Utils.testcase_Utils.report_substep_status(status)
return status, output_dict
def browser_maximize(self, system_name, type="firefox", browser_name="all"):
"""
This will maximize the browser window.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute is provided the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will get maximized"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.get_object_from_datarepository(
system_name + "_" + browser_details["browser_name"])
headless = Utils.data_Utils.get_object_from_datarepository(
system_name + "_headless")
if current_browser:
status &= self.browser_object.maximize_browser_window(current_browser, headless)
else:
pNote("Browser of system {0} and name {1} not found in the"
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status &= False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
return status
def browser_launch_and_maximize(self, system_name, browser_name="all", type="firefox",
url=None, ip=None, remote=None, element_config_file=None,
element_tag=None):
"""
This will launch a browser and maximize the browser window if it is set.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute is provided the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_the_system"/>
2. ip = Specify this tag as a direct child of the <system> tag
This tag would contain information about the IP of the
remote machine on which you want your testcase to run
Eg: <ip>167.125.0.1</ip>
3. remote = Specify this tag as a direct child of the <system> tag
This tag when set to set, would use the IP above and
start up a browser on that machine. If this tag is set
to 'no', a browser would launch on your machine
Eg: <remote>yes</remote>
4. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
5. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
6. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
7. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/slenium_config.json
</element_config_file>
8. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information to
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. ip(str) = IP of the remote machine
6. remote(str) = 'yes' or 'no' to indicate whether you want to
connect to the given aboveIP
7. element_config_file (str) = location of the element configuration
file that contains all element
locators
8. element_tag (str) = particular element in the json file which
contains relevant information to that element
:Returns:
1. status(bool)= True / False.
2. output_dict(dict) = dictionary containing information about the
browser
"""
wdesc = "Opens browser instances and maximizes them"
pNote(wdesc)
pSubStep(wdesc)
status, output_dict = self.browser_launch(system_name=system_name, type=type,
browser_name=browser_name, url=url, ip=ip,
remote=remote,
element_config_file=element_config_file,
element_tag=element_tag)
if status:
for current_browser in output_dict:
self.browser_object.maximize_browser_window(output_dict[current_browser])
return status, output_dict
def navigate_to_url(self, system_name, type="firefox", browser_name="all",
url=None, element_config_file=None, element_tag=None):
"""
This will navigate the browser tab to given URL.
-----------------------------------------------------------------------------
This keyword does not validate the url provided by the user. Please use
navigate_to_url_with_verification if you need to verify the navigation result.
------------------------------------------------------------------------------
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute is provided the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/slenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
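Putting the tags above together, a minimal datafile entry for this
keyword might look like the following (the system and browser names
here are illustrative):
Eg: <credentials>
<system name="name_of_thy_system">
<browser>
<type>chrome</type>
<browser_name>Unique_name_1</browser_name>
<url>https://www.google.com</url>
<element_config_file>../Config_files/selenium_config.json</element_config_file>
<element_tag>json_name_1</element_tag>
</browser>
</system>
</credentials>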
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
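# Snapshot the keyword arguments (minus self); datafile defaults are
# merged into this mapping for each browser below.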
status = True
wdesc = "The webpage would be directed to the given URL"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
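# <browser> tags may sit directly under <system> or be grouped inside a
# <browsers> wrapper; collect both. find("browsers") returns None when
# the wrapper is absent, hence the AttributeError guard below.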
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
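# No <browser> entries in the datafile: append a placeholder so the
# loop below still runs once, using only the arguments passed in.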
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
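# Merge datafile defaults for element_config_file and element_tag (the
# "ecf" and "et" in the helper name) into this browser's arguments.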
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
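# Launched browsers are looked up in the data repository under the
# key "<system_name>_<browser_name>".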
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.go_to(browser_details["url"],
current_browser)
else:
pNote("Browser of system {0} and name {1} not found in "
"the datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
return status
def navigate_to_url_with_verification(self, system_name, type="firefox", browser_name="all",
url=None, element_config_file=None, element_tag=None,
value_type=None, expected_value=None, locator_type=None,
locator=None):
"""
This will direct the webpage to the given URL and then verify whether
the navigation was successful.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
7. locator_type = This contains information about the type of
locator that you want to use. Can be 'xpath',
'id', 'css', 'link', 'tag', 'class', 'name'
8. locator = This contains the value of the locator. Something like
"form", "nav-tags", "//[dh./dhh[yby]"
9. expected_value = This <expected_value> tag is a child of the
<browser> tag in the data file. This tag would
contain the value you expect the browser to
have. This can be either a url, page title,
page source, or page name
Eg: <expected_value>http://www.google.com</expected_value>
10. value_type = This <value_type> tag is a child of the <browser>
tag in the data file. This tag would contain the
type of browser information that you want to verify.
It can either be current_url, title, name, or
page_source
Eg: <value_type>title</value_type>
USING LOCATOR_TYPE & LOCATOR, VALUE_TYPE & EXPECTED_VALUE
=========================================================
Please provide either the locator_type and locator, or the value_type and
expected_value, for the verification to be performed successfully.
Note: Even though current_url is an acceptable value_type, it is not recommended
that you use it, since it can result in a false positive. Please use it only if
you are sure that the verification would go through correctly.
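For example, a testcase step could verify by page title (the values
here are illustrative):
Eg: <argument name="value_type" value="title"/>
<argument name="expected_value" value="Google"/>
or by locating an element instead:
Eg: <argument name="locator_type" value="id"/>
<argument name="locator" value="form"/>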
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
7. locator_type (str) = type of the locator - xpath, id, etc.
8. locator (str) = locator by which the element should be located.
9. expected_value (str) = The expected value of the information
retrieved from the web page.
10. value_type(str) = Type of page information that you want to
verify: current_url, name, title, or
page_source
:Returns:
1. status(bool)= True / False.
"""
wdesc = "The webpage would be directed to the given URL and then whether the navigation " \
"was successful or not would be verified."
pNote(wdesc)
pSubStep(wdesc)
status = self.navigate_to_url(system_name=system_name, type=type, browser_name=browser_name,
url=url, element_config_file=element_config_file,
element_tag=element_tag)
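# Pick the verification strategy: page-property check when value_type/
# expected_value are supplied, element lookup when locator/locator_type
# are supplied; fail with a note if neither pair is complete.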
if all((value_type is not None, expected_value is not None)):
status = status and self.verify_obj.verify_page_by_property(system_name=system_name,
expected_value=expected_value,
value_type=value_type,
browser_name=browser_name,
element_config_file=element_config_file,
element_tag=element_tag)
elif all((locator is not None, locator_type is not None)):
status = status and self.elementlocator_obj.get_element(system_name=system_name,
locator_type=locator_type,
locator=locator,
element_tag=element_tag,
element_config_file=element_config_file,
browser_name=browser_name)[0]
else:
pNote("The navigation result could not be verified as enough information was not "
"provided. Either the locator and locator_type or the value_type and "
"expected_value should be given.", "error")
status = False
Utils.testcase_Utils.report_substep_status(status)
return status
def navigate_forward(self, system_name, type="firefox", browser_name="all"):
"""
This will take you forward in the browser history.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will navigate forward"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.go_forward(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in "
"the datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
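# save_screenshot_onerror captures a screenshot of the last handled
# browser when the status indicates a failure, to aid debugging.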
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def navigate_backward(self, system_name, type="firefox", browser_name="all"):
"""
This will take you backward in the browser history.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will navigate backward"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.go_back(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in "
"the datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def browser_refresh(self, system_name, type="firefox", browser_name="all"):
"""
This will refresh the browser window.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will be refreshed"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.reload_page(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in "
"the datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def browser_reload(self, system_name, type="firefox", browser_name="all"):
"""
This will hard-reload the browser window.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will be reloaded"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.hard_reload_page(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in "
"the datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def browser_close(self, system_name, type="firefox", browser_name="all"):
"""
This will close the browser window.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will be closed"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.close_browser(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
return status
def set_window_size(self, system_name, xsize=None, ysize=None,
type="firefox", browser_name="all",
element_config_file=None, element_tag=None):
"""
This will set the browser window to a particular size.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. xsize = This <xsize> tag is a child of the <browser> tag in the
data file. This tag would contain the desired width
of the window
Eg: <xsize>500</xsize>
6. ysize = This <ysize> tag is a child of the <browser> tag in the
data file. This tag would contain the desired height
of the window
Eg: <ysize>750</ysize>
7. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
8. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
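Following the same convention shown for element_tag above, xsize and
ysize may also be passed as testcase-step arguments instead of
datafile tags (the values here are illustrative):
Eg: <argument name="xsize" value="500"/>
<argument name="ysize" value="750"/>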
:Arguments:
1. system_name(str) = the system name.
2. xsize (int/str) = The desired window width
3. ysize (int/str) = The desired window height
4. type(str) = Type of browser: firefox, chrome, ie.
5. browser_name(str) = Unique name for this particular browser
6. url(str) = URL to which the browser should be directed
7. element_config_file (str) = location of the element configuration
file that contains all element
locators
8. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will be resized"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.set_window_size(int(browser_details["xsize"]), int(browser_details["ysize"]),
current_browser)
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def set_window_position(self, system_name, xpos=None, ypos=None,
type="firefox", browser_name="all",
element_config_file=None, element_tag=None):
"""
This will set the browser window to a particular position.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. xpos = This <xpos> tag is a child of the <browser> tag in the
data file. This tag would contain the information about
the x co-ordinate of the window
Eg: <xpos>500</xpos>
6. ypos = This <ypos> tag is a child of the <browser> tag in the
data file. This tag would contain the information about
the y co-ordinate of the window
Eg: <ypos>750</ypos>
7. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
8. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. xpos (int/str) = The x co-ordinate
3. ypos (int/str) = The y co-ordinate
4. type(str) = Type of browser: firefox, chrome, ie.
5. browser_name(str) = Unique name for this particular browser
6. url(str) = URL to which the browser should be directed
7. element_config_file (str) = location of the element configuration
file that contains all element
locators
8. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will be set to a new position"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
self.browser_object.set_window_position(int(browser_details["xpos"]),
int(browser_details["ypos"]),
current_browser)
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def open_a_new_tab(self, system_name, type="firefox", browser_name="all",
element_config_file=None, element_tag=None, url=None):
"""This will open a new tab.
DISCLAIMER - A new window will be opened for Firefox, as Selenium does
not support tabs in Firefox.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will open a new tab"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
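# Status is AND-ed across browsers here, so failing to open a tab in
# any one matched browser fails the whole substep.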
if current_browser:
status &= self.browser_object.open_tab(current_browser,
browser_details["url"],
browser_details["type"])
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status &= False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def switch_between_tabs(self, system_name, type="firefox",
browser_name="all", tab_number=None,
element_config_file=None, element_tag=None):
"""
This keyword will let you switch between all open tabs.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. tab_number = This <tab_number> tag is a child of the <browser>
tag in the data file. This tag would contain the
information about the tab number that you want to
switch to from the current tab
Eg: <tab_number>3</tab_number>
6. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
7. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. tab_number (int/str) = The tab number that you want to switch to.
3. type(str) = Type of browser: firefox, chrome, ie.
4. browser_name(str) = Unique name for this particular browser
5. url(str) = URL to which the browser should be directed
6. element_config_file (str) = location of the element configuration
file that contains all element
locators
7. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will switch between tabs"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
status = self.browser_object.\
switch_tab(current_browser,
browser_details["tab_number"],
browser_details["type"])
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def close_a_tab(self, system_name, type="firefox", browser_name="all",
tab_number=None, element_config_file=None,
element_tag=None):
"""
This keyword will let you close an open tab.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. tab_number = This <tab_number> tag is a child of the <browser>
tag in the data file. This tag would contain the
information about the tab number that you want to
close
Eg: <tab_number>3</tab_number>
6. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
7. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. tab_number (int/str) = The tab number that you want to close.
3. type(str) = Type of browser: firefox, chrome, ie.
4. browser_name(str) = Unique name for this particular browser
5. url(str) = URL to which the browser should be directed
6. element_config_file (str) = location of the element configuration
file that contains all element
locators
7. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will switch between tabs"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
status = self.browser_object.\
close_tab(current_browser,
browser_details["tab_number"],
browser_details["type"])
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def get_window_size(self, system_name, type="firefox", browser_name="all",
element_config_file=None, element_tag=None):
"""
This keyword will report the current window size.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will return current window size"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
width, height = self.browser_object.\
get_window_size(current_browser)
pNote("Window width: {0} and window height: {1}".format(width, height))
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def get_window_position(self, system_name, type="firefox",
browser_name="all", element_config_file=None,
element_tag=None):
"""
This keyword will report the current window position.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "The browser will return current window position"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
x, y = self.browser_object.\
get_window_position(current_browser)
pNote("Window X co-ordinate: {0} and window Y "
"co-ordinate: {1}".format(x, y))
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def save_screenshot(self, system_name, type="firefox", directory=None,
filename=None, browser_name="all",
element_config_file=None, element_tag=None):
"""
This keyword will save a screenshot of the current browser window.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. directory = This <directory> tag is a child of the <browser> tag
in the data file. This tag would contain the
information about the directory in which you want to
store the screenshot. If left empty, the screenshots
would be saved in the Logs directory
Eg: <directory>/home/user/screenshots</directory>
6. filename = This <filename> tag is a child of the <browser> tag in
the data file. This tag would contain the name of the
file that you want the screenshot to have. If left
empty, the screenshot file would be saved with the
name screenshot_*timestamp*
Eg: <filename>new_screenshot</filename>
7. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
8. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. directory (str) = The directory that would store the
screenshots.
3. filename (str) = Name of the screenshot file
4. type(str) = Type of browser: firefox, chrome, ie.
5. browser_name(str) = Unique name for this particular browser
6. url(str) = URL to which the browser should be directed
7. element_config_file (str) = location of the element configuration
file that contains all element
locators
8. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "A screenshot of the current browser window would be saved"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
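# Fall back to the framework's Logs directory (the parent of
# self.logsdir) when no explicit directory is supplied.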
if directory is not None:
status = self.browser_object.\
save_screenshot(current_browser,
browser_details["filename"],
browser_details["directory"])
else:
status = self.browser_object.\
save_screenshot(current_browser,
browser_details["filename"],
os.path.dirname(self.logsdir))
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def delete_cookies(self, system_name, type="firefox", browser_name="all",
element_config_file=None, element_tag=None):
"""
This keyword will delete all cookies of a browser instance.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
3. browser_name = This <browser_name> tag is a child of the
<browser> tag in the data file. Each browser
instance should have a unique name. This name can
be added here
Eg: <browser_name>Unique_name_1</browser_name>
4. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
5. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
../Config_files/selenium_config.json
</element_config_file>
6. element_tag = This element_tag refers to a particular element in
the json file which contains relevant information about
that element. If you want to use this one element
throughout the testcase for a particular browser,
you can include it in the data file. If this is not
the case, then you should create an argument tag
in the relevant testcase step and add the value
directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. browser_name(str) = Unique name for this particular browser
4. url(str) = URL to which the browser should be directed
5. element_config_file (str) = location of the element configuration
file that contains all element
locators
6. element_tag (str) = particular element in the json file which
contains relevant information about that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "All cookies of this browser instance would be deleted"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
if not browser_list:
browser_list.append(1)
browser_details = arguments
for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
status = self.browser_object.delete_all_cookies_in_browser(current_browser)
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
def delete_a_cookie(self, system_name, cookie_name, type="firefox",
browser_name="all", element_config_file=None,
element_tag=None):
"""
This keyword will delete a particular cookie of a browser instance.
:Datafile Usage:
Tags or attributes to be used in input datafile for the system or
subsystem. If both tag and attribute are provided, the attribute will
be used.
1. system_name = This attribute can be specified in the datafile as
a <system> tag directly under the <credentials>
tag. An attribute "name" has to be added to this
tag and the value of that attribute would be taken
in as value to this keyword attribute.
<system name="name_of_thy_system"/>
        2. type = This <type> tag is a child of the <browser> tag in the
data file. The type of browser that should be opened can
be added in here.
Eg: <type>firefox</type>
        3. cookie_name = This <cookie_name> tag is a child of the <browser>
                         tag in the data file. The name of the cookie that
                         you want to delete can be added here.
                         Eg: <cookie_name>gmail_cookie</cookie_name>
        4. browser_name = This <browser_name> tag is a child of the
                          <browser> tag in the data file. Each browser
                          instance should have a unique name. This name can
                          be added here.
Eg: <browser_name>Unique_name_1</browser_name>
5. url = The URL that you want to open your browser to can be added
in the <url> tag under the <browser> tag.
Eg: <url>https://www.google.com</url>
6. element_config_file = This <element_config_file> tag is a child
of the <browser> tag in the data file. This
stores the location of the element
configuration file that contains all
element locators.
Eg: <element_config_file>
                                     ../Config_files/selenium_config.json
</element_config_file>
        7. element_tag = This element_tag refers to a particular element in
                         the JSON file which contains relevant information for
                         that element. If you want to use this one element
                         throughout the testcase for a particular browser,
                         you can include it in the data file. If this is not
                         the case, then you should create an argument tag
                         in the relevant testcase step and add the value
                         directly in the testcase step.
FOR DATA FILE
Eg: <element_tag>json_name_1</element_tag>
FOR TEST CASE
Eg: <argument name="element_tag" value="json_name_1">
:Arguments:
1. system_name(str) = the system name.
2. type(str) = Type of browser: firefox, chrome, ie.
3. cookie_name (str) = Name of the cookie that you want to delete
4. browser_name(str) = Unique name for this particular browser
5. url(str) = URL to which the browser should be directed
6. element_config_file (str) = location of the element configuration
file that contains all element
locators
        7. element_tag (str) = particular element in the JSON file which
                               contains relevant information for that element
:Returns:
1. status(bool)= True / False.
"""
arguments = locals()
arguments.pop('self')
status = True
wdesc = "A particular cookie of the browser instance would be deleted"
pNote(wdesc)
pSubStep(wdesc)
browser_details = {}
system = xml_Utils.getElementWithTagAttribValueMatch(self.datafile,
"system",
"name",
system_name)
browser_list = system.findall("browser")
try:
browser_list.extend(system.find("browsers").findall("browser"))
except AttributeError:
pass
        if not browser_list:
            # Sentinel entry so the loop below runs once using the keyword arguments
            browser_list.append(1)
            browser_details = arguments
        # Guard: may never be assigned below if no browser details resolve
        current_browser = None
        for browser in browser_list:
arguments = Utils.data_Utils.get_default_ecf_and_et(arguments, self.datafile, browser)
if browser_details == {}:
browser_details = selenium_Utils. \
get_browser_details(browser, datafile=self.datafile, **arguments)
if browser_details is not None:
current_browser = Utils.data_Utils.\
get_object_from_datarepository(system_name + "_" +
browser_details["browser_name"])
if current_browser:
status = self.browser_object.delete_a_specific_cookie(
current_browser, browser_details["cookie_name"])
else:
pNote("Browser of system {0} and name {1} not found in the "
"datarepository"
.format(system_name, browser_details["browser_name"]),
"Exception")
status = False
browser_details = {}
Utils.testcase_Utils.report_substep_status(status)
if current_browser:
selenium_Utils.save_screenshot_onerror(status, current_browser)
return status
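    # A minimal, hypothetical datafile sketch tying together the tags documented
    # by the cookie keywords above (names and values are illustrative only):
    #
    #   <system name="my_system">
    #       <browser>
    #           <type>firefox</type>
    #           <browser_name>Unique_name_1</browser_name>
    #           <url>https://www.google.com</url>
    #           <cookie_name>gmail_cookie</cookie_name>
    #           <element_config_file>../Config_files/selenium_config.json</element_config_file>
    #           <element_tag>json_name_1</element_tag>
    #       </browser>
    #   </system>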
|
warriorframework/warriorframework
|
warrior/Actions/SeleniumActions/browser_actions.py
|
Python
|
apache-2.0
| 119,270
|
[
"VisIt"
] |
99124bc89afb4b2c8fd544d11a9f17b679b613d6f94d7ba4e15082d35e9a1ad0
|
# stack.py
#
# Created by Brett H. Andrews on 13 Jun 2017.
import os
from os.path import join
import pickle
import re
import click
import numpy as np
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
from stacking.spectral_processing import make_grid
@click.command()
@click.option('--filelists', default=None, multiple=True)
@click.option('--overwrite', '-o', default=False, is_flag=True)
@click.option('--samples', default=100)
def stack_resample(filelists, overwrite, samples):
"""Stack spectra using bootstrap resampling.
Parameters:
filelists (str):
Filelists of spectra to stack. Default is ``None``, which
will use all stacks.
overwrite (bool):
If ``True``, overwrite existing files. Default is ``False``.
samples (int):
Number of samples (with replacement) to draw. Default is
``100``.
"""
path_mzr = join(os.path.expanduser('~'), 'projects', 'mzr')
path_dr7 = join(path_mzr, 'stacks', 'dr7_M0.1e')
path_filelists = join(path_dr7, 'filelists')
if not filelists:
filelists = os.listdir(path_filelists)
filelists.sort(key=lambda s: float(s.split('M')[1].split('_')[0]))
# filelists = ['M9.4_9.5.txt', 'M9.5_9.6.txt', 'M9.6_9.7.txt']
path_snr = join(path_dr7, 'results', 'snr', 'snr.csv')
if os.path.isfile(path_snr) and not overwrite:
raise ValueError(f'Not written (overwrite with --overwrite): {path_snr}')
full_snr = []
window_snr = []
indices = []
for filelist in filelists:
click.echo(filelist)
binpar = filelist.split('.txt')[0]
path_raw = join(path_dr7, binpar, 'raw_stack')
path_spec = join(path_raw, binpar + '.pck')
with open(path_spec, 'rb') as fin:
spectra = pickle.load(fin)
stack = np.mean(spectra, axis=0)
check_stack(stack, binpar, path_raw, path_dr7)
grid = make_grid()
indices.append(binpar)
spec_mean, spec_std = resample_stacks(spectra, samples)
spec_snr = stack / spec_std
spec_median_snr = np.median(spec_snr)
        window_median_snr = np.median(spec_snr[(grid >= 4400) & (grid <= 4450)])
full_snr.append(spec_median_snr)
window_snr.append(window_median_snr)
snr = pd.DataFrame(list(map(list, zip(*(full_snr, window_snr)))), index=indices,
columns=['full spec', '4400-4450 A'])
snr[snr > 1e14] = np.nan # M7.0_7.1 only has one galaxy so resampling it is meaningless
snr.to_csv(path_snr)
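# Hypothetical invocation sketch for the command above (assumes stack_resample
# is wired up as a click entry point; this module has no __main__ guard):
#   stack_resample --filelists M9.4_9.5.txt --filelists M9.5_9.6.txt --samples 100 -o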
def resample_stacks(spectra, samples):
"""Resample stacks.
Parameters:
spectra (array):
Spectra to be sampled from.
samples (int):
Number of samples to draw.
Returns:
array, array: mean and std spectra
"""
grid = make_grid()
stacks_resampled = np.zeros((samples, len(grid)))
for ii in range(samples):
spectra_resampled = sklearn.utils.resample(spectra)
stacks_resampled[ii] = np.mean(spectra_resampled, axis=0)
spec_mean = np.mean(stacks_resampled, axis=0)
spec_std = np.std(stacks_resampled, ddof=1, axis=0)
return spec_mean, spec_std
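# Usage sketch for resample_stacks, assuming `spectra` is a hypothetical array
# of shape (n_spectra, n_pixels) on the wavelength grid from make_grid():
#   spec_mean, spec_std = resample_stacks(spectra, samples=100)
#   snr = np.mean(spectra, axis=0) / spec_std  # per-pixel signal-to-noise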
def check_stack(stack, binpar, path_raw, path_dr7):
"""Check that current stack agrees with stack from AM13.
Parameters:
stack (array):
Current stack.
binpar (str):
Bin parameters.
path_raw (str):
Path to raw_stack directory.
path_dr7 (str):
Path to dr7_M0.1e directory.
"""
    path_stack_comp = None
    for filename in os.listdir(path_raw):
        # Raw string avoids the invalid '\d' escape; escape the literal dot
        fm = re.fullmatch(rf'{binpar}_n\d+\.txt', filename)
        if fm is not None:
            path_stack_comp = join(path_dr7, binpar, 'raw_stack', fm.string)
            break
    if path_stack_comp is None:
        raise FileNotFoundError(f'No comparison stack file found for {binpar} in {path_raw}')
    comp = pd.read_csv(path_stack_comp, delim_whitespace=True, header=None, names=['wave', 'flux'])
stack_comp = comp.flux.values
assert np.abs(stack - stack_comp).max() < 1e-6, \
'stack and comparison stack differ by more than 1e-6'
|
bretthandrews/stacking
|
stacking/stack.py
|
Python
|
bsd-3-clause
| 4,103
|
[
"Galaxy"
] |
41f97df38c24d660b4aadc0b4ee08fb7ded2c8ae6f70151212af5307d935b9fc
|
from sys import exit
from pyneb import RecAtom
from scipy.interpolate import interp1d
from uncertainties import ufloat, UFloat
from uncertainties.unumpy import uarray, exp as unum_exp, log10 as unum_log10, pow as unum_pow
from uncertainties.umath import pow as umath_pow, log10 as umath_log10, exp as umath_exp, isnan as un_isnan
from pandas import read_csv
from numpy import loadtxt, where, array, ndarray, ones, concatenate, power as np_power, dot, arange, empty, isnan
from os import path, getcwd
class ReddeningLaws():
def __init__(self):
self.SpectraEdges_Limit = 200
self.Hbeta_wavelength = 4861.3316598713955
#List of hydrogen recombination lines in our EmissionLines coordinates log. WARNING: We have removed the very blended ones
self.RecombRatios_Ions = array(['HBal_20_2', 'HBal_19_2','HBal_18_2', 'HBal_17_2', 'HBal_16_2','HBal_15_2','HBal_12_2','HBal_11_2',
'HBal_9_2','Hdelta_6_2','Hgamma_5_2','Hbeta_4_2','Halpha_3_2','HPas_20_3','HPas19_3','HPas_18_3','HPas_17_3','HPas_16_3','HPas_15_3','HPas_14_3',
'HPas13_3','HPas12_3','HPas_11_3','HPas_10_3','HPas_9_3','HPas_8_3','HPas_7_3'])
#Dictionary with the reddening curves
self.reddening_curves_calc = {'MM72' : self.f_Miller_Mathews1972,
'CCM89' : self.X_x_Cardelli1989,
'G03_bar' : self.X_x_Gordon2003_bar,
'G03_average' : self.X_x_Gordon2003_average,
'G03_supershell' : self.X_x_Gordon2003_supershell
}
__location__ = path.realpath(path.join(getcwd(), path.dirname(__file__)))
self.red_laws_folder = __location__[0:__location__.find('dazer')] + 'dazer/'
def checking_for_ufloat(self, x):
if isinstance(x, UFloat):
param = x.nominal_value
else:
param = x
return param
def compare_RecombCoeffs(self, obj_data, lineslog_frame, spectral_limit = 200):
        #Create hydrogen atom object
self.H1_atom = RecAtom('H', 1)
linformat_df = read_csv('/home/vital/workspace/dazer/format/emlines_pyneb_optical_infrared.dz', index_col=0, names=['ion', 'lambda_theo', 'latex_format'], delim_whitespace=True)
lineslog_frame['latex_format'] = 'none'
for line in lineslog_frame.index:
if '_w' not in line: #Structure to avoid wide components
lineslog_frame.loc[line,'latex_format'] = r'${}$'.format(linformat_df.loc[line, 'latex_format'])
#Load electron temperature and density (if not available it will use Te = 10000K and ne = 100cm^-3)
T_e = self.checking_for_ufloat(obj_data.TeSIII) if ~isnan(self.checking_for_ufloat(obj_data.TeSIII)) else 10000.0
n_e = self.checking_for_ufloat(obj_data.neSII) if ~isnan(self.checking_for_ufloat(obj_data.neSII)) else 100.0
        #Get the hydrogen recombination lines that we have observed in this object
Obs_Hindx = lineslog_frame.Ion.isin(self.RecombRatios_Ions)
#Calculate recombination coefficients and reddening curve values
        Obs_H_Emis = empty(Obs_Hindx.sum())  #One emissivity per observed hydrogen line
Obs_Ions = lineslog_frame.loc[Obs_Hindx, 'Ion'].values
for i in range(len(Obs_Ions)):
TransitionCode = Obs_Ions[i][Obs_Ions[i].find('_')+1:len(Obs_Ions[i])]
Obs_H_Emis[i] = self.H1_atom.getEmissivity(tem = T_e, den = n_e, label = TransitionCode)
#Normalize by Hbeta (this new constant is necessary to avoid a zero sigma)
Hbeta_Emis = self.H1_atom.getEmissivity(tem = T_e, den = n_e, label = '4_2')
Fbeta_flux = ufloat(lineslog_frame.loc['H1_4861A']['line_Flux'].nominal_value, lineslog_frame.loc['H1_4861A']['line_Flux'].std_dev)
#Load theoretical and observational recombination ratios to data frame
lineslog_frame.loc[Obs_Hindx,'line_Emissivity'] = Obs_H_Emis
lineslog_frame.loc[Obs_Hindx,'line_TheoRecombRatio'] = Obs_H_Emis / Hbeta_Emis
lineslog_frame.loc[Obs_Hindx,'line_ObsRecombRatio'] = lineslog_frame.loc[Obs_Hindx, 'line_Flux'].values / Fbeta_flux
        #Get indices of emissions in each arm
ObsBlue_Hindx = Obs_Hindx & (lineslog_frame.lambda_theo < obj_data.join_wavelength)
ObsRed_Hindx = Obs_Hindx & (lineslog_frame.lambda_theo > obj_data.join_wavelength)
#Recalculate red arm coefficients so they are normalized by red arm line (the most intense)
if (ObsRed_Hindx.sum()) > 0: #Can only work if there is at least one line
idx_Redmax = lineslog_frame.loc[ObsRed_Hindx]['flux_intg'].idxmax() #We do not use line_flux because it is a ufloat and it does not work with idxmax
H_Redmax_flux = lineslog_frame.loc[idx_Redmax,'line_Flux']
H_Redmax_emis = lineslog_frame.loc[idx_Redmax,'line_Emissivity']
Flux_Redmax = ufloat(H_Redmax_flux.nominal_value * Hbeta_Emis / H_Redmax_emis, H_Redmax_flux.std_dev * Hbeta_Emis / H_Redmax_emis)
lineslog_frame.loc[ObsRed_Hindx,'line_ObsRecombRatio'] = lineslog_frame.loc[ObsRed_Hindx, 'line_Flux'].values / Flux_Redmax
#Load x axis values: (f_lambda - f_Hbeta) and y axis values: log(F/Fbeta)_theo - log(F/Fbeta)_obs to dataframe
lineslog_frame.loc[Obs_Hindx,'x axis values'] = lineslog_frame.loc[Obs_Hindx, 'line_f'].values - lineslog_frame.loc['H1_4861A']['line_f']
lineslog_frame.loc[Obs_Hindx,'y axis values'] = unum_log10(lineslog_frame.loc[Obs_Hindx,'line_TheoRecombRatio'].values) - unum_log10(lineslog_frame.loc[Obs_Hindx,'line_ObsRecombRatio'].values)
#Compute all the possible configuration of points and store them
output_dict = {}
#--- All points:
output_dict['all_x'] = lineslog_frame.loc[Obs_Hindx,'x axis values'].values
output_dict['all_y'] = lineslog_frame.loc[Obs_Hindx,'y axis values'].values
output_dict['all_ions'] = list(lineslog_frame.loc[Obs_Hindx,'latex_format'].values)
#--- By arm
output_dict['blue_x'] = lineslog_frame.loc[ObsBlue_Hindx,'x axis values'].values
output_dict['blue_y'] = lineslog_frame.loc[ObsBlue_Hindx,'y axis values'].values
output_dict['blue_ions'] = list(lineslog_frame.loc[ObsBlue_Hindx,'latex_format'].values)
output_dict['red_x'] = lineslog_frame.loc[ObsRed_Hindx,'x axis values'].values
output_dict['red_y'] = lineslog_frame.loc[ObsRed_Hindx,'y axis values'].values
output_dict['red_ions'] = list(lineslog_frame.loc[ObsRed_Hindx,'latex_format'].values)
#--- Store fluxes
output_dict['Blue_ObsRatio'] = unum_log10(lineslog_frame.loc[ObsBlue_Hindx,'line_ObsRecombRatio'].values)
output_dict['Red_ObsRatio'] = unum_log10(lineslog_frame.loc[ObsRed_Hindx,'line_ObsRecombRatio'].values)
output_dict['line_Flux_Blue'] = lineslog_frame.loc[ObsBlue_Hindx,'line_Flux'].values
output_dict['line_Flux_Red'] = lineslog_frame.loc[ObsRed_Hindx,'line_Flux'].values
output_dict['line_wave_Blue'] = lineslog_frame.loc[ObsBlue_Hindx,'lambda_theo'].values
output_dict['line_wave_Red'] = lineslog_frame.loc[ObsRed_Hindx,'lambda_theo'].values
#--- Inside limits
if obj_data.h_gamma_valid == 'yes':
in_idcs = Obs_Hindx
elif obj_data.h_gamma_valid == 'no':
wave_idx = ((lineslog_frame.lambda_theo > (obj_data.Wmin_Blue + spectral_limit)) & (lineslog_frame.lambda_theo < (obj_data.join_wavelength - spectral_limit))) \
| ((lineslog_frame.lambda_theo > (obj_data.join_wavelength + spectral_limit)) & (lineslog_frame.lambda_theo < (obj_data.Wmax_Red - spectral_limit)))
in_idcs = Obs_Hindx & wave_idx
output_dict['in_x'] = lineslog_frame.loc[in_idcs,'x axis values'].values
output_dict['in_y'] = lineslog_frame.loc[in_idcs,'y axis values'].values
output_dict['in_ions'] = list(lineslog_frame.loc[in_idcs,'latex_format'].values)
        #--- Outside limits
if obj_data.h_gamma_valid == 'no':
wave_idx = (lineslog_frame.lambda_theo < (obj_data.Wmin_Blue + spectral_limit)) | (lineslog_frame.lambda_theo > (obj_data.Wmax_Red - spectral_limit))
out_idcs = Obs_Hindx & wave_idx
output_dict['out_x'] = lineslog_frame.loc[out_idcs,'x axis values'].values
output_dict['out_y'] = lineslog_frame.loc[out_idcs,'y axis values'].values
output_dict['out_ions'] = list(lineslog_frame.loc[out_idcs,'latex_format'].values)
else:
output_dict['out_x'] = None
output_dict['out_y'] = None
output_dict['out_ions'] = None
return output_dict
def deredden_lines(self, lines_frame, reddening_curve, cHbeta = None, E_BV = None, R_v = None):
        #cHbeta format: accept a plain float or a ufloat
        cHbeta_mag = None
        if isinstance(cHbeta, (float, UFloat)):
            cHbeta_mag = cHbeta
        #If it is negative we set it to zero
        if cHbeta_mag is not None and cHbeta_mag < 0.0:
            cHbeta_mag = 0.0
        #By default we perform the calculation using the colour excess
        E_BV = E_BV if E_BV is not None else self.Ebv_from_cHbeta(cHbeta, reddening_curve, R_v)
#Get reddening curves f_value
lines_fluxes = lines_frame.line_Flux.values
lines_wavelengths = lines_frame.lambda_theo.values
lines_Xx = self.reddening_Xx(lines_wavelengths, reddening_curve, R_v)
lines_frame['line_Xx'] = lines_Xx
#Get line intensities
lines_int = lines_fluxes * unum_pow(10, 0.4 * lines_Xx * E_BV)
lines_frame['line_Int'] = lines_int
#We recalculate the equivalent width using the intensity of the lines (instead of the flux)
continua_flux = uarray(lines_frame.zerolev_mean.values, lines_frame.zerolev_std.values)
continua_int = continua_flux * unum_pow(10, 0.4 * lines_Xx * E_BV)
lines_frame['con_dered'] = continua_int
lines_frame['Eqw_dered'] = lines_int / continua_int
#For extra testing we add integrated and gaussian values derredden
lines_brut_flux = uarray(lines_frame['flux_intg'].values, lines_frame['flux_intg_er'].values)
lines_gauss_flux = uarray(lines_frame['flux_gauss'].values, lines_frame['flux_gauss_er'].values)
lines_frame['line_IntBrute_dered'] = lines_brut_flux * unum_pow(10, 0.4 * lines_Xx * E_BV)
lines_frame['line_IntGauss_dered'] = lines_gauss_flux * unum_pow(10, 0.4 * lines_Xx * E_BV)
return
def derreddening_spectrum(self, wave, flux, reddening_curve, cHbeta = None, E_BV = None, R_v = None):
        #cHbeta format: accept a plain float or a ufloat
        cHbeta_mag = None
        if isinstance(cHbeta, (float, UFloat)):
            cHbeta_mag = cHbeta
        #If it is negative we set it to zero
        if cHbeta_mag is not None and cHbeta_mag < 0.0:
            cHbeta_mag = 0.0
        #By default we perform the calculation using the colour excess
        E_BV = E_BV if E_BV is not None else self.Ebv_from_cHbeta(cHbeta, reddening_curve, R_v)
#Perform dereddening
wavelength_range_Xx = self.reddening_Xx(wave, reddening_curve, R_v)
flux_range_derred = flux * np_power(10, 0.4 * wavelength_range_Xx * E_BV)
return flux_range_derred
def reddening_spectrum(self, wave, flux, reddening_curve, cHbeta = None, E_BV = None, R_v = None):
        #cHbeta format: accept a plain float or a ufloat
        cHbeta_mag = None
        if isinstance(cHbeta, (float, UFloat)):
            cHbeta_mag = cHbeta
        #If it is negative we set it to zero
        if cHbeta_mag is not None and cHbeta_mag < 0.0:
            cHbeta_mag = 0.0
        #By default we perform the calculation using the colour excess
        E_BV = E_BV if E_BV is not None else self.Ebv_from_cHbeta(cHbeta, reddening_curve, R_v)
#Perform dereddening
wavelength_range_Xx = self.reddening_Xx(wave, reddening_curve, R_v)
flux_range_red = flux * np_power(10, - 0.4 * wavelength_range_Xx * E_BV)
return flux_range_red
def Ebv_from_cHbeta(self, cHbeta, reddening_curve, R_v):
        if cHbeta is None:
            exit('Warning: no cHbeta or E(B-V) provided to reddening curve, code aborted')
        E_BV = cHbeta * 2.5 / self.reddening_Xx(array([self.Hbeta_wavelength]), reddening_curve, R_v)[0]
        return E_BV
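    # Worked sketch of the conversion above: with X(Hbeta) the reddening-curve
    # value at 4861 A, E(B-V) = 2.5 * cHbeta / X(Hbeta). For a hypothetical
    # X(Hbeta) = 3.61 (roughly CCM89 at R_v = 3.1) and cHbeta = 0.2, this
    # gives E(B-V) ~ 0.14.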
def flambda_from_Xx(self, Xx, reddening_curve, R_v):
X_Hbeta = self.reddening_Xx(array([self.Hbeta_wavelength]), reddening_curve, R_v)[0]
f_lines = Xx/X_Hbeta - 1
return f_lines
def reddening_Xx(self, waves, curve_methodology, R_v):
self.R_v = R_v
self.wavelength_rc = waves
return self.reddening_curves_calc[curve_methodology]()
def f_Miller_Mathews1972(self):
if isinstance(self.wavelength_rc, ndarray):
y = 1.0/(self.wavelength_rc/10000.0)
y_beta = 1.0/(4862.683/10000.0)
ind_low = where(y <= 2.29)[0]
ind_high = where(y > 2.29)[0]
dm_lam_low = 0.74 * y[ind_low] - 0.34 + 0.341 * self.R_v - 1.014
dm_lam_high = 0.43 * y[ind_high] + 0.37 + 0.341 * self.R_v - 1.014
dm_beta = 0.74 * y_beta - 0.34 + 0.341 * self.R_v - 1.014
dm_lam = concatenate((dm_lam_low, dm_lam_high))
f = dm_lam/dm_beta - 1
else:
y = 1.0/(self.wavelength_rc/10000.0)
y_beta = 1.0/(4862.683/10000.0)
if y <= 2.29:
dm_lam = 0.74 * y - 0.34 + 0.341 * self.R_v - 1.014
else:
dm_lam = 0.43 * y + 0.37 + 0.341 * self.R_v - 1.014
dm_beta = 0.74 * y_beta - 0.34 + 0.341 * self.R_v - 1.014
f = dm_lam/dm_beta - 1
return f
def X_x_Cardelli1989(self):
x_true = 1.0 / (self.wavelength_rc / 10000.0)
y = x_true - 1.82
y_coeffs = array([ones(len(y)), y, np_power(y, 2), np_power(y, 3), np_power(y, 4), np_power(y, 5), np_power(y, 6), np_power(y, 7)])
a_coeffs = array([1, 0.17699, -0.50447, -0.02427, 0.72085, 0.01979, -0.77530, 0.32999])
b_coeffs = array([0, 1.41338, 2.28305, 1.07233, -5.38434, -0.62251, 5.30260, -2.09002])
a_x = dot(a_coeffs,y_coeffs)
b_x = dot(b_coeffs,y_coeffs)
X_x = a_x + b_x / self.R_v
return X_x
def X_x_Gordon2003_bar(self):
        #Default R_v is 3.4 if none is supplied
        R_v = self.R_v if self.R_v is not None else 3.4
        x = 1.0 / (self.wavelength_rc / 10000.0)
        #This file has 1/um in column 0 and A_x/A_V in column 1
        file_data = loadtxt('/home/vital/workspace/dazer/bin/lib/Astro_Libraries/gordon_2003_SMC_bar.txt')
        Xx_interpolator = interp1d(file_data[:, 0], file_data[:, 1])
X_x = R_v * Xx_interpolator(x)
return X_x
def X_x_Gordon2003_average(self):
        #Default R_v is 3.4 if none is supplied
        R_v = self.R_v if self.R_v is not None else 3.4
        x = 1.0 / (self.wavelength_rc / 10000.0)
        #This file has 1/um in column 0 and A_x/A_V in column 1
        file_data = loadtxt(self.red_laws_folder + 'bin/lib/Astro_Libraries/literature_data/gordon_2003_LMC_average.txt')
        Xx_interpolator = interp1d(file_data[:, 0], file_data[:, 1])
X_x = R_v * Xx_interpolator(x)
return X_x
def X_x_Gordon2003_supershell(self):
        #Default R_v is 3.4 if none is supplied
        R_v = self.R_v if self.R_v is not None else 3.4
        x = 1.0 / (self.wavelength_rc / 10000.0)
        #This file has 1/um in column 0 and A_x/A_V in column 1
        file_data = loadtxt('/home/vital/workspace/dazer/bin/lib/Astro_Libraries/gordon_2003_LMC2_supershell.txt')
        Xx_interpolator = interp1d(file_data[:, 0], file_data[:, 1])
X_x = R_v * Xx_interpolator(x)
return X_x
def Epm_ReddeningPoints(self):
        x_true = arange(1.0, 2.8, 0.1) #in inverse microns
X_Angs = 1 / x_true * 1e4
Xx = array([1.36, 1.44, 1.84, 2.04, 2.24, 2.44, 2.66, 2.88, 3.14, 3.36, 3.56, 3.77, 3.96, 4.15, 4.26, 4.40, 4.52, 4.64])
f_lambda = array([-0.63,-0.61,-0.5, -0.45, -0.39, -0.34, -0.28, -0.22, -0.15, -0.09, -0.03, 0.02, 0.08, 0.13, 0.16, 0.20, 0.23, 0.26])
return x_true, X_Angs, Xx, f_lambda
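    # Usage sketch (hypothetical values): evaluate the CCM89 curve at Halpha
    # and Hbeta, then convert to the f_lambda normalisation:
    #   rl = ReddeningLaws()
    #   Xx = rl.reddening_Xx(array([6562.8, 4861.3]), 'CCM89', 3.1)
    #   f_lambda = rl.flambda_from_Xx(Xx, 'CCM89', 3.1)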
|
Delosari/dazer
|
bin/lib/Astro_Libraries/Reddening_Corrections.py
|
Python
|
mit
| 18,132
|
[
"Gaussian"
] |
0d3a4a6c92ea4c8dc4695f5e14ee61ac7fa6e85404785c8f3fc5deda4c614311
|
"""
Acceptance tests for the teams feature.
"""
import json
import random
import time
from dateutil.parser import parse
import ddt
from nose.plugins.attrib import attr
from selenium.common.exceptions import TimeoutException
from uuid import uuid4
from ..helpers import get_modal_alert, EventsTestMixin, UniqueCourseTest
from ...fixtures import LMS_BASE_URL
from ...fixtures.course import CourseFixture
from ...fixtures.discussion import (
Thread,
MultipleThreadFixture
)
from ...pages.lms.auto_auth import AutoAuthPage
from ...pages.lms.course_info import CourseInfoPage
from ...pages.lms.learner_profile import LearnerProfilePage
from ...pages.lms.tab_nav import TabNavPage
from ...pages.lms.teams import (
TeamsPage,
MyTeamsPage,
BrowseTopicsPage,
BrowseTeamsPage,
TeamManagementPage,
EditMembershipPage,
TeamPage
)
from ...pages.common.utils import confirm_prompt
TOPICS_PER_PAGE = 12
class TeamsTabBase(EventsTestMixin, UniqueCourseTest):
"""Base class for Teams Tab tests"""
def setUp(self):
super(TeamsTabBase, self).setUp()
self.tab_nav = TabNavPage(self.browser)
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.teams_page = TeamsPage(self.browser, self.course_id)
def create_topics(self, num_topics):
"""Create `num_topics` test topics."""
return [{u"description": i, u"name": i, u"id": i} for i in map(str, xrange(num_topics))]
def create_teams(self, topic, num_teams, time_between_creation=0):
"""Create `num_teams` teams belonging to `topic`."""
teams = []
for i in xrange(num_teams):
team = {
'course_id': self.course_id,
'topic_id': topic['id'],
'name': 'Team {}'.format(i),
'description': 'Description {}'.format(i),
'language': 'aa',
'country': 'AF'
}
teams.append(self.post_team_data(team))
# Sadly, this sleep is necessary in order to ensure that
# sorting by last_activity_at works correctly when running
# in Jenkins.
time.sleep(time_between_creation)
return teams
def post_team_data(self, team_data):
"""Given a JSON representation of a team, post it to the server."""
response = self.course_fixture.session.post(
LMS_BASE_URL + '/api/team/v0/teams/',
data=json.dumps(team_data),
headers=self.course_fixture.headers
)
self.assertEqual(response.status_code, 200)
return json.loads(response.text)
def create_membership(self, username, team_id):
"""Assign `username` to `team_id`."""
response = self.course_fixture.session.post(
LMS_BASE_URL + '/api/team/v0/team_membership/',
data=json.dumps({'username': username, 'team_id': team_id}),
headers=self.course_fixture.headers
)
return json.loads(response.text)
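    # Sketch of the two REST calls the helpers above make (IDs hypothetical):
    #   POST /api/team/v0/teams/            {"course_id": ..., "topic_id": ..., "name": ...}
    #   POST /api/team/v0/team_membership/  {"username": "user1", "team_id": "team-id-1"}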
def set_team_configuration(self, configuration, enroll_in_course=True, global_staff=False):
"""
Sets team configuration on the course and calls auto-auth on the user.
"""
#pylint: disable=attribute-defined-outside-init
self.course_fixture = CourseFixture(**self.course_info)
if configuration:
self.course_fixture.add_advanced_settings(
{u"teams_configuration": {u"value": configuration}}
)
self.course_fixture.install()
enroll_course_id = self.course_id if enroll_in_course else None
#pylint: disable=attribute-defined-outside-init
self.user_info = AutoAuthPage(self.browser, course_id=enroll_course_id, staff=global_staff).visit().user_info
self.course_info_page.visit()
def verify_teams_present(self, present):
"""
Verifies whether or not the teams tab is present. If it should be present, also
checks the text on the page (to ensure view is working).
"""
if present:
self.assertIn("Teams", self.tab_nav.tab_names)
self.teams_page.visit()
self.assertEqual(self.teams_page.active_tab(), 'browse')
else:
self.assertNotIn("Teams", self.tab_nav.tab_names)
def verify_teams(self, page, expected_teams):
"""Verify that the list of team cards on the current page match the expected teams in order."""
def assert_team_equal(expected_team, team_card_name, team_card_description):
"""
Helper to assert that a single team card has the expected name and
description.
"""
self.assertEqual(expected_team['name'], team_card_name)
self.assertEqual(expected_team['description'], team_card_description)
team_card_names = page.team_names
team_card_descriptions = page.team_descriptions
map(assert_team_equal, expected_teams, team_card_names, team_card_descriptions)
def verify_my_team_count(self, expected_number_of_teams):
""" Verify the number of teams shown on "My Team". """
# We are doing these operations on this top-level page object to avoid reloading the page.
self.teams_page.verify_my_team_count(expected_number_of_teams)
def only_team_events(self, event):
"""Filter out all non-team events."""
return event['event_type'].startswith('edx.team.')
@ddt.ddt
@attr('shard_5')
class TeamsTabTest(TeamsTabBase):
"""
Tests verifying when the Teams tab is present.
"""
def test_teams_not_enabled(self):
"""
Scenario: teams tab should not be present if no team configuration is set
Given I am enrolled in a course without team configuration
When I view the course info page
Then I should not see the Teams tab
"""
self.set_team_configuration(None)
self.verify_teams_present(False)
def test_teams_not_enabled_no_topics(self):
"""
Scenario: teams tab should not be present if team configuration does not specify topics
Given I am enrolled in a course with no topics in the team configuration
When I view the course info page
Then I should not see the Teams tab
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": []})
self.verify_teams_present(False)
def test_teams_not_enabled_not_enrolled(self):
"""
Scenario: teams tab should not be present if student is not enrolled in the course
Given there is a course with team configuration and topics
And I am not enrolled in that course, and am not global staff
When I view the course info page
Then I should not see the Teams tab
"""
self.set_team_configuration(
{u"max_team_size": 10, u"topics": self.create_topics(1)},
enroll_in_course=False
)
self.verify_teams_present(False)
def test_teams_enabled(self):
"""
Scenario: teams tab should be present if user is enrolled in the course and it has team configuration
Given I am enrolled in a course with team configuration and topics
When I view the course info page
Then I should see the Teams tab
And the correct content should be on the page
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(1)})
self.verify_teams_present(True)
def test_teams_enabled_global_staff(self):
"""
Scenario: teams tab should be present if user is not enrolled in the course, but is global staff
Given there is a course with team configuration
And I am not enrolled in that course, but am global staff
When I view the course info page
Then I should see the Teams tab
And the correct content should be on the page
"""
self.set_team_configuration(
{u"max_team_size": 10, u"topics": self.create_topics(1)},
enroll_in_course=False,
global_staff=True
)
self.verify_teams_present(True)
@ddt.data(
'topics/{topic_id}',
'topics/{topic_id}/search',
'teams/{topic_id}/{team_id}/edit-team',
'teams/{topic_id}/{team_id}'
)
def test_unauthorized_error_message(self, route):
"""Ensure that an error message is shown to the user if they attempt
to take an action which makes an AJAX request while not signed
in.
"""
topics = self.create_topics(1)
topic = topics[0]
self.set_team_configuration(
{u'max_team_size': 10, u'topics': topics},
global_staff=True
)
team = self.create_teams(topic, 1)[0]
self.teams_page.visit()
self.browser.delete_cookie('sessionid')
url = self.browser.current_url.split('#')[0]
self.browser.get(
'{url}#{route}'.format(
url=url,
route=route.format(
topic_id=topic['id'],
team_id=team['id']
)
)
)
self.teams_page.wait_for_ajax()
self.assertEqual(
self.teams_page.warning_message,
u"Your request could not be completed. Reload the page and try again."
)
@ddt.data(
('browse', '.topics-list'),
# TODO: find a reliable way to match the "My Teams" tab
# ('my-teams', 'div.teams-list'),
('teams/{topic_id}/{team_id}', 'div.discussion-module'),
('topics/{topic_id}/create-team', 'div.create-team-instructions'),
('topics/{topic_id}', '.teams-list'),
('not-a-real-route', 'div.warning')
)
@ddt.unpack
def test_url_routing(self, route, selector):
"""Ensure that navigating to a URL route correctly updates the page
content.
"""
topics = self.create_topics(1)
topic = topics[0]
self.set_team_configuration({
u'max_team_size': 10,
u'topics': topics
})
team = self.create_teams(topic, 1)[0]
self.teams_page.visit()
# Get the base URL (the URL without any trailing fragment)
url = self.browser.current_url
fragment_index = url.find('#')
if fragment_index >= 0:
url = url[0:fragment_index]
self.browser.get(
'{url}#{route}'.format(
url=url,
route=route.format(
topic_id=topic['id'],
team_id=team['id']
))
)
self.teams_page.wait_for_ajax()
self.assertTrue(self.teams_page.q(css=selector).present)
self.assertTrue(self.teams_page.q(css=selector).visible)
@attr('shard_5')
class MyTeamsTest(TeamsTabBase):
"""
Tests for the "My Teams" tab of the Teams page.
"""
def setUp(self):
super(MyTeamsTest, self).setUp()
self.topic = {u"name": u"Example Topic", u"id": "example_topic", u"description": "Description"}
self.set_team_configuration({'course_id': self.course_id, 'max_team_size': 10, 'topics': [self.topic]})
self.my_teams_page = MyTeamsPage(self.browser, self.course_id)
self.page_viewed_event = {
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'my-teams',
'topic_id': None,
'team_id': None
}
}
def test_not_member_of_any_teams(self):
"""
Scenario: Visiting the My Teams page when user is not a member of any team should not display any teams.
Given I am enrolled in a course with a team configuration and a topic but am not a member of a team
When I visit the My Teams page
        Then I should see no teams
And I should see a message that I belong to no teams.
"""
with self.assert_events_match_during(self.only_team_events, expected_events=[self.page_viewed_event]):
self.my_teams_page.visit()
self.assertEqual(len(self.my_teams_page.team_cards), 0, msg='Expected to see no team cards')
self.assertEqual(
self.my_teams_page.q(css='.page-content-main').text,
[u'You are not currently a member of any team.']
)
def test_member_of_a_team(self):
"""
Scenario: Visiting the My Teams page when user is a member of a team should display the teams.
Given I am enrolled in a course with a team configuration and a topic and am a member of a team
When I visit the My Teams page
Then I should see a pagination header showing the number of teams
And I should see all the expected team cards
And I should not see a pagination footer
"""
teams = self.create_teams(self.topic, 1)
self.create_membership(self.user_info['username'], teams[0]['id'])
with self.assert_events_match_during(self.only_team_events, expected_events=[self.page_viewed_event]):
self.my_teams_page.visit()
self.verify_teams(self.my_teams_page, teams)
@attr('shard_5')
@ddt.ddt
class BrowseTopicsTest(TeamsTabBase):
"""
Tests for the Browse tab of the Teams page.
"""
def setUp(self):
super(BrowseTopicsTest, self).setUp()
self.topics_page = BrowseTopicsPage(self.browser, self.course_id)
@ddt.data(('name', False), ('team_count', True))
@ddt.unpack
def test_sort_topics(self, sort_order, reverse):
"""
Scenario: the user should be able to sort the list of topics by name or team count
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
Then I should see a list of topics for the course
When I choose a sort order
Then I should see the paginated list of topics in that order
"""
topics = self.create_topics(TOPICS_PER_PAGE + 1)
self.set_team_configuration({u"max_team_size": 100, u"topics": topics})
for i, topic in enumerate(random.sample(topics, len(topics))):
self.create_teams(topic, i)
topic['team_count'] = i
self.topics_page.visit()
self.topics_page.sort_topics_by(sort_order)
topic_names = self.topics_page.topic_names
self.assertEqual(len(topic_names), TOPICS_PER_PAGE)
self.assertEqual(
topic_names,
[t['name'] for t in sorted(topics, key=lambda t: t[sort_order], reverse=reverse)][:TOPICS_PER_PAGE]
)
def test_sort_topics_update(self):
"""
Scenario: the list of topics should remain sorted after updates
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics and choose a sort order
Then I should see the paginated list of topics in that order
When I create a team in one of those topics
And I return to the topics list
Then I should see the topics in the correct sorted order
"""
topics = self.create_topics(3)
self.set_team_configuration({u"max_team_size": 100, u"topics": topics})
self.topics_page.visit()
self.topics_page.sort_topics_by('team_count')
topic_name = self.topics_page.topic_names[-1]
topic = [t for t in topics if t['name'] == topic_name][0]
self.topics_page.browse_teams_for_topic(topic_name)
browse_teams_page = BrowseTeamsPage(self.browser, self.course_id, topic)
self.assertTrue(browse_teams_page.is_browser_on_page())
browse_teams_page.click_create_team_link()
create_team_page = TeamManagementPage(self.browser, self.course_id, topic)
create_team_page.value_for_text_field(field_id='name', value='Team Name', press_enter=False)
create_team_page.value_for_textarea_field(
field_id='description',
value='Team description.'
)
create_team_page.submit_form()
team_page = TeamPage(self.browser, self.course_id)
self.assertTrue(team_page.is_browser_on_page())
team_page.click_all_topics()
self.assertTrue(self.topics_page.is_browser_on_page())
self.topics_page.wait_for_ajax()
self.assertEqual(topic_name, self.topics_page.topic_names[0])
def test_list_topics(self):
"""
Scenario: a list of topics should be visible in the "Browse" tab
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
Then I should see a list of topics for the course
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(2)})
self.topics_page.visit()
self.assertEqual(len(self.topics_page.topic_cards), 2)
self.assertTrue(self.topics_page.get_pagination_header_text().startswith('Showing 1-2 out of 2 total'))
self.assertFalse(self.topics_page.pagination_controls_visible())
self.assertFalse(self.topics_page.is_previous_page_button_enabled())
self.assertFalse(self.topics_page.is_next_page_button_enabled())
def test_topic_pagination(self):
"""
Scenario: a list of topics should be visible in the "Browse" tab, paginated 12 per page
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
Then I should see only the first 12 topics
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(20)})
self.topics_page.visit()
self.assertEqual(len(self.topics_page.topic_cards), TOPICS_PER_PAGE)
self.assertTrue(self.topics_page.get_pagination_header_text().startswith('Showing 1-12 out of 20 total'))
self.assertTrue(self.topics_page.pagination_controls_visible())
self.assertFalse(self.topics_page.is_previous_page_button_enabled())
self.assertTrue(self.topics_page.is_next_page_button_enabled())
def test_go_to_numbered_page(self):
"""
Scenario: topics should be able to be navigated by page number
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
And I enter a valid page number in the page number input
Then I should see that page of topics
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(25)})
self.topics_page.visit()
self.topics_page.go_to_page(3)
self.assertEqual(len(self.topics_page.topic_cards), 1)
self.assertTrue(self.topics_page.is_previous_page_button_enabled())
self.assertFalse(self.topics_page.is_next_page_button_enabled())
def test_go_to_invalid_page(self):
"""
Scenario: browsing topics should not respond to invalid page numbers
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
And I enter an invalid page number in the page number input
Then I should stay on the current page
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(13)})
self.topics_page.visit()
self.topics_page.go_to_page(3)
self.assertEqual(self.topics_page.get_current_page_number(), 1)
def test_page_navigation_buttons(self):
"""
        Scenario: the user should be able to page through topics using the next and previous page buttons
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse topics
When I press the next page button
Then I should move to the next page
When I press the previous page button
Then I should move to the previous page
"""
self.set_team_configuration({u"max_team_size": 10, u"topics": self.create_topics(13)})
self.topics_page.visit()
self.topics_page.press_next_page_button()
self.assertEqual(len(self.topics_page.topic_cards), 1)
self.assertTrue(self.topics_page.get_pagination_header_text().startswith('Showing 13-13 out of 13 total'))
self.topics_page.press_previous_page_button()
self.assertEqual(len(self.topics_page.topic_cards), TOPICS_PER_PAGE)
self.assertTrue(self.topics_page.get_pagination_header_text().startswith('Showing 1-12 out of 13 total'))
def test_topic_description_truncation(self):
"""
Scenario: excessively long topic descriptions should be truncated so
as to fit within a topic card.
Given I am enrolled in a course with a team configuration and a topic
with a long description
When I visit the Teams page
And I browse topics
Then I should see a truncated topic description
"""
initial_description = "A" + " really" * 50 + " long description"
self.set_team_configuration(
{u"max_team_size": 1, u"topics": [{"name": "", "id": "", "description": initial_description}]}
)
self.topics_page.visit()
truncated_description = self.topics_page.topic_descriptions[0]
self.assertLess(len(truncated_description), len(initial_description))
self.assertTrue(truncated_description.endswith('...'))
self.assertIn(truncated_description.split('...')[0], initial_description)
def test_go_to_teams_list(self):
"""
Scenario: Clicking on a Topic Card should take you to the
teams list for that Topic.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page
And I browse topics
And I click on the arrow link to view teams for the first topic
Then I should be on the browse teams page
"""
topic = {u"name": u"Example Topic", u"id": u"example_topic", u"description": "Description"}
self.set_team_configuration(
{u"max_team_size": 1, u"topics": [topic]}
)
self.topics_page.visit()
self.topics_page.browse_teams_for_topic('Example Topic')
browse_teams_page = BrowseTeamsPage(self.browser, self.course_id, topic)
self.assertTrue(browse_teams_page.is_browser_on_page())
self.assertEqual(browse_teams_page.header_name, 'Example Topic')
self.assertEqual(browse_teams_page.header_description, 'Description')
def test_page_viewed_event(self):
"""
Scenario: Visiting the browse topics page should fire a page viewed event.
Given I am enrolled in a course with a team configuration and a topic
When I visit the browse topics page
Then my browser should post a page viewed event
"""
topic = {u"name": u"Example Topic", u"id": u"example_topic", u"description": "Description"}
self.set_team_configuration(
{u"max_team_size": 1, u"topics": [topic]}
)
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'browse',
'topic_id': None,
'team_id': None
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events):
self.topics_page.visit()
@attr('shard_5')
@ddt.ddt
class BrowseTeamsWithinTopicTest(TeamsTabBase):
"""
Tests for browsing Teams within a Topic on the Teams page.
"""
TEAMS_PAGE_SIZE = 10
def setUp(self):
super(BrowseTeamsWithinTopicTest, self).setUp()
self.topic = {u"name": u"Example Topic", u"id": "example_topic", u"description": "Description"}
self.max_team_size = 10
self.set_team_configuration({
'course_id': self.course_id,
'max_team_size': self.max_team_size,
'topics': [self.topic]
})
self.browse_teams_page = BrowseTeamsPage(self.browser, self.course_id, self.topic)
self.topics_page = BrowseTopicsPage(self.browser, self.course_id)
def teams_with_default_sort_order(self, teams):
"""Return a list of teams sorted according to the default ordering
(last_activity_at, with a secondary sort by open slots).
"""
return sorted(
sorted(teams, key=lambda t: len(t['membership']), reverse=True),
key=lambda t: parse(t['last_activity_at']).replace(microsecond=0),
reverse=True
)
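    # Note on the double sort above: Python's sort is stable, so sorting first
    # by membership size and then by last_activity_at makes last_activity_at
    # the primary key, with team size breaking ties. A tiny hypothetical check:
    #   rows = [('a', 1, '2015-06-01'), ('b', 3, '2015-06-01')]
    #   sorted(sorted(rows, key=lambda r: r[1], reverse=True),
    #          key=lambda r: r[2], reverse=True)  # -> 'b' before 'a'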
def verify_page_header(self):
"""Verify that the page header correctly reflects the current topic's name and description."""
self.assertEqual(self.browse_teams_page.header_name, self.topic['name'])
self.assertEqual(self.browse_teams_page.header_description, self.topic['description'])
def verify_search_header(self, search_results_page, search_query):
"""Verify that the page header correctly reflects the current topic's name and description."""
self.assertEqual(search_results_page.header_name, 'Team Search')
self.assertEqual(
search_results_page.header_description,
'Showing results for "{search_query}"'.format(search_query=search_query)
)
def verify_on_page(self, teams_page, page_num, total_teams, pagination_header_text, footer_visible):
"""
Verify that we are on the correct team list page.
Arguments:
teams_page (BaseTeamsPage): The teams page object that should be the current page.
page_num (int): The one-indexed page number that we expect to be on
total_teams (list): An unsorted list of all the teams for the
current topic
pagination_header_text (str): Text we expect to see in the
pagination header.
footer_visible (bool): Whether we expect to see the pagination
footer controls.
"""
sorted_teams = self.teams_with_default_sort_order(total_teams)
self.assertTrue(teams_page.get_pagination_header_text().startswith(pagination_header_text))
self.verify_teams(
teams_page,
sorted_teams[(page_num - 1) * self.TEAMS_PAGE_SIZE:page_num * self.TEAMS_PAGE_SIZE]
)
self.assertEqual(
teams_page.pagination_controls_visible(),
footer_visible,
            msg='Expected paging footer to be ' + ('visible' if footer_visible else 'invisible')
)
@ddt.data(
('open_slots', 'last_activity_at', True),
('last_activity_at', 'open_slots', True)
)
@ddt.unpack
def test_sort_teams(self, sort_order, secondary_sort_order, reverse):
"""
Scenario: the user should be able to sort the list of teams by open slots or last activity
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse teams within a topic
Then I should see a list of teams for that topic
When I choose a sort order
Then I should see the paginated list of teams in that order
"""
teams = self.create_teams(self.topic, self.TEAMS_PAGE_SIZE + 1)
for i, team in enumerate(random.sample(teams, len(teams))):
for _ in range(i):
user_info = AutoAuthPage(self.browser, course_id=self.course_id).visit().user_info
self.create_membership(user_info['username'], team['id'])
team['open_slots'] = self.max_team_size - i
# Parse last activity date, removing microseconds because
# the Django ORM does not support them. Will be fixed in
# Django 1.8.
team['last_activity_at'] = parse(team['last_activity_at']).replace(microsecond=0)
# Re-authenticate as staff after creating users
AutoAuthPage(
self.browser,
course_id=self.course_id,
staff=True
).visit()
self.browse_teams_page.visit()
self.browse_teams_page.sort_teams_by(sort_order)
team_names = self.browse_teams_page.team_names
self.assertEqual(len(team_names), self.TEAMS_PAGE_SIZE)
sorted_teams = [
team['name']
for team in sorted(
sorted(teams, key=lambda t: t[secondary_sort_order], reverse=reverse),
key=lambda t: t[sort_order],
reverse=reverse
)
][:self.TEAMS_PAGE_SIZE]
self.assertEqual(team_names, sorted_teams)
def test_default_sort_order(self):
"""
Scenario: the list of teams should be sorted by last activity by default
Given I am enrolled in a course with team configuration and topics
When I visit the Teams page
And I browse teams within a topic
Then I should see a list of teams for that topic, sorted by last activity
"""
self.create_teams(self.topic, self.TEAMS_PAGE_SIZE + 1)
self.browse_teams_page.visit()
self.assertEqual(self.browse_teams_page.sort_order, 'last activity')
def test_no_teams(self):
"""
Scenario: Visiting a topic with no teams should not display any teams.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page for that topic
Then I should see the correct page header
And I should see a pagination header showing no teams
And I should see no teams
And I should see a button to add a team
And I should not see a pagination footer
"""
self.browse_teams_page.visit()
self.verify_page_header()
self.assertTrue(self.browse_teams_page.get_pagination_header_text().startswith('Showing 0 out of 0 total'))
self.assertEqual(len(self.browse_teams_page.team_cards), 0, msg='Expected to see no team cards')
self.assertFalse(
self.browse_teams_page.pagination_controls_visible(),
msg='Expected paging footer to be invisible'
)
def test_teams_one_page(self):
"""
        Scenario: Visiting a topic with fewer teams than the page size should
        display all those teams on one page.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page for that topic
Then I should see the correct page header
And I should see a pagination header showing the number of teams
And I should see all the expected team cards
And I should see a button to add a team
And I should not see a pagination footer
"""
teams = self.teams_with_default_sort_order(
self.create_teams(self.topic, self.TEAMS_PAGE_SIZE, time_between_creation=1)
)
self.browse_teams_page.visit()
self.verify_page_header()
self.assertTrue(self.browse_teams_page.get_pagination_header_text().startswith('Showing 1-10 out of 10 total'))
self.verify_teams(self.browse_teams_page, teams)
self.assertFalse(
self.browse_teams_page.pagination_controls_visible(),
msg='Expected paging footer to be invisible'
)
def test_teams_navigation_buttons(self):
"""
Scenario: The user should be able to page through a topic's team list
using navigation buttons when it is longer than the page size.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page for that topic
Then I should see the correct page header
And I should see that I am on the first page of results
When I click on the next page button
Then I should see that I am on the second page of results
And when I click on the previous page button
Then I should see that I am on the first page of results
"""
teams = self.create_teams(self.topic, self.TEAMS_PAGE_SIZE + 1, time_between_creation=1)
self.browse_teams_page.visit()
self.verify_page_header()
self.verify_on_page(self.browse_teams_page, 1, teams, 'Showing 1-10 out of 11 total', True)
self.browse_teams_page.press_next_page_button()
self.verify_on_page(self.browse_teams_page, 2, teams, 'Showing 11-11 out of 11 total', True)
self.browse_teams_page.press_previous_page_button()
self.verify_on_page(self.browse_teams_page, 1, teams, 'Showing 1-10 out of 11 total', True)
def test_teams_page_input(self):
"""
Scenario: The user should be able to page through a topic's team list
using the page input when it is longer than the page size.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page for that topic
Then I should see the correct page header
And I should see that I am on the first page of results
When I input the second page
Then I should see that I am on the second page of results
When I input the first page
Then I should see that I am on the first page of results
"""
teams = self.create_teams(self.topic, self.TEAMS_PAGE_SIZE + 10, time_between_creation=1)
self.browse_teams_page.visit()
self.verify_page_header()
self.verify_on_page(self.browse_teams_page, 1, teams, 'Showing 1-10 out of 20 total', True)
self.browse_teams_page.go_to_page(2)
self.verify_on_page(self.browse_teams_page, 2, teams, 'Showing 11-20 out of 20 total', True)
self.browse_teams_page.go_to_page(1)
self.verify_on_page(self.browse_teams_page, 1, teams, 'Showing 1-10 out of 20 total', True)
def test_browse_team_topics(self):
"""
Scenario: User should be able to navigate to "browse all teams" and "search team description" links.
Given I am enrolled in a course with teams enabled
When I visit the Teams page for a topic
Then I should see the correct page header
And I should see the link to "browse teams in other topics"
        When I navigate to that link
Then I should see the topic browse page
"""
self.browse_teams_page.visit()
self.verify_page_header()
self.browse_teams_page.click_browse_all_teams_link()
self.assertTrue(self.topics_page.is_browser_on_page())
def test_search(self):
"""
Scenario: User should be able to search for a team
Given I am enrolled in a course with teams enabled
When I visit the Teams page for that topic
And I search for 'banana'
Then I should see the search result page
And the search header should be shown
And 0 results should be shown
And my browser should fire a page viewed event for the search page
And a searched event should have been fired
"""
# Note: all searches will return 0 results with the mock search server
# used by Bok Choy.
search_text = 'banana'
self.create_teams(self.topic, 5)
self.browse_teams_page.visit()
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'search-teams',
'topic_id': self.topic['id'],
'team_id': None
}
}, {
'event_type': 'edx.team.searched',
'event': {
'search_text': search_text,
'topic_id': self.topic['id'],
'number_of_results': 0
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events, in_order=False):
search_results_page = self.browse_teams_page.search(search_text)
self.verify_search_header(search_results_page, search_text)
self.assertTrue(search_results_page.get_pagination_header_text().startswith('Showing 0 out of 0 total'))
def test_page_viewed_event(self):
"""
Scenario: Visiting the browse page should fire a page viewed event.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page
Then my browser should post a page viewed event for the teams page
"""
self.create_teams(self.topic, 5)
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'single-topic',
'topic_id': self.topic['id'],
'team_id': None
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events):
self.browse_teams_page.visit()
def test_team_name_xss(self):
"""
Scenario: Team names should be HTML-escaped on the teams page
Given I am enrolled in a course with teams enabled
When I visit the Teams page for a topic, with a team name containing JS code
Then I should not see any alerts
"""
self.post_team_data({
'course_id': self.course_id,
'topic_id': self.topic['id'],
'name': '<script>alert("XSS")</script>',
'description': 'Description',
'language': 'aa',
'country': 'AF'
})
with self.assertRaises(TimeoutException):
self.browser.get(self.browse_teams_page.url)
alert = get_modal_alert(self.browser)
alert.accept()
@attr('shard_5')
class TeamFormActions(TeamsTabBase):
"""
Base class for create, edit, and delete team.
"""
TEAM_DESCRIPTION = 'The Avengers are a fictional team of superheroes.'
topic = {'name': 'Example Topic', 'id': 'example_topic', 'description': 'Description'}
TEAMS_NAME = 'Avengers'
def setUp(self):
super(TeamFormActions, self).setUp()
self.team_management_page = TeamManagementPage(self.browser, self.course_id, self.topic)
def verify_page_header(self, title, description, breadcrumbs):
"""
Verify that the page header correctly reflects the
create team header, description and breadcrumb.
"""
self.assertEqual(self.team_management_page.header_page_name, title)
self.assertEqual(self.team_management_page.header_page_description, description)
self.assertEqual(self.team_management_page.header_page_breadcrumbs, breadcrumbs)
def verify_and_navigate_to_create_team_page(self):
"""Navigates to the create team page and verifies."""
self.browse_teams_page.click_create_team_link()
self.verify_page_header(
title='Create a New Team',
description='Create a new team if you can\'t find an existing team to join, '
'or if you would like to learn with friends you know.',
breadcrumbs='All Topics {topic_name}'.format(topic_name=self.topic['name'])
)
def verify_and_navigate_to_edit_team_page(self):
"""Navigates to the edit team page and verifies."""
# pylint: disable=no-member
self.assertEqual(self.team_page.team_name, self.team['name'])
self.assertTrue(self.team_page.edit_team_button_present)
self.team_page.click_edit_team_button()
self.team_management_page.wait_for_page()
# Edit page header.
self.verify_page_header(
title='Edit Team',
description='If you make significant changes, make sure you notify '
'members of the team before making these changes.',
breadcrumbs='All Topics {topic_name} {team_name}'.format(
topic_name=self.topic['name'],
team_name=self.team['name']
)
)
def verify_team_info(self, name, description, location, language):
"""Verify the team information on team page."""
# pylint: disable=no-member
self.assertEqual(self.team_page.team_name, name)
self.assertEqual(self.team_page.team_description, description)
self.assertEqual(self.team_page.team_location, location)
self.assertEqual(self.team_page.team_language, language)
def fill_create_or_edit_form(self):
"""Fill the create/edit team form fields with appropriate values."""
self.team_management_page.value_for_text_field(
field_id='name',
value=self.TEAMS_NAME,
press_enter=False
)
self.team_management_page.value_for_textarea_field(
field_id='description',
value=self.TEAM_DESCRIPTION
)
self.team_management_page.value_for_dropdown_field(field_id='language', value='English')
self.team_management_page.value_for_dropdown_field(field_id='country', value='Pakistan')
def verify_all_fields_exist(self):
"""
Verify the fields for create/edit page.
"""
self.assertEqual(
self.team_management_page.message_for_field('name'),
'A name that identifies your team (maximum 255 characters).'
)
self.assertEqual(
self.team_management_page.message_for_textarea_field('description'),
'A short description of the team to help other learners understand '
'the goals or direction of the team (maximum 300 characters).'
)
self.assertEqual(
self.team_management_page.message_for_field('country'),
'The country that team members primarily identify with.'
)
self.assertEqual(
self.team_management_page.message_for_field('language'),
'The language that team members primarily use to communicate with each other.'
)
@ddt.ddt
class CreateTeamTest(TeamFormActions):
"""
Tests for creating a new Team within a Topic on the Teams page.
"""
def setUp(self):
super(CreateTeamTest, self).setUp()
self.set_team_configuration({'course_id': self.course_id, 'max_team_size': 10, 'topics': [self.topic]})
self.browse_teams_page = BrowseTeamsPage(self.browser, self.course_id, self.topic)
self.browse_teams_page.visit()
def test_user_can_see_create_team_page(self):
"""
Scenario: The user should be able to see the create team page via the teams list page.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Teams page for that topic
Then I should see the Create Team page link at the bottom
And When I click the create team link
Then I should see the create team page.
And I should see the create team header
And I should also see the help messages for fields.
"""
self.verify_and_navigate_to_create_team_page()
self.verify_all_fields_exist()
def test_user_can_see_error_message_for_missing_data(self):
"""
Scenario: The user should be able to see an error message when a required field is missing.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Create Team page for that topic
Then I should see the Create Team header and form
And When I click create team button without filling required fields
Then I should see the error message and highlighted fields.
"""
self.verify_and_navigate_to_create_team_page()
self.team_management_page.submit_form()
self.assertEqual(
self.team_management_page.validation_message_text,
'Check the highlighted fields below and try again.'
)
self.assertTrue(self.team_management_page.error_for_field(field_id='name'))
self.assertTrue(self.team_management_page.error_for_field(field_id='description'))
def test_user_can_see_error_message_for_incorrect_data(self):
"""
Scenario: The user should be able to see an error message when a required field exceeds its maximum length.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Create Team page for that topic
Then I should see the Create Team header and form
When I enter more than 255 characters in the name field
And I click the Create button
Then I should see the error message for exceeding length.
"""
self.verify_and_navigate_to_create_team_page()
# Fill the name field with >255 characters to see validation message.
self.team_management_page.value_for_text_field(
field_id='name',
value='EdX is a massive open online course (MOOC) provider and online learning platform. '
'It hosts online university-level courses in a wide range of disciplines to a worldwide '
'audience, some at no charge. It also conducts research into learning based on how '
'people use its platform. EdX was created for students and institutions that seek to '
'transform themselves through cutting-edge technologies, innovative pedagogy, and '
'rigorous courses. More than 70 schools, nonprofits, corporations, and international '
'organizations offer or plan to offer courses on the edX website. As of 22 October 2014, '
'edX has more than 4 million users taking more than 500 courses online.',
press_enter=False
)
self.team_management_page.submit_form()
self.assertEqual(
self.team_management_page.validation_message_text,
'Check the highlighted fields below and try again.'
)
self.assertTrue(self.team_management_page.error_for_field(field_id='name'))
def test_user_can_create_new_team_successfully(self):
"""
Scenario: The user should be able to create new team.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Create Team page for that topic
Then I should see the Create Team header and form
When I fill all the fields present with appropriate data
And I click Create button
Then I expect analytics events to be emitted
And I should see the page for my team
And I should see the message that says "You are a member of this team."
And the new team should be added to the list of teams within the topic
And the number of teams should be updated on the topic card
And if I switch to "My Team", the newly created team is displayed
"""
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.browse_teams_page.visit()
self.verify_and_navigate_to_create_team_page()
self.fill_create_or_edit_form()
expected_events = [
{
'event_type': 'edx.team.created'
},
{
'event_type': 'edx.team.learner_added',
'event': {
'add_method': 'added_on_create',
}
}
]
with self.assert_events_match_during(event_filter=self.only_team_events, expected_events=expected_events):
self.team_management_page.submit_form()
# Verify that the page is shown for the new team
team_page = TeamPage(self.browser, self.course_id)
team_page.wait_for_page()
self.assertEqual(team_page.team_name, self.TEAMS_NAME)
self.assertEqual(team_page.team_description, self.TEAM_DESCRIPTION)
self.assertEqual(team_page.team_user_membership_text, 'You are a member of this team.')
# Verify the new team was added to the topic list
self.teams_page.click_specific_topic("Example Topic")
self.teams_page.verify_topic_team_count(1)
self.teams_page.click_all_topics()
self.teams_page.verify_team_count_in_first_topic(1)
# Verify that if one switches to "My Team" without reloading the page, the newly created team is shown.
self.verify_my_team_count(1)
def test_user_can_cancel_the_team_creation(self):
"""
Scenario: The user should be able to cancel the creation of new team.
Given I am enrolled in a course with a team configuration and a topic
When I visit the Create Team page for that topic
Then I should see the Create Team header and form
When I click Cancel button
Then I should see teams list page without any new team.
And if I switch to "My Team", it shows no teams
"""
self.assertTrue(self.browse_teams_page.get_pagination_header_text().startswith('Showing 0 out of 0 total'))
self.verify_and_navigate_to_create_team_page()
self.team_management_page.cancel_team()
self.assertTrue(self.browse_teams_page.is_browser_on_page())
self.assertTrue(self.browse_teams_page.get_pagination_header_text().startswith('Showing 0 out of 0 total'))
self.teams_page.click_all_topics()
self.teams_page.verify_team_count_in_first_topic(0)
self.verify_my_team_count(0)
def test_page_viewed_event(self):
"""
Scenario: Visiting the create team page should fire a page viewed event.
Given I am enrolled in a course with a team configuration and a topic
When I visit the create team page
Then my browser should post a page viewed event
"""
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'new-team',
'topic_id': self.topic['id'],
'team_id': None
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events):
self.verify_and_navigate_to_create_team_page()
@ddt.ddt
class DeleteTeamTest(TeamFormActions):
"""
Tests for deleting teams.
"""
def setUp(self):
super(DeleteTeamTest, self).setUp()
self.set_team_configuration(
{'course_id': self.course_id, 'max_team_size': 10, 'topics': [self.topic]},
global_staff=True
)
self.team = self.create_teams(self.topic, num_teams=1)[0]
self.team_page = TeamPage(self.browser, self.course_id, team=self.team)
# Need a membership so we can confirm it gets deleted as well.
self.create_membership(self.user_info['username'], self.team['id'])
self.team_page.visit()
def test_cancel_delete(self):
"""
Scenario: The user should be able to cancel the Delete Team dialog
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the Delete Team button
When I click the delete team button
And I cancel the prompt
And I refresh the page
Then I should still see the team
"""
self.delete_team(cancel=True)
self.assertTrue(self.team_management_page.is_browser_on_page())
self.browser.refresh()
self.team_management_page.wait_for_page()
self.assertEqual(
' '.join(('All Topics', self.topic['name'], self.team['name'])),
self.team_management_page.header_page_breadcrumbs
)
@ddt.data('Moderator', 'Community TA', 'Administrator', None)
def test_delete_team(self, role):
"""
Scenario: The user should be able to see and navigate to the delete team page.
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the Delete Team button
When I click the delete team button
And I confirm the prompt
Then I should see the browse teams page
And the team should not be present
"""
# If role is None, remain logged in as global staff
if role is not None:
AutoAuthPage(
self.browser,
course_id=self.course_id,
staff=False,
roles=role
).visit()
self.team_page.visit()
self.delete_team(require_notification=False)
browse_teams_page = BrowseTeamsPage(self.browser, self.course_id, self.topic)
self.assertTrue(browse_teams_page.is_browser_on_page())
self.assertNotIn(self.team['name'], browse_teams_page.team_names)
def delete_team(self, **kwargs):
"""
Delete a team. Passes `kwargs` to `confirm_prompt`.
Expects edx.team.deleted event to be emitted, with correct course_id.
Also expects edx.team.learner_removed event to be emitted for the
membership that is removed as a part of the delete operation.
"""
self.team_page.click_edit_team_button()
self.team_management_page.wait_for_page()
self.team_management_page.delete_team_button.click()
if 'cancel' in kwargs and kwargs['cancel'] is True:
confirm_prompt(self.team_management_page, **kwargs)
else:
expected_events = [
{
'event_type': 'edx.team.deleted',
'event': {
'team_id': self.team['id']
}
},
{
'event_type': 'edx.team.learner_removed',
'event': {
'team_id': self.team['id'],
'remove_method': 'team_deleted',
'user_id': self.user_info['user_id']
}
}
]
with self.assert_events_match_during(
event_filter=self.only_team_events, expected_events=expected_events
):
confirm_prompt(self.team_management_page, **kwargs)
def test_delete_team_updates_topics(self):
"""
Scenario: Deleting a team should update the team count on the topics page
Given I am staff user for a course with a team
And I delete a team
When I navigate to the browse topics page
Then the team count for the deleted team's topic should be updated
"""
self.delete_team(require_notification=False)
BrowseTeamsPage(self.browser, self.course_id, self.topic).click_all_topics()
topics_page = BrowseTopicsPage(self.browser, self.course_id)
self.assertTrue(topics_page.is_browser_on_page())
self.teams_page.verify_topic_team_count(0)
@ddt.ddt
class EditTeamTest(TeamFormActions):
"""
Tests for editing the team.
"""
def setUp(self):
super(EditTeamTest, self).setUp()
self.set_team_configuration(
{'course_id': self.course_id, 'max_team_size': 10, 'topics': [self.topic]},
global_staff=True
)
self.team = self.create_teams(self.topic, num_teams=1)[0]
self.team_page = TeamPage(self.browser, self.course_id, team=self.team)
self.team_page.visit()
def test_staff_can_navigate_to_edit_team_page(self):
"""
Scenario: The user should be able to see and navigate to the edit team page.
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the edit team page
And I should see the edit team header
And I should also see the help messages for fields
"""
self.verify_and_navigate_to_edit_team_page()
self.verify_all_fields_exist()
def test_staff_can_edit_team_successfully(self):
"""
Scenario: The staff should be able to edit team successfully.
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the edit team page
And an analytics event should be fired
When I edit all the fields with appropriate data
And I click Update button
Then I should see the page for my team with updated data
"""
self.verify_team_info(
name=self.team['name'],
description=self.team['description'],
location='Afghanistan',
language='Afar'
)
self.verify_and_navigate_to_edit_team_page()
self.fill_create_or_edit_form()
expected_events = [
{
'event_type': 'edx.team.changed',
'event': {
'team_id': self.team['id'],
'field': 'country',
'old': 'AF',
'new': 'PK',
'truncated': [],
}
},
{
'event_type': 'edx.team.changed',
'event': {
'team_id': self.team['id'],
'field': 'name',
'old': self.team['name'],
'new': self.TEAMS_NAME,
'truncated': [],
}
},
{
'event_type': 'edx.team.changed',
'event': {
'team_id': self.team['id'],
'field': 'language',
'old': 'aa',
'new': 'en',
'truncated': [],
}
},
{
'event_type': 'edx.team.changed',
'event': {
'team_id': self.team['id'],
'field': 'description',
'old': self.team['description'],
'new': self.TEAM_DESCRIPTION,
'truncated': [],
}
},
]
with self.assert_events_match_during(event_filter=self.only_team_events, expected_events=expected_events):
self.team_management_page.submit_form()
self.team_page.wait_for_page()
self.verify_team_info(
name=self.TEAMS_NAME,
description=self.TEAM_DESCRIPTION,
location='Pakistan',
language='English'
)
def test_staff_can_cancel_the_team_edit(self):
"""
Scenario: The user should be able to cancel the editing of team.
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the edit team page
Then I should see the Edit Team header
When I click Cancel button
Then I should see the team page without changes.
"""
self.verify_team_info(
name=self.team['name'],
description=self.team['description'],
location='Afghanistan',
language='Afar'
)
self.verify_and_navigate_to_edit_team_page()
self.fill_create_or_edit_form()
self.team_management_page.cancel_team()
self.team_page.wait_for_page()
self.verify_team_info(
name=self.team['name'],
description=self.team['description'],
location='Afghanistan',
language='Afar'
)
def test_student_cannot_see_edit_button(self):
"""
Scenario: The student should not see the edit team button.
Given I am a student in a course with a team
When I visit the Team profile page
Then I should not see the Edit Team button
"""
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.team_page.visit()
self.assertFalse(self.team_page.edit_team_button_present)
@ddt.data('Moderator', 'Community TA', 'Administrator')
def test_discussion_privileged_user_can_edit_team(self, role):
"""
Scenario: The user with specified role should see the edit team button.
Given I am user with privileged role for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
"""
kwargs = {
'course_id': self.course_id,
'staff': False
}
if role is not None:
kwargs['roles'] = role
AutoAuthPage(self.browser, **kwargs).visit()
self.team_page.visit()
self.teams_page.wait_for_page()
self.assertTrue(self.team_page.edit_team_button_present)
self.verify_team_info(
name=self.team['name'],
description=self.team['description'],
location='Afghanistan',
language='Afar'
)
self.verify_and_navigate_to_edit_team_page()
self.fill_create_or_edit_form()
self.team_management_page.submit_form()
self.team_page.wait_for_page()
self.verify_team_info(
name=self.TEAMS_NAME,
description=self.TEAM_DESCRIPTION,
location='Pakistan',
language='English'
)
def test_page_viewed_event(self):
"""
Scenario: Visiting the edit team page should fire a page viewed event.
Given I am enrolled in a course with a team configuration and a topic
When I visit the edit team page
Then my browser should post a page viewed event
"""
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'edit-team',
'topic_id': self.topic['id'],
'team_id': self.team['id']
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events):
self.verify_and_navigate_to_edit_team_page()
@ddt.ddt
class EditMembershipTest(TeamFormActions):
"""
Tests for administrating from the team membership page
"""
def setUp(self):
super(EditMembershipTest, self).setUp()
self.set_team_configuration(
{'course_id': self.course_id, 'max_team_size': 10, 'topics': [self.topic]},
global_staff=True
)
self.team_management_page = TeamManagementPage(self.browser, self.course_id, self.topic)
self.team = self.create_teams(self.topic, num_teams=1)[0]
# Make sure a user exists on this team so we can edit the membership.
self.create_membership(self.user_info['username'], self.team['id'])
self.edit_membership_page = EditMembershipPage(self.browser, self.course_id, self.team)
self.team_page = TeamPage(self.browser, self.course_id, team=self.team)
def edit_membership_helper(self, role, cancel=False):
"""
Helper for common functionality in edit membership tests.
Checks all relevant assertions about the membership being removed,
including verifying that edx.team.learner_removed events are emitted.
"""
if role is not None:
AutoAuthPage(
self.browser,
course_id=self.course_id,
staff=False,
roles=role
).visit()
self.team_page.visit()
self.team_page.click_edit_team_button()
self.team_management_page.wait_for_page()
self.assertTrue(
self.team_management_page.membership_button_present
)
self.team_management_page.click_membership_button()
self.edit_membership_page.wait_for_page()
self.edit_membership_page.click_first_remove()
if cancel:
self.edit_membership_page.cancel_delete_membership_dialog()
self.assertEqual(self.edit_membership_page.team_members, 1)
else:
expected_events = [
{
'event_type': 'edx.team.learner_removed',
'event': {
'team_id': self.team['id'],
'remove_method': 'removed_by_admin',
'user_id': self.user_info['user_id']
}
}
]
with self.assert_events_match_during(
event_filter=self.only_team_events, expected_events=expected_events
):
self.edit_membership_page.confirm_delete_membership_dialog()
self.assertEqual(self.edit_membership_page.team_members, 0)
self.assertTrue(self.edit_membership_page.is_browser_on_page)
@ddt.data('Moderator', 'Community TA', 'Administrator', None)
def test_remove_membership(self, role):
"""
Scenario: The user should be able to remove a membership
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the Edit Membership button
And When I click the edit membership button
Then I should see the edit membership page
And When I click the remove button and confirm the dialog
Then my membership should be removed, and I should remain on the page
"""
self.edit_membership_helper(role, cancel=False)
@ddt.data('Moderator', 'Community TA', 'Administrator', None)
def test_cancel_remove_membership(self, role):
"""
Scenario: The user should be able to cancel removing a membership
Given I am staff user for a course with a team
When I visit the Team profile page
Then I should see the Edit Team button
And When I click edit team button
Then I should see the Edit Membership button
And When I click the edit membership button
Then I should see the edit membership page
And When I click the remove button and cancel the dialog
Then my membership should not be removed, and I should remain on the page
"""
self.edit_membership_helper(role, cancel=True)
@attr('shard_5')
@ddt.ddt
class TeamPageTest(TeamsTabBase):
"""Tests for viewing a specific team"""
SEND_INVITE_TEXT = 'Send this link to friends so that they can join too.'
def setUp(self):
super(TeamPageTest, self).setUp()
self.topic = {u"name": u"Example Topic", u"id": "example_topic", u"description": "Description"}
def _set_team_configuration_and_membership(
self,
max_team_size=10,
membership_team_index=0,
visit_team_index=0,
create_membership=True,
another_user=False):
"""
Set team configuration.
Arguments:
max_team_size (int): number of users a team can have
membership_team_index (int): index of team user will join
visit_team_index (int): index of team user will visit
create_membership (bool): whether to create membership or not
another_user (bool): whether to log in as another user before visiting the team
"""
# pylint: disable=attribute-defined-outside-init
self.set_team_configuration(
{'course_id': self.course_id, 'max_team_size': max_team_size, 'topics': [self.topic]}
)
self.teams = self.create_teams(self.topic, 2)
if create_membership:
self.create_membership(self.user_info['username'], self.teams[membership_team_index]['id'])
if another_user:
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self.team_page = TeamPage(self.browser, self.course_id, self.teams[visit_team_index])
def setup_thread(self):
"""
Create and return a thread for this test's discussion topic.
"""
thread = Thread(
id="test_thread_{}".format(uuid4().hex),
commentable_id=self.teams[0]['discussion_topic_id'],
body="Dummy text body."
)
thread_fixture = MultipleThreadFixture([thread])
thread_fixture.push()
return thread
def setup_discussion_user(self, role=None, staff=False):
"""Set this test's user to have the given role in its
discussions. Role is one of 'Community TA', 'Moderator',
'Administrator', or 'Student'.
"""
kwargs = {
'course_id': self.course_id,
'staff': staff
}
if role is not None:
kwargs['roles'] = role
# pylint: disable=attribute-defined-outside-init
self.user_info = AutoAuthPage(self.browser, **kwargs).visit().user_info
def verify_teams_discussion_permissions(self, should_have_permission):
"""Verify that the teams discussion component is in the correct state
for the test user. If `should_have_permission` is True, assert that
the user can see controls for posting replies, voting, editing, and
deleting. Otherwise, assert that those controls are hidden.
"""
thread = self.setup_thread()
self.team_page.visit()
self.assertEqual(self.team_page.discussion_id, self.teams[0]['discussion_topic_id'])
discussion = self.team_page.discussion_page
self.assertTrue(discussion.is_browser_on_page())
self.assertTrue(discussion.is_discussion_expanded())
self.assertEqual(discussion.get_num_displayed_threads(), 1)
self.assertTrue(discussion.has_thread(thread['id']))
assertion = self.assertTrue if should_have_permission else self.assertFalse
assertion(discussion.q(css='.post-header-actions').present)
assertion(discussion.q(css='.add-response').present)
assertion(discussion.q(css='.new-post-btn').present)
def test_discussion_on_my_team_page(self):
"""
Scenario: Team Page renders a discussion for a team to which I belong.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic of which I am a member
When the team has a discussion with a thread
And I visit the Team page for that team
Then I should see a discussion with the correct discussion_id
And I should see the existing thread
And I should see controls to change the state of the discussion
"""
self._set_team_configuration_and_membership()
self.verify_teams_discussion_permissions(True)
@ddt.data(True, False)
def test_discussion_on_other_team_page(self, is_staff):
"""
Scenario: Team Page renders a team discussion for a team to which I do
not belong.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic of which I am not a member
When the team has a discussion with a thread
And I visit the Team page for that team
Then I should see a discussion with the correct discussion_id
And I should see the team's thread
And I should not see controls to change the state of the discussion
"""
self._set_team_configuration_and_membership(create_membership=False)
self.setup_discussion_user(staff=is_staff)
self.verify_teams_discussion_permissions(False)
@ddt.data('Moderator', 'Community TA', 'Administrator')
def test_discussion_privileged(self, role):
self._set_team_configuration_and_membership(create_membership=False)
self.setup_discussion_user(role=role)
self.verify_teams_discussion_permissions(True)
def assert_team_details(self, num_members, is_member=True, max_size=10):
"""
Verifies that the user can see all the information present on the detail page, as appropriate to their membership status.
Arguments:
num_members (int): number of users in a team
is_member (bool): whether the requesting user is a member (default True)
max_size (int): number of users a team can have
"""
self.assertEqual(
self.team_page.team_capacity_text,
self.team_page.format_capacity_text(num_members, max_size)
)
self.assertEqual(self.team_page.team_location, 'Afghanistan')
self.assertEqual(self.team_page.team_language, 'Afar')
self.assertEqual(self.team_page.team_members, num_members)
if num_members > 0:
self.assertTrue(self.team_page.team_members_present)
else:
self.assertFalse(self.team_page.team_members_present)
if is_member:
self.assertEqual(self.team_page.team_user_membership_text, 'You are a member of this team.')
self.assertTrue(self.team_page.team_leave_link_present)
self.assertTrue(self.team_page.new_post_button_present)
else:
self.assertEqual(self.team_page.team_user_membership_text, '')
self.assertFalse(self.team_page.team_leave_link_present)
self.assertFalse(self.team_page.new_post_button_present)
def test_team_member_can_see_full_team_details(self):
"""
Scenario: Team member can see full info for team.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic of which I am a member
When I visit the Team page for that team
Then I should see the full team detail
And I should see the team members
And I should see my team membership text
And I should see the language & country
And I should see the Leave Team and Invite Team
"""
self._set_team_configuration_and_membership()
self.team_page.visit()
self.assert_team_details(
num_members=1,
)
def test_other_users_can_see_limited_team_details(self):
"""
Scenario: Users who are not member of this team can only see limited info for this team.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic of which I am not a member
When I visit the Team page for that team
Then I should not see full team detail
And I should see the team members
And I should not see my team membership text
And I should not see the Leave Team and Invite Team links
"""
self._set_team_configuration_and_membership(create_membership=False)
self.team_page.visit()
self.assert_team_details(is_member=False, num_members=0)
def test_user_can_navigate_to_members_profile_page(self):
"""
Scenario: User can navigate to profile page via team member profile image.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic of which I am a member
When I visit the Team page for that team
Then I should see profile images for the team members
When I click on the first profile image
Then I should be taken to the user's profile page
And I should see the username on profile page
"""
self._set_team_configuration_and_membership()
self.team_page.visit()
learner_name = self.team_page.first_member_username
self.team_page.click_first_profile_image()
learner_profile_page = LearnerProfilePage(self.browser, learner_name)
learner_profile_page.wait_for_page()
learner_profile_page.wait_for_field('username')
self.assertTrue(learner_profile_page.field_is_visible('username'))
def test_join_team(self):
"""
Scenario: User can join a team if they are not already a member.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic
And I visit the Team page for that team
Then I should see Join Team button
And I should not see New Post button
When I click on Join Team button
Then there should be no Join Team button and no message
And an analytics event should be emitted
And I should see the updated information under Team Details
And I should see New Post button
And if I switch to "My Team", the team I have joined is displayed
"""
self._set_team_configuration_and_membership(create_membership=False)
teams_page = BrowseTeamsPage(self.browser, self.course_id, self.topic)
teams_page.visit()
teams_page.view_first_team()
self.assertTrue(self.team_page.join_team_button_present)
expected_events = [
{
'event_type': 'edx.team.learner_added',
'event': {
'add_method': 'joined_from_team_view'
}
}
]
with self.assert_events_match_during(event_filter=self.only_team_events, expected_events=expected_events):
self.team_page.click_join_team_button()
self.assertFalse(self.team_page.join_team_button_present)
self.assertFalse(self.team_page.join_team_message_present)
self.assert_team_details(num_members=1, is_member=True)
# Verify that if one switches to "My Team" without reloading the page, the newly joined team is shown.
self.teams_page.click_all_topics()
self.verify_my_team_count(1)
def test_already_member_message(self):
"""
Scenario: User should see the `You already belong to another team.`
message if they are a member of another team.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic
And I am already a member of a team
And I visit a team other than mine
Then I should see the `You already belong to another team.` message
"""
self._set_team_configuration_and_membership(membership_team_index=0, visit_team_index=1)
self.team_page.visit()
self.assertEqual(self.team_page.join_team_message, 'You already belong to another team.')
self.assert_team_details(num_members=0, is_member=False)
def test_team_full_message(self):
"""
Scenario: User should see the `This team is full.` message when the team is full.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic
And team has no space left
And I am not a member of any team
And I visit the team
Then I should see the `This team is full.` message
"""
self._set_team_configuration_and_membership(
create_membership=True,
max_team_size=1,
membership_team_index=0,
visit_team_index=0,
another_user=True
)
self.team_page.visit()
self.assertEqual(self.team_page.join_team_message, 'This team is full.')
self.assert_team_details(num_members=1, is_member=False, max_size=1)
def test_leave_team(self):
"""
Scenario: User can leave a team.
Given I am enrolled in a course with a team configuration, a topic,
and a team belonging to that topic
And I am a member of team
And I visit the team
And I should not see Join Team button
And I should see New Post button
Then I should see Leave Team link
When I click on Leave Team link
Then user should be removed from team
And an analytics event should be emitted
And I should see Join Team button
And I should not see New Post button
And if I switch to "My Team", the team I have left is not displayed
"""
self._set_team_configuration_and_membership()
self.team_page.visit()
self.assertFalse(self.team_page.join_team_button_present)
self.assert_team_details(num_members=1)
expected_events = [
{
'event_type': 'edx.team.learner_removed',
'event': {
'remove_method': 'self_removal'
}
}
]
with self.assert_events_match_during(event_filter=self.only_team_events, expected_events=expected_events):
self.team_page.click_leave_team_link()
self.assert_team_details(num_members=0, is_member=False)
self.assertTrue(self.team_page.join_team_button_present)
# Verify that if one switches to "My Team" without reloading the page, the old team no longer shows.
self.teams_page.click_all_topics()
self.verify_my_team_count(0)
def test_page_viewed_event(self):
"""
Scenario: Visiting the team profile page should fire a page viewed event.
Given I am enrolled in a course with a team configuration and a topic
When I visit the team profile page
Then my browser should post a page viewed event
"""
self._set_team_configuration_and_membership()
events = [{
'event_type': 'edx.team.page_viewed',
'event': {
'page_name': 'single-team',
'topic_id': self.topic['id'],
'team_id': self.teams[0]['id']
}
}]
with self.assert_events_match_during(self.only_team_events, expected_events=events):
self.team_page.visit()
| jamiefolsom/edx-platform | common/test/acceptance/tests/lms/test_teams.py | Python | agpl-3.0 | 81,480 | ["VisIt"] | 60634d38018c47eb646a6bf65c170bebdd86f2d1b473061fe548ece5bd223fd7 |
# coding: utf-8
# Copyright (c) Henniggroup.
# Distributed under the terms of the MIT License.
"""
Utility functions
"""
from __future__ import division, print_function, unicode_literals, \
absolute_import
from six.moves import range, zip
import itertools as it
from functools import reduce
import linecache
import sys
import os
import math
import socket
import time
import subprocess as sp
import logging
from collections import OrderedDict, Counter
import yaml
import numpy as np
from monty.json import MontyEncoder, MontyDecoder
from monty.serialization import loadfn, dumpfn
from pymatgen.core.sites import PeriodicSite
from pymatgen import Structure, Lattice, Element
from pymatgen.core.surface import Slab, SlabGenerator
from pymatgen.io.ase import AseAtomsAdaptor
from pymatgen.io.vasp.inputs import Poscar
from pymatgen.core.composition import Composition
from pymatgen.core.operations import SymmOp
from pymatgen.core.periodic_table import _pt_data
from pymatgen.io.vasp.outputs import Vasprun
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from custodian.custodian import Custodian
from fireworks.user_objects.queue_adapters.common_adapter import CommonAdapter
from ase.build import surface
#from ase.lattice.surface import surface
from mpinterfaces.default_logger import get_default_logger
from mpinterfaces import VASP_STD_BIN, QUEUE_SYSTEM, QUEUE_TEMPLATE, VASP_PSP,\
PACKAGE_PATH
__author__ = "Kiran Mathew, Joshua J. Gabriel, Michael Ashton"
__copyright__ = "Copyright 2017, Henniggroup"
__maintainer__ = "Joshua J. Gabriel"
__email__ = "joshgabriel92@gmail.com"
__status__ = "Production"
__date__ = "March 3, 2017"
logger = get_default_logger(__name__)
ELEMENT_RADII = {i: Element(i).atomic_radius for i in _pt_data}
def get_ase_slab(pmg_struct, hkl=(1, 1, 1), min_thick=10, min_vac=10):
"""
Takes in the initial structure as a pymatgen Structure object,
uses ase to generate the slab,
and returns a pymatgen Slab object.
Args:
pmg_struct: pymatgen structure object
hkl: hkl index of surface of slab to be created
min_thick: minimum thickness of slab in Angstroms
min_vac: minimum vacuum spacing in Angstroms
"""
ase_atoms = AseAtomsAdaptor().get_atoms(pmg_struct)
pmg_slab_gen = SlabGenerator(pmg_struct, hkl, min_thick, min_vac)
h = pmg_slab_gen._proj_height
nlayers = int(math.ceil(pmg_slab_gen.min_slab_size / h))
ase_slab = surface(ase_atoms, hkl, nlayers)
ase_slab.center(vacuum=min_vac / 2, axis=2)
pmg_slab_structure = AseAtomsAdaptor().get_structure(ase_slab)
return Slab(lattice=pmg_slab_structure.lattice,
species=pmg_slab_structure.species_and_occu,
coords=pmg_slab_structure.frac_coords,
site_properties=pmg_slab_structure.site_properties,
miller_index=hkl, oriented_unit_cell=pmg_slab_structure,
shift=0., scale_factor=None, energy=None)
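# Usage sketch (illustrative, not part of the original module): build a
# (1, 1, 1) slab from a bulk structure read from a local file. The file
# name 'POSCAR_bulk' is an assumption made for the example.
#
#     bulk = Structure.from_file('POSCAR_bulk')
#     slab = get_ase_slab(bulk, hkl=(1, 1, 1), min_thick=10, min_vac=12)
#     Poscar(slab).write_file('POSCAR_slab')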
def slab_from_file(hkl, filename):
"""
Reads in a structure from file and returns a Slab object.
Useful for reading in 2D/substrate structures from file.
Args:
hkl: miller index of the slab in the input file.
filename: structure file in any format
supported by pymatgen
Returns:
Slab object
"""
slab_input = Structure.from_file(filename)
return Slab(slab_input.lattice,
slab_input.species_and_occu,
slab_input.frac_coords,
hkl,
Structure.from_sites(slab_input, to_unit_cell=True),
shift=0,
scale_factor=np.eye(3, dtype=int),
site_properties=slab_input.site_properties)
def get_magmom_string(structure):
"""
Based on a POSCAR, returns the string required for the MAGMOM
setting in the INCAR. Initializes transition metals with 6.0
Bohr magnetons and all other elements with 0.5.
Args:
structure (Structure): Pymatgen Structure object
Returns:
string with INCAR setting for MAGMOM according to mat2d
database calculations
"""
magmoms, considered = [], []
for s in structure.sites:
if s.specie not in considered:
amount = int(structure.composition[s.specie])
if s.specie.is_transition_metal:
magmoms.append('{}*6.0'.format(amount))
else:
magmoms.append('{}*0.5'.format(amount))
considered.append(s.specie)
return ' '.join(magmoms)
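# Minimal sketch of feeding the MAGMOM string into an INCAR via pymatgen's
# Incar class ('POSCAR' is an example file name, not required by this module):
#
#     from pymatgen.io.vasp.inputs import Incar
#     struct = Structure.from_file('POSCAR')
#     incar = Incar({'ISPIN': 2, 'MAGMOM': get_magmom_string(struct)})
#     incar.write_file('INCAR')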
def get_magmom_mae(poscar, mag_init):
"""
Returns a flattened MAGMOM list for a magnetic anisotropy energy (MAE) calculation.
"""
mae_magmom = []
sites_dict = poscar.as_dict()['structure']['sites']
# Initialize a magnetic moment on each transition metal, in vector
# form along the z-direction; all other sites get a zero moment.
for s in sites_dict:
if Element(s['label']).is_transition_metal:
mae_magmom.append([0.0, 0.0, mag_init])
else:
mae_magmom.append([0.0, 0.0, 0.0])
return sum(mae_magmom, [])
def get_magmom_afm(poscar, database=None):
"""
Returns the antiferromagnetic MAGMOM list (length N) and a Poscar for the (possibly supercell-expanded) structure.
"""
afm_magmom = []
orig_structure_name = poscar.comment
if len(poscar.structure) % 2 != 0:
if database == 'twod':
# no need for more vacuum spacing
poscar.structure.make_supercell([2, 2, 1])
else:
# for bulk structure
poscar.structure.make_supercell([2, 2, 2])
sites_dict = poscar.as_dict()['structure']['sites']
for n, s in enumerate(sites_dict):
if Element(s['label']).is_transition_metal:
if n % 2 == 0:
afm_magmom.append(6.0)
else:
afm_magmom.append(-6.0)
else:
if n % 2 == 0:
afm_magmom.append(0.5)
else:
afm_magmom.append(-0.5)
return afm_magmom, Poscar(structure=poscar.structure,
comment=orig_structure_name)
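# Illustrative use of the AFM helper; 'POSCAR' and the 'twod' database flag
# are assumptions for the example:
#
#     poscar = Poscar.from_file('POSCAR')
#     afm_magmom, poscar = get_magmom_afm(poscar, database='twod')
#     # afm_magmom alternates +/-6.0 on transition metals and +/-0.5 elsewhere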
def get_run_cmmnd(nnodes=1, ntasks=16, walltime='10:00:00', job_bin=None,
job_name=None, mem=None):
"""
returns the fireworks CommonAdapter based on the queue
system specified by mpint_config.yaml and the submit
file template also specified in mpint_config.yaml
NOTE: for the job_bin, please specify the mpi command as well:
Eg: mpiexec /path/to/binary
"""
d = {}
job_cmd = None
try:
with open(QUEUE_TEMPLATE + 'qtemplate.yaml') as qtemp_file:
qtemp = yaml.safe_load(qtemp_file)
except Exception:
# test case scenario
qtemp = {'account': None, 'mem': None, \
'walltime': '10:00:00', 'nodes': 1, 'pre_rocket': None, 'job_name': None, \
'ntasks': 16, 'email': None, 'rocket_launch': None}
# Resolve the default binary before it is written into the template,
# so the default actually reaches 'rocket_launch'.
if job_bin is None:
job_bin = VASP_STD_BIN
qtemp.update({'nodes': nnodes, 'ntasks': ntasks, 'walltime': walltime,
'rocket_launch': job_bin, 'job_name': job_name, 'mem': mem})
# SLURM queue
if QUEUE_SYSTEM == 'slurm':
d = {'type': 'SLURM',
'params': qtemp}
# PBS queue
elif QUEUE_SYSTEM == 'pbs':
d = {'type': 'PBS',
'params': qtemp}
else:
job_cmd = ['ls', '-lt']
if d:
return (CommonAdapter(d['type'], **d['params']), job_cmd)
else:
return (None, job_cmd)
def get_job_state(job):
"""
Args:
job: job object with a `job_id` attribute
Returns:
the job state and the job output file name
"""
ofname = None
# pbs
if QUEUE_SYSTEM == 'pbs':
try:
output = sp.check_output(['qstat', '-i', job.job_id])
state = output.rstrip('\n').split('\n')[-1].split()[-2]
except Exception:
logger.info('Job {} not in the queue'.format(job.job_id))
state = "00"
ofname = "FW_job.out"
# slurm
elif QUEUE_SYSTEM == 'slurm':
try:
output = sp.check_output(['squeue', '--job', job.job_id])
state = output.rstrip('\n').split('\n')[-1].split()[-4]
except Exception:
logger.info('Job {} not in the queue.'.format(job.job_id))
logger.info(
'This could mean either the batch system crashed (highly unlikely) or the job completed a long time ago')
state = "00"
ofname = "vasp_job-" + str(job.job_id) + ".out"
# no batch system
else:
state = 'XX'
return state, ofname
def update_checkpoint(job_ids=None, jfile=None, **kwargs):
"""
rerun the jobs with job ids in the job_ids list. The jobs are
read from the json checkpoint file, jfile.
If no job_ids are given, the checkpoint file will
be updated with the corresponding final energies.
Args:
job_ids: list of job ids to rerun or update
jfile: checkpoint file
"""
cal_log = loadfn(jfile, cls=MontyDecoder)
cal_log_new = []
all_jobs = []
run_jobs = []
handlers = []
final_energy = None
incar = None
kpoints = None
qadapter = None
# if updating the specs of the job
for k, v in kwargs.items():
if k == 'incar':
incar = v
if k == 'kpoints':
kpoints = v
if k == 'que':
qadapter = v
for j in cal_log:
job = j["job"]
job.job_id = j['job_id']
all_jobs.append(job)
if job_ids and (j['job_id'] in job_ids or job.job_dir in job_ids):
logger.info('setting job {0} in {1} to rerun'.format(j['job_id'],
job.job_dir))
contcar_file = job.job_dir + os.sep + 'CONTCAR'
poscar_file = job.job_dir + os.sep + 'POSCAR'
if os.path.isfile(contcar_file) and len(
open(contcar_file).readlines()) != 0:
logger.info('setting poscar file from {}'
.format(contcar_file))
job.vis.poscar = Poscar.from_file(contcar_file)
else:
logger.info('setting poscar file from {}'
.format(poscar_file))
job.vis.poscar = Poscar.from_file(poscar_file)
if incar:
logger.info('incar overridden')
job.vis.incar = incar
if kpoints:
logger.info('kpoints overridden')
job.vis.kpoints = kpoints
if qadapter:
logger.info('qadapter overridden')
job.vis.qadapter = qadapter
run_jobs.append(job)
if run_jobs:
c = Custodian(handlers, run_jobs, max_errors=5)
c.run()
for j in all_jobs:
final_energy = j.get_final_energy()
cal_log_new.append({"job": j.as_dict(),
'job_id': j.job_id,
"corrections": [],
'final_energy': final_energy})
dumpfn(cal_log_new, jfile, cls=MontyEncoder, indent=4)
def jobs_from_file(filename='calibrate.json'):
"""
Read in a json file of the format calibrate.json (the default
logfile created when jobs are run through calibrate) and return
the list of job objects.
Args:
filename: checkpoint file name
Returns:
list of all jobs
"""
caljobs = loadfn(filename, cls=MontyDecoder)
all_jobs = []
for j in caljobs:
job = j["job"]
job.job_id = j['job_id']
job.final_energy = j['final_energy']
all_jobs.append(job)
return all_jobs
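# Minimal sketch of post-processing a checkpoint file, assuming the default
# 'calibrate.json' exists in the working directory:
#
#     for job in jobs_from_file('calibrate.json'):
#         print(job.job_id, job.final_energy)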
def launch_daemon(steps, interval, handlers=None, ld_logger=None):
"""
Run all the 'steps' in daemon mode.
Checks job status every 'interval' seconds
and runs all the error handlers.
"""
if ld_logger:
global logger
logger = ld_logger
chkpt_files_prev = None
for step in steps:
chkpt_files = step(checkpoint_files=chkpt_files_prev)
chkpt_files_prev = chkpt_files
if not chkpt_files:
return None
while True:
done = []
reruns = []
for cf in chkpt_files:
time.sleep(3)
update_checkpoint(job_ids=reruns, jfile=cf)
all_jobs = jobs_from_file(cf)
for j in all_jobs:
state, ofname = get_job_state(j)
if j.final_energy:
done = done + [True]
elif state == 'R':
logger.info('job {} running'.format(j.job_id))
done = done + [False]
elif state in ['C', 'CF', 'F', '00']:
logger.error(
'Job {0} in {1} cancelled or failed. State = {2}'.
format(j.job_id, j.job_dir, state))
done = done + [False]
if handlers:
logger.info('Investigating ... ')
os.chdir(j.job_dir)
if ofname:
if os.path.exists(ofname):
for h in handlers:
h.output_filename = ofname
if h.check():
logger.error(
'Detected vasp errors {}'.format(
h.errors))
# TODO: correct the error and mark the job for rerun;
# all error handling must be done using proper error handlers
# h.correct()
# reruns.append(j.job_id)
else:
logger.error(
'stdout redirect file not generated, job {} will be rerun'.format(
j.job_id))
reruns.append(j.job_id)
os.chdir(j.parent_job_dir)
else:
logger.info(
'Job {0} pending. State = {1}'.format(j.job_id,
state))
done = done + [False]
if all(done):
logger.info(
'all jobs in {} done. Proceeding to the next one'.format(
step.__name__))
time.sleep(5)
break
logger.info(
'all jobs in {0} NOT done. Next update in {1} seconds'.format(
step.__name__, interval))
time.sleep(interval)
def get_convergence_data(jfile, params=('ENCUT', 'KPOINTS')):
"""
returns data dict in the following format
{'Al':
{'ENCUT': [ [500,1.232], [600,0.8798] ],
'KPOINTS':[ [], [] ]
},
'W': ...
}
Note: processes only INCAR parameters and KPOINTS
"""
cutoff_jobs = jobs_from_file(jfile)
data = {}
for j in cutoff_jobs:
jdir = os.path.join(j.parent_job_dir, j.job_dir)
poscar_file = os.path.join(jdir, 'POSCAR')
struct_m = Structure.from_file(poscar_file)
species = ''.join([tos.symbol for tos in struct_m.types_of_specie])
if data.get(species):
for p in params:
if j.vis.incar.get(p):
data[species][p].append([j.vis.incar[p],
j.final_energy / len(struct_m)])
elif p == 'KPOINTS':
data[species]['KPOINTS'].append([j.vis.kpoints.kpts,
j.final_energy / len(
struct_m)])
else:
logger.warning(
"don't know how to parse the parameter {}".format(p))
else:
data[species] = {}
for p in params:
data[species][p] = []
return data
def get_opt_params(data, species, param='ENCUT', ev_per_atom=0.001):
"""
return optimum parameter
default: 1 meV/atom
"""
sorted_list = sorted(data[species][param], key=lambda x: x[1])
sorted_array = np.array(sorted_list)
consecutive_diff = np.abs(
sorted_array[:-1, 1] - sorted_array[1:, 1] - ev_per_atom)
min_index = np.argmin(consecutive_diff)
return sorted_list[min_index][0]
# PLEASE DON'T CHANGE THINGS WITHOUT UPDATING SCRIPTS/MODULES
# THAT DEPEND ON IT
# get_convergence_data and get_opt_params moved to *_custom
def get_convergence_data_custom(jfile, params=('ENCUT', 'KPOINTS')):
"""
returns data dict in the following format
{'Al':
{'ENCUT': [ [500,1.232], [600,0.8798] ],
'KPOINTS':[ [], [] ]
},
'W': ...
}
Note: processes only INCAR parameters and KPOINTS.
Sufficient tagging of the data is assumed from the species, Poscar
comment line, and Potcar functional.
"""
cutoff_jobs = jobs_from_file(jfile)
data = {}
for j in cutoff_jobs:
jdir = os.path.join(j.parent_job_dir, j.job_dir)
poscar_file = os.path.join(jdir, 'POSCAR')
struct_m = Structure.from_file(poscar_file)
species = ''.join([tos.symbol for tos in struct_m.types_of_specie])
tag = '_'.join([species, Poscar.from_file(poscar_file).comment,
j.vis.potcar.functional])
if data.get(tag):
for p in params:
if j.vis.incar.get(p):
data[tag][p].append([j.vis.incar[p],
j.final_energy / len(struct_m),
j.vis.potcar, j.vis.poscar])
elif p == 'KPOINTS':
data[tag]['KPOINTS'].append([j.vis.kpoints.kpts,
j.final_energy / len(
struct_m), j.vis.potcar,
j.vis.poscar])
else:
logger.warning(
"don't know how to parse the parameter {}".format(p))
else:
data[tag] = {}
for p in params:
data[tag][p] = []
return data
def get_opt_params_custom(data, tag, param='ENCUT', ev_per_atom=1.0):
"""
Args:
data: dictionary of convergence data
tag: key into the dictionary of convergence data
param: parameter to be optimized
ev_per_atom: convergence criterion in eV per atom
Returns:
[list] optimum parameter set consisting of the tag, Potcar object,
Poscar object, optimum parameter value, and the array of
convergence energies sorted according to param
"""
sorted_list = sorted(data[tag][param], key=lambda x: x[0])
# sorted array data
t = np.array(sorted_list)[:, 1]
consecutive_diff = [float(j) - float(i) - ev_per_atom for i, j in
zip(t[:-1], t[1:])]
# print("Consecutive_diff",consecutive_diff)
min_index = np.argmin(consecutive_diff)
# return the tag,potcar object, poscar object, incar setting and
# convergence data for plotting that is optimum
return [tag, data[tag][param][min_index][2],
data[tag][param][min_index][3], sorted_list[min_index][0], t]
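# Sketch of the intended convergence workflow, chaining the two *_custom
# helpers above ('encut.json' is a hypothetical checkpoint file):
#
#     data = get_convergence_data_custom('encut.json', params=('ENCUT',))
#     tag = list(data.keys())[0]
#     opt = get_opt_params_custom(data, tag, param='ENCUT')
#     # opt = [tag, potcar, poscar, optimum ENCUT, sorted energies]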
def partition_jobs(turn_knobs, max_jobs):
"""
Divide turn_knobs into a list of smaller turn_knobs dicts so that
each one launches at most max_jobs jobs.
params_len = [len(v) for k, v in turn_knobs.items()]
n_total_jobs = reduce(lambda x, y: x * y, params_len)
partition_size = max(1, int(n_total_jobs / max_jobs))
max_index = np.argmax(params_len)
max_len = max(params_len)
max_key = list(turn_knobs.items())[max_index][0]
# list() so the range can be sliced and concatenated below
partition = list(range(0, max_len, max(1, int(max_len / partition_size))))
partition_1 = partition[1:] + [max_len]
logger.info(
'{0} list of length {1} will be partitioned into {2} chunks'.format(
max_key, max_len, len(partition)))
turn_knobs_list = []
name_list = []
for i, j in zip(partition, partition_1):
ordered_list = []
for k, v in turn_knobs.items():
if k == max_key:
tk_item = (k, v[i:j])
else:
tk_item = (k, v)
ordered_list.append(tk_item)
turn_knobs_list.append(OrderedDict(ordered_list))
name_list.append('_'.join([str(i), str(j)]))
return turn_knobs_list, name_list
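# Worked example (values are illustrative). With four ENCUT values and
# max_jobs=2, the knob with the most values is split into two chunks:
#
#     turn_knobs = OrderedDict([('ENCUT', [400, 500, 600, 700]),
#                               ('KPOINTS', [[6, 6, 1]])])
#     tk_list, names = partition_jobs(turn_knobs, max_jobs=2)
#     # tk_list[0]['ENCUT'] == [400, 500]; names == ['0_2', '2_4']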
def get_logger(log_file_name):
"""
Writes out a logging file.
Very useful for project logging; recommended for use
to monitor the start and completion of steps in the workflow.
Args:
log_file_name: name of the log file; output goes to log_file_name.log
"""
loggr = logging.getLogger(log_file_name)
loggr.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')
fh = logging.FileHandler(log_file_name + '.log', mode='a')
fh.setFormatter(formatter)
loggr.addHandler(fh)
return loggr
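# Typical use (the log file name is arbitrary):
#
#     log = get_logger('my_workflow')
#     log.info('step 1 submitted')  # appended to my_workflow.log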
def set_sd_flags(poscar_input=None, n_layers=2, top=True, bottom=True,
poscar_output='POSCAR2'):
"""
set the relaxation flags for top and bottom layers of interface.
The upper and lower bounds of the z coordinate are determined
based on the slab.
Args:
poscar_input: input poscar file name
n_layers: number of layers to be relaxed
top: whether n_layers from the top are to be relaxed
bottom: whether n_layers from the bottom are to be relaxed
poscar_output: output poscar file name
Returns:
None
writes the modified poscar file
"""
poscar1 = Poscar.from_file(poscar_input)
sd_flags = np.zeros_like(poscar1.structure.frac_coords)
z_coords = poscar1.structure.frac_coords[:, 2]
z_lower_bound, z_upper_bound = None, None
if bottom:
z_lower_bound = np.unique(z_coords)[n_layers - 1]
sd_flags[np.where(z_coords <= z_lower_bound)] = np.ones((1, 3))
if top:
z_upper_bound = np.unique(z_coords)[-n_layers]
sd_flags[np.where(z_coords >= z_upper_bound)] = np.ones((1, 3))
poscar2 = Poscar(poscar1.structure, selective_dynamics=sd_flags.tolist())
poscar2.write_file(filename=poscar_output)
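# Example: relax only the two outermost layers on each side of a slab
# (the file names are placeholders):
#
#     set_sd_flags(poscar_input='POSCAR', n_layers=2,
#                  top=True, bottom=True, poscar_output='POSCAR_sd')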
def print_exception():
"""
Prints details of the exception currently being handled;
can be a very useful tool for a developer.
Activate when debug mode is on.
"""
exc_type, exc_obj, tb = sys.exc_info()
f = tb.tb_frame
lineno = tb.tb_lineno
filename = f.f_code.co_filename
linecache.checkcache(filename)
line = linecache.getline(filename, lineno, f.f_globals)
print('EXCEPTION IN ({}, LINE {} "{}"): {}'.format(filename, lineno,
line.strip(), exc_obj))
def is_converged(directory):
"""
Check if a relaxation has converged.
Args:
directory (str): path to directory to check.
Returns:
boolean. Whether or not the job is converged.
"""
try:
return Vasprun('{}/vasprun.xml'.format(directory)).converged
except Exception:
return False
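# Example guard before post-processing a relaxation directory
# ('my_relax' is a placeholder path):
#
#     if is_converged('my_relax'):
#         final_energy = Vasprun('my_relax/vasprun.xml').final_energy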
def get_spacing(structure):
"""
Returns the interlayer spacing for a 2D material or slab.
Args:
structure (Structure): Structure to check spacing for.
Returns:
float. Spacing in Angstroms.
"""
structure = align_axis(structure)
structure = center_slab(structure)
max_height = max([s.coords[2] for s in structure.sites])
min_height = min([s.coords[2] for s in structure.sites])
return structure.lattice.c - (max_height - min_height)
def center_slab(structure):
"""
Centers the atoms in a slab structure around 0.5
fractional height.
Args:
structure (Structure): Structure to center
Returns:
Centered Structure object.
"""
center = np.average([s.frac_coords[2] for s in structure.sites])
translation = (0, 0, 0.5 - center)
structure.translate_sites(range(len(structure.sites)), translation)
return structure
def add_vacuum(structure, vacuum):
"""
Adds padding to a slab or 2D material.
Args:
structure (Structure): Structure to add vacuum to
vacuum (float): Vacuum thickness to add in Angstroms
Returns:
Structure object with vacuum added.
"""
structure = align_axis(structure)
lattice = np.array(structure.lattice.matrix)
lattice[2][2] += vacuum
vac_added_structure = Structure(lattice, structure.species,\
structure.cart_coords, coords_are_cartesian=True)
return center_slab(vac_added_structure)
def ensure_vacuum(structure, vacuum):
"""
Adds padding to a slab or 2D material until the desired amount
of vacuum is reached.
Args:
structure (Structure): Structure to add vacuum to
vacuum (float): Final desired vacuum thickness in Angstroms
Returns:
Structure object with vacuum added.
"""
structure = align_axis(structure)
spacing = get_spacing(structure)
structure = add_vacuum(structure, vacuum - spacing)
return center_slab(structure)
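# Sketch combining the vacuum helpers above (the 2D structure file name is
# an assumption):
#
#     s = Structure.from_file('POSCAR_2d')
#     s = ensure_vacuum(s, vacuum=20)  # pad to 20 Angstroms of vacuum
#     print(get_spacing(s))            # ~20.0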
def get_rotation_matrix(axis, theta):
"""
Find the rotation matrix associated with counterclockwise rotation
about the given axis by theta radians.
Credit: http://stackoverflow.com/users/190597/unutbu
Args:
axis (list): rotation axis of the form [x, y, z]
theta (float): rotational angle in radians
Returns:
array. Rotation matrix.
"""
axis = np.array(list(axis))
axis = axis / np.linalg.norm(axis)
axis *= -np.sin(theta/2.0)
a = np.cos(theta/2.0)
b, c, d = tuple(axis.tolist())
aa, bb, cc, dd = a*a, b*b, c*c, d*d
bc, ad, ac, ab, bd, cd = b*c, a*d, a*c, a*b, b*d, c*d
return np.array([[aa+bb-cc-dd, 2*(bc+ad), 2*(bd-ac)],
[2*(bc-ad), aa+cc-bb-dd, 2*(cd+ab)],
[2*(bd+ac), 2*(cd-ab), aa+dd-bb-cc]])
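# Quick sanity check: a counterclockwise rotation of pi/2 about z should
# map the x axis onto the y axis (up to floating-point error):
#
#     R = get_rotation_matrix([0, 0, 1], np.pi / 2)
#     assert np.allclose(np.dot(R, [1, 0, 0]), [0, 1, 0])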
def align_axis(structure, axis='c', direction=(0, 0, 1)):
"""
Rotates a structure so that the specified axis is along
the [001] direction. This is useful for adding vacuum, and
in general for using vasp compiled with no z-axis relaxation.
Args:
structure (Structure): Pymatgen Structure object to rotate.
axis: Axis to be rotated. Can be 'a', 'b', 'c', or a 1x3 vector.
direction (vector): Final axis to be rotated to.
Returns:
structure. Rotated to align axis along direction.
"""
if axis == 'a':
axis = structure.lattice._matrix[0]
elif axis == 'b':
axis = structure.lattice._matrix[1]
elif axis == 'c':
axis = structure.lattice._matrix[2]
proj_axis = np.cross(axis, direction)
if not(proj_axis[0] == 0 and proj_axis[1] == 0):
theta = (
np.arccos(np.dot(axis, direction)
/ (np.linalg.norm(axis) * np.linalg.norm(direction)))
)
R = get_rotation_matrix(proj_axis, theta)
rotation = SymmOp.from_rotation_and_translation(rotation_matrix=R)
structure.apply_operation(rotation)
if axis == 'c' and direction == (0, 0, 1):
structure.lattice._matrix[2][2] = abs(structure.lattice._matrix[2][2])
return structure
def get_structure_type(structure, tol=0.1, seed_index=0,
write_poscar_from_cluster=False):
"""
This is a topology-scaling algorithm used to describe the
periodicity of bonded clusters in a bulk structure.
Args:
structure (structure): Pymatgen structure object to classify.
tol (float): Additional percent of atomic radii to allow
for overlap, thereby defining bonds
(0.1 = +10%, -0.1 = -10%)
seed_index (int): Atom number to start the cluster.
write_poscar_from_cluster (bool): Set to True to write a
POSCAR file from the sites in the cluster.
Returns:
string. "molecular" (0D), "chain" (1D), "layered" (2D), or
"conventional" (3D). Also includes " heterogeneous"
if the cluster's composition is not equal to that
of the overal structure.
"""
# Get conventional structure to orthogonalize the lattice as
# much as possible. A tolerance of 0.1 Angst. was suggested by
# pymatgen developers.
s = SpacegroupAnalyzer(structure, 0.1).get_conventional_standard_structure()
heterogeneous = False
noble_gases = ["He", "Ne", "Ar", "Kr", "Xe", "Rn"]
if len([e for e in structure.composition if e.symbol in noble_gases]) != 0:
type = "noble gas"
else:
# make 2x2x2 supercell to ensure sufficient number of atoms
# for cluster building.
s.make_supercell(2)
# Distance matrix (rowA, columnB) shows distance between
# atoms A and B, taking PBCs into account.
distance_matrix = s.distance_matrix
# Fill diagonal with a large number, so the code knows that
# each atom is not bonded to itself.
np.fill_diagonal(distance_matrix, 100)
# Rows (`radii`) and columns (`radiiT`) of radii.
radii = [ELEMENT_RADII[site.species_string] for site in s.sites]
radiiT = np.array(radii)[np.newaxis].T
radii_matrix = radii + radiiT*(1+tol)
# elements of temp that have value less than 0 are bonded.
temp = distance_matrix - radii_matrix
# True (1) is placed where temp < 0, and False (0) where
# it is not.
binary_matrix = (temp < 0).astype(int)
# list of atoms bonded to the seed atom of a cluster
seed = set((np.where(binary_matrix[seed_index]==1))[0])
cluster = seed
NEW = seed
while True:
temp_set = set()
for n in NEW:
# temp_set will have all atoms, without duplicates,
# that are connected to all atoms in NEW.
temp_set.update(set(np.where(binary_matrix[n]==1)[0]))
if temp_set.issubset(cluster):
# if temp_set has no new atoms, the search is done.
break
else:
NEW = temp_set - cluster # List of newly discovered atoms
cluster.update(temp_set) # cluster is updated with new atoms
if len(cluster) == 0: # i.e. the cluster is a single atom.
cluster = [seed_index] # Make sure it's not empty to write POSCAR.
type = "molecular"
elif len(cluster) == len(s.sites): # i.e. all atoms are bonded.
type = "conventional"
else:
cmp = Composition.from_dict(Counter([s[l].specie.name for l in
list(cluster)]))
if cmp.reduced_formula != s.composition.reduced_formula:
# i.e. the cluster does not have the same composition
# as the overall crystal; therefore there are other
# clusters of varying composition.
heterogeneous = True
old_cluster_size = len(cluster)
# Increase structure to determine whether it is
# layered or molecular, then perform the same kind
# of cluster search as before.
s.make_supercell(2)
distance_matrix = s.distance_matrix
np.fill_diagonal(distance_matrix,100)
radii = [ELEMENT_RADII[site.species_string] for site in s.sites]
radiiT = np.array(radii)[np.newaxis].T
radii_matrix = radii + radiiT*(1+tol)
temp = distance_matrix-radii_matrix
binary_matrix = (temp < 0).astype(int)
seed = set((np.where(binary_matrix[seed_index]==1))[0])
cluster = seed
NEW = seed
check = True
while check:
temp_set = set()
for n in NEW:
temp_set.update(set(np.where(binary_matrix[n]==1)[0]))
if temp_set.issubset(cluster):
check = False
else:
NEW = temp_set - cluster
cluster.update(temp_set)
if len(cluster) != 4 * old_cluster_size:
type = "molecular"
else:
type = "layered"
if heterogeneous:
type += " heterogeneous"
cluster_sites = [s.sites[n] for n in cluster]
if write_poscar_from_cluster:
s.from_sites(cluster_sites).get_primitive_structure().to("POSCAR",
"POSCAR")
return type
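# Illustrative call (the input file name is a placeholder); a dense bulk
# crystal is expected to come back as 'conventional':
#
#     s = Structure.from_file('POSCAR_bulk')
#     print(get_structure_type(s, tol=0.1))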
def write_potcar(pot_path=VASP_PSP, types='None'):
"""
Writes a POTCAR file based on a list of types.
Args:
pot_path (str): can be changed to override default location
of POTCAR files.
types (list): list of same length as number of elements
containing specifications for the kind of potential
desired for each element, e.g. ['Na_pv', 'O_s']. If
left as 'None', uses the defaults in the
'potcar_symbols.yaml' file in the package root.
"""
if pot_path is None:
# This probably means the mpint_config.yaml file has not
# been set up.
pass
else:
poscar = open('POSCAR', 'r')
lines = poscar.readlines()
elements = lines[5].split()
poscar.close()
potcar_symbols = loadfn(
os.path.join(PACKAGE_PATH, 'mat2d', 'potcar_symbols.yaml')
)
if types == 'None':
sorted_types = [potcar_symbols[elt] for elt in elements]
else:
sorted_types = []
for elt in elements:
for t in types:
if t.split('_')[0] == elt:
sorted_types.append(t)
potentials = []
# Create paths, open files, and write files to
# POTCAR for each potential.
for potential in sorted_types:
potentials.append('{}/{}/POTCAR'.format(pot_path, potential))
outfile = open('POTCAR', 'w')
for potential in potentials:
infile = open(potential)
for line in infile:
outfile.write(line)
infile.close()
outfile.close()
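# Example usage (hypothetical potential labels; assumes a POSCAR in
# the current directory): write_potcar(types=['Na_pv', 'O_s'])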
def write_circle_mesh_kpoints(center=[0, 0, 0], radius=0.1, resolution=20):
"""
Create a circular mesh of k-points centered around a specific
k-point and write it to the KPOINTS file. Non-circular meshes
are not supported, but would be easy to code. All
k-point weights are set to 1.
Args:
center (list): x, y, and z coordinates of mesh center.
Defaults to Gamma.
radius (float): Size of the mesh in inverse Angstroms.
resolution (int): Number of mesh divisions along the
radius in each of the two in-plane directions.
"""
kpoints = []
step = radius / resolution
for i in range(-resolution, resolution):
for j in range(-resolution, resolution):
if i**2 + j**2 <= resolution**2:
kpoints.append([str(center[0]+step*i),
str(center[1]+step*j), '0', '1'])
with open('KPOINTS', 'w') as kpts:
kpts.write('KPOINTS\n{}\ndirect\n'.format(len(kpoints)))
for kpt in kpoints:
kpts.write(' '.join(kpt))
kpts.write('\n')
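# Example (hypothetical center): write a disk of k-points of radius
# 0.05 inverse Angstroms around [0.333, 0.333, 0]:
# write_circle_mesh_kpoints(center=[0.333, 0.333, 0], radius=0.05,
#                           resolution=10)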
def get_markovian_path(points):
"""
Calculates the shortest path connecting an array of 2D
points. Useful for sorting linemode k-points.
Args:
points (list): list/array of points of the format
[[x_1, y_1, z_1], [x_2, y_2, z_2], ...]
Returns:
list: The points sorted in visiting order along the shortest path.
"""
def dist(x, y):
return math.hypot(y[0] - x[0], y[1] - x[1])
paths = [p for p in it.permutations(points)]
path_distances = [
sum(map(lambda x: dist(x[0], x[1]), zip(p[:-1], p[1:])))
for p in paths]
min_index = np.argmin(path_distances)
return paths[min_index]
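# Example (hypothetical points):
# get_markovian_path([[0, 0, 0], [1, 0, 0], [0.5, 0.5, 0]])
# Note: the brute-force search over all N! orderings is only
# practical for a small number of points.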
def remove_z_kpoints():
"""
Strips all linemode k-points from the KPOINTS file that include a
z-component, since these are not relevant for 2D materials and
slabs.
"""
kpoint_file = open('KPOINTS')
kpoint_lines = kpoint_file.readlines()
kpoint_file.close()
twod_kpoints = []
labels = {}
i = 4
while i < len(kpoint_lines):
kpt_1 = kpoint_lines[i].split()
kpt_2 = kpoint_lines[i+1].split()
if float(kpt_1[2]) == 0.0 and [float(kpt_1[0]),
float(kpt_1[1])] not in twod_kpoints:
twod_kpoints.append([float(kpt_1[0]), float(kpt_1[1])])
labels[kpt_1[4]] = [float(kpt_1[0]), float(kpt_1[1])]
if float(kpt_2[2]) == 0.0 and [float(kpt_2[0]),
float(kpt_2[1])] not in twod_kpoints:
twod_kpoints.append([float(kpt_2[0]), float(kpt_2[1])])
labels[kpt_2[4]] = [float(kpt_2[0]), float(kpt_2[1])]
i += 3
kpath = get_markovian_path(twod_kpoints)
with open('KPOINTS', 'w') as kpts:
for line in kpoint_lines[:4]:
kpts.write(line)
for i in range(len(kpath)):
label_1 = [l for l in labels if labels[l] == kpath[i]][0]
if i == len(kpath) - 1:
kpt_2 = kpath[0]
label_2 = [l for l in labels if labels[l] == kpath[0]][0]
else:
kpt_2 = kpath[i+1]
label_2 = [l for l in labels if labels[l] == kpath[i+1]][0]
kpts.write(' '.join([str(kpath[i][0]), str(kpath[i][1]), '0.0 !',
label_1]))
kpts.write('\n')
kpts.write(' '.join([str(kpt_2[0]), str(kpt_2[1]), '0.0 !',
label_2]))
kpts.write('\n\n')
def update_submission_template(default_template, qtemplate):
"""
Helper function for writing a CommonAdapter template FireWorks
submission file, based on a provided default_template (which
contains HPC resource allocation information) and a qtemplate
(a YAML file of commonly modified user arguments).
"""
pass
def write_pbs_runjob(name, nnodes, nprocessors, pmem, walltime, binary):
"""
writes a runjob based on a name, nnodes, nprocessors, pmem,
walltime, and binary. Designed for runjobs on the Hennig group_list on
HiperGator 1 (PBS).
Args:
name (str): job name.
nnodes (int): number of requested nodes.
nprocessors (int): number of requested processors.
pmem (str): requested memory including units, e.g. '1600mb'.
walltime (str): requested wall time, hh:mm:ss e.g. '2:00:00'.
binary (str): absolute path to binary to run.
"""
runjob = open('runjob', 'w')
runjob.write('#!/bin/sh\n')
runjob.write('#PBS -N {}\n'.format(name))
runjob.write('#PBS -o test.out\n')
runjob.write('#PBS -e test.err\n')
runjob.write('#PBS -r n\n')
runjob.write('#PBS -l walltime={}\n'.format(walltime))
runjob.write('#PBS -l nodes={}:ppn={}\n'.format(nnodes, nprocessors))
runjob.write('#PBS -l pmem={}\n'.format(pmem))
runjob.write('#PBS -W group_list=hennig\n\n')
runjob.write('cd $PBS_O_WORKDIR\n\n')
runjob.write('mpirun {} > job.log\n\n'.format(binary))
runjob.write('echo \'Done.\'\n')
runjob.close()
def write_slurm_runjob(name, ntasks, pmem, walltime, binary):
"""
writes a runjob based on a name, ntasks, pmem, walltime, and
binary. Designed for runjobs on the Hennig group_list on HiperGator
2 (SLURM).
Args:
name (str): job name.
ntasks (int): total number of requested processors.
pmem (str): requested memory including units, e.g. '1600mb'.
walltime (str): requested wall time, hh:mm:ss e.g. '2:00:00'.
binary (str): absolute path to binary to run.
"""
nnodes = int(np.ceil(float(ntasks) / 32.0))
runjob = open('runjob', 'w')
runjob.write('#!/bin/bash\n')
runjob.write('#SBATCH --job-name={}\n'.format(name))
runjob.write('#SBATCH -o out_%j.log\n')
runjob.write('#SBATCH -e err_%j.log\n')
runjob.write('#SBATCH --qos=hennig-b\n')
runjob.write('#SBATCH --nodes={}\n'.format(nnodes))
runjob.write('#SBATCH --ntasks={}\n'.format(ntasks))
runjob.write('#SBATCH --mem-per-cpu={}\n'.format(pmem))
runjob.write('#SBATCH -t {}\n\n'.format(walltime))
runjob.write('cd $SLURM_SUBMIT_DIR\n\n')
runjob.write('module purge\n')
runjob.write('module load intel/2018\n')
runjob.write('module load openmpi/3.1.2\n')
# runjob.write('module load vasp/5.4.1\n\n')
runjob.write('mpirun {} > job.log\n\n'.format(binary))
runjob.write('echo \'Done.\'\n')
runjob.close()
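# Example usage (hypothetical binary path; 32 tasks fill one
# 32-core node): write_slurm_runjob('relax', 32, '3000mb',
#                                   '24:00:00', '/path/to/vasp_std')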
|
henniggroup/MPInterfaces
|
mpinterfaces/utils.py
|
Python
|
mit
| 41,763
|
[
"ASE",
"CRYSTAL",
"VASP",
"pymatgen"
] |
1b9768282067fa88571ace94b80d7a8bcb2342942b4cb531feed501b5fbd2f0b
|
#### PATTERN | EN | INFLECT ##############################################
# -*- coding: utf-8 -*-
# Copyright (c) 2010 University of Antwerp, Belgium
# Author: Tom De Smedt <tom@organisms.be>
# License: BSD (see LICENSE.txt for details).
##########################################################################
# Regular expressions-based rules for English word inflection:
# - pluralization and singularization of nouns and adjectives,
# - conjugation of verbs,
# - comparative and superlative of adjectives.
# Accuracy (measured on CELEX English morphology word forms):
# 95% for pluralize()
# 96% for singularize()
# 95% for Verbs.find_lemma() (for regular verbs)
# 96% for Verbs.find_lexeme() (for regular verbs)
import os
import sys
import re
try:
MODULE = os.path.dirname(os.path.realpath(__file__))
except Exception:
MODULE = ""
sys.path.insert(0, os.path.join(MODULE, "..", "..", "..", ".."))
from pattern_text import Verbs as _Verbs
from pattern_text import (
INFINITIVE, PRESENT, PAST, FUTURE,
FIRST, SECOND, THIRD,
SINGULAR, PLURAL, SG, PL,
PROGRESSIVE,
PARTICIPLE
)
sys.path.pop(0)
VERB, NOUN, ADJECTIVE, ADVERB = "VB", "NN", "JJ", "RB"
VOWELS = "aeiouy"
re_vowel = re.compile(r"a|e|i|o|u|y", re.I)
is_vowel = lambda ch: ch in VOWELS
#### ARTICLE #############################################################
# Based on the Ruby Linguistics module by Michael Granger:
# http://www.deveiate.org/projects/Linguistics/wiki/English
RE_ARTICLE = [(re.compile(x[0]), x[1]) for x in (
# exceptions: an hour, an honor
("euler|hour(?!i)|heir|honest|hono", "an"),
# Abbreviations:
# strings of capitals starting with a vowel-sound consonant followed by another consonant,
# which are not likely to be real words.
(r"(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]",
"an"),
(r"^[aefhilmnorsx][.-]", "an"),
(r"^[a-z][.-]", "a"),
(r"^[^aeiouy]", "a"), # consonants: a bear
(r"^e[uw]", "a"), # -eu like "you": a european
(r"^onc?e", "a"), # -o like "wa" : a one-liner
(r"uni([^nmd]|mo)", "a"), # -u like "you": a university
(r"^u[bcfhjkqrst][aeiou]", "a"), # -u like "you": a uterus
(r"^[aeiou]", "an"), # vowels: an owl
# y like "i": an yclept, a year
(r"y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)", "an"),
(r"", "a") # guess "a"
)]
def definite_article(word):
return "the"
def indefinite_article(word):
"""Returns the indefinite article for a given word.
For example: indefinite_article("university") => "a".
"""
word = word.split(" ")[0]
for rule, article in RE_ARTICLE:
if rule.search(word) is not None:
return article
DEFINITE, INDEFINITE = "definite", "indefinite"
def article(word, function=INDEFINITE):
"""Returns the indefinite (a or an) or definite (the) article for the given
word."""
if function == DEFINITE:
return definite_article(word)
else:
return indefinite_article(word)
_article = article
def referenced(word, article=INDEFINITE):
"""Returns a string with the article + the word."""
return "{0} {1}".format(_article(word, article), word)
# print referenced("hour")
# print referenced("FBI")
# print referenced("bear")
# print referenced("one-liner")
# print referenced("european")
# print referenced("university")
# print referenced("uterus")
# print referenced("owl")
# print referenced("yclept")
# print referenced("year")
#### PLURALIZE ###########################################################
# Based on "An Algorithmic Approach to English Pluralization" by Damian Conway:
# http://www.csse.monash.edu.au/~damian/papers/HTML/Plurals.html
# Prepositions are used in forms like "mother-in-law" and "man at arms".
plural_prepositions = set((
"about", "before", "during", "of", "till",
"above", "behind", "except", "off", "to",
"across", "below", "for", "on", "under",
"after", "beneath", "from", "onto", "until",
"among", "beside", "in", "out", "unto",
"around", "besides", "into", "over", "upon",
"at", "between", "near", "since", "with",
"athwart", "betwixt",
"beyond",
"but",
"by"))
# Inflection rules that are either:
# - general,
# - apply to a certain category of words,
# - apply to a certain category of words only in classical mode,
# - apply only in classical mode.
# Each rule is a (suffix, inflection, category, classic)-tuple.
plural_rules = [
# 0) Indefinite articles and demonstratives.
((r"^a$|^an$", "some", None, False),
(r"^this$", "these", None, False),
(r"^that$", "those", None, False),
(r"^any$", "all", None, False)
), # 1) Possessive adjectives.
((r"^my$", "our", None, False),
(r"^your$", "your", None, False),
(r"^thy$", "your", None, False),
(r"^her$|^his$", "their", None, False),
(r"^its$", "their", None, False),
(r"^their$", "their", None, False)
), # 2) Possessive pronouns.
((r"^mine$", "ours", None, False),
(r"^yours$", "yours", None, False),
(r"^thine$", "yours", None, False),
(r"^her$|^his$", "theirs", None, False),
(r"^its$", "theirs", None, False),
(r"^their$", "theirs", None, False)
), # 3) Personal pronouns.
((r"^I$", "we", None, False),
(r"^me$", "us", None, False),
(r"^myself$", "ourselves", None, False),
(r"^you$", "you", None, False),
(r"^thou$|^thee$", "ye", None, False),
(r"^yourself$", "yourself", None, False),
(r"^thyself$", "yourself", None, False),
(r"^she$|^he$", "they", None, False),
(r"^it$|^they$", "they", None, False),
(r"^her$|^him$", "them", None, False),
(r"^it$|^them$", "them", None, False),
(r"^herself$", "themselves", None, False),
(r"^himself$", "themselves", None, False),
(r"^itself$", "themselves", None, False),
(r"^themself$", "themselves", None, False),
(r"^oneself$", "oneselves", None, False)
), # 4) Words that do not inflect.
((r"$", "", "uninflected", False),
(r"$", "", "uncountable", False),
(r"s$", "s", "s-singular", False),
(r"fish$", "fish", None, False),
(r"([- ])bass$", "\\1bass", None, False),
(r"ois$", "ois", None, False),
(r"sheep$", "sheep", None, False),
(r"deer$", "deer", None, False),
(r"pox$", "pox", None, False),
(r"([A-Z].*)ese$", "\\1ese", None, False),
(r"itis$", "itis", None, False),
(r"(fruct|gluc|galact|lact|ket|malt|rib|sacchar|cellul)ose$",
"\\1ose", None, False)
), # 5) Irregular plural forms (e.g., mongoose, oxen).
((r"atlas$", "atlantes", None, True),
(r"atlas$", "atlases", None, False),
(r"beef$", "beeves", None, True),
(r"brother$", "brethren", None, True),
(r"child$", "children", None, False),
(r"corpus$", "corpora", None, True),
(r"corpus$", "corpuses", None, False),
(r"^cow$", "kine", None, True),
(r"ephemeris$", "ephemerides", None, False),
(r"ganglion$", "ganglia", None, True),
(r"genie$", "genii", None, True),
(r"genus$", "genera", None, False),
(r"graffito$", "graffiti", None, False),
(r"loaf$", "loaves", None, False),
(r"money$", "monies", None, True),
(r"mongoose$", "mongooses", None, False),
(r"mythos$", "mythoi", None, False),
(r"octopus$", "octopodes", None, True),
(r"opus$", "opera", None, True),
(r"opus$", "opuses", None, False),
(r"^ox$", "oxen", None, False),
(r"penis$", "penes", None, True),
(r"penis$", "penises", None, False),
(r"soliloquy$", "soliloquies", None, False),
(r"testis$", "testes", None, False),
(r"trilby$", "trilbys", None, False),
(r"turf$", "turves", None, True),
(r"numen$", "numena", None, False),
(r"occiput$", "occipita", None, True)
), # 6) Irregular inflections for common suffixes (e.g., synopses, mice, men).
((r"man$", "men", None, False),
(r"person$", "people", None, False),
(r"([lm])ouse$", "\\1ice", None, False),
(r"tooth$", "teeth", None, False),
(r"goose$", "geese", None, False),
(r"foot$", "feet", None, False),
(r"zoon$", "zoa", None, False),
(r"([csx])is$", "\\1es", None, False)
), # 7) Fully assimilated classical inflections
# (e.g., vertebrae, codices).
((r"ex$", "ices", "ex-ices", False),
(r"ex$", "ices", "ex-ices*", True), # * = classical mode
(r"um$", "a", "um-a", False),
(r"um$", "a", "um-a*", True),
(r"on$", "a", "on-a", False),
(r"a$", "ae", "a-ae", False),
(r"a$", "ae", "a-ae*", True)
), # 8) Classical variants of modern inflections
# (e.g., stigmata, soprani).
((r"trix$", "trices", None, True),
(r"eau$", "eaux", None, True),
(r"ieu$", "ieu", None, True),
(r"([iay])nx$", "\\1nges", None, True),
(r"en$", "ina", "en-ina*", True),
(r"a$", "ata", "a-ata*", True),
(r"is$", "ides", "is-ides*", True),
(r"us$", "i", "us-i*", True),
(r"us$", "us ", "us-us*", True),
(r"o$", "i", "o-i*", True),
(r"$", "i", "-i*", True),
(r"$", "im", "-im*", True)
), # 9) -ch, -sh and -ss take -es in the plural
# (e.g., churches, classes).
((r"([cs])h$", "\\1hes", None, False),
(r"ss$", "sses", None, False),
(r"x$", "xes", None, False)
), # 10) -f or -fe sometimes take -ves in the plural
# (e.g, lives, wolves).
((r"([aeo]l)f$", "\\1ves", None, False),
(r"([^d]ea)f$", "\\1ves", None, False),
(r"arf$", "arves", None, False),
(r"([nlw]i)fe$", "\\1ves", None, False),
), # 11) -y takes -ys if preceded by a vowel, -ies otherwise
# (e.g., storeys, Marys, stories).
((r"([aeiou])y$", "\\1ys", None, False),
(r"([A-Z].*)y$", "\\1ys", None, False),
(r"y$", "ies", None, False)
), # 12) -o sometimes takes -os, -oes otherwise.
# -o preceded by a vowel takes -os
# (e.g., lassos, potatoes, bamboos).
((r"o$", "os", "o-os", False),
(r"([aeiou])o$", "\\1os", None, False),
(r"o$", "oes", None, False)
), # 13) Military stuff
# (e.g., Major Generals).
((r"l$", "ls", "general-generals", False),
), # 14) Assume that the plural takes -s
# (cats, programmes, ...).
((r"$", "s", None, False),)
]
# For performance, compile the regular expressions once:
plural_rules = [[(re.compile(r[0]), r[1], r[2], r[3])
for r in grp] for grp in plural_rules]
# Suffix categories.
plural_categories = {
"uninflected": [
"bison", "debris", "headquarters", "news", "swine",
"bream", "diabetes", "herpes", "pincers", "trout",
"breeches", "djinn", "high-jinks", "pliers", "tuna",
"britches", "eland", "homework", "proceedings", "whiting",
"carp", "elk", "innings", "rabies", "wildebeest"
"chassis", "flounder", "jackanapes", "salmon",
"clippers", "gallows", "mackerel", "scissors",
"cod", "graffiti", "measles", "series",
"contretemps", "mews", "shears",
"corps", "mumps", "species"
],
"uncountable": [
"advice", "fruit", "ketchup", "meat", "sand",
"bread", "furniture", "knowledge", "mustard", "software",
"butter", "garbage", "love", "news", "understanding",
"cheese", "gravel", "luggage", "progress", "water"
"electricity", "happiness", "mathematics", "research",
"equipment", "information", "mayonnaise", "rice"
],
"s-singular": [
"acropolis", "caddis", "dais", "glottis", "pathos",
"aegis", "cannabis", "digitalis", "ibis", "pelvis",
"alias", "canvas", "epidermis", "lens", "polis",
"asbestos", "chaos", "ethos", "mantis", "rhinoceros",
"bathos", "cosmos", "gas", "marquis", "sassafras",
"bias", "glottis", "metropolis", "trellis"
],
"ex-ices": [
"codex", "murex", "silex"
],
"ex-ices*": [
"apex", "index", "pontifex", "vertex",
"cortex", "latex", "simplex", "vortex"
],
"um-a": [
"agendum", "candelabrum", "desideratum", "extremum", "stratum",
"bacterium", "datum", "erratum", "ovum"
],
"um-a*": [
"aquarium", "emporium", "maximum", "optimum", "stadium",
"compendium", "enconium", "medium", "phylum", "trapezium",
"consortium", "gymnasium", "memorandum", "quantum", "ultimatum",
"cranium", "honorarium", "millenium", "rostrum", "vacuum",
"curriculum", "interregnum", "minimum", "spectrum", "velum",
"dictum", "lustrum", "momentum", "speculum"
],
"on-a": [
"aphelion", "hyperbaton", "perihelion",
"asyndeton", "noumenon", "phenomenon",
"criterion", "organon", "prolegomenon"
],
"a-ae": [
"alga", "alumna", "vertebra"
],
"a-ae*": [
"abscissa", "aurora", "hyperbola", "nebula",
"amoeba", "formula", "lacuna", "nova",
"antenna", "hydra", "medusa", "parabola"
],
"en-ina*": [
"foramen", "lumen", "stamen"
],
"a-ata*": [
"anathema", "dogma", "gumma", "miasma", "stigma",
"bema", "drama", "lemma", "schema", "stoma",
"carcinoma", "edema", "lymphoma", "oedema", "trauma",
"charisma", "enema", "magma", "sarcoma",
"diploma", "enigma", "melisma", "soma",
],
"is-ides*": [
"clitoris", "iris"
],
"us-i*": [
"focus", "nimbus", "succubus",
"fungus", "nucleolus", "torus",
"genius", "radius", "umbilicus",
"incubus", "stylus", "uterus"
],
"us-us*": [
"apparatus", "hiatus", "plexus", "status"
"cantus", "impetus", "prospectus",
"coitus", "nexus", "sinus",
],
"o-i*": [
"alto", "canto", "crescendo", "soprano",
"basso", "contralto", "solo", "tempo"
],
"-i*": [
"afreet", "afrit", "efreet"
],
"-im*": [
"cherub", "goy", "seraph"
],
"o-os": [
"albino", "dynamo", "guano", "lumbago", "photo",
"archipelago", "embryo", "inferno", "magneto", "pro",
"armadillo", "fiasco", "jumbo", "manifesto", "quarto",
"commando", "generalissimo", "medico", "rhino",
"ditto", "ghetto", "lingo", "octavo", "stylo"
],
"general-generals": [
"Adjutant", "Brigadier", "Lieutenant", "Major", "Quartermaster",
"adjutant", "brigadier", "lieutenant", "major", "quartermaster"
]
}
def pluralize(word, pos=NOUN, custom={}, classical=True):
""" Returns the plural of a given word, e.g., child => children.
Handles nouns and adjectives, using classical inflection by default
(i.e., where "matrix" pluralizes to "matrices" and not "matrixes").
The custom dictionary is for user-defined replacements.
"""
if word in custom:
return custom[word]
# Recurse genitives.
# Remove the apostrophe and any trailing -s,
# form the plural of the resultant noun, and then append an apostrophe
# (dog's => dogs').
if word.endswith(("'", "'s")):
w = word[:-2] if word.endswith("'s") else word[:-1]
w = pluralize(w, pos, custom, classical)
if w.endswith("s"):
return w + "'"
else:
return w + "'s"
# Recurse compound words
# (e.g., Postmasters General, mothers-in-law, Roman deities).
w = word.replace("-", " ").split(" ")
if len(w) > 1:
if w[1] in ("general", "General") and \
w[0] not in plural_categories["general-generals"]:
return word.replace(w[0], pluralize(w[0], pos, custom, classical))
elif w[1] in plural_prepositions:
return word.replace(w[0], pluralize(w[0], pos, custom, classical))
else:
return word.replace(w[-1], pluralize(w[-1], pos, custom, classical))
# Only a small number of adjectives inflect.
n = range(len(plural_rules))
if pos.startswith(ADJECTIVE):
n = [0, 1]
# Apply pluralization rules.
for i in n:
for suffix, inflection, category, classic in plural_rules[i]:
# A general rule, or a classic rule in classical mode.
if category is None:
if not classic or (classic and classical):
if suffix.search(word) is not None:
return suffix.sub(inflection, word)
# A rule pertaining to a specific category of words.
if category is not None:
if word in plural_categories[category] and (not classic or (classic and classical)):
if suffix.search(word) is not None:
return suffix.sub(inflection, word)
return word
# print pluralize("part-of-speech")
# print pluralize("child")
# print pluralize("dog's")
# print pluralize("wolf")
# print pluralize("bear")
# print pluralize("kitchen knife")
# print pluralize("octopus", classical=True)
# print pluralize("matrix", classical=True)
# print pluralize("matrix", classical=False)
# print pluralize("my", pos=ADJECTIVE)
#### SINGULARIZE #########################################################
# Adapted from Bermi Ferrer's Inflector for Python:
# http://www.bermi.org/inflector/
# Copyright (c) 2006 Bermi Ferrer Martinez
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software to deal in this software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of this software, and to permit
# persons to whom this software is furnished to do so, subject to the following
# condition:
#
# THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THIS SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THIS SOFTWARE.
singular_rules = [
(r'(?i)(.)ae$', '\\1a'),
(r'(?i)(.)itis$', '\\1itis'),
(r'(?i)(.)eaux$', '\\1eau'),
(r'(?i)(quiz)zes$', '\\1'),
(r'(?i)(matr)ices$', '\\1ix'),
(r'(?i)(ap|vert|ind)ices$', '\\1ex'),
(r'(?i)^(ox)en', '\\1'),
(r'(?i)(alias|status)es$', '\\1'),
(r'(?i)(octop|vir)i$', '\\1us'),
(r'(?i)(cris|ax|test)es$', '\\1is'),
(r'(?i)(shoe)s$', '\\1'),
(r'(?i)(o)es$', '\\1'),
(r'(?i)(bus)es$', '\\1'),
(r'(?i)([ml])ice$', '\\1ouse'),
(r'(?i)(x|ch|ss|sh)es$', '\\1'),
(r'(?i)(m)ovies$', '\\1ovie'),
(r'(?i)(.)ombies$', '\\1ombie'),
(r'(?i)(s)eries$', '\\1eries'),
(r'(?i)([^aeiouy]|qu)ies$', '\\1y'),
# -f, -fe sometimes take -ves in the plural
# (e.g., lives, wolves).
(r"([aeo]l)ves$", "\\1f"),
(r"([^d]ea)ves$", "\\1f"),
(r"arves$", "arf"),
(r"erves$", "erve"),
(r"([nlw]i)ves$", "\\1fe"),
(r'(?i)([lr])ves$', '\\1f'),
(r"([aeo])ves$", "\\1ve"),
(r'(?i)(sive)s$', '\\1'),
(r'(?i)(tive)s$', '\\1'),
(r'(?i)(hive)s$', '\\1'),
(r'(?i)([^f])ves$', '\\1fe'),
# -ses suffixes.
(r'(?i)(^analy)ses$', '\\1sis'),
(r'(?i)((a)naly|(b)a|(d)iagno|(p)arenthe|(p)rogno|(s)ynop|(t)he)ses$',
'\\1\\2sis'),
(r'(?i)(.)opses$', '\\1opsis'),
(r'(?i)(.)yses$', '\\1ysis'),
(r'(?i)(h|d|r|o|n|b|cl|p)oses$', '\\1ose'),
(r'(?i)(fruct|gluc|galact|lact|ket|malt|rib|sacchar|cellul)ose$',
'\\1ose'),
(r'(?i)(.)oses$', '\\1osis'),
# -a
(r'(?i)([ti])a$', '\\1um'),
(r'(?i)(n)ews$', '\\1ews'),
(r'(?i)s$', ''),
]
# For performance, compile the regular expressions only once:
singular_rules = [(re.compile(r[0]), r[1]) for r in singular_rules]
singular_uninflected = set((
"bison", "debris", "headquarters", "pincers", "trout",
"bream", "diabetes", "herpes", "pliers", "tuna",
"breeches", "djinn", "high-jinks", "proceedings", "whiting",
"britches", "eland", "homework", "rabies", "wildebeest"
"carp", "elk", "innings", "salmon",
"chassis", "flounder", "jackanapes", "scissors",
"christmas", "gallows", "mackerel", "series",
"clippers", "georgia", "measles", "shears",
"cod", "graffiti", "mews", "species",
"contretemps", "mumps", "swine",
"corps", "news", "swiss",
))
singular_uncountable = set((
"advice", "equipment", "happiness", "luggage", "news", "software",
"bread", "fruit", "information", "mathematics", "progress", "understanding",
"butter", "furniture", "ketchup", "mayonnaise", "research", "water"
"cheese", "garbage", "knowledge", "meat", "rice",
"electricity", "gravel", "love", "mustard", "sand",
))
singular_ie = set((
"alergie", "cutie", "hoagie", "newbie", "softie", "veggie",
"auntie", "doggie", "hottie", "nightie", "sortie", "weenie",
"beanie", "eyrie", "indie", "oldie", "stoolie", "yuppie",
"birdie", "freebie", "junkie", "^pie", "sweetie", "zombie"
"bogie", "goonie", "laddie", "pixie", "techie",
"bombie", "groupie", "laramie", "quickie", "^tie",
"collie", "hankie", "lingerie", "reverie", "toughie",
"cookie", "hippie", "meanie", "rookie", "valkyrie",
))
singular_irregular = {
"atlantes": "atlas",
"atlases": "atlas",
"axes": "axe",
"beeves": "beef",
"brethren": "brother",
"children": "child",
"children": "child",
"corpora": "corpus",
"corpuses": "corpus",
"ephemerides": "ephemeris",
"feet": "foot",
"ganglia": "ganglion",
"geese": "goose",
"genera": "genus",
"genii": "genie",
"graffiti": "graffito",
"helves": "helve",
"kine": "cow",
"leaves": "leaf",
"loaves": "loaf",
"men": "man",
"mongooses": "mongoose",
"monies": "money",
"moves": "move",
"mythoi": "mythos",
"numena": "numen",
"occipita": "occiput",
"octopodes": "octopus",
"opera": "opus",
"opuses": "opus",
"our": "my",
"oxen": "ox",
"penes": "penis",
"penises": "penis",
"people": "person",
"sexes": "sex",
"soliloquies": "soliloquy",
"teeth": "tooth",
"testes": "testis",
"trilbys": "trilby",
"turves": "turf",
"zoa": "zoon",
}
def singularize(word, pos=NOUN, custom={}):
"""Returns the singular of a given word."""
if word in custom:
return custom[word]
# Recurse compound words (e.g. mothers-in-law).
if "-" in word:
w = word.split("-")
if len(w) > 1 and w[1] in plural_prepositions:
return singularize(w[0], pos, custom) + "-" + "-".join(w[1:])
# dogs' => dog's
if word.endswith("'"):
return singularize(word[:-1]) + "'s"
w = word.lower()
for x in singular_uninflected:
if x.endswith(w):
return word
for x in singular_uncountable:
if x.endswith(w):
return word
for x in singular_ie:
if w.endswith(x + "s"):
return w
for x in singular_irregular:
if w.endswith(x):
return re.sub('(?i)' + x + '$', singular_irregular[x], word)
for suffix, inflection in singular_rules:
m = suffix.search(word)
g = m and m.groups() or []
if m:
for k in range(len(g)):
if g[k] is None:
inflection = inflection.replace('\\' + str(k + 1), '')
return suffix.sub(inflection, word)
return word
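# Usage examples (expected results per the rules above):
# print singularize("children") # => child
# print singularize("wolves") # => wolf
# print singularize("analyses") # => analysis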
#### VERB CONJUGATION ####################################################
class Verbs(_Verbs):
def __init__(self):
_Verbs.__init__(self, os.path.join(MODULE, "en-verbs.txt"),
language="en",
format=[0, 1, 2, 3, 7, 8, 17, 18, 19, 23, 25,
24, 16, 9, 10, 11, 15, 33, 26, 27, 28, 32],
default={
# present singular => infinitive ("I walk")
1: 0, 2: 0, 3: 0, 7: 0,
4: 7, 5: 7, 6: 7, # present plural
17: 25, 18: 25, 19: 25, 23: 25, # past singular
20: 23, 21: 23, 22: 23, # past plural
9: 16, 10: 16, 11: 16, 15: 16, # present singular negated
12: 15, 13: 15, 14: 15, # present plural negated
26: 33, 27: 33, 28: 33, # past singular negated
29: 32, 30: 32, 31: 32, 32: 33 # past plural negated
})
def find_lemma(self, verb):
""" Returns the base form of the given inflected verb, using a rule-based approach.
This is problematic if a verb ending in -e is given in the past tense or gerund.
"""
v = verb.lower()
b = False
if v in ("'m", "'re", "'s", "n't"):
return "be"
if v in ("'d", "'ll"):
return "will"
if v in ("'ve"):
return "have"
if v.endswith("s"):
if v.endswith("ies") and len(v) > 3 and v[-4] not in VOWELS:
return v[:-3] + "y" # complies => comply
if v.endswith(("sses", "shes", "ches", "xes")):
return v[:-2] # kisses => kiss
return v[:-1]
if v.endswith("ied") and re_vowel.search(v[:-3]) is not None:
return v[:-3] + "y" # envied => envy
if v.endswith("ing") and re_vowel.search(v[:-3]) is not None:
v = v[:-3]
b = True # chopping => chopp
if v.endswith("ed") and re_vowel.search(v[:-2]) is not None:
v = v[:-2]
b = True # danced => danc
if b:
# Doubled consonant after short vowel: chopp => chop.
if len(v) > 3 and v[-1] == v[-2] and v[-3] in VOWELS and v[-4] not in VOWELS and not v.endswith("ss"):
return v[:-1]
if v.endswith(("ick", "ack")):
return v[:-1] # panick => panic
# Guess common cases where the base form ends in -e:
if v.endswith(("v", "z", "c", "i")):
return v + "e" # danc => dance
if v.endswith("g") and v.endswith(("dg", "lg", "ng", "rg")):
return v + "e" # indulg => indulge
if v.endswith(("b", "d", "g", "k", "l", "m", "r", "s", "t")) \
and len(v) > 2 and v[-2] in VOWELS and not v[-3] in VOWELS \
and not v.endswith("er"):
return v + "e" # generat => generate
if v.endswith("n") and v.endswith(("an", "in")) and not v.endswith(("ain", "oin", "oan")):
return v + "e" # imagin => imagine
if v.endswith("l") and len(v) > 1 and v[-2] not in VOWELS:
return v + "e" # squabbl => squabble
if v.endswith("f") and len(v) > 2 and v[-2] in VOWELS and v[-3] not in VOWELS:
return v + "e" # chaf => chafed
if v.endswith("e"):
return v + "e" # decre => decree
if v.endswith(("th", "ang", "un", "cr", "vr", "rs", "ps", "tr")):
return v + "e"
return v
def find_lexeme(self, verb):
""" For a regular verb (base form), returns the forms using a rule-based approach.
"""
v = verb.lower()
if len(v) > 1 and v.endswith("e") and v[-2] not in VOWELS:
# Verbs ending in a consonant followed by "e": dance, save, devote,
# evolve.
return [v, v, v, v + "s", v, v[:-1] + "ing"] + [v + "d"] * 6
if len(v) > 1 and v.endswith("y") and v[-2] not in VOWELS:
# Verbs ending in a consonant followed by "y": comply, copy,
# magnify.
return [v, v, v, v[:-1] + "ies", v, v + "ing"] + [v[:-1] + "ied"] * 6
if v.endswith(("ss", "sh", "ch", "x")):
# Verbs ending in sibilants: kiss, bless, box, polish, preach.
return [v, v, v, v + "es", v, v + "ing"] + [v + "ed"] * 6
if v.endswith("ic"):
# Verbs ending in -ic: panic, mimic.
return [v, v, v, v + "es", v, v + "king"] + [v + "ked"] * 6
if len(v) > 1 and v[-1] not in VOWELS and v[-2] not in VOWELS:
# Verbs ending in a consonant cluster: delight, clamp.
return [v, v, v, v + "s", v, v + "ing"] + [v + "ed"] * 6
if (len(v) > 1 and v.endswith(("y", "w")) and v[-2] in VOWELS) \
or (len(v) > 2 and v[-1] not in VOWELS and v[-2] in VOWELS and v[-3] in VOWELS) \
or (len(v) > 3 and v[-1] not in VOWELS and v[-3] in VOWELS and v[-4] in VOWELS):
# Verbs ending in a long vowel or diphthong followed by a
# consonant: paint, devour, play.
return [v, v, v, v + "s", v, v + "ing"] + [v + "ed"] * 6
if len(v) > 2 and v[-1] not in VOWELS and v[-2] in VOWELS and v[-3] not in VOWELS:
# Verbs ending in a short vowel followed by a consonant: chat,
# chop, or compel.
return [v, v, v, v + "s", v, v + v[-1] + "ing"] + [v + v[-1] + "ed"] * 6
return [v, v, v, v + "s", v, v + "ing"] + [v + "ed"] * 6
verbs = Verbs()
conjugate, lemma, lexeme, tenses = \
verbs.conjugate, verbs.lemma, verbs.lexeme, verbs.tenses
# print conjugate("imaginarify", "part", parse=True)
# print conjugate("imaginarify", "part", parse=False)
#### COMPARATIVE & SUPERLATIVE ###########################################
VOWELS = "aeiouy"
grade_irregular = {
"bad": ("worse", "worst"),
"far": ("further", "farthest"),
"good": ("better", "best"),
"hind": ("hinder", "hindmost"),
"ill": ("worse", "worst"),
"less": ("lesser", "least"),
"little": ("less", "least"),
"many": ("more", "most"),
"much": ("more", "most"),
"well": ("better", "best")
}
grade_uninflected = ["giant", "glib", "hurt", "known", "madly"]
COMPARATIVE = "er"
SUPERLATIVE = "est"
def _count_syllables(word):
""" Returns the estimated number of syllables in the word by counting vowel-groups.
"""
n = 0
p = False # True if the previous character was a vowel.
for ch in word.endswith("e") and word[:-1] or word:
v = ch in VOWELS
n += int(v and not p)
p = v
return n
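# e.g., _count_syllables("important") => 3 (vowel groups i, o, a).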
def grade(adjective, suffix=COMPARATIVE):
"""Returns the comparative or superlative form of the given adjective."""
n = _count_syllables(adjective)
if adjective in grade_irregular:
# A number of adjectives inflect irregularly.
return grade_irregular[adjective][suffix != COMPARATIVE]
elif adjective in grade_uninflected:
# A number of adjectives don't inflect at all.
return "{0} {1}".format(suffix == COMPARATIVE and "more" or "most", adjective)
elif n <= 2 and adjective.endswith("e"):
# With one or two syllables ending with an e: larger, wiser.
suffix = suffix.lstrip("e")
elif n == 1 and len(adjective) >= 3 \
and adjective[-1] not in VOWELS and adjective[-2] in VOWELS and adjective[-3] not in VOWELS:
# With one syllable ending with consonant-vowel-consonant: bigger,
# thinner.
if not adjective.endswith("w"): # Exceptions: lower, newer.
suffix = adjective[-1] + suffix
elif n == 1:
# With one syllable ending with more consonants or vowels: briefer.
pass
elif n == 2 and adjective.endswith("y"):
# With two syllables ending with a y: funnier, hairier.
adjective = adjective[:-1] + "i"
elif n == 2 and adjective[-2:] in ("er", "le", "ow"):
# With two syllables and specific suffixes: gentler, narrower.
pass
else:
# With three or more syllables: more generous, more important.
return "{0} {1}".format(suffix == COMPARATIVE and "more" or "most", adjective)
return adjective + suffix
def comparative(adjective):
return grade(adjective, COMPARATIVE)
def superlative(adjective):
return grade(adjective, SUPERLATIVE)
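# Usage examples (expected results per the rules above):
# print comparative("big") # => bigger
# print superlative("funny") # => funniest
# print superlative("important") # => most important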
#### ATTRIBUTIVE & PREDICATIVE ###########################################
def attributive(adjective):
return adjective
def predicative(adjective):
return adjective
|
textioHQ/pattern
|
pattern_text/en/inflect_flymake.py
|
Python
|
bsd-3-clause
| 32,321
|
[
"Elk",
"Octopus"
] |
e00a18efebe3065da2de821446d7417e2092d6e619bab252d1266e4fd93ae01e
|
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class SagaGis(AutotoolsPackage, SourceforgePackage):
"""
SAGA is a GIS for Automated Geoscientific Analyses and has been designed
for an easy and effective implementation of spatial algorithms. It offers
a comprehensive, growing set of geoscientific methods and provides an
easily approachable user interface with many visualisation options.
"""
homepage = "http://saga-gis.org/"
sourceforge_mirror_path = "SAGA%20-%205.0.0/saga-5.0.0.tar.gz"
git = "git://git.code.sf.net/p/saga-gis/code"
version('develop', branch='master')
version('7.4.0', branch='release-7.4.0')
version('7.3.0', branch='release-7.3.0')
version('7.1.1', branch='release-7.1.1')
version('7.1.0', branch='release-7.1.0')
version('7.0.0', branch='release-7.0.0')
version('6.4.0', branch='release-6.4.0')
version('6.3.0', branch='release-6.3.0')
version('6.2.0', branch='release-6.2.0')
version('6.1.0', branch='release-6.1.0')
version('6.0.0', branch='release-6.0.0')
version('5.0.1', branch='release-5-0-1')
version('5.0.0', branch='release-5.0.0')
version('4.1.0', branch='release-4.1.0')
version('4.0.0', branch='release-4.0.0')
version('3.0.0', branch='release-3.0.0')
version('2.3-lts', branch='release-2-3-lts')
version('2.3.1', branch='release-2-3-1')
version('2.3.0', branch='release-2-3-0')
variant('gui', default=True, description='Build GUI and interactive SAGA tools')
variant('odbc', default=True, description='Build with ODBC support')
# FIXME Saga-gis configure file disables triangle even if
# --enable-triangle flag is used
# variant('triangle', default=True, description='Build with triangle.c
# non free for commercial use otherwise use qhull')
variant('libfire', default=True, description='Build with libfire (non free for commercial usage)')
variant('openmp', default=True, description='Build with OpenMP enabled')
variant('python', default=False, description='Build Python extension')
variant('postgresql', default=False, description='Build with PostgreSQL library')
variant('opencv', default=False, description='Build with libraries using OpenCV')
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')
depends_on('m4', type='build')
depends_on('libsm', type='link')
depends_on('libharu')
depends_on('wxwidgets')
depends_on('postgresql', when='+postgresql')
depends_on('unixodbc', when='+odbc')
# SAGA-GIS requires projects.h from proj
depends_on('proj')
# https://sourceforge.net/p/saga-gis/bugs/271/
depends_on('proj@:5', when='@:7.2')
# Saga-Gis depends on legacy opencv API removed in opencv 4.x
depends_on('opencv@:3', when='+opencv')
# Set jpeg provider (similar to #8133)
depends_on('libjpeg', when='+opencv')
# Set hl variant due to similar issue #7145
depends_on('hdf5+hl')
# write support for grib2 is available since 2.3.0 (https://gdal.org/drivers/raster/grib.html)
depends_on('gdal@2.3:+grib+hdf5+netcdf')
depends_on('gdal@2.3:2.4+grib+hdf5+netcdf', when='@:7.2')
depends_on('libgeotiff@:1.4', when='@:7.2')
# FIXME Saga-Gis uses a wrong include path
# depends_on('qhull', when='~triangle')
depends_on('swig', type='build', when='+python')
extends('python', when='+python')
configure_directory = "saga-gis"
def configure_args(self):
args = []
args += self.enable_or_disable('gui')
args += self.enable_or_disable('odbc')
# FIXME Saga-gis configure file disables triangle even if
# --enable-triangle flag is used
# args += self.enable_or_disable('triangle')
# FIXME SAGA-GIS uses a wrong include path
# if '~triangle' in self.spec:
# args.append('--disable-triangle')
args += self.enable_or_disable('libfire')
args += self.enable_or_disable('openmp')
args += self.enable_or_disable('python')
args += self.with_or_without('postgresql')
return args
def setup_run_environment(self, env):
# Point saga to its tool set, will be loaded during runtime
env.set("SAGA_MLB", self.prefix.lib.saga)
env.set("SAGA_TLB", self.prefix.lib.saga)
|
LLNL/spack
|
var/spack/repos/builtin/packages/saga-gis/package.py
|
Python
|
lgpl-2.1
| 4,704
|
[
"NetCDF"
] |
ce8fa3700b468eb6690bdd058e02bd0a54b245e18d08cf2eaa3820b1d0019e68
|
# Copyright (C) 2013, Thomas Leonard
# See the README file for details, or visit http://0install.net.
from __future__ import print_function
import os, shutil, hashlib, collections, re
from os.path import join, basename, dirname, abspath
from zeroinstall.injector import model
from zeroinstall import SafeException, support
from zeroinstall.support import tasks
from repo import paths, urltest
valid_simple_name = re.compile(r'^[^. \n/][^ \n/]*$')
class Archive(object):
def __init__(self, source_path, rel_url, size, incoming_path = None):
self.basename = basename(source_path)
self.source_path = source_path
self.rel_url = rel_url
self.size = size
self.incoming_path = incoming_path # (used to delete from /incoming)
def get_sha1(path):
sha1 = hashlib.sha1()
with open(path, 'rb') as stream:
while True:
got = stream.read(4096)
if not got: break
sha1.update(got)
return sha1.hexdigest()
def _assert_identical_archives(name, sha1, existing):
if sha1 != existing.sha1:
raise SafeException("A different archive with basename '{name}' is "
"already in the repository: {archive}".format(name = name, archive = existing))
def _default_archive_test(archive, url):
actual_size = urltest.get_size(url)
if actual_size != archive.size:
raise SafeException("Archive {url} has size {actual}, but expected {expected} bytes".format(
url = url, actual = actual_size, expected = archive.size))
def process_method(config, incoming_dir, impl, method, required_digest):
archives = []
if not isinstance(method, model.Recipe):
# turn an individual method into a single-step Recipe
step = method
method = model.Recipe()
method.steps.append(step)
has_external_archives = False
for step in method.steps:
if not hasattr(step, 'url'): continue
archive = step.url
if '/' in archive:
has_external_archives = True
test_archive = getattr(config, 'check_external_archive', _default_archive_test)
test_archive(step, archive)
continue # Hosted externally
if not valid_simple_name.match(archive):
raise SafeException("Illegal archive name '{name}'".format(name = archive))
archive_path = join(incoming_dir, archive)
if not os.path.isfile(archive_path):
raise SafeException("Referenced upload '{path}' not found".format(path = archive_path))
existing = config.archive_db.entries.get(archive, None)
if existing is not None:
new_sha1 = get_sha1(archive_path)
_assert_identical_archives(archive, sha1=new_sha1, existing=existing)
else:
archive_rel_url = paths.get_archive_rel_url(config, archive, impl)
# Copy to archives directory
backup_dir = config.LOCAL_ARCHIVES_BACKUP_DIR # note: may be relative; that's OK
backup_target_dir = join(backup_dir, dirname(archive_rel_url))
paths.ensure_dir(backup_target_dir)
copy_path = join(backup_dir, archive_rel_url)
shutil.copyfile(archive_path, copy_path)
stored_archive = Archive(abspath(copy_path), archive_rel_url, step.size, archive_path)
actual_size = os.path.getsize(stored_archive.source_path)
if step.size != actual_size:
raise SafeException("Archive '{archive}' has size '{actual}', but XML says size should be {expected}".format(
archive = archive,
actual = actual_size,
expected = step.size))
archives.append(stored_archive)
step.url = os.path.abspath(archive_path) # (just used below to test it)
if not has_external_archives and getattr(config, 'CHECK_DIGESTS', True) and os.name != 'nt':
# Check archives unpack to give the correct digests
impl.feed.local_path = "/is-local-hack.xml"
try:
blocker = config.zconfig.fetcher.cook(required_digest, method,
config.zconfig.stores, impl_hint = impl, dry_run = True, may_use_mirror = False)
tasks.wait_for_blocker(blocker)
finally:
impl.feed.local_path = None
return archives
StoredArchive = collections.namedtuple('StoredArchive', ['url', 'sha1'])
class ArchiveDB:
def __init__(self, path):
self.path = abspath(path)
self.entries = {}
if os.path.exists(path):
with open(path, 'rt') as stream:
for line in stream:
line = line.strip()
if line.startswith('#') or not line:
continue
key, sha1, url = [x.strip() for x in line.split(' ', 2)]
assert key not in self.entries, key
self.entries[key] = StoredArchive(url, sha1)
else:
self.save_all()
def add(self, basename, url, sha1):
existing = self.entries.get(basename, None)
if existing is not None:
_assert_identical_archives(basename, sha1=sha1, existing=existing)
else:
with open(self.path, 'at') as stream:
stream.write('%s %s %s\n' % (basename, sha1, url))
self.entries[basename] = StoredArchive(url, sha1)
def lookup(self, basename):
return self.entries.get(basename, None)
def save_all(self):
with open(self.path + '.new', 'wt') as stream:
stream.write("# Records the absolute URL of all known archives.\n"
"# To relocate archives, edit this file to contain the new addresses and run '0repo'.\n"
"# Each line is 'basename SHA1 URL'\n")
for basename, e in sorted(self.entries.items()):
stream.write('%s %s %s\n' % (basename, e.sha1, e.url))
support.portable_rename(self.path + '.new', self.path)
def pick_digest(impl):
from zeroinstall.zerostore import manifest, parse_algorithm_digest_pair
best = None
for digest in impl.digests:
alg_name, digest_value = parse_algorithm_digest_pair(digest)
alg = manifest.algorithms.get(alg_name, None)
if alg and (best is None or best.rating < alg.rating):
best = alg
required_digest = digest
if best is None:
if not impl.digests:
raise SafeException("No <manifest-digest> given for '%(implementation)s' version %(version)s" %
{'implementation': impl.feed.get_name(), 'version': impl.get_version()})
raise SafeException("Unknown digest algorithms '%(algorithms)s' for '%(implementation)s' version %(version)s" %
{'algorithms': impl.digests, 'implementation': impl.feed.get_name(), 'version': impl.get_version()})
return required_digest
# Copy to archives directory and upload
def upload_archives(config, archives):
config.upload_archives(archives)
test_archive = getattr(config, 'check_uploaded_archive', _default_archive_test)
for archive in archives:
url = config.ARCHIVES_BASE_URL + archive.rel_url
test_archive(archive, url)
for archive in archives:
sha1 = get_sha1(archive.source_path)
config.archive_db.add(archive.basename, config.ARCHIVES_BASE_URL + archive.rel_url, sha1)
def process_archives(config, incoming_dir, feed):
"""feed is the parsed XML being processed. Any archives are in 'incoming_dir'."""
# Pick a digest to check (maybe we should check all of them?)
# Find required archives and check they're in 'incoming'
archives = []
for impl in feed.implementations.values():
required_digest = pick_digest(impl)
for method in impl.download_sources:
archives += process_method(config, incoming_dir, impl, method, required_digest)
upload_archives(config, archives)
return archives
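# Example usage of the archive database (hypothetical paths/URLs):
# db = ArchiveDB('archives.db')
# db.add('foo-1.0.tar.gz',
#        'https://example.com/archives/foo-1.0.tar.gz',
#        get_sha1('incoming/foo-1.0.tar.gz'))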
|
bastianeicher/0repo
|
repo/archives.py
|
Python
|
lgpl-2.1
| 7,045
|
[
"VisIt"
] |
b8c49451878d397219f52a0a66b800741ae67591e5d6d46f2eb4ce2be79a1e3c
|
#
# Brian C. Lane <bcl@redhat.com>
#
# Copyright 2014 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use, modify,
# copy, or redistribute it subject to the terms and conditions of the GNU
# General Public License v.2. This program is distributed in the hope that it
# will be useful, but WITHOUT ANY WARRANTY expressed or implied, including the
# implied warranties of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 51
# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. Any Red Hat
# trademarks that are incorporated in the source code or documentation are not
# subject to the GNU General Public License and may only be used or replicated
# with the express permission of Red Hat, Inc.
#
from pykickstart.base import BaseData, KickstartCommand
from pykickstart.errors import KickstartValueError, formatErrorMsg
from pykickstart.options import KSOptionParser
import warnings
from pykickstart.i18n import _
class F22_SshKeyData(BaseData):
removedKeywords = BaseData.removedKeywords
removedAttrs = BaseData.removedAttrs
def __init__(self, *args, **kwargs):
BaseData.__init__(self, *args, **kwargs)
self.username = kwargs.get("username", None)
self.key = kwargs.get("key", "")
def __eq__(self, y):
if not y:
return False
return self.username == y.username
def __ne__(self, y):
return not self == y
def __str__(self):
retval = BaseData.__str__(self)
retval += "sshkey"
retval += self._getArgsAsStr() + '\n'
return retval
def _getArgsAsStr(self):
retval = ""
retval += " --username=%s" % self.username
retval += ' "%s"' % self.key
return retval
class F22_SshKey(KickstartCommand):
removedKeywords = KickstartCommand.removedKeywords
removedAttrs = KickstartCommand.removedAttrs
def __init__(self, writePriority=0, *args, **kwargs):
KickstartCommand.__init__(self, writePriority, *args, **kwargs)
self.op = self._getParser()
self.sshUserList = kwargs.get("sshUserList", [])
def __str__(self):
retval = ""
for user in self.sshUserList:
retval += user.__str__()
return retval
def _getParser(self):
op = KSOptionParser()
op.add_option("--username", dest="username", required=True)
return op
def parse(self, args):
ud = self.handler.SshKeyData()
(opts, extra) = self.op.parse_args(args=args, lineno=self.lineno)
self._setToObj(self.op, opts, ud)
ud.lineno = self.lineno
if len(extra) != 1:
raise KickstartValueError(formatErrorMsg(self.lineno, msg=_("A single argument is expected for the %s command") % "sshkey"))
ud.key = extra[0]
if ud in self.dataList():
warnings.warn(_("An ssh user with the name %s has already been defined.") % ud.username)
return ud
def dataList(self):
return self.sshUserList
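# Example kickstart line handled by this command (hypothetical key):
#   sshkey --username=root "ssh-rsa AAAA... root@example.com"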
|
cgwalters/pykickstart
|
pykickstart/commands/sshkey.py
|
Python
|
gpl-2.0
| 3,231
|
[
"Brian"
] |
43bdf07169822ee558e99bb58596c02f2c89987f9425f01301b1e38f1bf71c7f
|
#!/usr/bin/env python
from pyDFTutils.perovskite.lattice_factory import PerovskiteCubic
from pyDFTutils.ase_utils import my_write_vasp,normalize, vesta_view, set_element_mag
from ase.io.vasp import read_vasp
try:
from ase.atoms import string2symbols
except ImportError:
from ase.symbols import string2symbols
import numpy as np
from ase.build import make_supercell
def gen222(name=None,
A='Sr',
B='Mn',
O='O',
latticeconstant=3.9,
mag_order='FM',
mag_atom=None,
m=5,
sort=True):
if name is not None:
symbols=string2symbols(name)
A, B, O, _, _ = symbols
atoms = PerovskiteCubic([A, B, O], latticeconstant=latticeconstant)
atoms = atoms.repeat([2, 2, 2])
if sort:
my_write_vasp('UCPOSCAR', atoms, vasp5=True, sort=True)
atoms = read_vasp('UCPOSCAR')
spin_dn = {
'FM': [],
'A': [0, 1, 4, 5],
'C': [0, 2, 5, 7],
'G': [0, 3, 5, 6]
}
if mag_order != 'PM':
mag = np.ones(8)
mag[np.array(spin_dn[mag_order], int)] = -1.0
if mag_atom is None:
atoms = set_element_mag(atoms, B, mag * m)
else:
atoms = set_element_mag(atoms, mag_atom, mag * m)
return atoms
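# Example (hypothetical): atoms = gen222(name='SrMnO3', mag_order='G', m=5)
# builds a 2x2x2 cubic SrMnO3 cell with G-type antiferromagnetic order.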
def gen_primitive(name=None,A=None,B=None,O=None, latticeconstant=3.9, mag_order='FM',mag_atom=None, m=5):
"""
generate primitive cell with magnetic order.
Parameters:
---------------
name: string
ABO3, eg. BiFeO3, CsPbF3
"""
if name is not None:
symbols=string2symbols(name)
A, B, O, _, _ = symbols
atoms = PerovskiteCubic([A, B, O], latticeconstant=latticeconstant)
direction_dict = {
'A': ([1, 0, 0], [0, 1, 0], [0, 0, 2]),
'C': ([1, -1, 0], [1, 1, 0], [0, 0, 1]),
'G': ([0, 1, 1], [1, 0, 1], [1, 1, 0]),
'FM': np.eye(3),
}
size_dict = {'A': (1, 1, 2), 'C': (1, 1, 1), 'G': (1, 1, 1)}
A, B, O = atoms.get_chemical_symbols()[0:3]
if mag_order == 'PM':
pass # paramagnetic: leave magnetic moments unset
elif mag_order == 'FM':
if mag_atom is None:
atoms = set_element_mag(atoms, B, [m])
else:
atoms = set_element_mag(atoms, mag_atom, [m])
else:
atoms.translate([0.045] * 3)
atoms = normalize(atoms)
atoms = make_supercell(atoms, direction_dict[mag_order])
atoms.translate([-0.045] * 3)
if mag_atom is None:
atoms = set_element_mag(atoms, B, [m])
else:
atoms = set_element_mag(atoms, mag_atom, [m])
return atoms
if __name__ == '__main__':
atoms = gen_primitive(name='LaMnO3',mag_order='G')
vesta_view(atoms)
|
mailhexu/pyDFTutils
|
pyDFTutils/perovskite/cubic_perovskite.py
|
Python
|
lgpl-3.0
| 2,732
|
[
"ASE",
"VASP"
] |
07cc10a7ba2e90065ed646457988d4268d258bbd842d55bc03823eff1f9dcb8a
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11.4 on 2018-06-06 20:34
from __future__ import unicode_literals
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('crm', '0026_auto_20180507_1957'),
]
operations = [
migrations.CreateModel(
name='Call',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('description', models.TextField(verbose_name='Description')),
('date_of_creation', models.DateTimeField(auto_now_add=True, verbose_name='Created at')),
('date_due', models.DateTimeField(blank=True, default=datetime.datetime.now, verbose_name='Date due')),
('last_modification', models.DateTimeField(auto_now=True, verbose_name='Last modified')),
('status', models.CharField(choices=[('P', 'Planned'), ('D', 'Delayed'), ('R', 'ToRecall'), ('F', 'Failed'), ('S', 'Success')], default='P', max_length=1, verbose_name='Status')),
],
),
migrations.CreateModel(
name='ContactPersonAssociation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('contact', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='person_association', to='crm.Contact')),
],
options={
'verbose_name': 'Contacts',
'verbose_name_plural': 'Contacts',
},
),
migrations.CreateModel(
name='Person',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('prefix', models.CharField(blank=True, choices=[('F', 'Company'), ('W', 'Mrs'), ('H', 'Mr'), ('G', 'Ms')], max_length=1, null=True, verbose_name='Prefix')),
('name', models.CharField(blank=True, max_length=100, null=True, verbose_name='Name')),
('prename', models.CharField(blank=True, max_length=100, null=True, verbose_name='Prename')),
('email', models.EmailField(max_length=200, verbose_name='Email Address')),
('phone', models.CharField(max_length=20, verbose_name='Phone Number')),
('role', models.CharField(blank=True, max_length=100, null=True, verbose_name='Role')),
('companies', models.ManyToManyField(blank=True, through='crm.ContactPersonAssociation', to='crm.Contact', verbose_name='Works at')),
],
options={
'verbose_name': 'Person',
'verbose_name_plural': 'People',
},
),
migrations.AddField(
model_name='customer',
name='is_lead',
field=models.BooleanField(default=True),
),
migrations.CreateModel(
name='CallForContact',
fields=[
('call_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='crm.Call')),
('purpose', models.CharField(choices=[('F', 'First commercial call'), ('S', 'Planned commercial call'), ('A', 'Assistance call')], max_length=1, verbose_name='Purpose')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='crm.Contact')),
('cperson', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='crm.Person', verbose_name='Person')),
],
options={
'verbose_name': 'Call',
'verbose_name_plural': 'Calls',
},
bases=('crm.call',),
),
migrations.CreateModel(
name='VisitForContact',
fields=[
('call_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='crm.Call')),
('purpose', models.CharField(choices=[('F', 'First commercial visit'), ('S', 'Installation')], max_length=1, verbose_name='Purpose')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='crm.Contact')),
('cperson', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='crm.Person', verbose_name='Person')),
('ref_call', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='crm.CallForContact', verbose_name='Reference Call')),
],
options={
'verbose_name': 'Visit',
'verbose_name_plural': 'Visits',
},
bases=('crm.call',),
),
migrations.AddField(
model_name='contactpersonassociation',
name='person',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='contact_association', to='crm.Person'),
),
migrations.AddField(
model_name='call',
name='last_modified_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='db_calllstmodified', to=settings.AUTH_USER_MODEL, verbose_name='Last modified by'),
),
migrations.AddField(
model_name='call',
name='staff',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='db_relcallstaff', to=settings.AUTH_USER_MODEL, verbose_name='Staff'),
),
]
|
scaphilo/koalixcrm
|
koalixcrm/crm/migrations/0027_auto_20180606_2034.py
|
Python
|
bsd-3-clause
| 5,968
|
[
"VisIt"
] |
45d9557f5d4b82b9f1121ffbd195a416abf8d50b4641e1f794664d2150a8a8b3
|
from django.db import models
# from edc_base.audit_trail import AuditTrail
from edc_constants.choices import YES_NO
from edc_base.model.fields.custom_fields import OtherCharField
from .maternal_crf_model import MaternalCrfModel
from td_maternal.maternal_choices import SIZE_CHECK
class MaternalInterimIdcc(MaternalCrfModel):
info_since_lastvisit = models.CharField(
max_length=25,
verbose_name="Is there new laboratory information available on the mother since last visit",
choices=YES_NO,
help_text="",)
recent_cd4 = models.DecimalField(
max_digits=8,
decimal_places=2,
blank=True,
null=True,
verbose_name="Most recent CD4 available",
help_text="",)
recent_cd4_date = models.DateField(
verbose_name="Date of recent CD4",
blank=True,
null=True)
# recent_vl = models.DecimalField(
# max_digits=10,
# decimal_places=2,
# blank=True,
# null=True,
# verbose_name="Most recent VL available",
# help_text="",)
value_vl_size = models.CharField(
max_length=25,
verbose_name="Is the value for the most recent VL available “=” , “<”, or “>” a number? ",
choices=SIZE_CHECK,
blank=True,
null=True)
value_vl = models.DecimalField(
max_digits=10,
decimal_places=2,
blank=True,
null=True,
verbose_name="Value of VL ",
help_text="",)
recent_vl_date = models.DateField(
verbose_name="Date of recent VL",
blank=True,
null=True)
other_diagnoses = OtherCharField(
max_length=25,
verbose_name="Please specify any other diagnoses found in the IDCC since the last visit ",
blank=True,
null=True)
class Meta:
app_label = 'td_maternal'
verbose_name = "Maternal Interim Idcc Data"
verbose_name_plural = "Maternal Interim Idcc Data"
|
botswana-harvard/tshilo-dikotla
|
td_maternal/models/maternal_interim_idcc_data.py
|
Python
|
gpl-2.0
| 1,989
|
[
"VisIt"
] |
4502760b69559247c9aed4ff0dcbaf3358bedcb6b513512f939e452066e7d157
|
##
# Copyright 2012-2021 Ghent University
#
# This file is part of EasyBuild,
# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
# with support of Ghent University (http://ugent.be/hpc),
# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
# Flemish Research Foundation (FWO) (http://www.fwo.be/en)
# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
#
# https://github.com/easybuilders/easybuild
#
# EasyBuild is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation v2.
#
# EasyBuild is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
##
"""
Unit tests for talking to GitHub.
@author: Jens Timmerman (Ghent University)
@author: Kenneth Hoste (Ghent University)
"""
import base64
import os
import random
import re
import sys
import textwrap
from test.framework.utilities import EnhancedTestCase, TestLoaderFiltered, init_config
from time import gmtime
from unittest import TextTestRunner
import easybuild.tools.testing
from easybuild.base.rest import RestClient
from easybuild.framework.easyconfig.easyconfig import EasyConfig
from easybuild.framework.easyconfig.tools import categorize_files_by_type
from easybuild.tools.build_log import EasyBuildError
from easybuild.tools.config import build_option, module_classes, update_build_option
from easybuild.tools.configobj import ConfigObj
from easybuild.tools.filetools import read_file, write_file
from easybuild.tools.github import GITHUB_EASYCONFIGS_REPO, GITHUB_EASYBLOCKS_REPO, GITHUB_MERGEABLE_STATE_CLEAN
from easybuild.tools.github import VALID_CLOSE_PR_REASONS
from easybuild.tools.github import pick_default_branch
from easybuild.tools.testing import create_test_report, post_pr_test_report, session_state
from easybuild.tools.py2vs3 import HTTPError, URLError, ascii_letters
import easybuild.tools.github as gh
try:
import keyring
HAVE_KEYRING = True
except ImportError:
HAVE_KEYRING = False
# test account, for which a token may be available
GITHUB_TEST_ACCOUNT = 'easybuild_test'
# the user & repo to use in this test (https://github.com/easybuilders/testrepository)
GITHUB_USER = "easybuilders"
GITHUB_REPO = "testrepository"
# branch to test
GITHUB_BRANCH = 'main'
class GithubTest(EnhancedTestCase):
""" small test for The github package
This should not be to much, since there is an hourly limit of request
for non authenticated users of 50"""
def setUp(self):
"""Test setup."""
super(GithubTest, self).setUp()
self.github_token = gh.fetch_github_token(GITHUB_TEST_ACCOUNT)
if self.github_token is None:
username, token = None, None
else:
username, token = GITHUB_TEST_ACCOUNT, self.github_token
self.ghfs = gh.Githubfs(GITHUB_USER, GITHUB_REPO, GITHUB_BRANCH, username, None, token)
self.skip_github_tests = self.github_token is None and os.getenv('FORCE_EB_GITHUB_TESTS') is None
self.orig_testing_create_gist = easybuild.tools.testing.create_gist
def tearDown(self):
"""Cleanup after running test."""
easybuild.tools.testing.create_gist = self.orig_testing_create_gist
super(GithubTest, self).tearDown()
def test_github_pick_default_branch(self):
"""Test pick_default_branch function."""
self.assertEqual(pick_default_branch('easybuilders'), 'main')
self.assertEqual(pick_default_branch('foobar'), 'master')
def test_github_walk(self):
"""test the gitubfs walk function"""
if self.skip_github_tests:
print("Skipping test_walk, no GitHub token available?")
return
try:
expected = [
(None, ['a_directory', 'second_dir'], ['README.md']),
('a_directory', ['a_subdirectory'], ['a_file.txt']),
('a_directory/a_subdirectory', [], ['a_file.txt']), ('second_dir', [], ['a_file.txt']),
]
self.assertEqual([x for x in self.ghfs.walk(None)], expected)
except IOError:
pass
def test_github_read_api(self):
"""Test the githubfs read function"""
if self.skip_github_tests:
print("Skipping test_read_api, no GitHub token available?")
return
try:
self.assertEqual(self.ghfs.read("a_directory/a_file.txt").strip(), b"this is a line of text")
except IOError:
pass
def test_github_read(self):
"""Test the githubfs read function without using the api"""
if self.skip_github_tests:
print("Skipping test_read, no GitHub token available?")
return
try:
fp = self.ghfs.read("a_directory/a_file.txt", api=False)
self.assertEqual(read_file(fp).strip(), "this is a line of text")
os.remove(fp)
except (IOError, OSError):
pass
def test_github_add_pr_labels(self):
"""Test add_pr_labels function."""
if self.skip_github_tests:
print("Skipping test_add_pr_labels, no GitHub token available?")
return
build_options = {
'pr_target_account': GITHUB_USER,
'pr_target_repo': GITHUB_EASYBLOCKS_REPO,
'github_user': GITHUB_TEST_ACCOUNT,
'dry_run': True,
}
init_config(build_options=build_options)
self.mock_stdout(True)
error_pattern = "Adding labels to PRs for repositories other than easyconfigs hasn't been implemented yet"
self.assertErrorRegex(EasyBuildError, error_pattern, gh.add_pr_labels, 1)
self.mock_stdout(False)
build_options['pr_target_repo'] = GITHUB_EASYCONFIGS_REPO
init_config(build_options=build_options)
# PR #11262 includes easyconfigs that use 'dummy' toolchain,
# so we need to allow triggering deprecated behaviour
self.allow_deprecated_behaviour()
self.mock_stdout(True)
self.mock_stderr(True)
gh.add_pr_labels(11262)
stdout = self.get_stdout()
self.mock_stdout(False)
self.mock_stderr(False)
self.assertTrue("Could not determine any missing labels for PR #11262" in stdout)
self.mock_stdout(True)
self.mock_stderr(True)
gh.add_pr_labels(8006) # closed, unmerged, unlabeled PR
stdout = self.get_stdout()
self.mock_stdout(False)
self.mock_stderr(False)
self.assertTrue("PR #8006 should be labelled 'update'" in stdout)
def test_github_fetch_pr_data(self):
"""Test fetch_pr_data function."""
if self.skip_github_tests:
print("Skipping test_fetch_pr_data, no GitHub token available?")
return
pr_data, _ = gh.fetch_pr_data(1, GITHUB_USER, GITHUB_REPO, GITHUB_TEST_ACCOUNT)
self.assertEqual(pr_data['number'], 1)
self.assertEqual(pr_data['title'], "a pr")
self.assertFalse(any(key in pr_data for key in ['issue_comments', 'review', 'status_last_commit']))
pr_data, _ = gh.fetch_pr_data(2, GITHUB_USER, GITHUB_REPO, GITHUB_TEST_ACCOUNT, full=True)
self.assertEqual(pr_data['number'], 2)
self.assertEqual(pr_data['title'], "an open pr (do not close this please)")
self.assertTrue(pr_data['issue_comments'])
self.assertEqual(pr_data['issue_comments'][0]['body'], "this is a test")
self.assertTrue(pr_data['reviews'])
self.assertEqual(pr_data['reviews'][0]['state'], "APPROVED")
self.assertEqual(pr_data['reviews'][0]['user']['login'], 'boegel')
self.assertEqual(pr_data['status_last_commit'], None)
def test_github_list_prs(self):
"""Test list_prs function."""
if self.skip_github_tests:
print("Skipping test_list_prs, no GitHub token available?")
return
parameters = ('closed', 'created', 'asc')
init_config(build_options={'pr_target_account': GITHUB_USER,
'pr_target_repo': GITHUB_REPO})
expected = "PR #1: a pr"
self.mock_stdout(True)
output = gh.list_prs(parameters, per_page=1, github_user=GITHUB_TEST_ACCOUNT)
stdout = self.get_stdout()
self.mock_stdout(False)
self.assertTrue(stdout.startswith("== Listing PRs with parameters: "))
self.assertEqual(expected, output)
def test_github_reasons_for_closing(self):
"""Test reasons_for_closing function."""
if self.skip_github_tests:
print("Skipping test_reasons_for_closing, no GitHub token available?")
return
repo_owner = gh.GITHUB_EB_MAIN
repo_name = gh.GITHUB_EASYCONFIGS_REPO
build_options = {
'dry_run': True,
'github_user': GITHUB_TEST_ACCOUNT,
'pr_target_account': repo_owner,
'pr_target_repo': repo_name,
'robot_path': [],
}
init_config(build_options=build_options)
pr_data, _ = gh.fetch_pr_data(1844, repo_owner, repo_name, GITHUB_TEST_ACCOUNT, full=True)
self.mock_stdout(True)
self.mock_stderr(True)
# can't easily check return value, since auto-detected reasons may change over time if PR is touched
res = gh.reasons_for_closing(pr_data)
stdout = self.get_stdout()
stderr = self.get_stderr()
self.mock_stdout(False)
self.mock_stderr(False)
self.assertTrue(isinstance(res, list))
self.assertEqual(stderr.strip(), "WARNING: Using easyconfigs from closed PR #1844")
patterns = [
"Status of last commit is SUCCESS",
"Last comment on",
"No activity since",
"* QEMU-2.4.0",
]
for pattern in patterns:
            self.assertTrue(pattern in stdout, "Pattern '%s' not found in: %s" % (pattern, stdout))
def test_github_close_pr(self):
"""Test close_pr function."""
if self.skip_github_tests:
print("Skipping test_close_pr, no GitHub token available?")
return
build_options = {
'dry_run': True,
'github_user': GITHUB_TEST_ACCOUNT,
'pr_target_account': GITHUB_USER,
'pr_target_repo': GITHUB_REPO,
}
init_config(build_options=build_options)
self.mock_stdout(True)
gh.close_pr(2, motivation_msg='just a test')
stdout = self.get_stdout()
self.mock_stdout(False)
patterns = [
"easybuilders/testrepository PR #2 was submitted by migueldiascosta",
"[DRY RUN] Adding comment to testrepository issue #2: '" +
"@migueldiascosta, this PR is being closed for the following reason(s): just a test",
"[DRY RUN] Closed easybuilders/testrepository PR #2",
]
for pattern in patterns:
            self.assertTrue(pattern in stdout, "Pattern '%s' not found in: %s" % (pattern, stdout))
retest_msg = VALID_CLOSE_PR_REASONS['retest']
self.mock_stdout(True)
gh.close_pr(2, motivation_msg=retest_msg)
stdout = self.get_stdout()
self.mock_stdout(False)
patterns = [
"easybuilders/testrepository PR #2 was submitted by migueldiascosta",
"[DRY RUN] Adding comment to testrepository issue #2: '" +
"@migueldiascosta, this PR is being closed for the following reason(s): %s" % retest_msg,
"[DRY RUN] Closed easybuilders/testrepository PR #2",
"[DRY RUN] Reopened easybuilders/testrepository PR #2",
]
for pattern in patterns:
            self.assertTrue(pattern in stdout, "Pattern '%s' not found in: %s" % (pattern, stdout))
def test_github_fetch_easyblocks_from_pr(self):
"""Test fetch_easyblocks_from_pr function."""
if self.skip_github_tests:
print("Skipping test_fetch_easyblocks_from_pr, no GitHub token available?")
return
init_config(build_options={
'pr_target_account': gh.GITHUB_EB_MAIN,
})
# PR with new easyblock plus non-easyblock file
all_ebs_pr1964 = ['lammps.py']
# PR with changed easyblock
all_ebs_pr1967 = ['siesta.py']
# PR with more than one easyblock
all_ebs_pr1949 = ['configuremake.py', 'rpackage.py']
for pr, all_ebs in [(1964, all_ebs_pr1964), (1967, all_ebs_pr1967), (1949, all_ebs_pr1949)]:
try:
tmpdir = os.path.join(self.test_prefix, 'pr%s' % pr)
eb_files = gh.fetch_easyblocks_from_pr(pr, path=tmpdir, github_user=GITHUB_TEST_ACCOUNT)
self.assertEqual(sorted(all_ebs), sorted([os.path.basename(f) for f in eb_files]))
except URLError as err:
print("Ignoring URLError '%s' in test_fetch_easyblocks_from_pr" % err)
def test_github_fetch_easyconfigs_from_pr(self):
"""Test fetch_easyconfigs_from_pr function."""
if self.skip_github_tests:
print("Skipping test_fetch_easyconfigs_from_pr, no GitHub token available?")
return
init_config(build_options={
'pr_target_account': gh.GITHUB_EB_MAIN,
})
# PR for rename of arrow to Arrow,
# see https://github.com/easybuilders/easybuild-easyconfigs/pull/8007/files
all_ecs_pr8007 = [
'Arrow-0.7.1-intel-2017b-Python-3.6.3.eb',
'bat-0.3.3-fix-pyspark.patch',
'bat-0.3.3-intel-2017b-Python-3.6.3.eb',
]
# PR where also files are patched in test/
# see https://github.com/easybuilders/easybuild-easyconfigs/pull/6587/files
all_ecs_pr6587 = [
'WIEN2k-18.1-foss-2018a.eb',
'WIEN2k-18.1-gimkl-2017a.eb',
'WIEN2k-18.1-intel-2018a.eb',
'libxc-4.2.3-foss-2018a.eb',
'libxc-4.2.3-gimkl-2017a.eb',
'libxc-4.2.3-intel-2018a.eb',
]
# PR where files are renamed
# see https://github.com/easybuilders/easybuild-easyconfigs/pull/7159/files
all_ecs_pr7159 = [
'DOLFIN-2018.1.0.post1-foss-2018a-Python-3.6.4.eb',
'OpenFOAM-5.0-20180108-foss-2018a.eb',
'OpenFOAM-5.0-20180108-intel-2018a.eb',
'OpenFOAM-6-foss-2018b.eb',
'OpenFOAM-6-intel-2018a.eb',
'OpenFOAM-v1806-foss-2018b.eb',
'PETSc-3.9.3-foss-2018a.eb',
'SCOTCH-6.0.6-foss-2018a.eb',
'SCOTCH-6.0.6-foss-2018b.eb',
'SCOTCH-6.0.6-intel-2018a.eb',
'Trilinos-12.12.1-foss-2018a-Python-3.6.4.eb'
]
for pr, all_ecs in [(8007, all_ecs_pr8007), (6587, all_ecs_pr6587), (7159, all_ecs_pr7159)]:
try:
tmpdir = os.path.join(self.test_prefix, 'pr%s' % pr)
ec_files = gh.fetch_easyconfigs_from_pr(pr, path=tmpdir, github_user=GITHUB_TEST_ACCOUNT)
self.assertEqual(sorted(all_ecs), sorted([os.path.basename(f) for f in ec_files]))
except URLError as err:
print("Ignoring URLError '%s' in test_fetch_easyconfigs_from_pr" % err)
def test_github_fetch_files_from_pr_cache(self):
"""Test caching for fetch_files_from_pr."""
if self.skip_github_tests:
print("Skipping test_fetch_files_from_pr_cache, no GitHub token available?")
return
init_config(build_options={
'pr_target_account': gh.GITHUB_EB_MAIN,
})
# clear cache first, to make sure we start with a clean slate
gh.fetch_files_from_pr.clear_cache()
self.assertFalse(gh.fetch_files_from_pr._cache)
pr7159_filenames = [
'DOLFIN-2018.1.0.post1-foss-2018a-Python-3.6.4.eb',
'OpenFOAM-5.0-20180108-foss-2018a.eb',
'OpenFOAM-5.0-20180108-intel-2018a.eb',
'OpenFOAM-6-foss-2018b.eb',
'OpenFOAM-6-intel-2018a.eb',
'OpenFOAM-v1806-foss-2018b.eb',
'PETSc-3.9.3-foss-2018a.eb',
'SCOTCH-6.0.6-foss-2018a.eb',
'SCOTCH-6.0.6-foss-2018b.eb',
'SCOTCH-6.0.6-intel-2018a.eb',
'Trilinos-12.12.1-foss-2018a-Python-3.6.4.eb'
]
pr7159_files = gh.fetch_easyconfigs_from_pr(7159, path=self.test_prefix, github_user=GITHUB_TEST_ACCOUNT)
self.assertEqual(sorted(pr7159_filenames), sorted(os.path.basename(f) for f in pr7159_files))
# check that cache has been populated for PR 7159
self.assertEqual(len(gh.fetch_files_from_pr._cache.keys()), 1)
# github_account value is None (results in using default 'easybuilders')
cache_key = (7159, None, 'easybuild-easyconfigs', self.test_prefix)
self.assertTrue(cache_key in gh.fetch_files_from_pr._cache.keys())
cache_entry = gh.fetch_files_from_pr._cache[cache_key]
self.assertEqual(sorted([os.path.basename(f) for f in cache_entry]), sorted(pr7159_filenames))
# same query should return result from cache entry
res = gh.fetch_easyconfigs_from_pr(7159, path=self.test_prefix, github_user=GITHUB_TEST_ACCOUNT)
self.assertEqual(res, pr7159_files)
# inject entry in cache and check result of matching query
pr_id = 12345
tmpdir = os.path.join(self.test_prefix, 'easyblocks-pr-12345')
pr12345_files = [
os.path.join(tmpdir, 'foo.py'),
os.path.join(tmpdir, 'bar.py'),
]
for fp in pr12345_files:
write_file(fp, '')
# github_account value is None (results in using default 'easybuilders')
cache_key = (pr_id, None, 'easybuild-easyblocks', tmpdir)
gh.fetch_files_from_pr.update_cache({cache_key: pr12345_files})
res = gh.fetch_easyblocks_from_pr(12345, tmpdir)
self.assertEqual(sorted(pr12345_files), sorted(res))
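    @staticmethod
    def _cache_sketch():
        """A minimal sketch, not the EasyBuild implementation: a memoizing
        decorator keyed the same way as fetch_files_from_pr's cache above,
        i.e. on (pr, github_account, repo, path)."""
        def cached_by_args(func):
            cache = {}
            def wrapper(pr, account, repo, path):
                key = (pr, account, repo, path)
                if key not in cache:
                    cache[key] = func(pr, account, repo, path)
                return cache[key]
            # expose the same cache-management hooks the test above relies on
            wrapper._cache = cache
            wrapper.clear_cache = cache.clear
            wrapper.update_cache = cache.update
            return wrapper
        return cached_by_args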
def test_github_fetch_latest_commit_sha(self):
"""Test fetch_latest_commit_sha function."""
if self.skip_github_tests:
print("Skipping test_fetch_latest_commit_sha, no GitHub token available?")
return
sha = gh.fetch_latest_commit_sha('easybuild-framework', 'easybuilders', github_user=GITHUB_TEST_ACCOUNT)
self.assertTrue(re.match('^[0-9a-f]{40}$', sha))
sha = gh.fetch_latest_commit_sha('easybuild-easyblocks', 'easybuilders', github_user=GITHUB_TEST_ACCOUNT,
branch='develop')
self.assertTrue(re.match('^[0-9a-f]{40}$', sha))
def test_github_download_repo(self):
"""Test download_repo function."""
if self.skip_github_tests:
print("Skipping test_download_repo, no GitHub token available?")
return
cwd = os.getcwd()
# default: download tarball for master branch of easybuilders/easybuild-easyconfigs repo
path = gh.download_repo(path=self.test_prefix, github_user=GITHUB_TEST_ACCOUNT)
repodir = os.path.join(self.test_prefix, 'easybuilders', 'easybuild-easyconfigs-main')
self.assertTrue(os.path.samefile(path, repodir))
self.assertTrue(os.path.exists(repodir))
shafile = os.path.join(repodir, 'latest-sha')
self.assertTrue(re.match('^[0-9a-f]{40}$', read_file(shafile)))
self.assertTrue(os.path.exists(os.path.join(repodir, 'easybuild', 'easyconfigs', 'f', 'foss', 'foss-2019b.eb')))
# current directory should not have changed after calling download_repo
self.assertTrue(os.path.samefile(cwd, os.getcwd()))
        # an existing downloaded repo is not re-downloaded, unless its SHA differs
account, repo, branch = 'boegel', 'easybuild-easyblocks', 'develop'
repodir = os.path.join(self.test_prefix, account, '%s-%s' % (repo, branch))
latest_sha = gh.fetch_latest_commit_sha(repo, account, branch=branch, github_user=GITHUB_TEST_ACCOUNT)
        # put 'latest-sha' file in place, check whether repo gets (re)downloaded (it should not be)
shafile = os.path.join(repodir, 'latest-sha')
write_file(shafile, latest_sha)
path = gh.download_repo(repo=repo, branch=branch, account=account, path=self.test_prefix,
github_user=GITHUB_TEST_ACCOUNT)
self.assertTrue(os.path.samefile(path, repodir))
self.assertEqual(os.listdir(repodir), ['latest-sha'])
# remove 'latest-sha' file and verify that download was performed
os.remove(shafile)
path = gh.download_repo(repo=repo, branch=branch, account=account, path=self.test_prefix,
github_user=GITHUB_TEST_ACCOUNT)
self.assertTrue(os.path.samefile(path, repodir))
self.assertTrue('easybuild' in os.listdir(repodir))
self.assertTrue(re.match('^[0-9a-f]{40}$', read_file(shafile)))
self.assertTrue(os.path.exists(os.path.join(repodir, 'easybuild', 'easyblocks', '__init__.py')))
def test_install_github_token(self):
"""Test for install_github_token function."""
if self.skip_github_tests:
print("Skipping test_install_github_token, no GitHub token available?")
return
if not HAVE_KEYRING:
print("Skipping test_install_github_token, keyring module not available")
return
random_user = ''.join(random.choice(ascii_letters) for _ in range(10))
self.assertEqual(gh.fetch_github_token(random_user), None)
        # poor man's mocking of getpass
# inject leading/trailing spaces to verify stripping of provided value
def fake_getpass(*args, **kwargs):
return ' ' + self.github_token + ' '
orig_getpass = gh.getpass.getpass
gh.getpass.getpass = fake_getpass
token_installed = False
try:
gh.install_github_token(random_user, silent=True)
token_installed = True
except Exception as err:
print(err)
gh.getpass.getpass = orig_getpass
token = gh.fetch_github_token(random_user)
# cleanup
if token_installed:
keyring.delete_password(gh.KEYRING_GITHUB_TOKEN, random_user)
# deliberately not using assertEqual, keep token secret!
self.assertTrue(token_installed)
self.assertTrue(token == self.github_token)
def test_validate_github_token(self):
"""Test for validate_github_token function."""
if self.skip_github_tests:
print("Skipping test_validate_github_token, no GitHub token available?")
return
if not HAVE_KEYRING:
print("Skipping test_validate_github_token, keyring module not available")
return
self.assertTrue(gh.validate_github_token(self.github_token, GITHUB_TEST_ACCOUNT))
# if a token in the old format is available, test with that too
token_old_format = os.getenv('TEST_GITHUB_TOKEN_OLD_FORMAT')
if token_old_format:
self.assertTrue(gh.validate_github_token(token_old_format, GITHUB_TEST_ACCOUNT))
def test_github_find_easybuild_easyconfig(self):
"""Test for find_easybuild_easyconfig function"""
if self.skip_github_tests:
print("Skipping test_find_easybuild_easyconfig, no GitHub token available?")
return
path = gh.find_easybuild_easyconfig(github_user=GITHUB_TEST_ACCOUNT)
expected = os.path.join('e', 'EasyBuild', r'EasyBuild-[1-9]+\.[0-9]+\.[0-9]+\.eb')
regex = re.compile(expected)
        self.assertTrue(regex.search(path), "Pattern '%s' not found in '%s'" % (regex.pattern, path))
        self.assertTrue(os.path.exists(path), "Path %s does not exist" % path)
def test_github_find_patches(self):
""" Test for find_software_name_for_patch """
test_dir = os.path.dirname(os.path.abspath(__file__))
ec_path = os.path.join(test_dir, 'easyconfigs')
init_config(build_options={
'allow_modules_tool_mismatch': True,
'minimal_toolchains': True,
'use_existing_modules': True,
'external_modules_metadata': ConfigObj(),
'silent': True,
'valid_module_classes': module_classes(),
'validate': False,
})
self.mock_stdout(True)
ec = gh.find_software_name_for_patch('toy-0.0_fix-silly-typo-in-printf-statement.patch', [ec_path])
txt = self.get_stdout()
self.mock_stdout(False)
        self.assertEqual(ec, 'toy')
reg = re.compile(r'[1-9]+ of [1-9]+ easyconfigs checked')
self.assertTrue(re.search(reg, txt))
self.assertEqual(gh.find_software_name_for_patch('test.patch', []), None)
# check behaviour of find_software_name_for_patch when non-UTF8 patch files are present (only with Python 3)
if sys.version_info[0] >= 3:
non_utf8_patch = os.path.join(self.test_prefix, 'problem.patch')
with open(non_utf8_patch, 'wb') as fp:
fp.write(bytes("+ ximage->byte_order=T1_byte_order; /* Set t1lib\xb4s byteorder */\n", 'iso_8859_1'))
self.assertEqual(gh.find_software_name_for_patch('test.patch', [self.test_prefix]), None)
def test_github_det_commit_status(self):
"""Test det_commit_status function."""
if self.skip_github_tests:
print("Skipping test_det_commit_status, no GitHub token available?")
return
# ancient commit, from Jenkins era
commit_sha = 'ec5d6f7191676a86a18404616691796a352c5f1d'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'success')
# commit with failing tests from Travis CI era (no GitHub Actions yet)
commit_sha = 'd0c62556caaa78944722dc84bbb1072bf9688f74'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'failure')
# commit with passing tests from Travis CI era (no GitHub Actions yet)
commit_sha = '21354990e4e6b4ca169b93d563091db4c6b2693e'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'success')
# commit with failing tests, tested by both Travis CI and GitHub Actions
commit_sha = '3a596de93dd95b651b0d1503562d888409364a96'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'failure')
# commit with passing tests, tested by both Travis CI and GitHub Actions
commit_sha = '1fba8ac835d62e78cdc7988b08f4409a1570cef1'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'success')
# commit with failing tests, only tested by GitHub Actions
commit_sha = 'd7130683f02fe8284df3557f0b2fd3947c2ea153'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'failure')
# commit with passing tests, only tested by GitHub Actions
commit_sha = 'e6df09700a1b90c63b4f760eda4b590ee1a9c2fd'
res = gh.det_commit_status('easybuilders', 'easybuild-easyconfigs', commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, 'success')
# commit in test repo where no CI is running at all
commit_sha = '8456f867b03aa001fd5a6fe5a0c4300145c065dc'
res = gh.det_commit_status('easybuilders', GITHUB_REPO, commit_sha, GITHUB_TEST_ACCOUNT)
self.assertEqual(res, None)
def test_github_check_pr_eligible_to_merge(self):
"""Test check_pr_eligible_to_merge function"""
def run_check(expected_result=False):
"""Helper function to check result of check_pr_eligible_to_merge"""
self.mock_stdout(True)
self.mock_stderr(True)
res = gh.check_pr_eligible_to_merge(pr_data)
stdout = self.get_stdout()
stderr = self.get_stderr()
self.mock_stdout(False)
self.mock_stderr(False)
self.assertEqual(res, expected_result)
self.assertEqual(stdout, expected_stdout)
            self.assertTrue(expected_warning in stderr, "Expected '%s' in: %s" % (expected_warning, stderr))
return stderr
pr_data = {
'base': {
'ref': 'main',
'repo': {
'name': 'easybuild-easyconfigs',
'owner': {'login': 'easybuilders'},
},
},
'status_last_commit': None,
'issue_comments': [],
'milestone': None,
'number': '1234',
'merged': False,
'mergeable_state': 'unknown',
'reviews': [{'state': 'CHANGES_REQUESTED', 'user': {'login': 'boegel'}},
# to check that duplicates are filtered
{'state': 'CHANGES_REQUESTED', 'user': {'login': 'boegel'}}],
}
test_result_warning_template = "* test suite passes: %s => not eligible for merging!"
expected_stdout = "Checking eligibility of easybuilders/easybuild-easyconfigs PR #1234 for merging...\n"
# target branch for PR must be develop
expected_warning = "* targets develop branch: FAILED; found 'main' => not eligible for merging!\n"
run_check()
pr_data['base']['ref'] = 'develop'
expected_stdout += "* targets develop branch: OK\n"
# test suite must PASS (not failed, pending or unknown) in Travis
tests = [
('pending', 'pending...'),
('error', '(status: error)'),
('failure', '(status: failure)'),
('foobar', '(status: foobar)'),
('', '(status: )'),
]
for status, test_result in tests:
pr_data['status_last_commit'] = status
expected_warning = test_result_warning_template % test_result
run_check()
pr_data['status_last_commit'] = 'success'
expected_stdout += "* test suite passes: OK\n"
expected_warning = ''
run_check()
# at least the last test report must be successful (and there must be one)
expected_warning = "* last test report is successful: (no test reports found) => not eligible for merging!"
run_check()
pr_data['issue_comments'] = [
{'body': "@easybuild-easyconfigs/maintainers: please review/merge?"},
{'body': "Test report by @boegel\n**SUCCESS**\nit's all good!"},
{'body': "Test report by @boegel\n**FAILED**\nnothing ever works..."},
{'body': "this is just a regular comment"},
]
expected_warning = "* last test report is successful: FAILED => not eligible for merging!"
run_check()
pr_data['issue_comments'].extend([
{'body': "yet another comment"},
{'body': "Test report by @boegel\n**SUCCESS**\nit's all good!"},
])
expected_stdout += "* last test report is successful: OK\n"
expected_warning = ''
run_check()
# approved style review by a human is required
expected_warning = "* approved review: MISSING => not eligible for merging!"
run_check()
pr_data['issue_comments'].insert(2, {'body': 'lgtm'})
run_check()
expected_warning = "* no pending change requests: FAILED (changes requested by boegel)"
expected_warning += " => not eligible for merging!"
run_check()
# if PR is approved by a different user that requested changes and that request has not been dismissed,
# the PR is still not mergeable
pr_data['reviews'].append({'state': 'APPROVED', 'user': {'login': 'not_boegel'}})
expected_stdout_saved = expected_stdout
expected_stdout += "* approved review: OK (by not_boegel)\n"
run_check()
# if the user that requested changes approves the PR, it's mergeable
pr_data['reviews'].append({'state': 'APPROVED', 'user': {'login': 'boegel'}})
expected_stdout = expected_stdout_saved + "* no pending change requests: OK\n"
expected_stdout += "* approved review: OK (by not_boegel, boegel)\n"
expected_warning = ''
run_check()
# milestone must be set
expected_warning = "* milestone is set: no milestone found => not eligible for merging!"
run_check()
pr_data['milestone'] = {'title': '3.3.1'}
expected_stdout += "* milestone is set: OK (3.3.1)\n"
# mergeable state must be clean
expected_warning = "* mergeable state is clean: FAILED (mergeable state is 'unknown')"
run_check()
pr_data['mergeable_state'] = GITHUB_MERGEABLE_STATE_CLEAN
expected_stdout += "* mergeable state is clean: OK\n"
# all checks pass, PR is eligible for merging
expected_warning = ''
self.assertEqual(run_check(True), '')
def test_github_det_pr_labels(self):
"""Test for det_pr_labels function."""
file_info = {'new_folder': [False], 'new_file_in_existing_folder': [True]}
res = gh.det_pr_labels(file_info, GITHUB_EASYCONFIGS_REPO)
self.assertEqual(res, ['update'])
file_info = {'new_folder': [True], 'new_file_in_existing_folder': [False]}
res = gh.det_pr_labels(file_info, GITHUB_EASYCONFIGS_REPO)
self.assertEqual(res, ['new'])
file_info = {'new_folder': [True, False], 'new_file_in_existing_folder': [False, True]}
res = gh.det_pr_labels(file_info, GITHUB_EASYCONFIGS_REPO)
        self.assertEqual(sorted(res), ['new', 'update'])
file_info = {'new': [True]}
res = gh.det_pr_labels(file_info, GITHUB_EASYBLOCKS_REPO)
self.assertEqual(res, ['new'])
def test_github_det_patch_specs(self):
"""Test for det_patch_specs function."""
patch_paths = [os.path.join(self.test_prefix, p) for p in ['1.patch', '2.patch', '3.patch']]
file_info = {'ecs': []}
rawtxt = textwrap.dedent("""
easyblock = 'ConfigureMake'
name = 'A'
version = '42'
homepage = 'http://foo.com/'
description = ''
toolchain = {"name":"GCC", "version": "4.6.3"}
patches = ['1.patch']
""")
file_info['ecs'].append(EasyConfig(None, rawtxt=rawtxt))
rawtxt = textwrap.dedent("""
easyblock = 'ConfigureMake'
name = 'B'
version = '42'
homepage = 'http://foo.com/'
description = ''
toolchain = {"name":"GCC", "version": "4.6.3"}
""")
file_info['ecs'].append(EasyConfig(None, rawtxt=rawtxt))
error_pattern = "Failed to determine software name to which patch file .*/2.patch relates"
self.mock_stdout(True)
self.assertErrorRegex(EasyBuildError, error_pattern, gh.det_patch_specs, patch_paths, file_info, [])
self.mock_stdout(False)
rawtxt = textwrap.dedent("""
easyblock = 'ConfigureMake'
name = 'C'
version = '42'
homepage = 'http://foo.com/'
description = ''
toolchain = {"name":"GCC", "version": "4.6.3"}
patches = [('3.patch', 'subdir'), '2.patch']
""")
file_info['ecs'].append(EasyConfig(None, rawtxt=rawtxt))
self.mock_stdout(True)
res = gh.det_patch_specs(patch_paths, file_info, [])
self.mock_stdout(False)
self.assertEqual([i[0] for i in res], patch_paths)
self.assertEqual([i[1] for i in res], ['A', 'C', 'C'])
# check if patches for extensions are found
rawtxt = textwrap.dedent("""
easyblock = 'ConfigureMake'
name = 'patched_ext'
version = '42'
homepage = 'http://foo.com/'
description = ''
toolchain = {"name":"GCC", "version": "4.6.3"}
exts_list = [
'foo',
('bar', '1.2.3'),
('patched', '4.5.6', {
'patches': [('%(name)s-2.patch', 1), '%(name)s-3.patch'],
}),
]
""")
patch_paths[1:3] = [os.path.join(self.test_prefix, p) for p in ['patched-2.patch', 'patched-3.patch']]
file_info['ecs'][-1] = EasyConfig(None, rawtxt=rawtxt)
self.mock_stdout(True)
res = gh.det_patch_specs(patch_paths, file_info, [])
self.mock_stdout(False)
self.assertEqual([i[0] for i in res], patch_paths)
self.assertEqual([i[1] for i in res], ['A', 'patched_ext', 'patched_ext'])
# check if patches for components are found
rawtxt = textwrap.dedent("""
easyblock = 'PythonBundle'
name = 'patched_bundle'
version = '42'
homepage = 'http://foo.com/'
description = ''
toolchain = {"name":"GCC", "version": "4.6.3"}
components = [
('bar', '1.2.3'),
('patched', '4.5.6', {
'patches': [('%(name)s-2.patch', 1), '%(name)s-3.patch'],
}),
]
""")
file_info['ecs'][-1] = EasyConfig(None, rawtxt=rawtxt)
self.mock_stdout(True)
res = gh.det_patch_specs(patch_paths, file_info, [])
self.mock_stdout(False)
self.assertEqual([i[0] for i in res], patch_paths)
self.assertEqual([i[1] for i in res], ['A', 'patched_bundle', 'patched_bundle'])
def test_github_restclient(self):
"""Test use of RestClient."""
if self.skip_github_tests:
print("Skipping test_restclient, no GitHub token available?")
return
client = RestClient('https://api.github.com', username=GITHUB_TEST_ACCOUNT, token=self.github_token)
status, body = client.repos['easybuilders']['testrepository'].contents.a_directory['a_file.txt'].get()
self.assertEqual(status, 200)
# base64.b64encode requires & produces a 'bytes' value in Python 3,
# but we need a string value hence the .decode() (also works in Python 2)
self.assertEqual(body['content'].strip(), base64.b64encode(b'this is a line of text\n').decode())
status, headers = client.head()
self.assertEqual(status, 200)
self.assertTrue(headers)
self.assertTrue('X-GitHub-Media-Type' in headers)
httperror_hit = False
try:
status, body = client.user.emails.post(body='test@example.com')
            self.assertTrue(False, 'posting to unauthorized endpoint did not throw an HTTP error')
except HTTPError:
httperror_hit = True
self.assertTrue(httperror_hit, "expected HTTPError not encountered")
httperror_hit = False
try:
status, body = client.user.emails.delete(body='test@example.com')
            self.assertTrue(False, 'deleting at an unauthorized endpoint did not throw an HTTP error')
except HTTPError:
httperror_hit = True
self.assertTrue(httperror_hit, "expected HTTPError not encountered")
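    @staticmethod
    def _path_chain_sketch():
        """A minimal sketch, not the easybuild.base.rest implementation: how
        attribute/item chaining as used above can be mapped onto a URL path,
        e.g. repos['easybuilders']['testrepository'].contents ->
        '/repos/easybuilders/testrepository/contents'."""
        class PathChain(object):
            def __init__(self, segments=()):
                self._segments = tuple(segments)
            def __getattr__(self, name):
                # unknown attributes extend the path by one segment
                return PathChain(self._segments + (name,))
            def __getitem__(self, item):
                return PathChain(self._segments + (str(item),))
            def url(self):
                return '/' + '/'.join(self._segments)
        return PathChain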
def test_github_create_delete_gist(self):
"""Test create_gist and delete_gist."""
if self.skip_github_tests:
print("Skipping test_restclient, no GitHub token available?")
return
test_txt = "This is just a test."
gist_url = gh.create_gist(test_txt, 'test.txt', github_user=GITHUB_TEST_ACCOUNT, github_token=self.github_token)
gist_id = gist_url.split('/')[-1]
gh.delete_gist(gist_id, github_user=GITHUB_TEST_ACCOUNT, github_token=self.github_token)
def test_github_det_account_branch_for_pr(self):
"""Test det_account_branch_for_pr."""
if self.skip_github_tests:
print("Skipping test_det_account_branch_for_pr, no GitHub token available?")
return
init_config(build_options={
'pr_target_account': 'easybuilders',
'pr_target_repo': 'easybuild-easyconfigs',
})
# see https://github.com/easybuilders/easybuild-easyconfigs/pull/9149
self.mock_stdout(True)
account, branch = gh.det_account_branch_for_pr(9149, github_user=GITHUB_TEST_ACCOUNT)
self.mock_stdout(False)
self.assertEqual(account, 'boegel')
self.assertEqual(branch, '20191017070734_new_pr_EasyBuild401')
init_config(build_options={
'pr_target_account': 'easybuilders',
'pr_target_repo': 'easybuild-framework',
})
# see https://github.com/easybuilders/easybuild-framework/pull/3069
self.mock_stdout(True)
account, branch = gh.det_account_branch_for_pr(3069, github_user=GITHUB_TEST_ACCOUNT)
self.mock_stdout(False)
self.assertEqual(account, 'migueldiascosta')
self.assertEqual(branch, 'fix_inject_checksums')
def test_github_det_pr_target_repo(self):
"""Test det_pr_target_repo."""
self.assertEqual(build_option('pr_target_repo'), None)
# no files => return default target repo (None)
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([])), None)
test_dir = os.path.dirname(os.path.abspath(__file__))
# easyconfigs/patches (incl. files to delete) => easyconfigs repo
# this is solely based on filenames, actual files are not opened, except for the patch file which must exist
toy_patch_fn = 'toy-0.0_fix-silly-typo-in-printf-statement.patch'
toy_patch = os.path.join(test_dir, 'sandbox', 'sources', 'toy', toy_patch_fn)
test_cases = [
['toy.eb'],
[toy_patch],
['toy.eb', toy_patch],
[':toy.eb'], # deleting toy.eb
['one.eb', 'two.eb'],
['one.eb', 'two.eb', toy_patch, ':todelete.eb'],
]
for test_case in test_cases:
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(test_case)), 'easybuild-easyconfigs')
# if only Python files are involved, result is easyblocks or framework repo;
# all Python files are easyblocks => easyblocks repo, otherwise => framework repo;
# files are opened and inspected here to discriminate between easyblocks & other Python files, so must exist!
github_py = os.path.join(test_dir, 'github.py')
configuremake = os.path.join(test_dir, 'sandbox', 'easybuild', 'easyblocks', 'generic', 'configuremake.py')
self.assertTrue(os.path.exists(configuremake))
toy_eb = os.path.join(test_dir, 'sandbox', 'easybuild', 'easyblocks', 't', 'toy.py')
self.assertTrue(os.path.exists(toy_eb))
self.assertEqual(build_option('pr_target_repo'), None)
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([github_py])), 'easybuild-framework')
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([configuremake])), 'easybuild-easyblocks')
py_files = [github_py, configuremake]
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(py_files)), 'easybuild-framework')
py_files[0] = toy_eb
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(py_files)), 'easybuild-easyblocks')
py_files.append(github_py)
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(py_files)), 'easybuild-framework')
# as soon as an easyconfig file or patch files is involved => result is easybuild-easyconfigs repo
for fn in ['toy.eb', toy_patch]:
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(py_files + [fn])), 'easybuild-easyconfigs')
# if --pr-target-repo is specified, we always get this value (no guessing anymore)
init_config(build_options={'pr_target_repo': 'thisisjustatest'})
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([])), 'thisisjustatest')
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(['toy.eb', toy_patch])), 'thisisjustatest')
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type(py_files)), 'thisisjustatest')
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([configuremake])), 'thisisjustatest')
self.assertEqual(gh.det_pr_target_repo(categorize_files_by_type([toy_eb])), 'thisisjustatest')
def test_push_branch_to_github(self):
"""Test push_branch_to_github."""
build_options = {'dry_run': True}
init_config(build_options=build_options)
git_repo = gh.init_repo(self.test_prefix, GITHUB_REPO)
branch = 'test123'
self.mock_stderr(True)
self.mock_stdout(True)
gh.setup_repo(git_repo, GITHUB_USER, GITHUB_REPO, 'main')
git_repo.create_head(branch, force=True)
gh.push_branch_to_github(git_repo, GITHUB_USER, GITHUB_REPO, branch)
stderr = self.get_stderr()
stdout = self.get_stdout()
self.mock_stderr(True)
self.mock_stdout(True)
self.assertEqual(stderr, '')
github_path = '%s/%s.git' % (GITHUB_USER, GITHUB_REPO)
pattern = r'^' + '\n'.join([
r"== fetching branch 'main' from https://github.com/%s\.\.\." % github_path,
r"== pushing branch 'test123' to remote 'github_.*' \(git@github.com:%s\) \[DRY RUN\]" % github_path,
]) + r'$'
regex = re.compile(pattern)
self.assertTrue(regex.match(stdout.strip()), "Pattern '%s' doesn't match: %s" % (regex.pattern, stdout))
def test_github_pr_test_report(self):
"""Test for post_pr_test_report function."""
if self.skip_github_tests:
print("Skipping test_post_pr_test_report, no GitHub token available?")
return
init_config(build_options={
'dry_run': True,
'github_user': GITHUB_TEST_ACCOUNT,
})
test_report = {'full': "This is a test report!"}
init_session_state = session_state()
self.mock_stderr(True)
self.mock_stdout(True)
post_pr_test_report('1234', gh.GITHUB_EASYCONFIGS_REPO, test_report, "OK!", init_session_state, True)
stderr, stdout = self.get_stderr(), self.get_stdout()
self.mock_stderr(False)
self.mock_stdout(False)
self.assertEqual(stderr, '')
patterns = [
r"^\[DRY RUN\] Adding comment to easybuild-easyconfigs issue #1234: 'Test report by @easybuild_test",
r"^See https://gist.github.com/DRY_RUN for a full test report.'",
]
for pattern in patterns:
regex = re.compile(pattern, re.M)
self.assertTrue(regex.search(stdout), "Pattern '%s' should be found in: %s" % (regex.pattern, stdout))
self.mock_stderr(True)
self.mock_stdout(True)
post_pr_test_report('1234', gh.GITHUB_EASYBLOCKS_REPO, test_report, "OK!", init_session_state, True)
stderr, stdout = self.get_stderr(), self.get_stdout()
self.mock_stderr(False)
self.mock_stdout(False)
self.assertEqual(stderr, '')
patterns = [
r"^\[DRY RUN\] Adding comment to easybuild-easyblocks issue #1234: 'Test report by @easybuild_test",
r"^See https://gist.github.com/DRY_RUN for a full test report.'",
]
for pattern in patterns:
regex = re.compile(pattern, re.M)
self.assertTrue(regex.search(stdout), "Pattern '%s' should be found in: %s" % (regex.pattern, stdout))
# also test combination of --from-pr and --include-easyblocks-from-pr
update_build_option('include_easyblocks_from_pr', ['6789'])
self.mock_stderr(True)
self.mock_stdout(True)
post_pr_test_report('1234', gh.GITHUB_EASYCONFIGS_REPO, test_report, "OK!", init_session_state, True)
stderr, stdout = self.get_stderr(), self.get_stdout()
self.mock_stderr(False)
self.mock_stdout(False)
self.assertEqual(stderr, '')
patterns = [
r"^\[DRY RUN\] Adding comment to easybuild-easyconfigs issue #1234: 'Test report by @easybuild_test",
r"^See https://gist.github.com/DRY_RUN for a full test report.'",
r"Using easyblocks from PR\(s\) https://github.com/easybuilders/easybuild-easyblocks/pull/6789",
]
for pattern in patterns:
regex = re.compile(pattern, re.M)
self.assertTrue(regex.search(stdout), "Pattern '%s' should be found in: %s" % (regex.pattern, stdout))
def test_github_create_test_report(self):
"""Test create_test_report function."""
logfile = os.path.join(self.test_prefix, 'log.txt')
write_file(logfile, "Bazel failed with: error")
ecs_with_res = [
({'spec': 'test.eb'}, {'success': True}),
({'spec': 'fail.eb'}, {
'success': False,
'err': EasyBuildError("error: bazel"),
'traceback': "in bazel",
'log_file': logfile,
}),
]
init_session_state = {
'easybuild_configuration': ['EASYBUILD_DEBUG=1'],
'environment': {'USER': 'test'},
'module_list': [{'mod_name': 'test'}],
'system_info': {'name': 'test'},
'time': gmtime(0),
}
res = create_test_report("just a test", ecs_with_res, init_session_state)
patterns = [
"**SUCCESS** _test.eb_",
"**FAIL (build issue)** _fail.eb_",
"01 Jan 1970 00:00:00",
"EASYBUILD_DEBUG=1",
]
for pattern in patterns:
            self.assertTrue(pattern in res['full'], "Pattern '%s' not found in: %s" % (pattern, res['full']))
        for pattern in patterns[:2]:
            self.assertTrue(pattern in res['overview'], "Pattern '%s' not found in: %s" % (pattern, res['overview']))
# mock create_gist function, we don't want to actually create a gist every time we run this test...
def fake_create_gist(*args, **kwargs):
return 'https://gist.github.com/test'
easybuild.tools.testing.create_gist = fake_create_gist
res = create_test_report("just a test", ecs_with_res, init_session_state, pr_nrs=[123], gist_log=True)
patterns.insert(2, "https://gist.github.com/test")
patterns.extend([
"https://github.com/easybuilders/easybuild-easyconfigs/pull/123",
])
for pattern in patterns:
            self.assertTrue(pattern in res['full'], "Pattern '%s' not found in: %s" % (pattern, res['full']))
        for pattern in patterns[:3]:
            self.assertTrue(pattern in res['overview'], "Pattern '%s' not found in: %s" % (pattern, res['overview']))
self.assertTrue("**SUCCESS** _test.eb_" in res['overview'])
def suite():
""" returns all the testcases in this module """
return TestLoaderFiltered().loadTestsFromTestCase(GithubTest, sys.argv[1:])
if __name__ == '__main__':
res = TextTestRunner(verbosity=1).run(suite())
sys.exit(len(res.failures))
|
hpcugent/easybuild-framework
|
test/framework/github.py
|
Python
|
gpl-2.0
| 51,543
|
[
"LAMMPS",
"SIESTA",
"WIEN2k"
] |
1a8fddfe186c139b78bbacfa5bd1db08f34e785aaa2b91988a3efa663b3db266
|
# Copyright 2020 Google Research. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Hparams for model architecture and trainer."""
import ast
import collections
import collections.abc
import copy
from typing import Any, Dict, Text
import six
import tensorflow as tf
import yaml
def eval_str_fn(val):
if val in {'true', 'false'}:
return val == 'true'
try:
return ast.literal_eval(val)
except (ValueError, SyntaxError):
return val
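def _eval_str_fn_examples():
  # A small illustration, not part of the original module: booleans are
  # special-cased, Python literals are parsed via ast.literal_eval, and
  # anything else passes through unchanged.
  assert eval_str_fn('true') is True
  assert eval_str_fn('1e-3') == 1e-3
  assert eval_str_fn('swish') == 'swish'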
# pylint: disable=protected-access
class Config(object):
"""A config utility class."""
def __init__(self, config_dict=None):
self.update(config_dict)
def __setattr__(self, k, v):
self.__dict__[k] = Config(v) if isinstance(v, dict) else copy.deepcopy(v)
def __getattr__(self, k):
return self.__dict__[k]
def __getitem__(self, k):
return self.__dict__[k]
def __repr__(self):
return repr(self.as_dict())
def __deepcopy__(self, memodict):
return type(self)(self.as_dict())
def __str__(self):
try:
return yaml.dump(self.as_dict(), indent=4)
except TypeError:
return str(self.as_dict())
def _update(self, config_dict, allow_new_keys=True):
"""Recursively update internal members."""
if not config_dict:
return
for k, v in six.iteritems(config_dict):
if k not in self.__dict__:
if allow_new_keys:
self.__setattr__(k, v)
else:
raise KeyError('Key `{}` does not exist for overriding. '.format(k))
else:
if isinstance(self.__dict__[k], Config) and isinstance(v, dict):
self.__dict__[k]._update(v, allow_new_keys)
elif isinstance(self.__dict__[k], Config) and isinstance(v, Config):
self.__dict__[k]._update(v.as_dict(), allow_new_keys)
else:
self.__setattr__(k, v)
def get(self, k, default_value=None):
return self.__dict__.get(k, default_value)
def update(self, config_dict):
"""Update members while allowing new keys."""
self._update(config_dict, allow_new_keys=True)
def keys(self):
return self.__dict__.keys()
def override(self, config_dict_or_str, allow_new_keys=False):
"""Update members while disallowing new keys."""
if isinstance(config_dict_or_str, str):
if not config_dict_or_str:
return
elif '=' in config_dict_or_str:
config_dict = self.parse_from_str(config_dict_or_str)
elif config_dict_or_str.endswith('.yaml'):
config_dict = self.parse_from_yaml(config_dict_or_str)
else:
raise ValueError(
            'Invalid string {}, must end with .yaml or contain "=".'.format(
config_dict_or_str))
elif isinstance(config_dict_or_str, dict):
config_dict = config_dict_or_str
else:
raise ValueError('Unknown value type: {}'.format(config_dict_or_str))
self._update(config_dict, allow_new_keys)
def parse_from_yaml(self, yaml_file_path: Text) -> Dict[Any, Any]:
"""Parses a yaml file and returns a dictionary."""
with tf.io.gfile.GFile(yaml_file_path, 'r') as f:
config_dict = yaml.load(f, Loader=yaml.FullLoader)
return config_dict
def save_to_yaml(self, yaml_file_path):
"""Write a dictionary into a yaml file."""
with tf.io.gfile.GFile(yaml_file_path, 'w') as f:
yaml.dump(self.as_dict(), f, default_flow_style=False)
def parse_from_str(self, config_str: Text) -> Dict[Any, Any]:
"""Parse a string like 'x.y=1,x.z=2' to nested dict {x: {y: 1, z: 2}}."""
if not config_str:
return {}
config_dict = {}
try:
for kv_pair in config_str.split(','):
if not kv_pair: # skip empty string
continue
key_str, value_str = kv_pair.split('=')
key_str = key_str.strip()
def add_kv_recursive(k, v):
"""Recursively parse x.y.z=tt to {x: {y: {z: tt}}}."""
if '.' not in k:
if '*' in v:
# we reserve * to split arrays.
return {k: [eval_str_fn(vv) for vv in v.split('*')]}
return {k: eval_str_fn(v)}
pos = k.index('.')
return {k[:pos]: add_kv_recursive(k[pos + 1:], v)}
        def merge_dict_recursive(target, src):
          """Recursively merge two nested dictionaries."""
          for k in src.keys():
            if ((k in target and isinstance(target[k], dict) and
                 isinstance(src[k], collections.abc.Mapping))):
              merge_dict_recursive(target[k], src[k])
            else:
              target[k] = src[k]
merge_dict_recursive(config_dict, add_kv_recursive(key_str, value_str))
return config_dict
except ValueError:
raise ValueError('Invalid config_str: {}'.format(config_str))
def as_dict(self):
"""Returns a dict representation."""
config_dict = {}
for k, v in six.iteritems(self.__dict__):
if isinstance(v, Config):
config_dict[k] = v.as_dict()
else:
config_dict[k] = copy.deepcopy(v)
return config_dict
# pylint: enable=protected-access
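def _config_usage_sketch():
  """A minimal usage sketch (the keys here are hypothetical, not a real
  model config): nested dicts become attribute-accessible, update() allows
  new keys, and override() parses 'a.b=1'-style strings but rejects keys
  that do not already exist."""
  cfg = Config({'model': {'name': 'demo', 'depth': 3}})
  cfg.override('model.depth=4')      # existing key: allowed
  cfg.update({'optimizer': 'sgd'})   # new key: allowed via update()
  assert cfg.model.depth == 4
  assert cfg.as_dict()['optimizer'] == 'sgd'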
def default_detection_configs():
"""Returns a default detection configs."""
h = Config()
# model name.
h.name = 'efficientdet-d1'
# activation type: see activation_fn in utils.py.
h.act_type = 'swish'
# input preprocessing parameters
h.image_size = 640 # An integer or a string WxH such as 640x320.
h.target_size = None
h.input_rand_hflip = True
h.jitter_min = 0.1
h.jitter_max = 2.0
h.autoaugment_policy = None
h.grid_mask = False
h.sample_image = None
h.map_freq = 5 # AP eval frequency in epochs.
# dataset specific parameters
# TODO(tanmingxing): update this to be 91 for COCO, and 21 for pascal.
h.num_classes = 90 # 1+ actual classes, 0 is reserved for background.
h.seg_num_classes = 3 # segmentation classes
h.heads = ['object_detection'] # 'object_detection', 'segmentation'
h.skip_crowd_during_training = True
h.label_map = None # a dict or a string of 'coco', 'voc', 'waymo'.
h.max_instances_per_image = 100 # Default to 100 for COCO.
h.regenerate_source_id = False
# model architecture
h.min_level = 3
h.max_level = 7
h.num_scales = 3
# ratio w/h: 2.0 means w=1.4, h=0.7. Can be computed with k-mean per dataset.
h.aspect_ratios = [1.0, 2.0, 0.5] # [[0.7, 1.4], [1.0, 1.0], [1.4, 0.7]]
h.anchor_scale = 4.0
# is batchnorm training mode
h.is_training_bn = True
# optimization
h.momentum = 0.9
h.optimizer = 'sgd' # can be 'adam' or 'sgd'.
h.learning_rate = 0.08 # 0.008 for adam.
h.lr_warmup_init = 0.008 # 0.0008 for adam.
h.lr_warmup_epoch = 1.0
h.first_lr_drop_epoch = 200.0
h.second_lr_drop_epoch = 250.0
h.poly_lr_power = 0.9
h.clip_gradients_norm = 10.0
h.num_epochs = 300
h.data_format = 'channels_last'
# The default image normalization is identical to Cloud TPU ResNet.
h.mean_rgb = [0.485 * 255, 0.456 * 255, 0.406 * 255]
h.stddev_rgb = [0.229 * 255, 0.224 * 255, 0.225 * 255]
h.scale_range = False
# classification loss
h.label_smoothing = 0.0 # 0.1 is a good default
  # focal loss parameters
h.alpha = 0.25
h.gamma = 1.5
# localization loss
h.delta = 0.1 # regularization parameter of huber loss.
# total loss = box_loss * box_loss_weight + iou_loss * iou_loss_weight
h.box_loss_weight = 50.0
h.iou_loss_type = None
h.iou_loss_weight = 1.0
# regularization l2 loss.
h.weight_decay = 4e-5
h.strategy = None # 'tpu', 'gpus', None
h.mixed_precision = False # If False, use float32.
h.loss_scale = None # set to 2**16 enables dynamic loss scale
# For detection.
h.box_class_repeats = 3
h.fpn_cell_repeats = 3
h.fpn_num_filters = 88
h.separable_conv = True
h.apply_bn_for_resampling = True
h.conv_after_downsample = False
h.conv_bn_act_pattern = False
h.drop_remainder = True # drop remainder for the final batch eval.
# For post-processing nms, must be a dict.
h.nms_configs = {
'method': 'gaussian',
'iou_thresh': None, # use the default value based on method.
'score_thresh': 0.,
'sigma': None,
'pyfunc': False,
'max_nms_inputs': 0,
'max_output_size': 100,
}
h.tflite_max_detections = 100
# version.
h.fpn_name = None
h.fpn_weight_method = None
h.fpn_config = None
# No stochastic depth in default.
h.survival_prob = None
h.img_summary_steps = None
h.lr_decay_method = 'cosine'
h.moving_average_decay = 0.9998
h.ckpt_var_scope = None # ckpt variable scope.
# If true, skip loading pretrained weights if shape mismatches.
h.skip_mismatch = True
h.backbone_name = 'efficientnet-b1'
h.backbone_config = None
h.var_freeze_expr = None
# A temporary flag to switch between legacy and keras models.
h.use_keras_model = True
h.dataset_type = None
h.positives_momentum = None
h.grad_checkpoint = False
# Parameters for the Checkpoint Callback.
h.verbose = 1
h.save_freq = 'epoch'
return h
efficientdet_model_param_dict = {
'efficientdet-d0':
dict(
name='efficientdet-d0',
backbone_name='efficientnet-b0',
image_size=512,
fpn_num_filters=64,
fpn_cell_repeats=3,
box_class_repeats=3,
),
'efficientdet-d1':
dict(
name='efficientdet-d1',
backbone_name='efficientnet-b1',
image_size=640,
fpn_num_filters=88,
fpn_cell_repeats=4,
box_class_repeats=3,
),
'efficientdet-d2':
dict(
name='efficientdet-d2',
backbone_name='efficientnet-b2',
image_size=768,
fpn_num_filters=112,
fpn_cell_repeats=5,
box_class_repeats=3,
),
'efficientdet-d3':
dict(
name='efficientdet-d3',
backbone_name='efficientnet-b3',
image_size=896,
fpn_num_filters=160,
fpn_cell_repeats=6,
box_class_repeats=4,
),
'efficientdet-d4':
dict(
name='efficientdet-d4',
backbone_name='efficientnet-b4',
image_size=1024,
fpn_num_filters=224,
fpn_cell_repeats=7,
box_class_repeats=4,
),
'efficientdet-d5':
dict(
name='efficientdet-d5',
backbone_name='efficientnet-b5',
image_size=1280,
fpn_num_filters=288,
fpn_cell_repeats=7,
box_class_repeats=4,
),
'efficientdet-d6':
dict(
name='efficientdet-d6',
backbone_name='efficientnet-b6',
image_size=1280,
fpn_num_filters=384,
fpn_cell_repeats=8,
box_class_repeats=5,
fpn_weight_method='sum', # Use unweighted sum for stability.
),
'efficientdet-d7':
dict(
name='efficientdet-d7',
backbone_name='efficientnet-b6',
image_size=1536,
fpn_num_filters=384,
fpn_cell_repeats=8,
box_class_repeats=5,
anchor_scale=5.0,
fpn_weight_method='sum', # Use unweighted sum for stability.
),
'efficientdet-d7x':
dict(
name='efficientdet-d7x',
backbone_name='efficientnet-b7',
image_size=1536,
fpn_num_filters=384,
fpn_cell_repeats=8,
box_class_repeats=5,
anchor_scale=4.0,
max_level=8,
fpn_weight_method='sum', # Use unweighted sum for stability.
),
}
lite_common_param = dict(
mean_rgb=127.0,
stddev_rgb=128.0,
act_type='relu6',
fpn_weight_method='sum',
)
efficientdet_lite_param_dict = {
# lite models are in progress and subject to changes.
# mean_rgb and stddev_rgb are consistent with EfficientNet-Lite models in
# https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/lite/efficientnet_lite_builder.py#L28
'efficientdet-lite0':
dict(
name='efficientdet-lite0',
backbone_name='efficientnet-lite0',
image_size=320,
fpn_num_filters=64,
fpn_cell_repeats=3,
box_class_repeats=3,
anchor_scale=3.0,
**lite_common_param,
),
'efficientdet-lite1':
dict(
name='efficientdet-lite1',
backbone_name='efficientnet-lite1',
image_size=384,
fpn_num_filters=88,
fpn_cell_repeats=4,
box_class_repeats=3,
anchor_scale=3.0,
**lite_common_param,
),
'efficientdet-lite2':
dict(
name='efficientdet-lite2',
backbone_name='efficientnet-lite2',
image_size=448,
fpn_num_filters=112,
fpn_cell_repeats=5,
box_class_repeats=3,
anchor_scale=3.0,
**lite_common_param,
),
'efficientdet-lite3':
dict(
name='efficientdet-lite3',
backbone_name='efficientnet-lite3',
image_size=512,
fpn_num_filters=160,
fpn_cell_repeats=6,
box_class_repeats=4,
**lite_common_param,
),
'efficientdet-lite3x':
dict(
name='efficientdet-lite3x',
backbone_name='efficientnet-lite3',
image_size=640,
fpn_num_filters=200,
fpn_cell_repeats=6,
box_class_repeats=4,
anchor_scale=3.0,
**lite_common_param,
),
'efficientdet-lite4':
dict(
name='efficientdet-lite4',
backbone_name='efficientnet-lite4',
image_size=640,
fpn_num_filters=224,
fpn_cell_repeats=7,
box_class_repeats=4,
**lite_common_param,
),
}
def get_efficientdet_config(model_name='efficientdet-d1'):
"""Get the default config for EfficientDet based on model name."""
h = default_detection_configs()
if model_name in efficientdet_model_param_dict:
h.override(efficientdet_model_param_dict[model_name])
elif model_name in efficientdet_lite_param_dict:
h.override(efficientdet_lite_param_dict[model_name])
else:
raise ValueError('Unknown model name: {}'.format(model_name))
return h
def get_detection_config(model_name):
if model_name.startswith('efficientdet'):
return get_efficientdet_config(model_name)
else:
raise ValueError('model name must start with efficientdet.')
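def _get_detection_config_example():
  # A small illustration: fetch a named config and override a couple of
  # fields; both override keys exist in default_detection_configs() above.
  config = get_detection_config('efficientdet-d0')
  config.override('num_classes=21,image_size=512')
  return config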
|
google/automl
|
efficientdet/hparams_config.py
|
Python
|
apache-2.0
| 15,113
|
[
"Gaussian"
] |
03c18c31b685bfe04f536fcb0ef0f21e4e0d592e2ad672dfe975fa0d9cf1daa5
|
# -*- coding: utf-8 -*-
"""Flask App for MPContribs API"""
import os
import boto3
import urllib
import logging
import requests
#import rq_dashboard
#import flask_monitoringdashboard as dashboard
import flask_mongorest.operators as ops
from importlib import import_module
from websocket import create_connection
from flask import Flask, current_app, request, g
from flask_marshmallow import Marshmallow
from flask_mongoengine import MongoEngine
from flask_mongorest import register_class
from flask_sse import sse
from flask_compress import Compress
from flasgger.base import Swagger
from mongoengine import ValidationError
from mongoengine.base.datastructures import BaseDict
from itsdangerous import URLSafeTimedSerializer
from string import punctuation
from boltons.iterutils import remap, default_enter
from notebook.utils import url_path_join
from notebook.gateway.managers import GatewayClient
from requests.exceptions import ConnectionError
delimiter, max_depth = ".", 4
invalidChars = set(punctuation.replace("*", ""))
invalidChars.add(" ")
is_gunicorn = "gunicorn" in os.environ.get("SERVER_SOFTWARE", "")
sns_client = boto3.client("sns")
# NOTE not including Size below (special for arrays)
FILTERS = {
"LONG_STRINGS": [
ops.Contains, ops.IContains,
ops.Startswith, ops.IStartswith,
ops.Endswith, ops.IEndswith
],
"NUMBERS": [ops.Lt, ops.Lte, ops.Gt, ops.Gte, ops.Range],
"DATES": [ops.Before, ops.After],
"OTHERS": [ops.Boolean, ops.Exists],
}
FILTERS["STRINGS"] = [ops.In, ops.Exact, ops.IExact, ops.Ne] + FILTERS["LONG_STRINGS"]
FILTERS["ALL"] = FILTERS["STRINGS"] + FILTERS["NUMBERS"] + FILTERS["DATES"] + FILTERS["OTHERS"]
class CustomLoggerAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
prefix = self.extra.get('prefix')
return f"[{prefix}] {msg}" if prefix else msg, kwargs
def get_logger(name):
logger = logging.getLogger(name)
process = os.environ.get("SUPERVISOR_PROCESS_NAME")
group = os.environ.get("SUPERVISOR_GROUP_NAME")
cfg = {"prefix": f"{group}/{process}"} if process and group else {}
logger.setLevel("DEBUG" if os.environ.get("FLASK_ENV") == "development" else "INFO")
return CustomLoggerAdapter(logger, cfg)
logger = get_logger(__name__)
def enter(path, key, value):
if isinstance(value, BaseDict):
return dict(), value.items()
elif isinstance(value, list):
dot_path = delimiter.join(list(path) + [key])
raise ValidationError(f"lists not allowed ({dot_path})!")
return default_enter(path, key, value)
def valid_key(key):
for char in key:
if char in invalidChars:
raise ValidationError(f"invalid character {char} in {key}")
def visit(path, key, value):
key = key.strip()
if len(path) + 1 > max_depth:
dot_path = delimiter.join(list(path) + [key])
raise ValidationError(f"max nesting ({max_depth}) exceeded for {dot_path}")
valid_key(key)
return key, value
def valid_dict(dct):
remap(dct, visit=visit, enter=enter)
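# Illustrative examples (added; inputs hypothetical). valid_dict accepts
# nested plain dicts up to max_depth and rejects lists and punctuation keys:
# valid_dict({"a": {"b": {"c": {"d": 1}}}}) # OK: nesting depth == max_depth
# valid_dict({"a": [1, 2]}) # raises ValidationError (lists not allowed)
# valid_dict({"a!": 1}) # raises ValidationError (invalid character)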
def send_email(to, subject, template):
sns_client.publish(TopicArn=to, Message=template, Subject=subject)
def get_collections(db):
"""get list of collections in DB"""
conn = db.app.extensions["mongoengine"][db]["conn"]
dbname = db.app.config.get("MPCONTRIBS_DB")
return conn[dbname].list_collection_names()
def get_resource_as_string(name, charset="utf-8"):
"""http://flask.pocoo.org/snippets/77/"""
with current_app.open_resource(name) as f:
return f.read().decode(charset)
def get_kernel_endpoint(kernel_id=None, ws=False):
gw_client = GatewayClient.instance()
base_url = gw_client.ws_url if ws else gw_client.url
base_endpoint = url_path_join(base_url, gw_client.kernels_endpoint)
if isinstance(kernel_id, str) and kernel_id:
return url_path_join(base_endpoint, kernel_id)
return base_endpoint
def create_kernel_connection(kernel_id):
url = get_kernel_endpoint(kernel_id, ws=True) + "/channels"
return create_connection(url, skip_utf8_validation=True)
def get_kernels():
"""retrieve list of kernels from KernelGateway service"""
try:
r = requests.get(get_kernel_endpoint())
except ConnectionError:
logger.warning("Kernel Gateway NOT AVAILABLE")
return None
response = r.json()
nkernels = 3 # reserve 3 kernels for each deployment
idx = int(os.environ.get("DEPLOYMENT"))
if len(response) < nkernels * (idx + 1):
logger.error("NOT ENOUGH KERNELS AVAILABLE")
return None
return {kernel["id"]: None for kernel in response[idx:idx+3]}
def get_consumer():
groups = request.headers.get("X-Authenticated-Groups", "").split(",")
groups += request.headers.get("X-Consumer-Groups", "").split(",")
return {
"username": request.headers.get("X-Consumer-Username"),
"apikey": request.headers.get("X-Consumer-Custom-Id"),
"groups": ",".join(set(groups)),
}
def create_app():
"""create flask app"""
app = Flask(__name__)
app.config.from_pyfile("config.py", silent=True)
app.config["USTS"] = URLSafeTimedSerializer(app.secret_key)
app.jinja_env.globals["get_resource_as_string"] = get_resource_as_string
app.jinja_env.lstrip_blocks = True
app.jinja_env.trim_blocks = True
app.config["TEMPLATE"]["schemes"] = ["http"] if app.debug else ["https"]
MPCONTRIBS_API_HOST = os.environ["MPCONTRIBS_API_HOST"]
logger.info("database: " + app.config["MPCONTRIBS_DB"])
if app.debug:
from flask_cors import CORS
CORS(app) # enable for development (allow localhost)
Compress(app)
Marshmallow(app)
MongoEngine(app)
Swagger(app, template=app.config.get("TEMPLATE"))
setattr(app, "kernels", get_kernels())
# NOTE: hard-code to avoid pre-generating for new deployment
# collections = get_collections(db)
collections = [
"projects",
"contributions",
"tables",
"attachments",
"structures",
"notebooks",
]
for collection in collections:
module_path = ".".join(["mpcontribs", "api", collection, "views"])
try:
module = import_module(module_path)
except ModuleNotFoundError as ex:
logger.error(f"API module {module_path}: {ex}")
continue
try:
blueprint = getattr(module, collection)
app.register_blueprint(blueprint, url_prefix="/" + collection)
klass = getattr(module, collection.capitalize() + "View")
register_class(app, klass, name=collection)
logger.info(f"{collection} registered")
except AttributeError as ex:
logger.error(f"Failed to register {module_path}: {collection} {ex}")
if app.kernels:
from mpcontribs.api.notebooks.views import rq, make
rq.init_app(app)
if is_gunicorn:
setattr(app, "cron_job_id", f"auto-notebooks-build_{MPCONTRIBS_API_HOST}")
make.cron('*/3 * * * *', app.cron_job_id)
logger.info(f"CRONJOB {app.cron_job_id} added.")
def healthcheck():
# TODO run mpcontribs-api in next-gen task on different port so this won't be needed
# spams logs with expected 500 errors
if not app.debug and not app.kernels:
return "KERNEL GATEWAY NOT AVAILABLE", 500
return "OK"
if is_gunicorn:
app.register_blueprint(sse, url_prefix="/stream")
app.add_url_rule("/healthcheck", view_func=healthcheck)
#app.register_blueprint(rq_dashboard.blueprint, url_prefix="/rq")
#dashboard.config.init_from(file="dashboard.cfg")
#dashboard.config.version = app.config["VERSION"]
#dashboard.config.table_prefix = f"fmd_{MPCONTRIBS_API_HOST}"
#db_password = os.environ["POSTGRES_DB_PASSWORD"]
#db_host = os.environ["POSTGRES_DB_HOST"]
#dashboard.config.database_name = f"postgresql://kong:{db_password}@{db_host}/kong"
#dashboard.bind(app)
logger.info("app created.")
return app
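# Usage sketch (added, illustrative; assumes the required environment
# variables such as MPCONTRIBS_API_HOST are set):
# app = create_app()
# app.run(host="0.0.0.0", port=5000)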
|
materialsproject/MPContribs
|
mpcontribs-api/mpcontribs/api/__init__.py
|
Python
|
mit
| 8,122
|
[
"VisIt"
] |
762a035dc3e56faf62916307a1b3c6504efaf72b107e0a2b403b6d5c2a9ede79
|
# Latent distance model for neural data
import numpy as np
import numpy.random as npr
from autograd import grad
from scipy.optimize import minimize
import autograd.numpy as atnp
from hips.inference.hmc import hmc
from pyglm.utils.utils import expand_scalar, compute_optimal_rotation
from matplotlib import pyplot as plt
"""
l_n ~ N(0, sigma^2 I)
W_{n', n} ~ N(0, exp(-||l_{n}-l_{n'}||_2^2/2)) for n' != n
"""
# Simulated data
dim = 2
N = 20
r = 1 + np.arange(N) // (N/2.)
th = np.linspace(0, 4 * np.pi, N, endpoint=False)
x = r * np.cos(th)
y = r * np.sin(th)
L = np.hstack((x[:, None], y[:, None]))
#w = 4
#s = 0.8
#x = s * (np.arange(N) % w)
#y = s * (np.arange(N) // w)
#L = np.hstack((x[:,None], y[:,None]))
W = np.zeros((N, N))
# Distance matrix
D = ((L[:, None, :] - L[None, :, :]) ** 2).sum(2)
sig = np.exp(-D / 2)
Sig = np.tile(sig[:, :, None, None], (1, 1, 1, 1))
# Covariance of prior on l_{n}
sigma = 1
Mu = expand_scalar(0, (N, N, 1))
L_estimate = np.sqrt(sigma) * np.random.randn(N, dim)
for n in range(N):
for m in range(N):
W[n, m] = npr.multivariate_normal(Mu[n, m], Sig[n, m])
def _hmc_log_probability(N, dim, L, W, sigma):
"""
Compute the log probability as a function of L.
This allows us to take the gradients wrt L using autograd.
:param L:
:return:
"""
# Compute pairwise distance
L1 = atnp.reshape(L, (N, 1, dim))
L2 = atnp.reshape(L, (1, N, dim))
X = W
# Get the covariance and precision
Sig = atnp.exp((-atnp.sum((L1 - L2) ** 2, axis=2)) / 2)
Lmb = 1. / Sig
lp = atnp.sum(-0.5 * X ** 2 * Lmb)
# Log prior of L under spherical Gaussian prior
lp += -0.5 * atnp.sum(L * L / sigma)
return lp
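# Note (added observation): the Gaussian log-normalizer -0.5*log(2*pi*Sig)
# also depends on L through Sig but is omitted above, so this quantity is the
# log probability only up to that L-dependent term.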
def _resample_sigma(L):
"""
Resample sigma under an inverse gamma prior, sigma ~ IG(1,1)
:return:
"""
a_prior = 1.0
b_prior = 1.0
a_post = a_prior + L.size / 2.0
b_post = b_prior + (L ** 2).sum() / 2.0
from scipy.stats import invgamma
sigma = invgamma.rvs(a=a_post, scale=b_post)
return sigma
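# Conjugacy note (added): with L_i ~ N(0, sigma) (sigma the variance) and
# sigma ~ IG(a, b), the posterior is IG(a + n/2, b + sum(L_i^2)/2), which is
# exactly the a_post/b_post update implemented above.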
def plot_LatentDistanceModel(W, L, N, L_true=None, ax=None):
"""
If D==2, plot the embedded nodes and the connections between them
:param L_true: If given, rotate the inferred features to match F_true
:return:
"""
# Color the weights by their value on a diverging colormap
import matplotlib.cm as cm
cmap = cm.get_cmap("RdBu")
W_lim = abs(W[:,:]).max()
W_rel = (W[:,:] - (-W_lim)) / (2*W_lim)
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111, aspect="equal")
# If true locations are given, rotate L to match L_true
if L_true is not None:
R = compute_optimal_rotation(L, L_true)
L = L.dot(R)
# Scatter plot the node embeddings
# Plot the edges between nodes
for n1 in range(N):
for n2 in range(N):
ax.plot([L[n1,0], L[n2,0]],
[L[n1,1], L[n2,1]],
'-', color=cmap(W_rel[n1,n2]),
lw=1.0)
ax.plot(L[:,0], L[:,1], 's', color='k', markerfacecolor='k', markeredgecolor='k')
# Get extreme feature values
b = np.amax(abs(L)) + L[:].std() / 2.0
# Plot grids for origin
ax.plot([0,0], [-b,b], ':k', lw=0.5)
ax.plot([-b,b], [0,0], ':k', lw=0.5)
# Set the limits
ax.set_xlim([-b,b])
ax.set_ylim([-b,b])
# Labels
ax.set_xlabel('Latent Dimension 1')
ax.set_ylabel('Latent Dimension 2')
plt.show()
return ax
# Inference using bfgs method
lp = lambda Lflat: -_hmc_log_probability(N, dim, atnp.reshape(Lflat, (N, 2)), W, sigma)
dlp = grad(lp)
for i in range(1000):
res = minimize(lp, np.ravel(L_estimate), jac=dlp, method="bfgs")
L_estimate = np.reshape(res.x, (N, 2))
# Debug here, because the two directed weights are plotted together
# with different strengths
plot_LatentDistanceModel(W, L_estimate, N, L_true=L)
plot_LatentDistanceModel(W, L, N)
|
sheqi/TVpgGLM
|
test/practice5_Latent_Distance_Model_Weight_Qi_optimize.py
|
Python
|
mit
| 3,911
|
[
"Gaussian"
] |
ece8f8de583bdbe4dd5beee642d51a3a835daef83e069e6a413b27862f4d2028
|
#
# @BEGIN LICENSE
#
# Psi4: an open-source quantum chemistry software package
#
# Copyright (c) 2007-2017 The Psi4 Developers.
#
# The copyrights for code used from other parties are included in
# the corresponding files.
#
# This file is part of Psi4.
#
# Psi4 is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, version 3.
#
# Psi4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with Psi4; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# @END LICENSE
#
"""
| Database of Pulay corannulene structures. Subsumed into CFLOW.
- **cp** ``'off'`` || ``'on'``
- **rlxd** ``'off'``
"""
import re
import qcdb
# <<< CORE Database Module >>>
# Geometries and Reference energies from.
dbse = 'CORE'
# <<< Database Members >>>
HRXN = ['dimer3_54', 'dimer3_64', 'dimer3_73', 'dimer3_74', 'dimer3_84', ]
HRXN_SM = []
HRXN_LG = []
# <<< Chemical Systems Involved >>>
RXNM = {} # reaction matrix of reagent contributions per reaction
ACTV = {} # order of active reagents per reaction
ACTV_CP = {} # order of active reagents per counterpoise-corrected reaction
ACTV_SA = {} # order of active reagents for non-supermolecular calculations
for rxn in HRXN:
RXNM[ '%s-%s' % (dbse, rxn)] = {'%s-%s-dimer' % (dbse, rxn) : +1,
'%s-%s-monoA-CP' % (dbse, rxn) : -1,
'%s-%s-monoB-CP' % (dbse, rxn) : -1,
'%s-%s-monoA-unCP' % (dbse, rxn) : -1,
'%s-%s-monoB-unCP' % (dbse, rxn) : -1 }
ACTV_SA['%s-%s' % (dbse, rxn)] = ['%s-%s-dimer' % (dbse, rxn) ]
ACTV_CP['%s-%s' % (dbse, rxn)] = ['%s-%s-dimer' % (dbse, rxn),
'%s-%s-monoA-CP' % (dbse, rxn),
'%s-%s-monoB-CP' % (dbse, rxn) ]
ACTV[ '%s-%s' % (dbse, rxn)] = ['%s-%s-dimer' % (dbse, rxn),
'%s-%s-monoA-unCP' % (dbse, rxn),
'%s-%s-monoB-unCP' % (dbse, rxn) ]
# <<< Reference Values [kcal/mol] >>>
# Taken from
BIND = {}
BIND['%s-%s' % (dbse, 'dimer3_54' )] = -14.8000
BIND['%s-%s' % (dbse, 'dimer3_64' )] = -15.4000
BIND['%s-%s' % (dbse, 'dimer3_73' )] = -15.6000 # Bootstrapped, Pulay does not report
BIND['%s-%s' % (dbse, 'dimer3_74' )] = -15.4000
BIND['%s-%s' % (dbse, 'dimer3_84' )] = -15.0000
# <<< Comment Lines >>>
TAGL = {}
TAGL['%s-%s' % (dbse, 'dimer3_54' )] = """ """
TAGL['%s-%s-dimer' % (dbse, 'dimer3_54' )] = """Dimer from """
TAGL['%s-%s-monoA-CP' % (dbse, 'dimer3_54' )] = """Monomer A from """
TAGL['%s-%s-monoB-CP' % (dbse, 'dimer3_54' )] = """Monomer B from """
TAGL['%s-%s-monoA-unCP' % (dbse, 'dimer3_54' )] = """Monomer A from """
TAGL['%s-%s-monoB-unCP' % (dbse, 'dimer3_54' )] = """Monomer B from """
TAGL['%s-%s' % (dbse, 'dimer3_64' )] = """ """
TAGL['%s-%s-dimer' % (dbse, 'dimer3_64' )] = """Dimer from """
TAGL['%s-%s-monoA-CP' % (dbse, 'dimer3_64' )] = """Monomer A from """
TAGL['%s-%s-monoB-CP' % (dbse, 'dimer3_64' )] = """Monomer B from """
TAGL['%s-%s-monoA-unCP' % (dbse, 'dimer3_64' )] = """Monomer A from """
TAGL['%s-%s-monoB-unCP' % (dbse, 'dimer3_64' )] = """Monomer B from """
TAGL['%s-%s' % (dbse, 'dimer3_73' )] = """ """
TAGL['%s-%s-dimer' % (dbse, 'dimer3_73' )] = """Dimer from """
TAGL['%s-%s-monoA-CP' % (dbse, 'dimer3_73' )] = """Monomer A from """
TAGL['%s-%s-monoB-CP' % (dbse, 'dimer3_73' )] = """Monomer B from """
TAGL['%s-%s-monoA-unCP' % (dbse, 'dimer3_73' )] = """Monomer A from """
TAGL['%s-%s-monoB-unCP' % (dbse, 'dimer3_73' )] = """Monomer B from """
TAGL['%s-%s' % (dbse, 'dimer3_74' )] = """ """
TAGL['%s-%s-dimer' % (dbse, 'dimer3_74' )] = """Dimer from """
TAGL['%s-%s-monoA-CP' % (dbse, 'dimer3_74' )] = """Monomer A from """
TAGL['%s-%s-monoB-CP' % (dbse, 'dimer3_74' )] = """Monomer B from """
TAGL['%s-%s-monoA-unCP' % (dbse, 'dimer3_74' )] = """Monomer A from """
TAGL['%s-%s-monoB-unCP' % (dbse, 'dimer3_74' )] = """Monomer B from """
TAGL['%s-%s' % (dbse, 'dimer3_84' )] = """ """
TAGL['%s-%s-dimer' % (dbse, 'dimer3_84' )] = """Dimer from """
TAGL['%s-%s-monoA-CP' % (dbse, 'dimer3_84' )] = """Monomer A from """
TAGL['%s-%s-monoB-CP' % (dbse, 'dimer3_84' )] = """Monomer B from """
TAGL['%s-%s-monoA-unCP' % (dbse, 'dimer3_84' )] = """Monomer A from """
TAGL['%s-%s-monoB-unCP' % (dbse, 'dimer3_84' )] = """Monomer B from """
# <<< Geometry Specification Strings >>>
GEOS = {}
GEOS['%s-%s-dimer' % (dbse, 'dimer3_54')] = qcdb.Molecule("""
0 1
C 0.70622800 0.97211978 0.61694803
C -0.70622800 0.97211978 0.61694803
C -1.14280400 -0.37137722 0.61681203
C 0.00000000 -1.20165922 0.61659503
C 1.14280400 -0.37137722 0.61681203
C 1.45779000 2.00650178 0.09413403
C -1.45779000 2.00650178 0.09413403
C -2.35873800 -0.76639722 0.09397203
C 0.00000000 -2.48004022 0.09366903
C 2.35873800 -0.76639722 0.09397203
C 0.69261800 3.17923978 -0.25321497
C -0.69261800 3.17923978 -0.25321497
C -2.80958100 1.64119778 -0.25292797
C -3.23765700 0.32373778 -0.25303797
C -2.42918200 -2.16498922 -0.25302597
C -1.30841500 -2.97916822 -0.25327697
C 1.30841500 -2.97916822 -0.25327697
C 2.42918200 -2.16498922 -0.25302597
C 3.23765700 0.32373778 -0.25303797
C 2.80958100 1.64119778 -0.25292797
H 1.20851300 4.06642078 -0.61418797
H -1.20851300 4.06642078 -0.61418797
H -3.49401500 2.40602178 -0.61367197
H -4.24094400 0.10729578 -0.61373997
H -3.36816400 -2.57958822 -0.61350597
H -1.41248600 -4.00024222 -0.61397997
H 1.41248600 -4.00024222 -0.61397997
H 3.36816400 -2.57958822 -0.61350597
H 4.24094400 0.10729578 -0.61373997
H 3.49401500 2.40602178 -0.61367197
--
0 1
C 0.70622800 0.97211978 4.15694803
C -0.70622800 0.97211978 4.15694803
C -1.14280400 -0.37137722 4.15681203
C 0.00000000 -1.20165922 4.15659503
C 1.14280400 -0.37137722 4.15681203
C 1.45779000 2.00650178 3.63413403
C -1.45779000 2.00650178 3.63413403
C -2.35873800 -0.76639722 3.63397203
C 0.00000000 -2.48004022 3.63366903
C 2.35873800 -0.76639722 3.63397203
C 0.69261800 3.17923978 3.28678503
C -0.69261800 3.17923978 3.28678503
C -2.80958100 1.64119778 3.28707203
C -3.23765700 0.32373778 3.28696203
C -2.42918200 -2.16498922 3.28697403
C -1.30841500 -2.97916822 3.28672303
C 1.30841500 -2.97916822 3.28672303
C 2.42918200 -2.16498922 3.28697403
C 3.23765700 0.32373778 3.28696203
C 2.80958100 1.64119778 3.28707203
H 1.20851300 4.06642078 2.92581203
H -1.20851300 4.06642078 2.92581203
H -3.49401500 2.40602178 2.92632803
H -4.24094400 0.10729578 2.92626003
H -3.36816400 -2.57958822 2.92649403
H -1.41248600 -4.00024222 2.92602003
H 1.41248600 -4.00024222 2.92602003
H 3.36816400 -2.57958822 2.92649403
H 4.24094400 0.10729578 2.92626003
H 3.49401500 2.40602178 2.92632803
units angstrom
""")
GEOS['%s-%s-dimer' % (dbse, 'dimer3_64')] = qcdb.Molecule("""
0 1
C 0.70622800 0.97211978 0.61694803
C -0.70622800 0.97211978 0.61694803
C -1.14280400 -0.37137722 0.61681203
C 0.00000000 -1.20165922 0.61659503
C 1.14280400 -0.37137722 0.61681203
C 1.45779000 2.00650178 0.09413403
C -1.45779000 2.00650178 0.09413403
C -2.35873800 -0.76639722 0.09397203
C 0.00000000 -2.48004022 0.09366903
C 2.35873800 -0.76639722 0.09397203
C 0.69261800 3.17923978 -0.25321497
C -0.69261800 3.17923978 -0.25321497
C -2.80958100 1.64119778 -0.25292797
C -3.23765700 0.32373778 -0.25303797
C -2.42918200 -2.16498922 -0.25302597
C -1.30841500 -2.97916822 -0.25327697
C 1.30841500 -2.97916822 -0.25327697
C 2.42918200 -2.16498922 -0.25302597
C 3.23765700 0.32373778 -0.25303797
C 2.80958100 1.64119778 -0.25292797
H 1.20851300 4.06642078 -0.61418797
H -1.20851300 4.06642078 -0.61418797
H -3.49401500 2.40602178 -0.61367197
H -4.24094400 0.10729578 -0.61373997
H -3.36816400 -2.57958822 -0.61350597
H -1.41248600 -4.00024222 -0.61397997
H 1.41248600 -4.00024222 -0.61397997
H 3.36816400 -2.57958822 -0.61350597
H 4.24094400 0.10729578 -0.61373997
H 3.49401500 2.40602178 -0.61367197
--
0 1
C 0.70622800 0.97211978 4.25694803
C -0.70622800 0.97211978 4.25694803
C -1.14280400 -0.37137722 4.25681203
C 0.00000000 -1.20165922 4.25659503
C 1.14280400 -0.37137722 4.25681203
C 1.45779000 2.00650178 3.73413403
C -1.45779000 2.00650178 3.73413403
C -2.35873800 -0.76639722 3.73397203
C 0.00000000 -2.48004022 3.73366903
C 2.35873800 -0.76639722 3.73397203
C 0.69261800 3.17923978 3.38678503
C -0.69261800 3.17923978 3.38678503
C -2.80958100 1.64119778 3.38707203
C -3.23765700 0.32373778 3.38696203
C -2.42918200 -2.16498922 3.38697403
C -1.30841500 -2.97916822 3.38672303
C 1.30841500 -2.97916822 3.38672303
C 2.42918200 -2.16498922 3.38697403
C 3.23765700 0.32373778 3.38696203
C 2.80958100 1.64119778 3.38707203
H 1.20851300 4.06642078 3.02581203
H -1.20851300 4.06642078 3.02581203
H -3.49401500 2.40602178 3.02632803
H -4.24094400 0.10729578 3.02626003
H -3.36816400 -2.57958822 3.02649403
H -1.41248600 -4.00024222 3.02602003
H 1.41248600 -4.00024222 3.02602003
H 3.36816400 -2.57958822 3.02649403
H 4.24094400 0.10729578 3.02626003
H 3.49401500 2.40602178 3.02632803
units angstrom
""")
GEOS['%s-%s-dimer' % (dbse, 'dimer3_73')] = qcdb.Molecule("""
0 1
C 0.70622800 0.97211978 0.61694803
C -0.70622800 0.97211978 0.61694803
C -1.14280400 -0.37137722 0.61681203
C 0.00000000 -1.20165922 0.61659503
C 1.14280400 -0.37137722 0.61681203
C 1.45779000 2.00650178 0.09413403
C -1.45779000 2.00650178 0.09413403
C -2.35873800 -0.76639722 0.09397203
C 0.00000000 -2.48004022 0.09366903
C 2.35873800 -0.76639722 0.09397203
C 0.69261800 3.17923978 -0.25321497
C -0.69261800 3.17923978 -0.25321497
C -2.80958100 1.64119778 -0.25292797
C -3.23765700 0.32373778 -0.25303797
C -2.42918200 -2.16498922 -0.25302597
C -1.30841500 -2.97916822 -0.25327697
C 1.30841500 -2.97916822 -0.25327697
C 2.42918200 -2.16498922 -0.25302597
C 3.23765700 0.32373778 -0.25303797
C 2.80958100 1.64119778 -0.25292797
H 1.20851300 4.06642078 -0.61418797
H -1.20851300 4.06642078 -0.61418797
H -3.49401500 2.40602178 -0.61367197
H -4.24094400 0.10729578 -0.61373997
H -3.36816400 -2.57958822 -0.61350597
H -1.41248600 -4.00024222 -0.61397997
H 1.41248600 -4.00024222 -0.61397997
H 3.36816400 -2.57958822 -0.61350597
H 4.24094400 0.10729578 -0.61373997
H 3.49401500 2.40602178 -0.61367197
--
0 1
C 0.70622800 0.97211978 4.34694803
C -0.70622800 0.97211978 4.34694803
C -1.14280400 -0.37137722 4.34681203
C 0.00000000 -1.20165922 4.34659503
C 1.14280400 -0.37137722 4.34681203
C 1.45779000 2.00650178 3.82413403
C -1.45779000 2.00650178 3.82413403
C -2.35873800 -0.76639722 3.82397203
C 0.00000000 -2.48004022 3.82366903
C 2.35873800 -0.76639722 3.82397203
C 0.69261800 3.17923978 3.47678503
C -0.69261800 3.17923978 3.47678503
C -2.80958100 1.64119778 3.47707203
C -3.23765700 0.32373778 3.47696203
C -2.42918200 -2.16498922 3.47697403
C -1.30841500 -2.97916822 3.47672303
C 1.30841500 -2.97916822 3.47672303
C 2.42918200 -2.16498922 3.47697403
C 3.23765700 0.32373778 3.47696203
C 2.80958100 1.64119778 3.47707203
H 1.20851300 4.06642078 3.11581203
H -1.20851300 4.06642078 3.11581203
H -3.49401500 2.40602178 3.11632803
H -4.24094400 0.10729578 3.11626003
H -3.36816400 -2.57958822 3.11649403
H -1.41248600 -4.00024222 3.11602003
H 1.41248600 -4.00024222 3.11602003
H 3.36816400 -2.57958822 3.11649403
H 4.24094400 0.10729578 3.11626003
H 3.49401500 2.40602178 3.11632803
units angstrom
""")
GEOS['%s-%s-dimer' % (dbse, 'dimer3_74')] = qcdb.Molecule("""
0 1
C 0.70622800 0.97211978 0.61694803
C -0.70622800 0.97211978 0.61694803
C -1.14280400 -0.37137722 0.61681203
C 0.00000000 -1.20165922 0.61659503
C 1.14280400 -0.37137722 0.61681203
C 1.45779000 2.00650178 0.09413403
C -1.45779000 2.00650178 0.09413403
C -2.35873800 -0.76639722 0.09397203
C 0.00000000 -2.48004022 0.09366903
C 2.35873800 -0.76639722 0.09397203
C 0.69261800 3.17923978 -0.25321497
C -0.69261800 3.17923978 -0.25321497
C -2.80958100 1.64119778 -0.25292797
C -3.23765700 0.32373778 -0.25303797
C -2.42918200 -2.16498922 -0.25302597
C -1.30841500 -2.97916822 -0.25327697
C 1.30841500 -2.97916822 -0.25327697
C 2.42918200 -2.16498922 -0.25302597
C 3.23765700 0.32373778 -0.25303797
C 2.80958100 1.64119778 -0.25292797
H 1.20851300 4.06642078 -0.61418797
H -1.20851300 4.06642078 -0.61418797
H -3.49401500 2.40602178 -0.61367197
H -4.24094400 0.10729578 -0.61373997
H -3.36816400 -2.57958822 -0.61350597
H -1.41248600 -4.00024222 -0.61397997
H 1.41248600 -4.00024222 -0.61397997
H 3.36816400 -2.57958822 -0.61350597
H 4.24094400 0.10729578 -0.61373997
H 3.49401500 2.40602178 -0.61367197
--
0 1
C 0.70622800 0.97211978 4.35694803
C -0.70622800 0.97211978 4.35694803
C -1.14280400 -0.37137722 4.35681203
C 0.00000000 -1.20165922 4.35659503
C 1.14280400 -0.37137722 4.35681203
C 1.45779000 2.00650178 3.83413403
C -1.45779000 2.00650178 3.83413403
C -2.35873800 -0.76639722 3.83397203
C 0.00000000 -2.48004022 3.83366903
C 2.35873800 -0.76639722 3.83397203
C 0.69261800 3.17923978 3.48678503
C -0.69261800 3.17923978 3.48678503
C -2.80958100 1.64119778 3.48707203
C -3.23765700 0.32373778 3.48696203
C -2.42918200 -2.16498922 3.48697403
C -1.30841500 -2.97916822 3.48672303
C 1.30841500 -2.97916822 3.48672303
C 2.42918200 -2.16498922 3.48697403
C 3.23765700 0.32373778 3.48696203
C 2.80958100 1.64119778 3.48707203
H 1.20851300 4.06642078 3.12581203
H -1.20851300 4.06642078 3.12581203
H -3.49401500 2.40602178 3.12632803
H -4.24094400 0.10729578 3.12626003
H -3.36816400 -2.57958822 3.12649403
H -1.41248600 -4.00024222 3.12602003
H 1.41248600 -4.00024222 3.12602003
H 3.36816400 -2.57958822 3.12649403
H 4.24094400 0.10729578 3.12626003
H 3.49401500 2.40602178 3.12632803
units angstrom
""")
GEOS['%s-%s-dimer' % (dbse, 'dimer3_84')] = qcdb.Molecule("""
0 1
C 0.70622800 0.97211978 0.61694803
C -0.70622800 0.97211978 0.61694803
C -1.14280400 -0.37137722 0.61681203
C 0.00000000 -1.20165922 0.61659503
C 1.14280400 -0.37137722 0.61681203
C 1.45779000 2.00650178 0.09413403
C -1.45779000 2.00650178 0.09413403
C -2.35873800 -0.76639722 0.09397203
C 0.00000000 -2.48004022 0.09366903
C 2.35873800 -0.76639722 0.09397203
C 0.69261800 3.17923978 -0.25321497
C -0.69261800 3.17923978 -0.25321497
C -2.80958100 1.64119778 -0.25292797
C -3.23765700 0.32373778 -0.25303797
C -2.42918200 -2.16498922 -0.25302597
C -1.30841500 -2.97916822 -0.25327697
C 1.30841500 -2.97916822 -0.25327697
C 2.42918200 -2.16498922 -0.25302597
C 3.23765700 0.32373778 -0.25303797
C 2.80958100 1.64119778 -0.25292797
H 1.20851300 4.06642078 -0.61418797
H -1.20851300 4.06642078 -0.61418797
H -3.49401500 2.40602178 -0.61367197
H -4.24094400 0.10729578 -0.61373997
H -3.36816400 -2.57958822 -0.61350597
H -1.41248600 -4.00024222 -0.61397997
H 1.41248600 -4.00024222 -0.61397997
H 3.36816400 -2.57958822 -0.61350597
H 4.24094400 0.10729578 -0.61373997
H 3.49401500 2.40602178 -0.61367197
--
0 1
C 0.70622800 0.97211978 4.45694803
C -0.70622800 0.97211978 4.45694803
C -1.14280400 -0.37137722 4.45681203
C 0.00000000 -1.20165922 4.45659503
C 1.14280400 -0.37137722 4.45681203
C 1.45779000 2.00650178 3.93413403
C -1.45779000 2.00650178 3.93413403
C -2.35873800 -0.76639722 3.93397203
C 0.00000000 -2.48004022 3.93366903
C 2.35873800 -0.76639722 3.93397203
C 0.69261800 3.17923978 3.58678503
C -0.69261800 3.17923978 3.58678503
C -2.80958100 1.64119778 3.58707203
C -3.23765700 0.32373778 3.58696203
C -2.42918200 -2.16498922 3.58697403
C -1.30841500 -2.97916822 3.58672303
C 1.30841500 -2.97916822 3.58672303
C 2.42918200 -2.16498922 3.58697403
C 3.23765700 0.32373778 3.58696203
C 2.80958100 1.64119778 3.58707203
H 1.20851300 4.06642078 3.22581203
H -1.20851300 4.06642078 3.22581203
H -3.49401500 2.40602178 3.22632803
H -4.24094400 0.10729578 3.22626003
H -3.36816400 -2.57958822 3.22649403
H -1.41248600 -4.00024222 3.22602003
H 1.41248600 -4.00024222 3.22602003
H 3.36816400 -2.57958822 3.22649403
H 4.24094400 0.10729578 3.22626003
H 3.49401500 2.40602178 3.22632803
units angstrom
""")
# <<< Derived Geometry Strings >>>
for rxn in HRXN:
GEOS['%s-%s-monoA-unCP' % (dbse, rxn)] = GEOS['%s-%s-dimer' % (dbse, rxn)].extract_fragments(1)
GEOS['%s-%s-monoB-unCP' % (dbse, rxn)] = GEOS['%s-%s-dimer' % (dbse, rxn)].extract_fragments(2)
GEOS['%s-%s-monoA-CP' % (dbse, rxn)] = GEOS['%s-%s-dimer' % (dbse, rxn)].extract_fragments(1, 2)
GEOS['%s-%s-monoB-CP' % (dbse, rxn)] = GEOS['%s-%s-dimer' % (dbse, rxn)].extract_fragments(2, 1)
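# Access sketch (added, illustrative): e.g. the counterpoise-corrected
# monomer A geometry of the dimer3_54 system is
# GEOS['%s-%s-monoA-CP' % (dbse, 'dimer3_54')]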
#########################################################################
# <<< Supplementary Quantum Chemical Results >>>
DATA = {}
DATA['NUCLEAR REPULSION ENERGY'] = {}
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_54-dimer' ] = 4584.11459289
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_54-monoA-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_54-monoB-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_64-dimer' ] = 4555.01239979
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_64-monoA-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_64-monoB-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_73-dimer' ] = 4529.48976988
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_73-monoA-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_73-monoB-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_74-dimer' ] = 4526.69216135
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_74-monoA-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_74-monoB-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_84-dimer' ] = 4499.12706628
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_84-monoA-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_84-monoB-unCP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_54-monoA-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_54-monoB-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_64-monoA-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_64-monoB-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_73-monoA-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_73-monoB-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_74-monoA-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_74-monoB-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_84-monoA-CP' ] = 1387.77369315
DATA['NUCLEAR REPULSION ENERGY']['CORE-dimer3_84-monoB-CP' ] = 1387.77369315
|
rmcgibbo/psi4public
|
psi4/share/psi4/databases/CORE.py
|
Python
|
lgpl-3.0
| 23,802
|
[
"Psi4"
] |
fcae6c66a1a8891c222aa79c1fa67350aa14a3738fe1f5fb06def77c97836b5d
|
""" Utilities for WMS
"""
import io
import os
import sys
import json
from DIRAC import gConfig, gLogger, S_OK
from DIRAC.Core.Utilities.File import mkDir
def createJobWrapper(jobID, jobParams, resourceParams, optimizerParams,
extraOptions='',
defaultWrapperLocation='DIRAC/WorkloadManagementSystem/JobWrapper/JobWrapperTemplate.py',
log=gLogger, logLevel='INFO'):
""" This method creates a job wrapper filled with the CE and Job parameters to execute the job.
Main user is the JobAgent
"""
arguments = {'Job': jobParams,
'CE': resourceParams,
'Optimizer': optimizerParams}
log.verbose('Job arguments are: \n %s' % (arguments))
siteRoot = gConfig.getValue('/LocalSite/Root', os.getcwd())
log.debug('SiteRootPythonDir is:\n%s' % siteRoot)
workingDir = gConfig.getValue('/LocalSite/WorkingDirectory', siteRoot)
mkDir('%s/job/Wrapper' % (workingDir))
diracRoot = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
jobWrapperFile = '%s/job/Wrapper/Wrapper_%s' % (workingDir, jobID)
if os.path.exists(jobWrapperFile):
log.verbose('Removing existing Job Wrapper for %s' % (jobID))
os.remove(jobWrapperFile)
with open(os.path.join(diracRoot, defaultWrapperLocation), 'r') as fd:
wrapperTemplate = fd.read()
if 'LogLevel' in jobParams:
logLevel = jobParams['LogLevel']
log.info('Found Job LogLevel JDL parameter with value: %s' % (logLevel))
else:
log.info('Applying default LogLevel JDL parameter with value: %s' % (logLevel))
dPython = sys.executable
realPythonPath = os.path.realpath(dPython)
log.debug('Real python path after resolving links is: ', realPythonPath)
dPython = realPythonPath
# Making real substitutions
# wrapperTemplate = wrapperTemplate.replace( "@JOBARGS@", str( arguments ) )
wrapperTemplate = wrapperTemplate.replace("@SITEPYTHON@", str(siteRoot))
jobWrapperJsonFile = jobWrapperFile + '.json'
with io.open(jobWrapperJsonFile, 'w', encoding='utf8') as jsonFile:
json.dump(unicode(arguments), jsonFile, ensure_ascii=False)
with open(jobWrapperFile, "w") as wrapper:
wrapper.write(wrapperTemplate)
jobExeFile = '%s/job/Wrapper/Job%s' % (workingDir, jobID)
jobFileContents = \
"""#!/bin/sh
%s %s %s -o LogLevel=%s -o /DIRAC/Security/UseServerCertificate=no
""" % (dPython, jobWrapperFile, extraOptions, logLevel)
with open(jobExeFile, 'w') as jobFile:
jobFile.write(jobFileContents)
return S_OK(jobExeFile)
def createRelocatedJobWrapper(wrapperPath, rootLocation,
jobID, jobParams, resourceParams, optimizerParams,
extraOptions='',
defaultWrapperLocation='DIRAC/WorkloadManagementSystem/JobWrapper/JobWrapperTemplate.py',
log=gLogger, logLevel='INFO'):
""" This method creates a job wrapper for a specific job in wrapperPath,
but assumes this has been relocated to rootLocation before running it.
"""
arguments = {'Job': jobParams,
'CE': resourceParams,
'Optimizer': optimizerParams}
log.verbose('Job arguments are: \n %s' % (arguments))
diracRoot = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
jobWrapperFile = os.path.join(wrapperPath, 'Wrapper_%s' % jobID)
if os.path.exists(jobWrapperFile):
log.verbose('Removing existing Job Wrapper for %s' % (jobID))
os.remove(jobWrapperFile)
with open(os.path.join(diracRoot, defaultWrapperLocation), 'r') as fd:
wrapperTemplate = fd.read()
if 'LogLevel' in jobParams:
logLevel = jobParams['LogLevel']
log.info('Found Job LogLevel JDL parameter with value: %s' % (logLevel))
else:
log.info('Applying default LogLevel JDL parameter with value: %s' % (logLevel))
# Making real substitutions
# wrapperTemplate = wrapperTemplate.replace( "@JOBARGS@", str( arguments ) )
wrapperTemplate = wrapperTemplate.replace("@SITEPYTHON@", rootLocation)
jobWrapperJsonFile = jobWrapperFile + '.json'
with io.open(jobWrapperJsonFile, 'w', encoding='utf8') as jsonFile:
json.dump(unicode(arguments), jsonFile, ensure_ascii=False)
with open(jobWrapperFile, "w") as wrapper:
wrapper.write(wrapperTemplate)
# The "real" location of the jobwrapper after it is started
jobWrapperDirect = os.path.join(rootLocation, 'Wrapper_%s' % jobID)
jobExeFile = os.path.join(wrapperPath, 'Job%s' % jobID)
jobFileContents = \
"""#!/bin/sh
python %s %s -o LogLevel=%s -o /DIRAC/Security/UseServerCertificate=no
""" % (jobWrapperDirect, extraOptions, logLevel)
with open(jobExeFile, 'w') as jobFile:
jobFile.write(jobFileContents)
jobExeDirect = os.path.join(rootLocation, 'Job%s' % jobID)
return S_OK(jobExeDirect)
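# Usage sketch (added, illustrative; all parameter values hypothetical):
# result = createJobWrapper(12345, jobParams={'LogLevel': 'DEBUG'},
#                           resourceParams={}, optimizerParams={})
# if result['OK']:
#     jobExePath = result['Value']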
|
andresailer/DIRAC
|
WorkloadManagementSystem/Utilities/Utils.py
|
Python
|
gpl-3.0
| 4,855
|
[
"DIRAC"
] |
dfa69085fbf0168b3b3b03ba6ea0cfbe3d9abc8a1172fb26b7d7ac79274c38b4
|
# Copyright 2015 Planet Labs, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
from setuptools import setup
from setuptools import distutils
import os
import sys
def get_version_from_pkg_info():
metadata = distutils.dist.DistributionMetadata("PKG-INFO")
return metadata.version
def get_version_from_pyver():
try:
import pyver
except ImportError:
if 'sdist' in sys.argv or 'bdist_wheel' in sys.argv:
raise ImportError('You must install pyver to create a package')
else:
return 'noversion'
version, version_info = pyver.get_version(pkg="datalake_api",
public=True)
return version
def get_version():
if os.path.exists("PKG-INFO"):
return get_version_from_pkg_info()
else:
return get_version_from_pyver()
setup(name='datalake_api',
url='https://github.com/planetlabs/datalake-api',
version=get_version(),
description='datalake_api ingests datalake metadata records',
author='Brian Cavagnolo',
author_email='brian@planet.com',
packages=['datalake_api'],
install_requires=[
'pyver>=1.0.18',
'memoized_property>=1.0.2',
'simplejson>=3.3.1',
'Flask>=0.10.1',
'flask-swagger==0.2.8',
'boto3==1.1.3',
'sentry-sdk[flask]>=0.19.5',
'blinker>=1.4',
],
extras_require={
'test': [
'pytest==2.7.2',
'flake8==2.5.0',
'moto==0.4.23',
],
},
include_package_data=True)
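# Install sketch (added, illustrative): for development with the test extras,
# pip install -e ".[test]"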
|
planetlabs/datalake
|
api/setup.py
|
Python
|
apache-2.0
| 2,111
|
[
"Brian"
] |
746a41c193d1a89d8b1b26d26efda471e019ceea2772c10813b8ea63dff0ef90
|
# ################################################################
#
# Active Particles on Curved Spaces (APCS)
#
# Author: Silke Henkes
#
# ICSMB, Department of Physics
# University of Aberdeen
# Author: Rastko Sknepnek
#
# Division of Physics
# School of Engineering, Physics and Mathematics
# University of Dundee
#
# (c) 2013, 2014
#
# This program cannot be used, copied, or modified without
# explicit permission of the author.
#
# ################################################################
# Integrator code for batch processing of full data runs (incorporating parts of earlier analysis scripts)
# Data interfacing
from read_data import *
from read_param import *
# Pre-existing analysis scripts
from nematic_analysis import *
from glob import glob  # needed below for collecting sphere_*.dat files
# This is the structured data file hierarchy. Replace as appropriate (do not go the Yaouen way and fully automatize ...)
basefolder='/home/silke/Documents/CurrentProjects/Rastko/nematic/data/phi_0.75/'
#basefolder = '/home/silke/Documents/CurrentProjects/Rastko/nematic/data/J_1_0_v0_1_0/'
#outfolder= '/home/silke/Documents/CurrentProjects/Rastko/nematic/data/J_1_0_v0_1_0/'
outfolder = '/home/silke/Documents/CurrentProjects/Rastko/nematic/data/phi_0.75/'
Jval=['0.01','0.05','0.1','0.5','5.0','10.0']
sigma=1
r='16'
nstep=10100000
nsave=5000
nsnap=int(nstep/nsave)
#skip=835
skip=0
startvtk=0 # assumed value: frame index at which VTK output starts (variable was undefined in the original script)
for J in Jval:
#param = Param(basefolder)
#/home/silke/Documents/CurrentProjects/Rastko/nematic/data/phi_0.75/J_0.01
files = sorted(glob(basefolder+'J_'+ J +'/sphere_*.dat'))[skip:]
defects=np.zeros((len(files),12))
ndefect=np.zeros((len(files),1))
u=0
for f in files:
print f
outname =basefolder+'J_'+ J +'/frame_data' + str(u-startvtk)+'.vtk'
if u<startvtk:
defects0,ndefect0=getDefects(f,float(r),sigma,outname,False,False)
else:
defects0,ndefect0=getDefects(f,float(r),sigma,outname,False,True)
outname = '.'.join((f).split('.')[:-1]) + '_defects.vtk'
outname =basefolder+'J_'+ J +'/frame_defects' + str(u-startvtk)+'.vtk'
print outname
writeDefects(defects0,ndefect0,outname)
defects[u,0:3]=defects0[0,:]
defects[u,3:6]=defects0[1,:]
defects[u,6:9]=defects0[2,:]
defects[u,9:12]=defects0[3,:]
ndefect[u]=ndefect0
u+=1
outfile2=outfolder + 'defects_J_' + J + '_R_'+ r+ '.dat'
np.savetxt(outfile2,np.concatenate((ndefect,defects),axis=1),fmt='%12.6g', header='ndefect defects')
|
sknepneklab/SAMoS
|
analysis/batch_nematic/batch_analyze_nematic_phi0.75.py
|
Python
|
gpl-3.0
| 2,450
|
[
"VTK"
] |
d1e8a8681ebc53401b98924819eefd6d109efde887801e676bc2ad4243a7248d
|
#
# tests/test_tensorflow_imagenet_adapter.py - unit test for the tensorflow ImageNet adapter.
#
# Copyright (c) 2018 SingularityNET
#
# Distributed under the MIT software license, see LICENSE file.
#
import logging
import os
from pathlib import Path
import base64
import pytest
from adapters.tensorflow.imagenet import TensorflowImageNet, IMAGENET_CLASSIFIER_ID
from sn_agent import ontology as onto
from sn_agent.job.job_descriptor import JobDescriptor
from sn_agent.log import setup_logging
from sn_agent.ontology.service_descriptor import ServiceDescriptor
from sn_agent.service_adapter import setup_service_manager
from sn_agent.test.mocks import MockApp
log = logging.getLogger(__name__)
TEST_DIRECTORY = Path(__file__).parent
@pytest.fixture
def app():
app = MockApp()
onto.setup_ontology(app)
return app
def test_tensorflow_imagenet_adapter(app):
setup_logging()
log.debug("Testing Tensorflow ImageNet Adapter")
# images to be tested
images = ["bucket.jpg", "cup.jpg", "bowtie.png"]
encoded_images = []
image_types = []
for image in images:
# Load each image and encode it base 64.
image_path = os.path.join(TEST_DIRECTORY, "data", "imagenet", image)
image_file = open(image_path, 'rb')
image_bytes = image_file.read()
encoded_images.append(base64.b64encode(image_bytes))
image_types.append(image.split('.')[1])
# Setup a test job for classifying the test images.
job_parameters = { 'input_type': 'attached',
'input_data': {
'images': encoded_images,
'image_types': image_types
},
'output_type': 'attached',
}
# Get the service for an ImageNet classifier. A service identifies a unique service provided by
# SingularityNET and is part of the ontology.
ontology = app['ontology']
imagenet_service = ontology.get_service(IMAGENET_CLASSIFIER_ID)
# Create the Tensorflow ImageNet service adapter.
imagenet_service_adapter = TensorflowImageNet(app, imagenet_service)
# Create a service descriptor. These are post-contract negotiated descriptors that may include
# other parameters like quality of service, input and output formats, etc.
imagenet_service_descriptor = ServiceDescriptor(IMAGENET_CLASSIFIER_ID)
# Create a new job descriptor with a single set of parameters covering the
# base64-encoded test images prepared above.
job_list = [job_parameters]
job = JobDescriptor(imagenet_service_descriptor, job_list)
# Setup the service manager. NOTE: This will add services that are (optionally) passed in
# so you can manually create services in addition to those that are loaded from the config
# file. After all the services are added, it will call post_load_initialize on all the
# services.
setup_service_manager(app, [imagenet_service_adapter])
# Test perform for the ImageNet service adapter.
try:
exception_caught = False
results = imagenet_service_adapter.perform(job)
except RuntimeError as exception:
exception_caught = True
log.error(" Exception caught %s", exception)
log.debug(" Error performing %s %s", job, imagenet_service_adapter)
assert not exception_caught
print(results)
# Check our results for format and content.
assert len(results) == 1
assert results[0]['predictions'] == [['bucket, pail'],['cup','coffee mug'],['bow tie, bow-tie, bowtie']]
assert results[0]['confidences'][0][0] > 0.9600 and results[0]['confidences'][0][0] < 1.0
assert results[0]['confidences'][1][0] > 0.4000 and results[0]['confidences'][1][0] < 0.4200
assert results[0]['confidences'][1][1] > 0.4000 and results[0]['confidences'][1][1] < 0.4100
assert results[0]['confidences'][2][0] > 0.9990 and results[0]['confidences'][2][0] < 1.0
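# Run sketch (added, illustrative; repository-relative path assumed):
# pytest agent/tests/test_tensorflow_imagenet_adapter.py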
|
singnet/singnet
|
agent/tests/test_tensorflow_imagenet_adapter.py
|
Python
|
mit
| 4,014
|
[
"Bowtie"
] |
75eb4d97259b5f1ce696c2688f5debe3a6776d543fe43342e210e9051c84a916
|
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import sys
from ansible import constants as C
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves import builtins
from ansible.plugins.loader import filter_loader, test_loader
def safe_eval(expr, locals={}, include_exceptions=False):
'''
This is intended for allowing things like:
with_items: a_list_variable
Where Jinja2 would return a string but we do not want to allow it to
call functions (outside of Jinja2, where the env is constrained).
Based on:
http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe
'''
# define certain JSON types
# eg. JSON booleans are unknown to python eval()
JSON_TYPES = {
'false': False,
'null': None,
'true': True,
}
# this is the whitelist of AST nodes we are going to
# allow in the evaluation. Any node type other than
# those listed here will raise an exception in our custom
# visitor class defined below.
SAFE_NODES = set(
(
ast.Add,
ast.BinOp,
# ast.Call,
ast.Compare,
ast.Dict,
ast.Div,
ast.Expression,
ast.List,
ast.Load,
ast.Mult,
ast.Num,
ast.Name,
ast.Str,
ast.Sub,
ast.USub,
ast.Tuple,
ast.UnaryOp,
)
)
# AST node types were expanded after 2.6
if sys.version_info[:2] >= (2, 7):
SAFE_NODES.update(
set(
(ast.Set,)
)
)
# And in Python 3.4 too
if sys.version_info[:2] >= (3, 4):
SAFE_NODES.update(
set(
(ast.NameConstant,)
)
)
filter_list = []
for filter in filter_loader.all():
filter_list.extend(filter.filters().keys())
test_list = []
for test in test_loader.all():
test_list.extend(test.tests().keys())
CALL_WHITELIST = C.DEFAULT_CALLABLE_WHITELIST + filter_list + test_list
class CleansingNodeVisitor(ast.NodeVisitor):
def generic_visit(self, node, inside_call=False):
if type(node) not in SAFE_NODES:
raise Exception("invalid expression (%s)" % expr)
elif isinstance(node, ast.Call):
inside_call = True
elif isinstance(node, ast.Name) and inside_call:
# Disallow calls to builtin functions that we have not vetted
# as safe. Other functions are excluded by setting locals in
# the call to eval() later on
if hasattr(builtins, node.id) and node.id not in CALL_WHITELIST:
raise Exception("invalid function: %s" % node.id)
# iterate over all child nodes
for child_node in ast.iter_child_nodes(node):
self.generic_visit(child_node, inside_call)
if not isinstance(expr, string_types):
# already templated to a datastructure, perhaps?
if include_exceptions:
return (expr, None)
return expr
cnv = CleansingNodeVisitor()
try:
parsed_tree = ast.parse(expr, mode='eval')
cnv.visit(parsed_tree)
compiled = compile(parsed_tree, expr, 'eval')
# Note: passing our own globals and locals here constrains what
# callables (and other identifiers) are recognized. this is in
# addition to the filtering of builtins done in CleansingNodeVisitor
result = eval(compiled, JSON_TYPES, dict(locals))
if include_exceptions:
return (result, None)
else:
return result
except SyntaxError as e:
# special handling for syntax errors, we just return
# the expression string back as-is to support late evaluation
if include_exceptions:
return (expr, None)
return expr
except Exception as e:
if include_exceptions:
return (expr, e)
return expr
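# Usage sketch (added, illustrative):
# safe_eval("[1, 2] + [3]") # -> [1, 2, 3]
# safe_eval("{'a': 1}") # -> {'a': 1}
# safe_eval("__import__('os')") # Call nodes are not whitelisted, so the
# # expression string is returned unchanged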
|
e-gob/plataforma-kioscos-autoatencion
|
scripts/ansible-play/.venv/lib/python2.7/site-packages/ansible/template/safe_eval.py
|
Python
|
bsd-3-clause
| 4,871
|
[
"VisIt"
] |
e49853d6e030182c04df2e71ed5c72477f3ee23890f35f4a26b884716ad12803
|
"""Tests for the thumbs module"""
from workbench import scenarios
from workbench.test.selenium_test import SeleniumTest
class ThreeThumbsTest(SeleniumTest):
"""Test the functionalities of the three thumbs test XBlock."""
def setUp(self):
super(ThreeThumbsTest, self).setUp()
scenarios.add_xml_scenario(
"test_three_file_thumbs", "three file thumbs test",
"""<vertical_demo><filethumbs/><filethumbs/><filethumbs/></vertical_demo>"""
)
self.addCleanup(scenarios.remove_scenario, "test_three_file_thumbs")
# Suzy opens the browser to visit the workbench
self.browser.get(self.live_server_url)
# She knows it's the site by the header
header1 = self.browser.find_element_by_css_selector('h1')
self.assertEqual(header1.text, 'XBlock scenarios')
def test_three_thumbs_initial_state(self):
# She clicks on the three thumbs at once scenario
link = self.browser.find_element_by_link_text('three file thumbs test')
link.click()
# The header reflects the XBlock
header1 = self.browser.find_element_by_css_selector('h1')
self.assertEqual(header1.text, 'XBlock: three file thumbs test')
# She sees that there are 3 sets of thumbs
vertical_css = 'div.student_view > div.xblock > div.vertical'
# The following will give a NoSuchElementException error
# if it is not there
vertical = self.browser.find_element_by_css_selector(vertical_css)
# Make sure there are three thumbs blocks
thumb_css = 'div.xblock[data-block-type="filethumbs"]'
thumbs = vertical.find_elements_by_css_selector(thumb_css)
self.assertEqual(3, len(thumbs))
# Make sure they all have 0 for upvote and downvote counts
up_count_css = 'span.upvote span.count'
down_count_css = 'span.downvote span.count'
for thumb in thumbs:
up_count = thumb.find_element_by_css_selector(up_count_css)
down_count = thumb.find_element_by_css_selector(down_count_css)
initial_up = int(up_count.text)
initial_down = int(down_count.text)
thumb.find_element_by_css_selector('span.upvote').click()
self.assertEqual(initial_up + 1, int(thumb.find_element_by_css_selector(up_count_css).text))
self.assertEqual(initial_down, int(thumb.find_element_by_css_selector(down_count_css).text))
thumb.find_element_by_css_selector('span.downvote').click()
self.assertEqual(initial_up + 1, int(thumb.find_element_by_css_selector(up_count_css).text))
self.assertEqual(initial_down + 1, int(thumb.find_element_by_css_selector(down_count_css).text))
|
open-craft/xblock-sdk
|
workbench/test/test_filethumbs.py
|
Python
|
agpl-3.0
| 2,756
|
[
"VisIt"
] |
cc95de35fe6a970eb8dbd169b6926796eb8a518c12cc832019795275f840c6bb
|
# qsub -- utilities for batch submission systems
# Copyright (c) 2010 Oliver Beckstein <orbeckst@gmail.com>
# Made available under GNU Pulic License v3.
"""
:mod:`gromacs.qsub` -- utilities for batch submission systems
=============================================================
The module helps writing submission scripts for various batch submission
queuing systems. The known ones are listed stored as
:class:`~gromacs.qsub.QueuingSystem` instances in
:data:`~gromacs.qsub.queuing_systems`; append new ones to this list.
The working paradigm is that template scripts are provided (see
:data:`gromacs.config.templates`) and only a few place holders are substituted
(using :func:`gromacs.cbook.edit_txt`).
*User-supplied template scripts* can be stored in
:data:`gromacs.config.qscriptdir` (by default ``~/.gromacswrapper/qscripts``)
and they will be picked up before the package-supplied ones.
At the moment, some of the functions in :mod:`gromacs.setup` use this module
but it is fairly independent and could conceivably be used for a wider range of
projects.
Queuing system templates
------------------------
The queuing system scripts are highly specific and you will need to add
your own. Templates should be shell scripts. Some parts of the
templates are modified by the
:func:`~gromacs.qsub.generate_submit_scripts` function. The "place
holders" that can be replaced are shown in the table below. Typically,
the place holders are either shell variable assignments or batch
submission system commands. The table shows SGE_ commands but PBS_ and
LoadLeveler_ have similar constructs (e.g. PBS commands start with
``#PBS`` and LoadLeveler uses ``#@`` with its own command keywords).
.. Table:: Substitutions in queuing system templates.
=============== =========== ================ ================= =====================================
place holder default replacement description regex
=============== =========== ================ ================= =====================================
#$ -N GMX_MD *sgename* job name `/^#.*(-N|job_name)/`
#$ -l walltime= 00:20:00 *walltime* max run time `/^#.*(-l walltime|wall_clock_limit)/`
#$ -A BUDGET *budget* account `/^#.*(-A|account_no)/`
DEFFNM= md *deffnm* default gmx name `/^ *DEFFNM=/`
STARTDIR= . *startdir* remote jobdir `/^ *STARTDIR=/`
WALL_HOURS= 0.33 *walltime* h mdrun's -maxh `/^ *WALL_HOURS=/`
NPME= *npme* PME nodes `/^ *NPME=/`
MDRUN_OPTS= "" *mdrun_opts* more options `/^ *MDRUN_OPTS=/`
=============== =========== ================ ================= =====================================
Lines with place holders should not have any white space at the
beginning. The regular expression pattern ("regex") is used to find
the lines for the replacement and the literal default values
("default") are replaced. (Exception: any value that follows an equals
sign "=" is replaced, regardless of the default value in the table
*except* for ``MDRUN_OPTS`` where *only "" will be replaced*.) Not all
place holders have to occur in a template; for instance, if a queue
has no run time limitation then one would probably not include
*walltime* and *WALL_HOURS* place holders.
The line ``# JOB_ARRAY_PLACEHOLDER`` can be replaced by
:func:`~gromacs.qsub.generate_submit_array` to produce a "job array"
(also known as a "task array") script that runs a large number of
related simulations under the control of a single queuing system
job. The individual array tasks are run from different sub
directories. Only queuing system scripts that are using the
:program:`bash` shell are supported for job arrays at the moment.
A queuing system script *must* have the appropriate suffix to be properly
recognized, as shown in the table below.
.. Table:: Suffixes for queuing system templates. Pure shell scripts are only used to run locally.
============================== =========== ===========================
Queuing system suffix notes
============================== =========== ===========================
Sun Gridengine .sge Sun's `Sun Gridengine`_
Portable Batch queuing system .pbs OpenPBS_ and `PBS Pro`_
LoadLeveler .ll IBM's `LoadLeveler`_
bash script .bash, .sh `Advanced bash scripting`_
csh script .csh avoid_ csh_
============================== =========== ===========================
.. _OpenPBS: http://www.mcs.anl.gov/research/projects/openpbs/
.. _PBS: OpenPBS_
.. _PBS Pro: http://www.pbsworks.com/Product.aspx?id=1
.. _Sun Gridengine: http://gridengine.sunsource.net/
.. _SGE: Sun Gridengine_
.. _LoadLeveler: http://www-03.ibm.com/systems/software/loadleveler/index.html
.. _Advanced bash scripting: http://tldp.org/LDP/abs/html/
.. _avoid: http://www.grymoire.com/Unix/CshTop10.txt
.. _csh: http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/
Example queuing system script template for PBS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following script is a usable PBS_ script for a super computer. It
contains almost all of the replacement tokens listed in the table
(indicated by ++++++). ::
#!/bin/bash
# File name: ~/.gromacswrapper/qscripts/supercomputer.somewhere.fr_64core.pbs
#PBS -N GMX_MD
# ++++++
#PBS -j oe
#PBS -l select=8:ncpus=8:mpiprocs=8
#PBS -l walltime=00:20:00
# ++++++++
# host: supercomputer.somewhere.fr
# queuing system: PBS
# set this to the same value as walltime; mdrun will stop cleanly
# at 0.99 * WALL_HOURS
WALL_HOURS=0.33
# ++++
# deffnm line is possibly modified by gromacs.setup
# (leave it as it is in the template)
DEFFNM=md
# ++
TPR=${DEFFNM}.tpr
OUTPUT=${DEFFNM}.out
PDB=${DEFFNM}.pdb
MDRUN_OPTS=""
# ++
# If you always want to add additional MDRUN options in this script then
# you can either do this directly in the mdrun commandline below or by
# constructs such as the following:
## MDRUN_OPTS="-npme 24 $MDRUN_OPTS"
# JOB_ARRAY_PLACEHOLDER
#++++++++++++++++++++++ leave the full commented line intact!
# avoids some failures
export MPI_GROUP_MAX=1024
# use hard coded path for time being
GMXBIN="/opt/software/SGI/gromacs/4.0.3/bin"
MPIRUN=/usr/pbs/bin/mpiexec
APPLICATION=$GMXBIN/mdrun_mpi
$MPIRUN $APPLICATION -stepout 1000 -deffnm ${DEFFNM} -s ${TPR} -c ${PDB} -cpi \
$MDRUN_OPTS \
-maxh ${WALL_HOURS} > $OUTPUT
rc=$?
# dependent jobs will only start if rc == 0
exit $rc
Save the above script in ``~/.gromacswrapper/qscripts`` under the name
``supercomputer.somewhere.fr_64core.pbs``. This will make the script
immediately usable. For example, in order to set up a production MD run with
:func:`gromacs.setup.MD` for this super computer one would use ::
gromacs.setup.MD(..., qscripts=['supercomputer.somewhere.fr_64core.pbs', 'local.sh'])
This will generate submission scripts based on
``supercomputer.somewhere.fr_64core.pbs`` and also the default ``local.sh``
that is provided with *GromacsWrapper*.
In order to modify ``MDRUN_OPTS`` one would use the additional *mdrun_opts*
argument, for instance::
gromacs.setup.MD(..., qscripts=['supercomputer.somewhere.fr_64core.pbs', 'local.sh'],
mdrun_opts="-v -npme 20 -dlb yes -nosum")
Currently there is no good way to specify the number of processors when
creating run scripts. You will need to provide scripts with different numbers
of cores hard coded or set them when submitting the scripts with command line
options to :program:`qsub`.
Classes and functions
---------------------
.. autoclass:: QueuingSystem
:members:
.. autofunction:: generate_submit_scripts
.. autofunction:: generate_submit_array
.. autofunction:: detect_queuing_system
.. autodata:: queuing_systems
"""
from __future__ import absolute_import, with_statement
import os
import errno
from os.path import relpath
import warnings
from . import config
from . import cbook
from .utilities import asiterable, Timedelta
from .exceptions import AutoCorrectionWarning
import logging
logger = logging.getLogger('gromacs.qsub')
class QueuingSystem(object):
"""Class that represents minimum information about a batch submission system."""
def __init__(self, name, suffix, qsub_prefix, array_variable=None, array_option=None):
"""Define a queuing system's functionality
:Arguments:
*name*
name of the queuing system, e.g. 'Sun Gridengine'
*suffix*
suffix of input files, e.g. 'sge'
*qsub_prefix*
prefix string that starts a qsub flag in a script, e.g. '#$'
:Keywords:
*array_variable*
environment variable exported for array jobs, e.g.
'SGE_TASK_ID'
*array_option*
qsub option format string to launch an array (e.g. '-t %d-%d')
"""
self.name = name
self.suffix = suffix
self.qsub_prefix = qsub_prefix
self.array_variable = array_variable
self.array_option = array_option
def flag(self, *args):
"""Return string for qsub flag *args* prefixed with appropriate inscript prefix."""
return " ".join((self.qsub_prefix,)+args)
def has_arrays(self):
"""True if known how to do job arrays."""
return self.array_variable is not None
def array_flag(self, directories):
"""Return string to embed the array launching option in the script."""
return self.flag(self.array_option % (1,len(directories)))
def array(self, directories):
"""Return multiline string for simple array jobs over *directories*.
.. Warning:: The string is in ``bash`` and hence the template must also
be ``bash`` (and *not* ``csh`` or ``sh``).
"""
if not self.has_arrays():
            raise NotImplementedError('Not known how to make array jobs for '
                                      'queuing system %(name)s' % vars(self))
hrule = '#'+60*'-'
lines = [
'',
hrule,
'# job array:',
self.array_flag(directories),
hrule,
'# directories for job tasks',
'declare -a jobdirs']
for i,dirname in enumerate(asiterable(directories)):
idx = i+1 # job array indices are 1-based
lines.append('jobdirs[{idx:d}]={dirname!r}'.format(**vars()))
lines.extend([
            "# Switch to the current task's directory:",
'wdir="${{jobdirs[${{{array_variable!s}}}]}}"'.format(**vars(self)),
'cd "$wdir" || { echo "ERROR: failed to enter $wdir."; exit 1; }',
hrule,
''
])
return "\n".join(lines)
def isMine(self, scriptname):
"""Primitive queuing system detection; only looks at suffix at the moment."""
suffix = os.path.splitext(scriptname)[1].lower()
if suffix.startswith('.'):
suffix = suffix[1:]
return self.suffix == suffix
def __repr__(self):
return "<"+self.name+" QueuingSystem instance>"
#: Pre-defined queuing systems (SGE, PBS). Add your own here.
queuing_systems = [
QueuingSystem('Sun Gridengine', 'sge', '#$', array_variable='SGE_TASK_ID', array_option='-t %d-%d'),
QueuingSystem('PBS', 'pbs', '#PBS', array_variable='PBS_ARRAY_INDEX', array_option='-J %d-%d'),
QueuingSystem('LoadLeveler', 'll', '#@'), # no idea how to do arrays in LL
QueuingSystem('Slurm', 'slu', '#SBATCH'), # will add array settings
]
def detect_queuing_system(scriptfile):
"""Return the queuing system for which *scriptfile* was written."""
for qs in queuing_systems:
if qs.isMine(scriptfile):
return qs
return None
def generate_submit_scripts(templates, prefix=None, deffnm='md', jobname='MD', budget=None,
mdrun_opts=None, walltime=1.0, jobarray_string=None, startdir=None,
npme=None, **kwargs):
"""Write scripts for queuing systems.
This sets up queuing system run scripts with a simple search and replace in
templates. See :func:`gromacs.cbook.edit_txt` for details. Shell scripts
are made executable.
:Arguments:
*templates*
Template file or list of template files. The "files" can also be names
or symbolic names for templates in the templates directory. See
:mod:`gromacs.config` for details and rules for writing templates.
*prefix*
Prefix for the final run script filename; by default the filename will be
the same as the template. [None]
*dirname*
Directory in which to place the submit scripts. [.]
*deffnm*
Default filename prefix for :program:`mdrun` ``-deffnm`` [md]
*jobname*
Name of the job in the queuing system. [MD]
*budget*
Which budget to book the runtime on [None]
*startdir*
Explicit path on the remote system (for run scripts that need to `cd`
into this directory at the beginning of execution) [None]
*mdrun_opts*
String of additional options for :program:`mdrun`.
*walltime*
Maximum runtime of the job in hours. [1]
*npme*
number of PME nodes
*jobarray_string*
Multi-line string that is spliced in for job array functionality
(see :func:`gromacs.qsub.generate_submit_array`; do not use manually)
*kwargs*
all other kwargs are ignored
:Returns: list of generated run scripts
"""
if not jobname[0].isalpha():
jobname = 'MD_'+jobname
wmsg = "To make the jobname legal it must start with a letter: changed to {0!r}".format(jobname)
logger.warn(wmsg)
warnings.warn(wmsg, category=AutoCorrectionWarning)
if prefix is None:
prefix = ""
if mdrun_opts is not None:
mdrun_opts = '"'+str(mdrun_opts)+'"' # TODO: could test if quotes already present
dirname = kwargs.pop('dirname', os.path.curdir)
wt = Timedelta(hours=walltime)
walltime = wt.strftime("%h:%M:%S")
wall_hours = wt.ashours
def write_script(template):
submitscript = os.path.join(dirname, prefix + os.path.basename(template))
logger.info("Setting up queuing system script {submitscript!r}...".format(**vars()))
# These substitution rules are documented for the user in the module doc string
qsystem = detect_queuing_system(template)
if qsystem is not None and (qsystem.name == 'Slurm'):
cbook.edit_txt(template,
[('^ *DEFFNM=','(?<==)(.*)', deffnm),
('^#.*(-J)', '((?<=-J\s))\s*\w+', jobname),
('^#.*(-A|account_no)', '((?<=-A\s)|(?<=account_no\s))\s*\w+', budget),
('^#.*(-t)', '(?<=-t\s)(\d+:\d+:\d+)', walltime),
('^ *WALL_HOURS=', '(?<==)(.*)', wall_hours),
('^ *STARTDIR=', '(?<==)(.*)', startdir),
('^ *NPME=', '(?<==)(.*)', npme),
('^ *MDRUN_OPTS=', '(?<==)("")', mdrun_opts), # only replace literal ""
('^# JOB_ARRAY_PLACEHOLDER', '^.*$', jobarray_string),
],
newname=submitscript)
ext = os.path.splitext(submitscript)[1]
else:
cbook.edit_txt(template,
[('^ *DEFFNM=','(?<==)(.*)', deffnm),
('^#.*(-N|job_name)', '((?<=-N\s)|(?<=job_name\s))\s*\w+', jobname),
('^#.*(-A|account_no)', '((?<=-A\s)|(?<=account_no\s))\s*\w+', budget),
('^#.*(-l walltime|wall_clock_limit)', '(?<==)(\d+:\d+:\d+)', walltime),
('^ *WALL_HOURS=', '(?<==)(.*)', wall_hours),
('^ *STARTDIR=', '(?<==)(.*)', startdir),
('^ *NPME=', '(?<==)(.*)', npme),
('^ *MDRUN_OPTS=', '(?<==)("")', mdrun_opts), # only replace literal ""
('^# JOB_ARRAY_PLACEHOLDER', '^.*$', jobarray_string),
],
newname=submitscript)
ext = os.path.splitext(submitscript)[1]
if ext in ('.sh', '.csh', '.bash'):
os.chmod(submitscript, 0o755)
return submitscript
return [write_script(template) for template in config.get_templates(templates)]
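# Example usage (a sketch; the template name and option values are
# illustrative):
#
#   generate_submit_scripts('supercomputer.somewhere.fr_64core.pbs',
#                           jobname='prodMD', walltime=12.0,
#                           mdrun_opts='-v -dlb yes')
#
# This writes the substituted script to the current directory and returns the
# list of generated file names.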
def generate_submit_array(templates, directories, **kwargs):
"""Generate a array job.
For each ``work_dir`` in *directories*, the array job will
1. cd into ``work_dir``
2. run the job as detailed in the template
It will use all the queuing system directives found in the
template. If more complicated set ups are required, then this
function cannot be used.
:Arguments:
*templates*
Basic template for a single job; the job array logic is spliced into
the position of the line ::
# JOB_ARRAY_PLACEHOLDER
The appropriate commands for common queuing systems (Sun Gridengine, PBS)
are hard coded here. The queuing system is detected from the suffix of
the template.
*directories*
List of directories under *dirname*. One task is set up for each
directory.
*dirname*
The array script will be placed in this directory. The *directories*
**must** be located under *dirname*.
*kwargs*
            See :func:`generate_submit_scripts` for details.
"""
dirname = kwargs.setdefault('dirname', os.path.curdir)
reldirs = [relpath(p, start=dirname) for p in asiterable(directories)]
missing = [p for p in (os.path.join(dirname, subdir) for subdir in reldirs)
if not os.path.exists(p)]
if len(missing) > 0:
logger.debug("template=%(template)r: dirname=%(dirname)r reldirs=%(reldirs)r", vars())
logger.error("Some directories are not accessible from the array script: "
"%(missing)r", vars())
def write_script(template):
qsystem = detect_queuing_system(template)
if qsystem is None or not qsystem.has_arrays():
logger.warning("Not known how to make a job array for %(template)r; skipping...", vars())
return None
kwargs['jobarray_string'] = qsystem.array(reldirs)
return generate_submit_scripts(template, **kwargs)[0] # returns list of length 1
# must use config.get_templates() because we need to access the file for detecting
return [write_script(template) for template in config.get_templates(templates)]
|
Becksteinlab/GromacsWrapper
|
gromacs/qsub.py
|
Python
|
gpl-3.0
| 19,233
|
[
"Gromacs"
] |
9fd21d88afc6b3c8abe93460ba38aeaea3f919bb9dd7bdfd90a8c66786ee296e
|
import sqlite3 as sql
import datetime
import sys
import json
import subprocess
from pickle import dump, load
from urlparse import urlparse
import re
# This script stores the number of opened tabs in Firefox and the URL of the currently selected tab if Firefox is focused.
# The script should be called fairly often, as visit time is only accumulated in the intervals between calls of the script.
# Known bugs/limitations:
# * Does not work if multiple Firefox windows are open (then we would need to get the info which window is focused)
# * Suspend or standby (if the uptime was not reset to zero) is not detected; the time will be counted as if the computer was running the whole time (only a problem if Firefox was focused on the first run of the script)
# loosely based on https://stackoverflow.com/questions/15884363/in-mozilla-firefox-how-do-i-extract-the-number-of-currently-opened-tabs-to-save
# sessionstore.js is created with this code: https://hg.mozilla.org/integration/fx-team/file/2b9e5948213f/browser/components/sessionstore/src/SessionStore.jsm
# lastAccessed was introduced with this Bugzilla entry: https://bugzilla.mozilla.org/show_bug.cgi?id=739866
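# For reference, the parts of sessionstore.js this script relies on look
# roughly like this (a sketch inferred from the accesses below, not the full
# schema):
#
#   { "windows": [ { "selected": 1,
#                    "tabs": [ { "index": 1,
#                                "lastAccessed": 1377348120112,
#                                "entries": [ { "url": "...",
#                                               "title": "..." } ] } ] } ] }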
def loadSessionJSON(sessionfile):
global j
j = json.loads(open(sessionfile, 'rb').read().decode('utf-8'))
def getOpenedFirefoxTabs():
all_tabs = list(map(tabs_from_windows, j['windows']))
return sum(map(len, all_tabs))
def info_for_tab(tab):
try:
return (tab['entries'][0]['url'], tab['entries'][0]['title'])
except IndexError:
return None
except KeyError:
return None
def tabs_from_windows(window):
return list(map(info_for_tab, window['tabs']))
def getAccessedDateOfTab(tab):
    # lastAccessed is set with JavaScript's Date.now(), which is in milliseconds; Python's timestamp is in seconds
print str(tab['index'])+": "+tab['entries'][0]['url']+" was accessed "+str(-(datetime.datetime.fromtimestamp(tab['lastAccessed']/1000)-datetime.datetime.now()).total_seconds()) +" seconds ago.";
return None;
def getPathToDB():
return '/home/florin/bin/QuantifiedSelf/Firefox/firefoxProcesses.db'
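# The database schema is not created in this file. From the statements below
# it must provide at least (a sketch; column order inferred from the code):
#   tabNumber   -- three values inserted: (timestamp, profile, count)
#   focusedSite -- columns include profile, host, firstVisit and time; the
#                  accumulated time is read back as data[4], so the table
#                  has at least five columns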
def saveToDatabase():
con = None
try:
con = sql.connect(getPathToDB())
cur = con.cursor()
loadSessionJSON('/home/florin/.mozilla/firefox/3wxc4x2q.default/sessionstore.js');
cur.execute("INSERT OR REPLACE INTO tabNumber VALUES (?,?,?)", (datetime.datetime.now().strftime("%s"), 'standard', getOpenedFirefoxTabs()));
con.commit();
#map(getAccessedDateOfTab, j['windows'][0]['tabs']);
# check if foreground window belongs to Firefox
cmdline = subprocess.Popen('/home/florin/bin/QuantifiedSelf/Firefox/getForegroundWindow.sh', stdout=subprocess.PIPE).stdout.read().strip();
print "Current foreground process is "+cmdline;
# TODO(wasmitnetzen): Remove kate (debug only)
if cmdline.find('firefox') != -1 or cmdline.find('kate') != -1:
# index for selected tab is off by one: https://hg.mozilla.org/integration/fx-team/file/2b9e5948213f/browser/components/sessionstore/src/SessionStore.jsm#l2822
# * Index of selected tab (1 is first tab, 0 no selected tab)
#print "The url "+j['windows'][0]['tabs'][j['windows'][0]['selected']-1]['entries'][0]['url'] + " is opened.";
# get hostname
# TODO(wasmitnetzen): about:newtab? about:blank?
parsedUrl = urlparse(j['windows'][0]['tabs'][j['windows'][0]['selected']-1]['entries'][0]['url']);
hostname = parsedUrl.hostname;
print "A site on host "+hostname+" is opened";
# get seconds since last run
try:
tempFile = open('stor.temp', 'r');
lastRun = load(tempFile);
tempFile.close();
except (OSError, IOError, TypeError, EOFError, IndexError) as e:
                # on any sort of expected error, don't count this run
lastRun = datetime.datetime.now();
print "last run was on "+str(lastRun);
diff = (datetime.datetime.now() - lastRun);
print str(diff.total_seconds()) + " seconds ago";
#read uptime file
uptimeFile = open('/proc/uptime', 'r');
lines = uptimeFile.read();
# get first entry of file
firstEntryObj = re.match('^[0-9]*', lines);
uptime=datetime.timedelta(0, int(firstEntryObj.group()));
print "Uptime: " + str(uptime.total_seconds()) +' vs. ', diff.total_seconds();
# if this time is longer than the uptime, the computer was shut off between runs => use uptime as diff length
if diff > uptime:
print 'Use uptime';
diff = uptime;
# check if hostname has already an entry
cur.execute('SELECT * FROM focusedSite WHERE host = "'+hostname+'" AND profile="standard"');
data = cur.fetchone();
#print hostname "Last entry from date ",data[0],", uptime ",data[3];
# no entry found
if data is None:
print 'New entry made for host '+hostname+' with time ',diff.total_seconds(),'.';
cur.execute("INSERT INTO focusedSite(profile, host, firstVisit, time) VALUES (?,?,?,?)", ('standard', hostname, datetime.datetime.now().strftime("%s"), diff.total_seconds()));
# update last record
else:
totalDiff = diff + datetime.timedelta(0, data[4]);
print 'Update last record.',data[4],'+',diff.total_seconds(),'=',totalDiff.total_seconds(),'.';
cur.execute("UPDATE focusedSite SET time = ? WHERE host=? AND profile=?",(totalDiff.total_seconds(), hostname, 'standard'));
con.commit();
f = open('stor.temp', 'w');
now = datetime.datetime.now();
dump(now, f);
f.close()
except sql.Error, e:
print "Error %s:" % e.args[0]
sys.exit(1)
finally:
if con:
con.close()
if __name__ == '__main__':
saveToDatabase()
|
wasmitnetzen/QuantifiedSelf
|
Firefox/countFirefox.py
|
Python
|
gpl-2.0
| 5,486
|
[
"VisIt"
] |
d56411bfe3182afbbedd49ea016f95a7efbdad4146a3a50e59784db372d6c8fd
|
"""
This file implements Oja's Hebbian learning rule.
Relevant book chapters:
- http://neuronaldynamics.epfl.ch/online/Ch19.S2.html#SS1.p6
"""
# This file is part of the exercise code repository accompanying
# the book: Neuronal Dynamics (see http://neuronaldynamics.epfl.ch)
# located at http://github.com/EPFL-LCN/neuronaldynamics-exercises.
# This is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License 2.0 as published by the
# Free Software Foundation. You should have received a copy of the
# GNU General Public License along with the repository. If not,
# see http://www.gnu.org/licenses/.
# Should you reuse and publish the code for your own purposes,
# please cite the book or point to the webpage http://neuronaldynamics.epfl.ch.
# Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski.
# Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition.
# Cambridge University Press, 2014.
import matplotlib.pyplot as plt
import numpy as np
def make_cloud(n=2000, ratio=1, angle=0):
"""Returns an oriented elliptic
gaussian cloud of 2D points
Args:
n (int, optional): number of points in the cloud
ratio (int, optional): (std along the short axis) /
(std along the long axis)
angle (int, optional): rotation angle [deg]
Returns:
numpy.ndarray: array of datapoints
"""
if ratio > 1.:
ratio = 1. / ratio
x = np.random.randn(n, 1)
y = ratio * np.random.randn(n, 1)
z = np.concatenate((x, y), 1)
radangle = (180. - angle) * np.pi / 180.
transfo = [
[np.cos(radangle), np.sin(radangle)],
[-np.sin(radangle), np.cos(radangle)]
]
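    # transfo is a 2x2 rotation matrix for the angle (180 - angle) degrees;
    # since the Gaussian cloud is symmetric under a 180-degree rotation, this
    # orients the long axis of the ellipse at `angle` degrees.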
return np.dot(transfo, z.T).T
def learn(cloud, initial_angle=None, eta=0.005):
"""Run one batch of Oja's learning over
a cloud of datapoints.
Args:
cloud (numpy.ndarray): An N by 2 array of datapoints. You can
think of each of the two columns as the time series of firing rates of one presynaptic neuron.
initial_angle (float, optional): angle of initial
set of weights [deg]. If None, this is random.
eta (float, optional): learning rate
Returns:
numpy.ndarray: time course of the weight vector
"""
# get angle if not set
if initial_angle is None:
initial_angle = np.random.rand() * 360.
radangle = initial_angle * np.pi / 180.
w = np.array([np.cos(radangle), np.sin(radangle)])
wcourse = np.zeros((len(cloud), 2), float)
for i in range(0, len(cloud)):
wcourse[i] = w
y = np.dot(w, cloud[i]) # output: postsynaptic firing rate of a linear neuron.
        # Oja's rule (cloud[i] holds the two presynaptic firing rates at time point i)
w = w + eta * y * (cloud[i] - y * w)
return wcourse
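# Note: Oja's rule w += eta * y * (x - y * w) is Hebbian learning (eta*y*x)
# plus a decay term (-eta*y^2*w) that implicitly normalizes the weights; for
# small eta, w converges to a unit-norm vector along the principal eigenvector
# of the input covariance matrix (see the book chapter linked above).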
def plot_oja_trace(data_cloud, weights_course):
"""
Plots the datapoints and the time series of the weights
Args:
data_cloud (numpy.ndarray): n by 2 data
weights_course (numpy.ndarray): n by 2 weights
Returns:
"""
plt.scatter(
data_cloud[:, 0],
data_cloud[:, 1],
marker=".",
facecolor="none",
edgecolor="#222222",
alpha=.2
)
plt.xlabel("x1")
plt.ylabel("x2")
# color time and plot with colorbar
time = np.arange(len(weights_course))
colors = plt.cm.cool(time / float(len(time)))
sm = plt.cm.ScalarMappable(
cmap=plt.cm.cool,
norm=plt.Normalize(vmin=0, vmax=len(data_cloud))
)
sm.set_array(time)
cb = plt.colorbar(sm)
cb.set_label("Iteration")
plt.scatter(
weights_course[:, 0],
weights_course[:, 1],
facecolor=colors,
edgecolor="none",
lw=2
)
# ensure rectangular plot
x_min = data_cloud[:, 0].min()
x_max = data_cloud[:, 0].max()
y_min = data_cloud[:, 1].min()
y_max = data_cloud[:, 1].max()
lims = [min(x_min, y_min), max(x_max, y_max)]
plt.xlim(lims)
plt.ylim(lims)
plt.show()
def run_oja(n=2000, ratio=1., angle=0., learning_rate=0.01, do_plot=True):
"""Generates a point cloud and runs Oja's learning
rule once. Optionally plots the result.
Args:
n (int, optional): number of points in the cloud
ratio (float, optional): (std along the short axis) /
(std along the long axis)
angle (float, optional): rotation angle [deg]
do_plot (bool, optional): plot the result
"""
cloud = make_cloud(n=n, ratio=ratio, angle=angle)
wcourse = learn(cloud, eta=learning_rate)
if do_plot:
plot_oja_trace(cloud, wcourse)
return wcourse, cloud
if __name__ == "__main__":
run_oja(n=2000, ratio=1.1, angle=30, learning_rate=0.2)
|
EPFL-LCN/neuronaldynamics-exercises
|
neurodynex3/ojas_rule/oja.py
|
Python
|
gpl-2.0
| 4,820
|
[
"Gaussian",
"NEURON"
] |
10428cbb544a5aa9d283e5321c09ad67cf8e0640b6deae46b85a0b11ec5f3996
|
"""
pymel ipython configuration
Current Features
----------------
tab completion of depend nodes, dag nodes, and attributes
automatic import of pymel
Future Features
---------------
- tab completion of PyNode attributes
- color coding of tab complete options
- to differentiate between methods and attributes
- dag nodes vs depend nodes
- shortNames vs longNames
- magic commands
- bookmarking of maya's recent project and files
To Use
------
place in your PYTHONPATH
add the following line to the 'main' function of $HOME/.ipython/ipy_user_conf.py::
import ipymel
Author: Chad Dombrova
"""
from optparse import OptionParser
try:
import maya
except ImportError, e:
print("ipymel can only be setup if the maya package can be imported")
raise e
import IPython
ipy_ver = IPython.__version__.split('.')
ipy_ver = [int(x) if x.isdigit() else x for x in ipy_ver]
ver11 = ipy_ver >= [0, 11]
if not ver11:
def get_ipython():
import IPython.ipapi
return IPython.ipapi.get()
IPython.ipapi.IPApi.define_magic = IPython.ipapi.IPApi.expose_magic
import IPython.ColorANSI as coloransi
from IPython.genutils import page
from IPython.ipapi import UsageError
import IPython.Extensions.ipy_completers
def get_colors(obj):
return color_table[obj.rc.colors].colors
else:
import IPython.utils.coloransi as coloransi
from IPython.core.page import page
from IPython.core.error import UsageError
def get_colors(obj):
return color_table[ip.colors].colors
Colors = coloransi.TermColors
ColorScheme = coloransi.ColorScheme
ColorSchemeTable = coloransi.ColorSchemeTable
ip = None
try:
import readline
except ImportError:
import pyreadline as readline
delim = readline.get_completer_delims()
delim = delim.replace('|', '') # remove pipes
delim = delim.replace(':', '') # remove colon
# delim = delim.replace("'", '') # remove quotes
# delim = delim.replace('"', '') # remove quotes
readline.set_completer_delims(delim)
import inspect
import re
import glob
import os
import shlex
import sys
# don't import pymel here, as this will trigger loading of maya/pymel
# immediately, and things in the userSetup.py won't get properly entered into
# the ipython shell's namespace... we need the startup of maya to happen
# from "within" ipython, ie, when we do:
# ip.ex("from pymel.core import *")
# from pymel import core
# ...maya.cmds is ok to import before maya is started up, though - it just
# won't be populated yet...
import maya.cmds as cmds
_scheme_default = 'Linux'
# Build a few color schemes
NoColor = ColorScheme(
'NoColor', {
'instance': Colors.NoColor,
'collapsed': Colors.NoColor,
'tree': Colors.NoColor,
'transform': Colors.NoColor,
'shape': Colors.NoColor,
'nonunique': Colors.NoColor,
'nonunique_transform': Colors.NoColor,
'normal': Colors.NoColor # color off (usu. Colors.Normal)
})
LinuxColors = ColorScheme(
'Linux', {
'instance': Colors.LightCyan,
'collapsed': Colors.Yellow,
'tree': Colors.Green,
'transform': Colors.White,
'shape': Colors.LightGray,
'nonunique': Colors.Red,
'nonunique_transform': Colors.LightRed,
'normal': Colors.Normal # color off (usu. Colors.Normal)
})
LightBGColors = ColorScheme(
'LightBG', {
'instance': Colors.Cyan,
'collapsed': Colors.LightGreen,
'tree': Colors.Blue,
'transform': Colors.DarkGray,
'shape': Colors.Black,
'nonunique': Colors.Red,
'nonunique_transform': Colors.LightRed,
'normal': Colors.Normal # color off (usu. Colors.Normal)
})
# Build table of color schemes (needed by the dag_parser)
color_table = ColorSchemeTable([NoColor, LinuxColors, LightBGColors],
_scheme_default)
def finalPipe(obj):
"""
DAG nodes with children should end in a pipe (|), so that each successive pressing
of TAB will take you further down the DAG hierarchy. this is analagous to TAB
completion of directories, which always places a final slash (/) after a directory.
"""
if cmds.listRelatives(obj):
return obj + "|"
return obj
def splitDag(obj):
buf = obj.split('|')
tail = buf[-1]
path = '|'.join(buf[:-1])
return path, tail
def expand(obj):
"""
    Allows for completion of objects that reside within a namespace. For example,
    ``tra*`` will match ``trak:camera`` and ``tram``
for now, we will hardwire the search to a depth of three recursive namespaces.
TODO:
add some code to determine how deep we should go
"""
return (obj + '*', obj + '*:*', obj + '*:*:*')
def complete_node_with_no_path(node):
tmpres = cmds.ls(expand(node))
# print "node_with_no_path", tmpres, node, expand(node)
res = []
for x in tmpres:
x = finalPipe(x.split('|')[-1])
#x = finalPipe(x)
if x not in res:
res.append(x)
# print res
return res
def complete_node_with_attr(node, attr):
# print "noe_with_attr", node, attr
long_attrs = cmds.listAttr(node)
short_attrs = cmds.listAttr(node, shortNames=1)
# if node is a plug ( 'persp.t' ), the first result will be the passed plug
if '.' in node:
attrs = long_attrs[1:] + short_attrs[1:]
else:
attrs = long_attrs + short_attrs
return [u'%s.%s' % (node, a) for a in attrs if a.startswith(attr)]
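# Illustrative example: complete_node_with_attr('persp', 'tr') would return
# entries such as [u'persp.translate', u'persp.translateX', ...]; the exact
# matches depend on what cmds.listAttr reports for the node.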
def pymel_name_completer(self, event):
def get_children(obj):
path, partialObj = splitDag(obj)
# print "getting children", repr(path), repr(partialObj)
try:
fullpath = cmds.ls(path, l=1)[0]
if not fullpath:
return []
children = cmds.listRelatives(fullpath, f=1, c=1)
if not children:
return []
except:
return []
matchStr = fullpath + '|' + partialObj
# print "children", children
# print matchStr, fullpath, path
matches = [x.replace(fullpath, path, 1) for x in children if x.startswith(matchStr)]
# print matches
return matches
# print "\nnode", repr(event.symbol), repr(event.line)
# print "\nbegin"
line = event.symbol
matches = None
#--------------
# Attributes
#--------------
m = re.match( r"""([a-zA-Z_0-9|:.]+)\.(\w*)$""", line)
if m:
node, attr = m.groups()
if node == 'SCENE':
res = cmds.ls(attr + '*')
if res:
matches = ['SCENE.' + x for x in res if '|' not in x]
elif node.startswith('SCENE.'):
node = node.replace('SCENE.', '')
matches = ['SCENE.' + x for x in complete_node_with_attr(node, attr) if '|' not in x]
else:
matches = complete_node_with_attr(node, attr)
#--------------
# Nodes
#--------------
else:
# we don't yet have a full node
if '|' not in line or (line.startswith('|') and line.count('|') == 1):
# print "partial node"
kwargs = {}
if line.startswith('|'):
kwargs['l'] = True
matches = cmds.ls(expand(line), **kwargs)
        # we have a full node, get its children
else:
matches = get_children(line)
if not matches:
raise IPython.ipapi.TryNext
# if we have only one match, get the children as well
if len(matches) == 1:
res = get_children(matches[0] + '|')
matches += res
return matches
def pymel_python_completer(self, event):
"""Match attributes or global python names"""
import pymel.core as pm
# print "python_matches"
text = event.symbol
# print repr(text)
# Another option, seems to work great. Catches things like ''.<tab>
m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
if not m:
raise IPython.ipapi.TryNext
expr, attr = m.group(1, 3)
# print type(self.Completer), dir(self.Completer)
# print self.Completer.namespace
# print self.Completer.global_namespace
try:
# print "first"
obj = eval(expr, self.Completer.namespace)
except:
try:
# print "second"
obj = eval(expr, self.Completer.global_namespace)
except:
raise IPython.ipapi.TryNext
# print "complete"
if isinstance(obj, (pm.nt.DependNode, pm.Attribute)):
# print "isinstance"
node = unicode(obj)
long_attrs = cmds.listAttr(node)
short_attrs = cmds.listAttr(node, shortNames=1)
matches = []
matches = self.Completer.python_matches(text)
# print "here"
# if node is a plug ( 'persp.t' ), the first result will be the passed plug
if '.' in node:
attrs = long_attrs[1:] + short_attrs[1:]
else:
attrs = long_attrs + short_attrs
# print "returning"
matches += [expr + '.' + at for at in attrs]
#import colorize
#matches = [ colorize.colorize(x,'magenta') for x in matches ]
return matches
raise IPython.ipapi.TryNext
def buildRecentFileMenu():
import pymel.core as pm
if "RecentFilesList" not in pm.optionVar:
return
# get the list
RecentFilesList = pm.optionVar["RecentFilesList"]
nNumItems = len(RecentFilesList)
RecentFilesMaxSize = pm.optionVar["RecentFilesMaxSize"]
# # check if there are too many items in the list
# if (RecentFilesMaxSize < nNumItems):
#
# #if so, truncate the list
# nNumItemsToBeRemoved = nNumItems - RecentFilesMaxSize
#
# #Begin removing items from the head of the array (least recent file in the list)
# for ($i = 0; $i < $nNumItemsToBeRemoved; $i++):
#
# core.optionVar -removeFromArray "RecentFilesList" 0;
#
# RecentFilesList = core.optionVar["RecentFilesList"]
# nNumItems = len($RecentFilesList);
# The RecentFilesTypeList optionVar may not exist since it was
# added after the RecentFilesList optionVar. If it doesn't exist,
# we create it and initialize it with a guess at the file type
if nNumItems > 0:
if "RecentFilesTypeList" not in pm.optionVar:
pm.mel.initRecentFilesTypeList(RecentFilesList)
RecentFilesTypeList = pm.optionVar["RecentFilesTypeList"]
# toNativePath
# first, check if we are the same.
def open_completer(self, event):
relpath = event.symbol
# print event # dbg
if '-b' in event.line:
# return only bookmark completions
bkms = self.db.get('bookmarks', {})
return bkms.keys()
if event.symbol == '-':
print "completer"
width_dh = str(len(str(len(ip.user_ns['_sh']) + 1)))
print width_dh
# jump in directory history by number
fmt = '-%0' + width_dh + 'd [%s]'
ents = [fmt % (i, s) for i, s in enumerate(ip.user_ns['_sh'])]
if len(ents) > 1:
return ents
return []
raise IPython.ipapi.TryNext
class TreePager(object):
def __init__(self, colors, options):
self.colors = colors
self.options = options
# print options.depth
def do_level(self, obj, depth, isLast):
if isLast[-1]:
sep = '`-- '
else:
sep = '|-- '
#sep = '|__ '
depth += 1
branch = ''
for x in isLast[:-1]:
if x:
branch += ' '
else:
branch += '| '
branch = self.colors['tree'] + branch + sep + self.colors['normal']
children = self.getChildren(obj)
name = self.getName(obj)
num = len(children) - 1
if children:
if self.options.maxdepth and depth >= self.options.maxdepth:
state = '+'
else:
state = '-'
pre = self.colors['collapsed'] + state + ' '
else:
pre = ' '
yield pre + branch + name + self.colors['normal'] + '\n'
# yield Colors.Yellow + branch + sep + Colors.Normal+ name + '\n'
if not self.options.maxdepth or depth < self.options.maxdepth:
for i, x in enumerate(children):
for line in self.do_level(x, depth, isLast + [i == num]):
yield line
def make_tree(self, roots):
num = len(roots) - 1
tree = ''
for i, x in enumerate(roots):
for line in self.do_level(x, 0, [i == num]):
tree += line
return tree
class DagTree(TreePager):
def getChildren(self, obj):
if self.options.shapes:
return obj.getChildren()
else:
return obj.getChildren(type='transform')
def getName(self, obj):
import pymel.core as pm
name = obj.nodeName()
if obj.isInstanced():
if isinstance(obj, pm.nt.Transform):
# keep transforms bolded
color = self.colors['nonunique_transform']
else:
color = self.colors['nonunique']
id = obj.instanceNumber()
if id != 0:
source = ' -> %s' % obj.getOtherInstances()[0]
else:
source = ''
name = color + name + self.colors['instance'] + ' [' + str(id) + ']' + source
elif not obj.isUniquelyNamed():
if isinstance(obj, pm.nt.Transform):
# keep transforms bolded
color = self.colors['nonunique_transform']
else:
color = self.colors['nonunique']
name = color + name
elif isinstance(obj, pm.nt.Transform):
# bold
name = self.colors['transform'] + name
else:
name = self.colors['shape'] + name
return name
dag_parser = OptionParser()
dag_parser.add_option("-d", type="int", dest="maxdepth")
dag_parser.add_option("-t", action="store_false", dest="shapes", default=True)
dag_parser.add_option("-s", action="store_true", dest="shapes")
def magic_dag(self, parameter_s=''):
"""
"""
import pymel.core as pm
options, args = dag_parser.parse_args(parameter_s.split())
colors = get_colors(self)
dagtree = DagTree(colors, options)
if args:
roots = [pm.PyNode(args[0])]
else:
roots = pm.ls(assemblies=1)
page(dagtree.make_tree(roots))
class DGHistoryTree(TreePager):
def getChildren(self, obj):
source, dest = obj
return source.node().listConnections(plugs=True, connections=True, source=True, destination=False, sourceFirst=True)
def getName(self, obj):
source, dest = obj
name = "%s -> %s" % (source, dest)
return name
def make_tree(self, root):
import pymel.core as pm
roots = pm.listConnections(root, plugs=True, connections=True, source=True, destination=False, sourceFirst=True)
return TreePager.make_tree(self, roots)
dg_parser = OptionParser()
dg_parser.add_option("-d", type="int", dest="maxdepth")
dg_parser.add_option("-t", action="store_false", dest="shapes", default=True)
dg_parser.add_option("-s", action="store_true", dest="shapes")
def magic_dghist(self, parameter_s=''):
"""
"""
import pymel.core as pm
options, args = dg_parser.parse_args(parameter_s.split())
if not args:
print "must pass in nodes to display the history of"
return
colors = get_colors(self)
dgtree = DGHistoryTree(colors, options)
roots = [pm.PyNode(args[0])]
page(dgtree.make_tree(roots))
def magic_open(self, parameter_s=''):
"""Change the current working directory.
This command automatically maintains an internal list of directories
you visit during your IPython session, in the variable _sh. The
command %dhist shows this history nicely formatted. You can also
do 'cd -<tab>' to see directory history conveniently.
Usage:
openFile 'dir': changes to directory 'dir'.
openFile -: changes to the last visited directory.
openFile -<n>: changes to the n-th directory in the directory history.
openFile --foo: change to directory that matches 'foo' in history
openFile -b <bookmark_name>: jump to a bookmark set by %bookmark
(note: cd <bookmark_name> is enough if there is no
directory <bookmark_name>, but a bookmark with the name exists.)
'cd -b <tab>' allows you to tab-complete bookmark names.
Options:
-q: quiet. Do not print the working directory after the cd command is
executed. By default IPython's cd command does print this directory,
since the default prompts do not display path information.
Note that !cd doesn't work for this purpose because the shell where
!command runs is immediately discarded after executing 'command'."""
parameter_s = parameter_s.strip()
#bkms = self.shell.persist.get("bookmarks",{})
oldcwd = os.getcwd()
numcd = re.match(r'(-)(\d+)$', parameter_s)
# jump in directory history by number
if numcd:
nn = int(numcd.group(2))
try:
ps = ip.ev('_sh[%d]' % nn)
except IndexError:
print 'The requested directory does not exist in history.'
return
else:
opts = {}
# elif parameter_s.startswith('--'):
# ps = None
# fallback = None
# pat = parameter_s[2:]
# dh = self.shell.user_ns['_sh']
# # first search only by basename (last component)
# for ent in reversed(dh):
# if pat in os.path.basename(ent) and os.path.isdir(ent):
# ps = ent
# break
#
# if fallback is None and pat in ent and os.path.isdir(ent):
# fallback = ent
#
# # if we have no last part match, pick the first full path match
# if ps is None:
# ps = fallback
#
# if ps is None:
# print "No matching entry in directory history"
# return
# else:
# opts = {}
else:
# turn all non-space-escaping backslashes to slashes,
# for c:\windows\directory\names\
parameter_s = re.sub(r'\\(?! )', '/', parameter_s)
opts, ps = self.parse_options(parameter_s, 'qb', mode='string')
# jump to previous
if ps == '-':
try:
            ps = ip.ev('_sh[-2]')
except IndexError:
raise UsageError('%cd -: No previous directory to change to.')
# # jump to bookmark if needed
# else:
# if not os.path.exists(ps) or opts.has_key('b'):
# bkms = self.db.get('bookmarks', {})
#
# if bkms.has_key(ps):
# target = bkms[ps]
# print '(bookmark:%s) -> %s' % (ps,target)
# ps = target
# else:
# if opts.has_key('b'):
# raise UsageError("Bookmark '%s' not found. "
# "Use '%%bookmark -l' to see your bookmarks." % ps)
# at this point ps should point to the target dir
if ps:
ip.ex('openFile("%s", f=1)' % ps)
# try:
# os.chdir(os.path.expanduser(ps))
# if self.shell.rc.term_title:
# #print 'set term title:',self.shell.rc.term_title # dbg
# platutils.set_term_title('IPy ' + abbrev_cwd())
# except OSError:
# print sys.exc_info()[1]
# else:
# cwd = os.getcwd()
# dhist = self.shell.user_ns['_sh']
# if oldcwd != cwd:
# dhist.append(cwd)
# self.db['dhist'] = compress_dhist(dhist)[-100:]
# else:
# os.chdir(self.shell.home_dir)
# if self.shell.rc.term_title:
# platutils.set_term_title("IPy ~")
# cwd = os.getcwd()
# dhist = self.shell.user_ns['_sh']
#
# if oldcwd != cwd:
# dhist.append(cwd)
# self.db['dhist'] = compress_dhist(dhist)[-100:]
# if not 'q' in opts and self.shell.user_ns['_sh']:
# print self.shell.user_ns['_sh'][-1]
# maya sets a sigint / ctrl-c / KeyboardInterrupt handler that quits maya -
# want to override this to get "normal" python interpreter behavior, where it
# interrupts the current python command, but doesn't exit the interpreter
def ipymel_sigint_handler(signal, frame):
raise KeyboardInterrupt
def install_sigint_handler(force=False):
import signal
if force or signal.getsignal(signal.SIGINT) == ipymel_sigint_handler:
signal.signal(signal.SIGINT, ipymel_sigint_handler)
# unfortunately, it seems maya overrides the SIGINT hook whenever a plugin is
# loaded...
def sigint_plugin_loaded_callback(*args):
# from the docs, as of 2015 the args are:
# ( [ pathToPlugin, pluginName ], clientData )
install_sigint_handler()
sigint_plugin_loaded_callback_id = None
def setup(shell):
global ip
if hasattr(shell, 'get_ipython'):
ip = shell.get_ipython()
else:
ip = get_ipython()
ip.set_hook('complete_command', pymel_python_completer, re_key=".*")
ip.set_hook('complete_command', pymel_name_completer, re_key="(.+(\s+|\())|(SCENE\.)")
ip.set_hook('complete_command', open_completer, str_key="openf")
ip.ex("from pymel.core import *")
# stuff in __main__ is not necessarily in ipython's 'main' namespace... so
# if the user has something in userSetup.py that he wants put in the
# "interactive" namespace, it won't be - unless we do this:
ip.ex('from __main__ import *')
# if you don't want pymel imported into the main namespace, you can replace the above with something like:
#ip.ex("import pymel as pm")
ip.define_magic('openf', magic_open)
ip.define_magic('dag', magic_dag)
ip.define_magic('dghist', magic_dghist)
# add projects
ip.ex("""
import os.path
for _mayaproj in optionVar.get('RecentProjectsList', []):
_mayaproj = os.path.join( _mayaproj, 'scenes' )
if _mayaproj not in _dh:
_dh.append(_mayaproj)""")
# add files
ip.ex("""
import os.path
_sh=[]
for _mayaproj in optionVar.get('RecentFilesList', []):
if _mayaproj not in _sh:
_sh.append(_mayaproj)""")
# setup a handler for ctrl-c / SIGINT / KeyboardInterrupt, so maya / ipymel
# doesn't quit
install_sigint_handler(force=True)
# unfortunately, when Mental Ray loads, it installs a new SIGINT handler
# which restores the old "bad" behavior... need to install a plugin callback
# to restore ours...
global sigint_plugin_loaded_callback_id
import pymel.core as pm
if sigint_plugin_loaded_callback_id is None:
sigint_plugin_loaded_callback_id = pm.api.MSceneMessage.addStringArrayCallback(
pm.api.MSceneMessage.kAfterPluginLoad,
sigint_plugin_loaded_callback)
def main():
import IPython
ipy_ver = IPython.__version__.split('.')
ipy_ver = [int(x) if x.isdigit() else x for x in ipy_ver]
if ipy_ver < [0, 11]:
import IPython.Shell
shell = IPython.Shell.start()
setup(shell)
shell.mainloop()
else:
import IPython.frontend.terminal.ipapp
app = IPython.frontend.terminal.ipapp.TerminalIPythonApp.instance()
app.initialize()
setup(app.shell)
app.start()
if __name__ == '__main__':
main()
|
shrtcww/pymel
|
pymel/tools/ipymel.py
|
Python
|
bsd-3-clause
| 23,674
|
[
"VisIt"
] |
10c4d165cc8a8e1dbb6168e04c635cb6e52c1bf081e261af4167cf6a30772ff5
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""Tests for the Windows Scheduled Task job file parser."""
import unittest
from plaso.formatters import winjob as _ # pylint: disable=unused-import
from plaso.lib import eventdata
from plaso.lib import timelib
from plaso.parsers import winjob
from tests.parsers import test_lib
class WinJobTest(test_lib.ParserTestCase):
"""Tests for the Windows Scheduled Task job file parser."""
def setUp(self):
"""Sets up the needed objects used throughout the test."""
self._parser = winjob.WinJobParser()
def testParse(self):
"""Tests the Parse function."""
test_file = self._GetTestFilePath([u'wintask.job'])
event_queue_consumer = self._ParseFile(self._parser, test_file)
event_objects = self._GetEventObjectsFromQueue(event_queue_consumer)
self.assertEqual(len(event_objects), 2)
event_object = event_objects[0]
application_expected = (
u'C:\\Program Files (x86)\\Google\\Update\\GoogleUpdate.exe')
self.assertEqual(event_object.application, application_expected)
username_expected = u'Brian'
self.assertEqual(event_object.username, username_expected)
description_expected = eventdata.EventTimestamp.LAST_RUNTIME
self.assertEqual(event_object.timestamp_desc, description_expected)
trigger_expected = u'DAILY'
self.assertEqual(event_object.trigger, trigger_expected)
comment_expected = (
u'Keeps your Google software up to date. If this task is disabled or '
u'stopped, your Google software will not be kept up to date, meaning '
u'security vulnerabilities that may arise cannot be fixed and '
u'features may not work. This task uninstalls itself when there is '
u'no Google software using it.')
self.assertEqual(event_object.comment, comment_expected)
expected_timestamp = timelib.Timestamp.CopyFromString(
u'2013-08-24 12:42:00.112')
self.assertEqual(event_object.timestamp, expected_timestamp)
# Parse second event. Same metadata; different timestamp event.
event_object = event_objects[1]
self.assertEqual(event_object.application, application_expected)
self.assertEqual(event_object.username, username_expected)
self.assertEqual(event_object.trigger, trigger_expected)
self.assertEqual(event_object.comment, comment_expected)
description_expected = u'Scheduled To Start'
self.assertEqual(event_object.timestamp_desc, description_expected)
expected_timestamp = timelib.Timestamp.CopyFromString(
u'2013-07-12 15:42:00')
self.assertEqual(event_object.timestamp, expected_timestamp)
expected_msg = (
u'Application: C:\\Program Files (x86)\\Google\\Update\\'
u'GoogleUpdate.exe /ua /installsource scheduler '
u'Scheduled by: Brian '
u'Run Iteration: DAILY')
expected_msg_short = (
u'Application: C:\\Program Files (x86)\\Google\\Update\\'
u'GoogleUpdate.exe /ua /insta...')
self._TestGetMessageStrings(event_object, expected_msg, expected_msg_short)
if __name__ == '__main__':
unittest.main()
|
ostree/plaso
|
tests/parsers/winjob.py
|
Python
|
apache-2.0
| 3,104
|
[
"Brian"
] |
fe0eb3c2c1b164a6d39d9ab71e1e9a473e4c09d20bc5e8a94eea4e82fe60f83b
|
import itertools
import numpy as np
from scipy import sparse
from scipy.stats import norm
from scipy.optimize import minimize, minimize_scalar
from scipy.sparse import csc_matrix, linalg as sla
from functools import partial
from collections import deque
from pygfl.solver import TrailSolver
class GaussianKnown:
'''
A simple Gaussian distribution with known mean and stdev.
'''
def __init__(self, mean, stdev):
self.mean = mean
self.stdev = stdev
def pdf(self, data):
return norm.pdf(data, loc=self.mean, scale=self.stdev)
def sample(self):
return np.random.normal(loc=self.mean, scale=self.stdev)
def noisy_pdf(self, data):
return norm.pdf(data, loc=self.mean, scale=np.sqrt(self.stdev**2 + 1))
def __repr__(self):
return 'N({:.2f}, {:.2f}^2)'.format(self.mean, self.stdev)
class SmoothedFdr(object):
def __init__(self, signal_dist, null_dist, penalties_cross_x=None):
self.signal_dist = signal_dist
self.null_dist = null_dist
if penalties_cross_x is None:
self.penalties_cross_x = np.dot
else:
self.penalties_cross_x = penalties_cross_x
self.w_iters = []
self.beta_iters = []
self.c_iters = []
self.delta_iters = []
# ''' Load the graph fused lasso library '''
# graphfl_lib = cdll.LoadLibrary('libgraphfl.so')
# self.graphfl_weight = graphfl_lib.graph_fused_lasso_weight_warm
# self.graphfl_weight.restype = c_int
# self.graphfl_weight.argtypes = [c_int, ndpointer(c_double, flags='C_CONTIGUOUS'), ndpointer(c_double, flags='C_CONTIGUOUS'),
# c_int, ndpointer(c_int, flags='C_CONTIGUOUS'), ndpointer(c_int, flags='C_CONTIGUOUS'),
# c_double, c_double, c_double, c_int, c_double,
# ndpointer(c_double, flags='C_CONTIGUOUS'), ndpointer(c_double, flags='C_CONTIGUOUS'), ndpointer(c_double, flags='C_CONTIGUOUS')]
self.solver = TrailSolver()
def add_step(self, w, beta, c, delta):
self.w_iters.append(w)
self.beta_iters.append(beta)
self.c_iters.append(c)
self.delta_iters.append(delta)
def finish(self):
self.w_iters = np.array(self.w_iters)
self.beta_iters = np.array(self.beta_iters)
self.c_iters = np.array(self.c_iters)
self.delta_iters = np.array(self.delta_iters)
def reset(self):
self.w_iters = []
self.beta_iters = []
self.c_iters = []
self.delta_iters = []
def solution_path(self, data, penalties, dof_tolerance=1e-4,
min_lambda=0.20, max_lambda=1.5, lambda_bins=30,
converge=0.00001, max_steps=100, m_converge=0.00001,
m_max_steps=20, cd_converge=0.00001, cd_max_steps=1000, verbose=0, dual_solver='graph',
admm_alpha=1., admm_inflate=2., admm_adaptive=False, initial_values=None,
grid_data=None, grid_map=None):
'''Follows the solution path of the generalized lasso to find the best lambda value.'''
lambda_grid = np.exp(np.linspace(np.log(max_lambda), np.log(min_lambda), lambda_bins))
aic_trace = np.zeros(lambda_grid.shape) # The AIC score for each lambda value
aicc_trace = np.zeros(lambda_grid.shape) # The AICc score for each lambda value (correcting for finite sample size)
bic_trace = np.zeros(lambda_grid.shape) # The BIC score for each lambda value
dof_trace = np.zeros(lambda_grid.shape) # The degrees of freedom of each final solution
log_likelihood_trace = np.zeros(lambda_grid.shape)
beta_trace = []
u_trace = []
w_trace = []
c_trace = []
results_trace = []
best_idx = None
best_plateaus = None
flat_data = data.flatten()
edges = penalties[3] if dual_solver == 'graph' else None
if grid_data is not None:
grid_points = np.zeros(grid_data.shape)
grid_points[:,:] = np.nan
for i, _lambda in enumerate(lambda_grid):
if verbose:
print('#{0} Lambda = {1}'.format(i, _lambda))
# Clear out all the info from the previous run
self.reset()
# Fit to the final values
results = self.run(flat_data, penalties, _lambda=_lambda, converge=converge, max_steps=max_steps,
m_converge=m_converge, m_max_steps=m_max_steps, cd_converge=cd_converge,
cd_max_steps=cd_max_steps, verbose=verbose, dual_solver=dual_solver,
admm_alpha=admm_alpha, admm_inflate=admm_inflate, admm_adaptive=admm_adaptive,
initial_values=initial_values)
if verbose:
print('Calculating degrees of freedom')
# Create a grid structure out of the vector of betas
if grid_map is not None:
grid_points[grid_map != -1] = results['beta'][grid_map[grid_map != -1]]
else:
grid_points = results['beta'].reshape(data.shape)
# Count the number of free parameters in the grid (dof)
plateaus = calc_plateaus(grid_points, dof_tolerance, edges=edges)
dof_trace[i] = len(plateaus)
#dof_trace[i] = (np.abs(penalties.dot(results['beta'])) >= dof_tolerance).sum() + 1 # Use the naive DoF
if verbose:
print('Calculating AIC')
# Get the negative log-likelihood
log_likelihood_trace[i] = -self._data_negative_log_likelihood(flat_data, results['c'])
# Calculate AIC = 2k - 2ln(L)
aic_trace[i] = 2. * dof_trace[i] - 2. * log_likelihood_trace[i]
# Calculate AICc = AIC + 2k * (k+1) / (n - k - 1)
aicc_trace[i] = aic_trace[i] + 2 * dof_trace[i] * (dof_trace[i]+1) / (flat_data.shape[0] - dof_trace[i] - 1.)
# Calculate BIC = -2ln(L) + k * (ln(n) - ln(2pi))
bic_trace[i] = -2 * log_likelihood_trace[i] + dof_trace[i] * (np.log(len(flat_data)) - np.log(2 * np.pi))
# Track the best model thus far
if best_idx is None or bic_trace[i] < bic_trace[best_idx]:
best_idx = i
best_plateaus = plateaus
# Save the final run parameters to use for warm-starting the next iteration
initial_values = results
# Save the trace of all the resulting parameters
beta_trace.append(results['beta'])
u_trace.append(results['u'])
w_trace.append(results['w'])
c_trace.append(results['c'])
if verbose:
print('DoF: {0} AIC: {1} AICc: {2} BIC: {3}'.format(dof_trace[i], aic_trace[i], aicc_trace[i], bic_trace[i]))
if verbose:
print('Best setting (by BIC): lambda={0} [DoF: {1}, AIC: {2}, AICc: {3} BIC: {4}]'.format(lambda_grid[best_idx], dof_trace[best_idx], aic_trace[best_idx], aicc_trace[best_idx], bic_trace[best_idx]))
return {'aic': aic_trace,
'aicc': aicc_trace,
'bic': bic_trace,
'dof': dof_trace,
'loglikelihood': log_likelihood_trace,
'beta': np.array(beta_trace),
'u': np.array(u_trace),
'w': np.array(w_trace),
'c': np.array(c_trace),
'lambda': lambda_grid,
'best': best_idx,
'plateaus': best_plateaus}
def run(self, data, penalties, _lambda=0.1, converge=0.00001, max_steps=100, m_converge=0.00001,
m_max_steps=100, cd_converge=0.00001, cd_max_steps=100, verbose=0, dual_solver='graph',
admm_alpha=1., admm_inflate=2., admm_adaptive=False, initial_values=None):
'''Runs the Expectation-Maximization algorithm for the data with the given penalty matrix.'''
delta = converge + 1
if initial_values is None:
beta = np.zeros(data.shape)
prior_prob = np.exp(beta) / (1 + np.exp(beta))
u = initial_values
else:
beta = initial_values['beta']
prior_prob = initial_values['c']
u = initial_values['u']
prev_nll = 0
cur_step = 0
while delta > converge and cur_step < max_steps:
if verbose:
print('Step #{0}'.format(cur_step))
if verbose:
print('\tE-step...')
# Get the likelihood weights vector (E-step)
post_prob = self._e_step(data, prior_prob)
if verbose:
print('\tM-step...')
# Find beta using an alternating Taylor approximation and convex optimization (M-step)
beta, u = self._m_step(beta, prior_prob, post_prob, penalties, _lambda,
m_converge, m_max_steps,
cd_converge, cd_max_steps,
verbose, dual_solver,
admm_adaptive=admm_adaptive,
admm_inflate=admm_inflate,
admm_alpha=admm_alpha,
u0=u)
# Get the signal probabilities
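            # ilogit is assumed to be the inverse logit (logistic sigmoid),
            # ilogit(x) = 1 / (1 + exp(-x)), defined elsewhere in this package.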
prior_prob = ilogit(beta)
cur_nll = self._data_negative_log_likelihood(data, prior_prob)
if dual_solver == 'admm':
# Get the negative log-likelihood of the data given our new parameters
cur_nll += _lambda * np.abs(u['r']).sum()
# Track the change in log-likelihood to see if we've converged
delta = np.abs(cur_nll - prev_nll) / (prev_nll + converge)
if verbose:
print('\tDelta: {0}'.format(delta))
# Track the step
self.add_step(post_prob, beta, prior_prob, delta)
# Increment the step counter
cur_step += 1
# Update the negative log-likelihood tracker
prev_nll = cur_nll
# DEBUGGING
if verbose:
print('\tbeta: [{0:.4f}, {1:.4f}]'.format(beta.min(), beta.max()))
print('\tprior_prob: [{0:.4f}, {1:.4f}]'.format(prior_prob.min(), prior_prob.max()))
print('\tpost_prob: [{0:.4f}, {1:.4f}]'.format(post_prob.min(), post_prob.max()))
if dual_solver != 'graph':
print('\tdegrees of freedom: {0}'.format((np.abs(penalties.dot(beta)) >= 1e-4).sum()))
# Return the results of the run
return {'beta': beta, 'u': u, 'w': post_prob, 'c': prior_prob}
def _data_negative_log_likelihood(self, data, prior_prob):
'''Calculate the negative log-likelihood of the data given the weights.'''
signal_weight = prior_prob * self.signal_dist.pdf(data)
null_weight = (1-prior_prob) * self.null_dist.pdf(data)
return -np.log(signal_weight + null_weight).sum()
def _e_step(self, data, prior_prob):
'''Calculate the complete-data sufficient statistics (weights vector).'''
signal_weight = prior_prob * self.signal_dist.pdf(data)
null_weight = (1-prior_prob) * self.null_dist.pdf(data)
post_prob = signal_weight / (signal_weight + null_weight)
return post_prob
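    # The E-step above is Bayes' rule for the two-groups model:
    #   P(signal | z) = c * f1(z) / (c * f1(z) + (1 - c) * f0(z))
    # with prior signal probability c and signal/null densities f1, f0.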
def _m_step(self, beta, prior_prob, post_prob, penalties,
_lambda, converge, max_steps,
cd_converge, cd_max_steps,
verbose, dual_solver, u0=None,
admm_alpha=1., admm_inflate=2., admm_adaptive=False):
'''
Alternating Second-order Taylor-series expansion about the current iterate
and coordinate descent to optimize Beta.
'''
prev_nll = self._m_log_likelihood(post_prob, beta)
delta = converge + 1
u = u0
cur_step = 0
while delta > converge and cur_step < max_steps:
if verbose > 1:
print('\t\tM-Step iteration #{0}'.format(cur_step))
print('\t\tTaylor approximation...')
# Cache the exponentiated beta
exp_beta = np.exp(beta)
# Form the parameters for our weighted least squares
if dual_solver != 'admm' and dual_solver != 'graph':
# weights is a diagonal matrix, represented as a vector for efficiency
weights = 0.5 * exp_beta / (1 + exp_beta)**2
y = (1+exp_beta)**2 * post_prob / exp_beta + beta - (1 + exp_beta)
if verbose > 1:
print('\t\tForming dual...')
x = np.sqrt(weights) * y
A = (1. / np.sqrt(weights))[:,np.newaxis] * penalties.T
else:
weights = (prior_prob * (1 - prior_prob))
y = beta - (prior_prob - post_prob) / weights
print(weights)
print(y)
if dual_solver == 'cd':
# Solve the dual via coordinate descent
u = self._u_coord_descent(x, A, _lambda, cd_converge, cd_max_steps, verbose > 1, u0=u)
elif dual_solver == 'sls':
# Solve the dual via sequential least squares
u = self._u_slsqp(x, A, _lambda, verbose > 1, u0=u)
elif dual_solver == 'lbfgs':
# Solve the dual via L-BFGS-B
u = self._u_lbfgsb(x, A, _lambda, verbose > 1, u0=u)
elif dual_solver == 'admm':
# Solve the dual via alternating direction methods of multipliers
#u = self._u_admm_1dfusedlasso(y, weights, _lambda, cd_converge, cd_max_steps, verbose > 1, initial_values=u)
#u = self._u_admm(y, weights, _lambda, penalties, cd_converge, cd_max_steps, verbose > 1, initial_values=u)
u = self._u_admm_lucache(y, weights, _lambda, penalties, cd_converge, cd_max_steps,
verbose > 1, initial_values=u, inflate=admm_inflate,
adaptive=admm_adaptive, alpha=admm_alpha)
beta = u['x']
elif dual_solver == 'graph':
u = self._graph_fused_lasso(y, weights, _lambda, penalties[0], penalties[1], penalties[2], penalties[3], cd_converge, cd_max_steps, max(0, verbose - 1), admm_alpha, admm_inflate, initial_values=u)
beta = u['beta']
# if np.abs(beta).max() > 20:
# beta = np.clip(beta, -20, 20)
# u = None
else:
raise Exception('Unknown solver: {0}'.format(dual_solver))
if dual_solver != 'admm' and dual_solver != 'graph':
# Back out beta from the dual solution
beta = y - (1. / weights) * penalties.T.dot(u)
# Get the current log-likelihood
cur_nll = self._m_log_likelihood(post_prob, beta)
# Track the convergence
delta = np.abs(prev_nll - cur_nll) / (prev_nll + converge)
if verbose > 1:
print('\t\tM-step delta: {0}'.format(delta))
# Increment the step counter
cur_step += 1
# Update the negative log-likelihood tracker
prev_nll = cur_nll
return beta, u
def _m_log_likelihood(self, post_prob, beta):
'''Calculate the log-likelihood of the betas given the weights and data.'''
return (np.log(1 + np.exp(beta)) - post_prob * beta).sum()
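    # The expression above is exactly the negative Bernoulli log-likelihood
    # with natural parameter beta (logit link) and "soft" responses post_prob:
    #   -sum(w * log(p) + (1 - w) * log(1 - p))  with  p = ilogit(beta).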
def _graph_fused_lasso(self, y, weights, _lambda, ntrails, trails, breakpoints, edges, converge, max_steps, verbose, alpha, inflate, initial_values=None):
'''Solve for u using a super fast graph fused lasso library that has an optimized ADMM routine.'''
if verbose:
print('\t\tSolving via Graph Fused Lasso')
# if initial_values is None:
# beta = np.zeros(y.shape, dtype='double')
# z = np.zeros(breakpoints[-1], dtype='double')
# u = np.zeros(breakpoints[-1], dtype='double')
# else:
# beta = initial_values['beta']
# z = initial_values['z']
# u = initial_values['u']
# n = y.shape[0]
# self.graphfl_weight(n, y, weights, ntrails, trails, breakpoints, _lambda, alpha, inflate, max_steps, converge, beta, z, u)
# return {'beta': beta, 'z': z, 'u': u }
self.solver.alpha = alpha
self.solver.inflate = inflate
self.solver.maxsteps = max_steps
self.solver.converge = converge
self.solver.set_data(y, edges, ntrails, trails, breakpoints, weights=weights)
if initial_values is not None:
self.solver.beta = initial_values['beta']
self.solver.z = initial_values['z']
self.solver.u = initial_values['u']
self.solver.solve(_lambda)
return {'beta': self.solver.beta, 'z': self.solver.z, 'u': self.solver.u }
def _u_admm_lucache(self, y, weights, _lambda, D, converge_threshold, max_steps, verbose, alpha=1.8, initial_values=None, inflate=2., adaptive=False):
'''Solve for u using alternating direction method of multipliers with a cached LU decomposition.'''
if verbose:
print('\t\tSolving u via Alternating Direction Method of Multipliers')
n = len(y)
m = D.shape[0]
a = inflate * _lambda # step-size parameter
# Initialize primal and dual variables from warm start
if initial_values is None:
# Graph Laplacian
L = csc_matrix(D.T.dot(D) + csc_matrix(np.eye(n)))
# Cache the LU decomposition
lu_factor = sla.splu(L, permc_spec='MMD_AT_PLUS_A')
x = np.array([y.mean()] * n) # likelihood term
z = np.zeros(n) # slack variable for likelihood
r = np.zeros(m) # penalty term
s = np.zeros(m) # slack variable for penalty
u_dual = np.zeros(n) # scaled dual variable for constraint x = z
t_dual = np.zeros(m) # scaled dual variable for constraint r = s
else:
lu_factor = initial_values['lu_factor']
x = initial_values['x']
z = initial_values['z']
r = initial_values['r']
s = initial_values['s']
u_dual = initial_values['u_dual']
t_dual = initial_values['t_dual']
primal_trace = []
dual_trace = []
converged = False
cur_step = 0
D_full = D
while not converged and cur_step < max_steps:
# Update x
x = (weights * y + a * (z - u_dual)) / (weights + a)
x_accel = alpha * x + (1 - alpha) * z # over-relaxation
# Update constraint term r
arg = s - t_dual
local_lambda = (_lambda - np.abs(arg) / 2.).clip(0) if adaptive else _lambda
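            # _soft_threshold is assumed to be the usual elementwise operator
            #   S_k(a) = sign(a) * max(|a| - k, 0),
            # i.e. the proximal operator of k * |.|_1.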
r = _soft_threshold(arg, local_lambda / a)
r_accel = alpha * r + (1 - alpha) * s
# Projection to constraint set
arg = x_accel + u_dual + D.T.dot(r_accel + t_dual)
z_new = lu_factor.solve(arg)
s_new = D.dot(z_new)
dual_residual_u = a * (z_new - z)
dual_residual_t = a * (s_new - s)
z = z_new
s = s_new
# Dual update
primal_residual_x = x_accel - z
primal_residual_r = r_accel - s
u_dual = u_dual + primal_residual_x
t_dual = t_dual + primal_residual_r
# Check convergence
primal_resnorm = np.sqrt((np.array([i for i in primal_residual_x] + [i for i in primal_residual_r])**2).mean())
dual_resnorm = np.sqrt((np.array([i for i in dual_residual_u] + [i for i in dual_residual_t])**2).mean())
primal_trace.append(primal_resnorm)
dual_trace.append(dual_resnorm)
converged = dual_resnorm < converge_threshold and primal_resnorm < converge_threshold
if primal_resnorm > 5 * dual_resnorm:
a *= inflate
u_dual /= inflate
t_dual /= inflate
elif dual_resnorm > 5 * primal_resnorm:
a /= inflate
u_dual *= inflate
t_dual *= inflate
# Update the step counter
cur_step += 1
if verbose and cur_step % 100 == 0:
print('\t\t\tStep #{0}: dual_resnorm: {1:.6f} primal_resnorm: {2:.6f}'.format(cur_step, dual_resnorm, primal_resnorm))
return {'x': x, 'r': r, 'z': z, 's': s, 'u_dual': u_dual, 't_dual': t_dual,
'primal_trace': primal_trace, 'dual_trace': dual_trace, 'steps': cur_step,
'lu_factor': lu_factor}
def _u_admm(self, y, weights, _lambda, D, converge_threshold, max_steps, verbose, alpha=1.0, initial_values=None):
'''Solve for u using alternating direction method of multipliers.'''
if verbose:
print('\t\tSolving u via Alternating Direction Method of Multipliers')
n = len(y)
m = D.shape[0]
a = _lambda # step-size parameter
# Set up system involving graph Laplacian
L = D.T.dot(D)
W_over_a = np.diag(weights / a)
x_denominator = W_over_a + L
#x_denominator = sparse.linalg.inv(W_over_a + L)
# Initialize primal and dual variables
if initial_values is None:
x = np.array([y.mean()] * n)
z = np.zeros(m)
u = np.zeros(m)
else:
x = initial_values['x']
z = initial_values['z']
u = initial_values['u']
primal_trace = []
dual_trace = []
converged = False
cur_step = 0
while not converged and cur_step < max_steps:
# Update x
            # Keep the 1/a scaling consistent with x_denominator = W/a + L:
            # (W/a + L) x = W y / a + D^T (z - u/a)
            x_numerator = weights / a * y + D.T.dot(z - u / a)
x = np.linalg.solve(x_denominator, x_numerator)
Dx = D.dot(x)
# Update z
Dx_relaxed = alpha * Dx + (1 - alpha) * z # over-relax Dx
z_new = _soft_threshold(Dx_relaxed + u / a, _lambda / a)
dual_residual = a * D.T.dot(z_new - z)
z = z_new
primal_residual = Dx_relaxed - z
# Update u
u = u + a * primal_residual
# Check convergence
primal_resnorm = np.sqrt((primal_residual ** 2).mean())
dual_resnorm = np.sqrt((dual_residual ** 2).mean())
primal_trace.append(primal_resnorm)
dual_trace.append(dual_resnorm)
converged = dual_resnorm < converge_threshold and primal_resnorm < converge_threshold
            # Update the step-size parameter based on the norms of the primal
            # and dual residuals. This is the varying penalty extension to
            # standard ADMM.
            if primal_resnorm > 10 * dual_resnorm:
                a *= 2.
            elif dual_resnorm > 10 * primal_resnorm:
                a *= 0.5
            # Recompute the x-update system, since it depends on the step size.
            # TODO: is this worth it? We pay an extra system rebuild to vary the step size.
            W_over_a = np.diag(weights / a)
            x_denominator = W_over_a + L
# Update the step counter
cur_step += 1
if verbose and cur_step % 100 == 0:
print('\t\t\tStep #{0}: dual_resnorm: {1:.6f} primal_resnorm: {2:.6f}'.format(cur_step, dual_resnorm, primal_resnorm))
        # Count both positive and negative jumps when computing degrees of freedom
        dof = np.sum(np.abs(Dx) > converge_threshold) + 1.
AIC = np.sum((y - x)**2) + 2 * dof
return {'x': x, 'z': z, 'u': u, 'dof': dof, 'AIC': AIC}
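    # A note on the scores returned above: dof counts the plateaus of the fitted
    # signal (nonzero first differences plus one), and AIC is computed as
    # RSS + 2 * dof, i.e. the Gaussian log-likelihood up to constants under an
    # assumed unit noise variance.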
def _u_admm_1dfusedlasso(self, y, W, _lambda, converge_threshold, max_steps, verbose, alpha=1.0, initial_values=None):
'''Solve for u using alternating direction method of multipliers. Note that this method only works for the 1-D fused lasso case.'''
if verbose:
print('\t\tSolving u via Alternating Direction Method of Multipliers (1-D fused lasso)')
n = len(y)
m = n - 1
a = _lambda
# The D matrix is the first-difference operator. K is the matrix (W + a D^T D)
# where W is the diagonal matrix of weights. We use a tridiagonal representation
# of K.
Kd = np.array([a] + [2*a] * (n-2) + [a]) + W # diagonal entries
Kl = np.array([-a] * (n-1)) # below the diagonal
Ku = np.array([-a] * (n-1)) # above the diagonal
# Initialize primal and dual variables
if initial_values is None:
x = np.array([y.mean()] * n)
z = np.zeros(m)
u = np.zeros(m)
else:
x = initial_values['x']
z = initial_values['z']
u = initial_values['u']
primal_trace = []
dual_trace = []
converged = False
cur_step = 0
while not converged and cur_step < max_steps:
# Update x
out = _1d_fused_lasso_crossprod(a*z - u)
x = tridiagonal_solve(Kl, Ku, Kd, W * y + out)
Dx = np.ediff1d(x)
# Update z
Dx_hat = alpha * Dx + (1 - alpha) * z # Over-relaxation
z_new = _soft_threshold(Dx_hat + u / a, _lambda / a)
dual_residual = a * _1d_fused_lasso_crossprod(z_new - z)
z = z_new
primal_residual = Dx - z
#primal_residual = Dx_hat - z
# Update u
u = (u + a * primal_residual).clip(-_lambda, _lambda)
# Check convergence
primal_resnorm = np.sqrt((primal_residual ** 2).mean())
dual_resnorm = np.sqrt((dual_residual ** 2).mean())
primal_trace.append(primal_resnorm)
dual_trace.append(dual_resnorm)
converged = dual_resnorm < converge_threshold and primal_resnorm < converge_threshold
            # Update the step-size parameter based on the norms of the primal and dual residuals
            if primal_resnorm > 10 * dual_resnorm:
                a *= 2.
            elif dual_resnorm > 10 * primal_resnorm:
                a *= 0.5
            # Rebuild the tridiagonal system, since it depends on the step size
            Kd = np.array([a] + [2*a] * (n-2) + [a]) + W # diagonal entries
            Kl = np.array([-a] * (n-1)) # below the diagonal
            Ku = np.array([-a] * (n-1)) # above the diagonal
cur_step += 1
if verbose and cur_step % 100 == 0:
print('\t\t\tStep #{0}: dual_resnorm: {1:.6f} primal_resnorm: {2:.6f}'.format(cur_step, dual_resnorm, primal_resnorm))
        # Count both positive and negative jumps when computing degrees of freedom
        dof = np.sum(np.abs(Dx) > converge_threshold) + 1.
AIC = np.sum((y - x)**2) + 2 * dof
return {'x': x, 'z': z, 'u': u, 'dof': dof, 'AIC': AIC}
def _u_coord_descent(self, x, A, _lambda, converge, max_steps, verbose, u0=None):
'''Solve for u using coordinate descent.'''
if verbose:
print('\t\tSolving u via Coordinate Descent')
u = u0 if u0 is not None else np.zeros(A.shape[1])
l2_norm_A = (A * A).sum(axis=0)
r = x - A.dot(u)
delta = converge + 1
prev_objective = _u_objective_func(u, x, A)
cur_step = 0
while delta > converge and cur_step < max_steps:
# Update each coordinate one at a time.
for coord in range(len(u)):
prev_u = u[coord]
next_u = prev_u + A.T[coord].dot(r) / l2_norm_A[coord]
u[coord] = min(_lambda, max(-_lambda, next_u))
                r += A.T[coord] * (prev_u - u[coord])
# Track the change in the objective function value
cur_objective = _u_objective_func(u, x, A)
delta = np.abs(prev_objective - cur_objective) / (prev_objective + converge)
if verbose and cur_step % 100 == 0:
print('\t\t\tStep #{0}: Objective: {1:.6f} CD Delta: {2:.6f}'.format(cur_step, cur_objective, delta))
# Increment the step counter and update the previous objective value
cur_step += 1
prev_objective = cur_objective
return u
def _u_slsqp(self, x, A, _lambda, verbose, u0=None):
'''Solve for u using sequential least squares.'''
if verbose:
print('\t\tSolving u via Sequential Least Squares')
if u0 is None:
u0 = np.zeros(A.shape[1])
# Create our box constraints
        bounds = [(-_lambda, _lambda) for _ in u0]
results = minimize(_u_objective_func, u0,
args=(x, A),
jac=_u_objective_deriv,
bounds=bounds,
method='SLSQP',
options={'disp': False, 'maxiter': 1000})
if verbose:
print('\t\t\t{0}'.format(results.message))
print('\t\t\tFunction evaluations: {0}'.format(results.nfev))
print('\t\t\tGradient evaluations: {0}'.format(results.njev))
print('\t\t\tu: [{0}, {1}]'.format(results.x.min(), results.x.max()))
return results.x
def _u_lbfgsb(self, x, A, _lambda, verbose, u0=None):
'''Solve for u using L-BFGS-B.'''
if verbose:
print('\t\tSolving u via L-BFGS-B')
if u0 is None:
u0 = np.zeros(A.shape[1])
# Create our box constraints
bounds = [(-_lambda, _lambda) for _ in u0]
# Fit
results = minimize(_u_objective_func, u0, args=(x, A), method='L-BFGS-B', bounds=bounds, options={'disp': verbose})
return results.x
def plateau_regression(self, plateaus, data, grid_map=None, verbose=False):
'''Perform unpenalized 1-d regression for each of the plateaus.'''
weights = np.zeros(data.shape)
for i,(level,p) in enumerate(plateaus):
if verbose:
print('\tPlateau #{0}'.format(i+1))
# Get the subset of grid points for this plateau
if grid_map is not None:
plateau_data = np.array([data[grid_map[x,y]] for x,y in p])
else:
plateau_data = np.array([data[x,y] for x,y in p])
w = single_plateau_regression(plateau_data, self.signal_dist, self.null_dist)
for idx in p:
weights[idx if grid_map is None else grid_map[idx[0], idx[1]]] = w
posteriors = self._e_step(data, weights)
weights = weights.flatten()
return (weights, posteriors)
def _u_objective_func(u, x, A):
return np.linalg.norm(x - A.dot(u))**2
def _u_objective_deriv(u, x, A):
return 2*A.T.dot(A.dot(u) - x)
def _u_slsqp_constraint_func(idx, _lambda, u):
'''Constraint function for the i'th value of u.'''
return np.array([_lambda - np.abs(u[idx])])
def _u_slsqp_constraint_deriv(idx, u):
jac = np.zeros(len(u))
jac[idx] = -np.sign(u[idx])
return jac
def _1d_fused_lasso_crossprod(x):
'''Efficiently compute the cross-product D^T x, where D is the first-differences matrix.'''
return -np.ediff1d(x, to_begin=x[0], to_end=-x[-1])
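# Illustrative check of the identity above: for the (m x n) first-difference
# matrix D with rows e_{i+1} - e_i, the cross-product D^T v is
#   [-v_0, v_0 - v_1, ..., v_{m-2} - v_{m-1}, v_{m-1}].
# For instance, with v = np.array([1., 2.]) (so n = 3):
#   _1d_fused_lasso_crossprod(v) -> array([-1., -1., 2.])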
def _soft_threshold(x, _lambda):
return np.sign(x) * (np.abs(x) - _lambda).clip(0)
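# Worked example (illustrative): soft thresholding shrinks each entry toward zero
# by _lambda and zeroes anything within _lambda of the origin, e.g.
#   _soft_threshold(np.array([-3., -0.5, 0.5, 2.]), 1.) -> array([-2., -0., 0., 1.])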
## Tri-Diagonal Matrix Algorithm (a.k.a Thomas algorithm) solver
## Source: http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm
def tridiagonal_solve(a, b, c, f):
    '''Solve a tridiagonal system with sub-diagonal a, super-diagonal b, main diagonal c, and right-hand side f.'''
    alpha = [0]
    beta = [0]
    n = len(f)
    x = [0] * n
    for i in range(n-1):
        # Row i couples x[i-1] (via a[i-1]), x[i] (via c[i]), and x[i+1] (via b[i]);
        # row 0 has no sub-diagonal entry.
        sub = a[i-1] if i > 0 else 0.
        denom = sub * alpha[i] + c[i]
        alpha.append(-b[i] / denom)
        beta.append((f[i] - sub * beta[i]) / denom)
x[n-1] = (f[n-1] - a[n-2]*beta[n-1])/(c[n-1] + a[n-2]*alpha[n-1])
for i in reversed(range(n-1)):
x[i] = alpha[i+1]*x[i+1] + beta[i+1]
return np.array(x)
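# Quick sanity check (an illustrative sketch, not part of the solver): the
# tridiagonal solve should agree with a dense solve on the same system.
#   n = 5
#   sub = -np.ones(n - 1); sup = -np.ones(n - 1); diag = 4. * np.ones(n)
#   K = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
#   rhs = np.arange(1., n + 1.)
#   np.allclose(tridiagonal_solve(sub, sup, diag, rhs), np.linalg.solve(K, rhs))  # True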
def ilogit(x):
return 1. / (1. + np.exp(-x))
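# e.g. ilogit(0.) == 0.5, with outputs approaching 1 for large positive inputs
# and 0 for large negative inputs.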
def calc_plateaus(beta, rel_tol=1e-4, edges=None, verbose=0):
'''Calculate the plateaus (degrees of freedom) of a 1d or 2d grid of beta values in linear time.'''
to_check = deque(itertools.product(*[range(x) for x in beta.shape])) if edges is None else deque(range(len(beta)))
check_map = np.zeros(beta.shape, dtype=bool)
check_map[np.isnan(beta)] = True
plateaus = []
if verbose:
print('\tCalculating plateaus...')
if verbose > 1:
print('\tIndices to check {0} {1}'.format(len(to_check), check_map.shape))
# Loop until every beta index has been checked
while to_check:
if verbose > 1:
print('\t\tPlateau #{0}'.format(len(plateaus) + 1))
# Get the next unchecked point on the grid
idx = to_check.popleft()
        # If this one has already been checked, keep popping until we find an unchecked index
        while to_check and check_map[idx]:
            idx = to_check.popleft()
# Edge case -- If we went through all the indices without reaching an unchecked one.
if check_map[idx]:
break
# Create the plateau and calculate the inclusion conditions
cur_plateau = set([idx])
cur_unchecked = deque([idx])
val = beta[idx]
min_member = val - rel_tol
max_member = val + rel_tol
# Check every possible boundary of the plateau
while cur_unchecked:
idx = cur_unchecked.popleft()
# neighbors to check
local_check = []
# Generic graph case
if edges is not None:
local_check.extend(edges[idx])
# 1d case -- check left and right
elif len(beta.shape) == 1:
if idx[0] > 0:
local_check.append(idx[0] - 1) # left
if idx[0] < beta.shape[0] - 1:
local_check.append(idx[0] + 1) # right
# 2d case -- check left, right, up, and down
elif len(beta.shape) == 2:
if idx[0] > 0:
local_check.append((idx[0] - 1, idx[1])) # left
if idx[0] < beta.shape[0] - 1:
local_check.append((idx[0] + 1, idx[1])) # right
if idx[1] > 0:
local_check.append((idx[0], idx[1] - 1)) # down
if idx[1] < beta.shape[1] - 1:
local_check.append((idx[0], idx[1] + 1)) # up
# Only supports 1d and 2d cases for now
else:
raise Exception('Degrees of freedom calculation does not currently support more than 2 dimensions unless edges are specified explicitly. ({0} given)'.format(len(beta.shape)))
# Check the index's unchecked neighbors
for local_idx in local_check:
if not check_map[local_idx] \
and beta[local_idx] >= min_member \
and beta[local_idx] <= max_member:
# Label this index as being checked so it's not re-checked unnecessarily
check_map[local_idx] = True
# Add it to the plateau and the list of local unchecked locations
cur_unchecked.append(local_idx)
cur_plateau.add(local_idx)
# Track each plateau's indices
plateaus.append((val, cur_plateau))
# Returns the list of plateaus and their values
return plateaus
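# Worked example (illustrative): on a 1d array with two flat regions,
#   calc_plateaus(np.array([1., 1., 1., 3., 3.]))
# returns two plateaus -- (1.0, {(0,), (1,), (2,)}) and (3.0, {(3,), (4,)}) --
# so the degrees of freedom of that fit is len(plateaus) = 2.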
def plateau_loss_func(c, data, signal_dist, null_dist):
'''The negative log-likelihood function for a plateau.'''
return -np.log(c * signal_dist.pdf(data) + (1. - c) * null_dist.pdf(data)).sum()
def single_plateau_regression(data, signal_dist, null_dist):
'''Perform unpenalized 1-d regression on all of the points in a plateau.'''
return minimize_scalar(plateau_loss_func, args=(data, signal_dist, null_dist), bounds=(0,1), method='Bounded').x
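# Usage sketch (illustrative, assuming scipy.stats frozen distributions for the
# signal and null components):
#   from scipy.stats import norm
#   c = single_plateau_regression(data, norm(2., 1.), norm(0., 1.))
# returns the mixing weight c in [0, 1] that maximizes the plateau's likelihood.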
|
tansey/smoothfdr
|
smoothfdr/smoothed_fdr.py
|
Python
|
mit
| 35,721
|
[
"Gaussian"
] |
3fe4d04ac909ae548696c35ffc4555caed31173f2bafe880cd73b19bfe582b2f
|
import unittest
import os
from datetime import datetime
from tempfile import mkstemp
from invenio.testutils import make_test_suite, run_test_suite
from invenio import bibupload
from invenio import bibtask
from invenio.dbquery import run_sql
from invenio.search_engine_utils import get_fieldvalues
from invenio import oai_harvest_daemon, \
oai_harvest_dblayer
from invenio.bibdocfile import BibRecDocs, \
InvenioBibDocFileError
EXAMPLE_PDF_URL_1 = "http://invenio-software.org/download/" \
"invenio-demo-site-files/9812226.pdf"
EXAMPLE_PDF_URL_2 = "http://invenio-software.org/download/" \
"invenio-demo-site-files/0105155.pdf"
RECID = 20
ARXIV_ID = '1005.1481'
ARXIV_OAI_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
<responseDate>2013-04-16T13:50:10Z</responseDate>
<request verb="GetRecord" identifier="oai:arXiv.org:1304.4214" metadataPrefix="arXivRaw">http://export.arxiv.org/oai2</request>
<GetRecord>
<record>
<header>
<identifier>oai:arXiv.org:1304.4214</identifier>
<datestamp>2013-04-16</datestamp>
<setSpec>physics:cond-mat</setSpec>
</header>
<metadata>
<arXivRaw xmlns="http://arxiv.org/OAI/arXivRaw/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://arxiv.org/OAI/arXivRaw/ http://arxiv.org/OAI/arXivRaw.xsd">
<id>1304.4214</id><submitter>John M. Tranquada</submitter><version version="v%s"><date>Mon, 15 Apr 2013 19:33:21 GMT</date><size>609kb</size><source_type>D</source_type></version><title>Neutron Scattering and Its Application to Strongly Correlated Systems</title><authors>Igor A. Zaliznyak and John M. Tranquada</authors><categories>cond-mat.str-el</categories><comments>31 pages, chapter for "Strongly Correlated Systems: Experimental
Techniques", edited by A. Avella and F. Mancini</comments><license>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</license><abstract> Neutron scattering is a powerful probe of strongly correlated systems. It can
directly detect common phenomena such as magnetic order, and can be used to
determine the coupling between magnetic moments through measurements of the
spin-wave dispersions. In the absence of magnetic order, one can detect diffuse
scattering and dynamic correlations. Neutrons are also sensitive to the
arrangement of atoms in a solid (crystal structure) and lattice dynamics
(phonons). In this chapter, we provide an introduction to neutrons and neutron
sources. The neutron scattering cross section is described and formulas are
given for nuclear diffraction, phonon scattering, magnetic diffraction, and
magnon scattering. As an experimental example, we describe measurements of
antiferromagnetic order, spin dynamics, and their evolution in the
La(2-x)Ba(x)CuO(4) family of high-temperature superconductors.
</abstract></arXivRaw>
</metadata>
</record>
</GetRecord>
</OAI-PMH>
"""
class TestTask(unittest.TestCase):
def setUp(self, recid=RECID, arxiv_id=ARXIV_ID):
self.recid = recid
self.arxiv_id = arxiv_id
self.arxiv_version = 1
self.bibupload_xml = """<record>
<controlfield tag="001">%s</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">arXiv:%s</subfield>
<subfield code="9">arXiv</subfield>
<subfield code="c">hep-ph</subfield>
</datafield>
</record>""" % (recid, arxiv_id)
bibtask.setup_loggers()
bibtask.task_set_task_param('verbose', 0)
recs = bibupload.xml_marc_to_records(self.bibupload_xml)
status, dummy, err = bibupload.bibupload(recs[0], opt_mode='correct')
assert status == 0, err.strip()
assert len(get_fieldvalues(recid, '037__a')) == 1
def mocked_oai_harvest_get(prefix, baseurl, harvestpath,
verb, identifier):
temp_fd, temp_path = mkstemp()
os.write(temp_fd, ARXIV_OAI_RESPONSE % self.arxiv_version)
os.close(temp_fd)
return [temp_path]
self.oai_harvest_get = oai_harvest_daemon.oai_harvest_get
oai_harvest_daemon.oai_harvest_get = mocked_oai_harvest_get
        def mocked_get_oai_src(params=None):
return [{'baseurl': ''}]
self.get_oai_src = oai_harvest_dblayer.get_oai_src
oai_harvest_dblayer.get_oai_src = mocked_get_oai_src
def tearDown(self):
"""Helper function that restores recID 3 MARCXML"""
recs = bibupload.xml_marc_to_records(self.bibupload_xml)
bibupload.bibupload(recs[0], opt_mode='delete')
oai_harvest_daemon.oai_harvest_get = self.oai_harvest_get
oai_harvest_dblayer.get_oai_src = self.get_oai_src
def clean_bibtask(self):
from invenio.arxiv_pdf_checker import NAME
run_sql("""DELETE FROM schTASK
WHERE user = %s
ORDER BY id DESC LIMIT 1
""", [NAME])
def clean_bibupload_fft(self):
run_sql("""DELETE FROM schTASK
WHERE proc = 'bibupload:FFT'
ORDER BY id DESC LIMIT 1""")
def test_fetch_records(self):
from invenio.arxiv_pdf_checker import fetch_updated_arxiv_records
date = datetime(year=1900, month=1, day=1)
records = fetch_updated_arxiv_records(date)
self.assert_(records)
def test_task_run_core(self):
from invenio.arxiv_pdf_checker import task_run_core
self.assert_(task_run_core())
self.clean_bibtask()
self.clean_bibupload_fft()
def test_extract_arxiv_ids_from_recid(self):
from invenio.arxiv_pdf_checker import extract_arxiv_ids_from_recid
self.assertEqual(list(extract_arxiv_ids_from_recid(self.recid)), [self.arxiv_id])
def test_build_arxiv_url(self):
from invenio.arxiv_pdf_checker import build_arxiv_url
self.assert_('1012.0299' in build_arxiv_url('1012.0299', 1))
def test_record_has_fulltext(self):
from invenio.arxiv_pdf_checker import record_has_fulltext
record_has_fulltext(1)
def test_download_external_url_invalid_content_type(self):
from invenio.filedownloadutils import (download_external_url,
InvenioFileDownloadError)
from invenio.config import CFG_SITE_URL
temp_fd, temp_path = mkstemp()
os.close(temp_fd)
try:
try:
download_external_url(CFG_SITE_URL,
temp_path,
content_type='pdf')
self.fail()
except InvenioFileDownloadError:
pass
finally:
os.unlink(temp_path)
def test_download_external_url(self):
from invenio.filedownloadutils import (download_external_url,
InvenioFileDownloadError)
temp_fd, temp_path = mkstemp()
os.close(temp_fd)
try:
try:
download_external_url(EXAMPLE_PDF_URL_1,
temp_path,
content_type='pdf')
except InvenioFileDownloadError, e:
self.fail(str(e))
finally:
os.unlink(temp_path)
def test_process_one(self):
from invenio import arxiv_pdf_checker
from invenio.arxiv_pdf_checker import process_one, \
FoundExistingPdf, \
fetch_arxiv_pdf_status, \
STATUS_OK, \
AlreadyHarvested
arxiv_pdf_checker.CFG_ARXIV_URL_PATTERN = EXAMPLE_PDF_URL_1 + "?%s%s"
# Make sure there is no harvesting state stored or this test will fail
run_sql('DELETE FROM bibARXIVPDF WHERE id_bibrec = %s', [self.recid])
def look_for_fulltext(recid):
"""Look for fulltext pdf (bibdocfile) for a given recid"""
rec_info = BibRecDocs(recid)
docs = rec_info.list_bibdocs()
for doc in docs:
for d in doc.list_all_files():
if d.get_format().strip('.') in ['pdf', 'pdfa', 'PDF']:
try:
yield doc, d
                    except InvenioBibDocFileError:
pass
        # Remove all pdfs attached to the test record
for doc, docfile in look_for_fulltext(self.recid):
doc.delete_file(docfile.get_format(), docfile.get_version())
if not doc.list_all_files():
doc.expunge()
try:
process_one(self.recid)
finally:
self.clean_bibtask()
# Check for existing pdf
docs = list(look_for_fulltext(self.recid))
if not docs:
self.fail()
# Check that harvesting state is stored
status, version = fetch_arxiv_pdf_status(self.recid)
self.assertEqual(status, STATUS_OK)
self.assertEqual(version, 1)
try:
process_one(self.recid)
self.fail()
except AlreadyHarvested:
pass
# Even though the version is changed the md5 is the same
self.arxiv_version = 2
try:
process_one(self.recid)
self.fail()
except FoundExistingPdf:
pass
arxiv_pdf_checker.CFG_ARXIV_URL_PATTERN = EXAMPLE_PDF_URL_2 + "?%s%s"
self.arxiv_version = 3
try:
process_one(self.recid)
finally:
self.clean_bibtask()
# We know the PDF is attached, run process_one again
# and it needs to raise an error
try:
process_one(self.recid)
self.fail()
except AlreadyHarvested:
run_sql('DELETE FROM bibARXIVPDF WHERE id_bibrec = %s',
[self.recid])
# Restore state
for doc, docfile in docs:
doc.delete_file(docfile.get_format(), docfile.get_version())
if not doc.list_all_files():
doc.expunge()
self.clean_bibupload_fft()
TEST_SUITE = make_test_suite(TestTask)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
|
GRArmstrong/invenio-inspire-ops
|
modules/pdfchecker/lib/arxiv_pdf_checker_regression_tests.py
|
Python
|
gpl-2.0
| 10,613
|
[
"CRYSTAL"
] |
664cb92997d4ffa651e45dc672f9b1bab51126b1f69f13094d6ab32090926fe6
|
"""
====================================================================
Linear and Quadratic Discriminant Analysis with confidence ellipsoid
====================================================================
Plot the confidence ellipsoids of each class and decision boundary
"""
print(__doc__)
from scipy import linalg
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import colors
from sklearn.lda import LDA
from sklearn.qda import QDA
###############################################################################
# colormap
cmap = colors.LinearSegmentedColormap(
'red_blue_classes',
{'red': [(0, 1, 1), (1, 0.7, 0.7)],
'green': [(0, 0.7, 0.7), (1, 0.7, 0.7)],
'blue': [(0, 0.7, 0.7), (1, 1, 1)]})
plt.cm.register_cmap(cmap=cmap)
###############################################################################
# generate datasets
def dataset_fixed_cov():
'''Generate 2 Gaussians samples with the same covariance matrix'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -0.23], [0.83, .23]])
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C) + np.array([1, 1])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
def dataset_cov():
'''Generate 2 Gaussians samples with different covariance matrices'''
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0., -1.], [2.5, .7]]) * 2.
X = np.r_[np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C.T) + np.array([1, 4])]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
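# Note on the generators above: for rows x = z C with z ~ N(0, I), the covariance
# of x is C^T C, so dataset_fixed_cov shares a single covariance between the two
# classes while dataset_cov mixes C and C.T to give them different covariances
# (C^T C vs. C C^T).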
###############################################################################
# plot functions
def plot_data(lda, X, y, y_pred, fig_index):
splot = plt.subplot(2, 2, fig_index)
if fig_index == 1:
plt.title('Linear Discriminant Analysis')
plt.ylabel('Data with fixed covariance')
elif fig_index == 2:
plt.title('Quadratic Discriminant Analysis')
elif fig_index == 3:
plt.ylabel('Data with varying covariances')
tp = (y == y_pred) # True Positive
tp0, tp1 = tp[y == 0], tp[y == 1]
X0, X1 = X[y == 0], X[y == 1]
X0_tp, X0_fp = X0[tp0], X0[~tp0]
X1_tp, X1_fp = X1[tp1], X1[~tp1]
xmin, xmax = X[:, 0].min(), X[:, 0].max()
ymin, ymax = X[:, 1].min(), X[:, 1].max()
# class 0: dots
plt.plot(X0_tp[:, 0], X0_tp[:, 1], 'o', color='red')
plt.plot(X0_fp[:, 0], X0_fp[:, 1], '.', color='#990000') # dark red
# class 1: dots
plt.plot(X1_tp[:, 0], X1_tp[:, 1], 'o', color='blue')
plt.plot(X1_fp[:, 0], X1_fp[:, 1], '.', color='#000099') # dark blue
# class 0 and 1 : areas
nx, ny = 200, 100
x_min, x_max = plt.xlim()
y_min, y_max = plt.ylim()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx),
np.linspace(y_min, y_max, ny))
Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap='red_blue_classes',
norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='k')
# means
plt.plot(lda.means_[0][0], lda.means_[0][1],
'o', color='black', markersize=10)
plt.plot(lda.means_[1][0], lda.means_[1][1],
'o', color='black', markersize=10)
return splot
def plot_ellipse(splot, mean, cov, color):
v, w = linalg.eigh(cov)
u = w[0] / linalg.norm(w[0])
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
# filled Gaussian at 2 standard deviation
ell = mpl.patches.Ellipse(mean, 2 * v[0] ** 0.5, 2 * v[1] ** 0.5,
180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
splot.set_xticks(())
splot.set_yticks(())
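# Note on the ellipse geometry above: eigh returns the covariance eigenvalues v,
# so sqrt(v) is the standard deviation along each principal axis; the width and
# height passed to Ellipse are full axis lengths, i.e. 2 * sqrt(v) spans one
# standard deviation on each side of the mean.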
def plot_lda_cov(lda, splot):
plot_ellipse(splot, lda.means_[0], lda.covariance_, 'red')
plot_ellipse(splot, lda.means_[1], lda.covariance_, 'blue')
def plot_qda_cov(qda, splot):
plot_ellipse(splot, qda.means_[0], qda.covariances_[0], 'red')
plot_ellipse(splot, qda.means_[1], qda.covariances_[1], 'blue')
###############################################################################
for i, (X, y) in enumerate([dataset_fixed_cov(), dataset_cov()]):
# LDA
lda = LDA(solver="svd", store_covariance=True)
y_pred = lda.fit(X, y).predict(X)
splot = plot_data(lda, X, y, y_pred, fig_index=2 * i + 1)
plot_lda_cov(lda, splot)
plt.axis('tight')
# QDA
qda = QDA()
y_pred = qda.fit(X, y, store_covariances=True).predict(X)
splot = plot_data(qda, X, y, y_pred, fig_index=2 * i + 2)
plot_qda_cov(qda, splot)
plt.axis('tight')
plt.suptitle('LDA vs QDA')
plt.show()
|
3manuek/scikit-learn
|
examples/classification/plot_lda_qda.py
|
Python
|
bsd-3-clause
| 4,806
|
[
"Gaussian"
] |
550abd6cf98d32a8bcae0a85570e1ae4770c891fc94207f4cdc6f5ec7e722cb0
|
# -*- coding: utf-8 -*-
import base64
import datetime
import json
import time
import mock
from nose.tools import eq_, ok_
from nose.plugins.attrib import attr
from pyquery import PyQuery as pq
from urlparse import urlparse
from django.conf import settings
from django.contrib.sites.models import Site
from django.core import mail
from django.db.models import Q
from django.test.client import (FakePayload, encode_multipart,
BOUNDARY, CONTENT_TYPE_RE, MULTIPART_CONTENT)
from django.test.utils import override_settings
from django.http import Http404
from django.utils.encoding import smart_str
from constance import config
from jingo.helpers import urlparams
from waffle.models import Flag, Switch
from kuma.attachments.models import Attachment
from kuma.attachments.utils import make_test_file
from kuma.authkeys.models import Key
from kuma.core.cache import memcache as cache
from kuma.core.models import IPBan
from kuma.core.tests import post, get, override_constance_settings
from kuma.core.urlresolvers import reverse
from kuma.users.tests import UserTestCase, user
from ..content import get_seo_description
from ..events import EditDocumentEvent
from ..forms import MIDAIR_COLLISION
from ..models import (Document, Revision, RevisionIP, DocumentZone,
DocumentTag, DocumentDeletionLog)
from ..views.document import _get_seo_parent_title
from . import (doc_rev, document, new_document_data, revision,
normalize_html, create_template_test_users,
make_translation, WikiTestCase, FakeResponse)
class RedirectTests(UserTestCase, WikiTestCase):
"""Tests for the REDIRECT wiki directive"""
localizing_client = True
def test_redirect_suppression(self):
"""The document view shouldn't redirect when passed redirect=no."""
redirect, _ = doc_rev('REDIRECT <a class="redirect" '
'href="/en-US/docs/blah">smoo</a>')
url = redirect.get_absolute_url() + '?redirect=no'
response = self.client.get(url, follow=True)
self.assertContains(response, 'REDIRECT ')
def test_redirects_only_internal(self):
"""Ensures redirects cannot be used to link to other sites"""
redirect, _ = doc_rev('REDIRECT <a class="redirect" '
'href="//davidwalsh.name">DWB</a>')
url = redirect.get_absolute_url()
response = self.client.get(url, follow=True)
self.assertContains(response, 'DWB')
def test_redirects_only_internal_2(self):
"""Ensures redirects cannot be used to link to other sites"""
redirect, _ = doc_rev('REDIRECT <a class="redirect" '
'href="http://davidwalsh.name">DWB</a>')
url = redirect.get_absolute_url()
response = self.client.get(url, follow=True)
self.assertContains(response, 'DWB')
def test_self_redirect_suppression(self):
"""The document view shouldn't redirect to itself."""
slug = 'redirdoc'
html = ('REDIRECT <a class="redirect" href="/en-US/docs/%s">smoo</a>' %
slug)
doc = document(title='blah', slug=slug, html=html, save=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
revision(document=doc, content=html, is_approved=True, save=True)
response = self.client.get(doc.get_absolute_url(), follow=True)
eq_(200, response.status_code)
response_html = pq(response.content)
article_body = response_html.find('#wikiArticle').html()
self.assertHTMLEqual(html, article_body)
class LocaleRedirectTests(UserTestCase, WikiTestCase):
"""Tests for fallbacks to en-US and such for slug lookups."""
# Some of these may fail or be invalid if your WIKI_DEFAULT_LANGUAGE is de.
localizing_client = True
def test_fallback_to_translation(self):
"""If a slug isn't found in the requested locale but is in the default
locale and if there is a translation of that default-locale document to
the requested locale, the translation should be served."""
en_doc, de_doc = self._create_en_and_de_docs()
response = self.client.get(reverse('wiki.document',
args=(en_doc.slug,),
locale='de'),
follow=True)
self.assertRedirects(response, de_doc.get_absolute_url())
def test_fallback_with_query_params(self):
"""The query parameters should be passed along to the redirect."""
en_doc, de_doc = self._create_en_and_de_docs()
url = reverse('wiki.document', args=[en_doc.slug], locale='de')
response = self.client.get(url + '?x=y&x=z', follow=True)
self.assertRedirects(response, de_doc.get_absolute_url() + '?x=y&x=z')
def test_redirect_with_no_slug(self):
"""Bug 775241: Fix exception in redirect for URL with ui-locale"""
loc = settings.WIKI_DEFAULT_LANGUAGE
url = '/%s/docs/%s/' % (loc, loc)
try:
self.client.get(url, follow=True)
        except Http404:
            pass
except Exception, e:
self.fail("The only exception should be a 404, not this: %s" % e)
def _create_en_and_de_docs(self):
en = settings.WIKI_DEFAULT_LANGUAGE
en_doc = document(locale=en, slug='english-slug', save=True)
de_doc = document(locale='de', parent=en_doc, save=True)
revision(document=de_doc, is_approved=True, save=True)
return en_doc, de_doc
class ViewTests(UserTestCase, WikiTestCase):
fixtures = UserTestCase.fixtures + ['wiki/documents.json']
localizing_client = True
@attr('bug875349')
def test_json_view(self):
expected_tags = sorted(['foo', 'bar', 'baz'])
expected_review_tags = sorted(['tech', 'editorial'])
doc = Document.objects.get(pk=1)
doc.tags.set(*expected_tags)
doc.current_revision.review_tags.set(*expected_review_tags)
url = reverse('wiki.json', locale=settings.WIKI_DEFAULT_LANGUAGE)
resp = self.client.get(url, {'title': 'an article title'})
eq_(200, resp.status_code)
data = json.loads(resp.content)
eq_('article-title', data['slug'])
result_tags = sorted([str(x) for x in data['tags']])
eq_(expected_tags, result_tags)
result_review_tags = sorted([str(x) for x in data['review_tags']])
eq_(expected_review_tags, result_review_tags)
url = reverse('wiki.json_slug', args=('article-title',),
locale=settings.WIKI_DEFAULT_LANGUAGE)
Switch.objects.create(name='application_ACAO', active=True)
resp = self.client.get(url)
ok_('Access-Control-Allow-Origin' in resp)
eq_('*', resp['Access-Control-Allow-Origin'])
eq_(200, resp.status_code)
data = json.loads(resp.content)
eq_('an article title', data['title'])
ok_('translations' in data)
result_tags = sorted([str(x) for x in data['tags']])
eq_(expected_tags, result_tags)
result_review_tags = sorted([str(x) for x in data['review_tags']])
eq_(expected_review_tags, result_review_tags)
def test_history_view(self):
slug = 'history-view-test-doc'
html = 'history view test doc'
doc = document(title='History view test doc', slug=slug,
html=html, save=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
for i in xrange(1, 51):
revision(document=doc, content=html,
comment='Revision %s' % i,
is_approved=True, save=True)
url = reverse('wiki.document_revisions', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
resp = self.client.get(url)
eq_(200, resp.status_code)
all_url = urlparams(reverse('wiki.document_revisions', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE),
limit='all')
resp = self.client.get(all_url)
eq_(403, resp.status_code)
self.client.login(username='testuser', password='testpass')
resp = self.client.get(all_url)
eq_(200, resp.status_code)
def test_toc_view(self):
slug = 'toc_test_doc'
html = '<h2>Head 2</h2><h3>Head 3</h3>'
doc = document(title='blah', slug=slug, html=html, save=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
revision(document=doc, content=html, is_approved=True, save=True)
url = reverse('wiki.toc', args=[slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
Switch.objects.create(name='application_ACAO', active=True)
resp = self.client.get(url)
ok_('Access-Control-Allow-Origin' in resp)
eq_('*', resp['Access-Control-Allow-Origin'])
self.assertHTMLEqual(
resp.content, '<ol><li><a href="#Head_2" rel="internal">Head 2</a>'
'<ol><li><a href="#Head_3" rel="internal">Head 3</a>'
'</ol></li></ol>')
@attr('bug875349')
def test_children_view(self):
test_content = '<p>Test <a href="http://example.com">Summary</a></p>'
def _make_doc(title, slug, parent=None, is_redir=False):
doc = document(title=title,
slug=slug,
save=True,
is_redirect=is_redir)
if is_redir:
content = 'REDIRECT <a class="redirect" href="/en-US/blah">Blah</a>'
else:
content = test_content
revision(document=doc,
content=test_content,
summary=get_seo_description(
test_content,
strip_markup=False),
save=True)
doc.html = content
if parent:
doc.parent_topic = parent
doc.save()
return doc
root_doc = _make_doc('Root', 'Root')
child_doc_1 = _make_doc('Child 1', 'Root/Child_1', root_doc)
_make_doc('Grandchild 1', 'Root/Child_1/Grandchild_1', child_doc_1)
grandchild_doc_2 = _make_doc('Grandchild 2',
'Root/Child_1/Grandchild_2',
child_doc_1)
_make_doc('Great Grandchild 1',
'Root/Child_1/Grandchild_2/Great_Grand_Child_1',
grandchild_doc_2)
_make_doc('Child 2', 'Root/Child_2', root_doc)
_make_doc('Child 3', 'Root/Child_3', root_doc, True)
Switch.objects.create(name='application_ACAO', active=True)
for expand in (True, False):
url = reverse('wiki.children', args=['Root'],
locale=settings.WIKI_DEFAULT_LANGUAGE)
if expand:
url = '%s?expand' % url
resp = self.client.get(url)
ok_('Access-Control-Allow-Origin' in resp)
eq_('*', resp['Access-Control-Allow-Origin'])
json_obj = json.loads(resp.content)
# Basic structure creation testing
eq_(json_obj['slug'], 'Root')
if not expand:
ok_('summary' not in json_obj)
else:
eq_(json_obj['summary'],
'Test <a href="http://example.com">Summary</a>')
ok_('tags' in json_obj)
ok_('review_tags' in json_obj)
eq_(len(json_obj['subpages']), 2)
eq_(len(json_obj['subpages'][0]['subpages']), 2)
eq_(json_obj['subpages'][0]['subpages'][1]['title'],
'Grandchild 2')
# Depth parameter testing
def _depth_test(depth, aught):
url = reverse('wiki.children', args=['Root'],
locale=settings.WIKI_DEFAULT_LANGUAGE) + '?depth=' + str(depth)
resp = self.client.get(url)
json_obj = json.loads(resp.content)
eq_(len(json_obj['subpages'][0]['subpages'][1]['subpages']), aught)
_depth_test(2, 0)
_depth_test(3, 1)
_depth_test(6, 1)
# Sorting test
sort_root_doc = _make_doc('Sort Root', 'Sort_Root')
_make_doc('B Child', 'Sort_Root/B_Child', sort_root_doc)
_make_doc('A Child', 'Sort_Root/A_Child', sort_root_doc)
resp = self.client.get(reverse('wiki.children', args=['Sort_Root'],
locale=settings.WIKI_DEFAULT_LANGUAGE))
json_obj = json.loads(resp.content)
eq_(json_obj['subpages'][0]['title'], 'A Child')
# Test if we are serving an error json if document does not exist
no_doc_url = reverse('wiki.children', args=['nonexistentDocument'],
locale=settings.WIKI_DEFAULT_LANGUAGE)
resp = self.client.get(no_doc_url)
result = json.loads(resp.content)
eq_(result, {'error': 'Document does not exist.'})
def test_summary_view(self):
"""The ?summary option should restrict document view to summary"""
d, r = doc_rev("""
<p>Foo bar <a href="http://example.com">baz</a></p>
<p>Quux xyzzy</p>
""")
resp = self.client.get('%s?raw&summary' % d.get_absolute_url())
eq_(resp.content, 'Foo bar <a href="http://example.com">baz</a>')
@override_settings(CELERY_ALWAYS_EAGER=True)
@mock.patch('waffle.flag_is_active')
@mock.patch('kuma.wiki.jobs.DocumentContributorsJob.get')
def test_footer_contributors(self, get_contributors, flag_is_active):
get_contributors.return_value = [
{'id': 1, 'username': 'ringo', 'email': 'ringo@apple.co.uk'},
{'id': 2, 'username': 'john', 'email': 'lennon@apple.co.uk'},
]
flag_is_active.return_value = True
d, r = doc_rev('some content')
resp = self.client.get(d.get_absolute_url())
page = pq(resp.content)
contributors = (page.find(":contains('Contributors to this page')")
.parent())
# just checking if the contributor link is rendered
eq_(len(contributors.find('a')), 2)
def test_revision_view_bleached_content(self):
"""Bug 821988: Revision content should be cleaned with bleach"""
d, r = doc_rev("""
<a href="#" onload=alert(3)>Hahaha</a>
<svg><svg onload=alert(3);>
""")
resp = self.client.get(r.get_absolute_url())
page = pq(resp.content)
ct = page.find('#wikiArticle').html()
ok_('<svg>' not in ct)
ok_('<a href="#">Hahaha</a>' in ct)
def test_raw_css_view(self):
"""The raw source for a document can be requested"""
self.client.login(username='admin', password='testpass')
doc = document(title='Template:CustomSampleCSS',
slug='Template:CustomSampleCSS',
save=True)
revision(
save=True,
is_approved=True,
document=doc,
content="""
/* CSS here */
body {
padding: 0;
margin: 0;
}
svg:not(:root) {
display:block;
}
""")
response = self.client.get('%s?raw=true' %
reverse('wiki.document', args=[doc.slug]))
ok_('text/css' in response['Content-Type'])
class PermissionTests(UserTestCase, WikiTestCase):
localizing_client = True
def setUp(self):
"""Set up the permissions, groups, and users needed for the tests"""
super(PermissionTests, self).setUp()
self.perms, self.groups, self.users, self.superuser = (
create_template_test_users())
def test_template_revert_permission(self):
locale = 'en-US'
slug = 'Template:test-revert-perm'
doc = document(save=True, slug=slug, title=slug, locale=locale)
rev = revision(save=True, document=doc)
# Revision template should not show revert button
url = reverse('wiki.revision', args=([doc.slug, rev.id]))
resp = self.client.get(url)
ok_('Revert' not in resp.content)
# Revert POST should give permission denied to user without perm
username = self.users['none'].username
self.client.login(username=username, password='testpass')
url = reverse('wiki.revert_document',
args=([doc.slug, rev.id]))
resp = self.client.post(url, {'comment': 'test'})
eq_(403, resp.status_code)
# Revert POST should give success to user with perm
username = self.users['change'].username
self.client.login(username=username, password='testpass')
url = reverse('wiki.revert_document',
args=([doc.slug, rev.id]))
resp = self.client.post(url, {'comment': 'test'}, follow=True)
eq_(200, resp.status_code)
def test_template_permissions(self):
msg = ('edit', 'create')
for is_add in (True, False):
slug_trials = (
('test_for_%s', (
(True, self.superuser),
(True, self.users['none']),
(True, self.users['all']),
(True, self.users['add']),
(True, self.users['change']),
)),
('Template:test_for_%s', (
(True, self.superuser),
(False, self.users['none']),
(True, self.users['all']),
(is_add, self.users['add']),
(not is_add, self.users['change']),
))
)
for slug_tmpl, trials in slug_trials:
for expected, tmp_user in trials:
username = tmp_user.username
slug = slug_tmpl % username
locale = settings.WIKI_DEFAULT_LANGUAGE
Document.objects.all().filter(slug=slug).delete()
if not is_add:
doc = document(save=True, slug=slug, title=slug,
locale=locale)
revision(save=True, document=doc)
self.client.login(username=username, password='testpass')
data = new_document_data()
slug = slug_tmpl % username
data.update({"title": slug, "slug": slug})
if is_add:
url = reverse('wiki.new_document', locale=locale)
resp = self.client.post(url, data, follow=False)
else:
data['form'] = 'rev'
url = reverse('wiki.edit_document', args=(slug,),
locale=locale)
resp = self.client.post(url, data, follow=False)
                    if expected:
                        eq_(302, resp.status_code,
                            "%s should be able to %s %s" %
                            (username, msg[is_add], slug))
                        Document.objects.filter(slug=slug).delete()
                    else:
                        eq_(403, resp.status_code,
                            "%s should not be able to %s %s" %
                            (username, msg[is_add], slug))
class ConditionalGetTests(UserTestCase, WikiTestCase):
"""Tests for conditional GET on document view"""
localizing_client = True
def test_last_modified(self):
"""Ensure the last-modified stamp of a document is cached"""
doc, rev = doc_rev()
get_url = reverse('wiki.document',
args=[doc.slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
# There should be a last-modified date cached for this document already
cache_key = doc.last_modified_cache_key
ok_(cache.get(cache_key))
# Now, try a request, and ensure that the last-modified header is
# present.
response = self.client.get(get_url, follow=False)
ok_(response.has_header('last-modified'))
last_mod = response['last-modified']
# Try another request, using If-Modified-Since. This should be a 304
response = self.client.get(get_url, follow=False,
HTTP_IF_MODIFIED_SINCE=last_mod)
eq_(304, response.status_code)
# Finally, ensure that the last-modified was cached.
cached_last_mod = cache.get(cache_key)
eq_(doc.modified.strftime('%s'), cached_last_mod)
# Let the clock tick, so the last-modified will change on edit.
time.sleep(1.0)
# Edit the document, ensure the last-modified has been invalidated.
revision(document=doc, content="New edits", save=True)
ok_(cache.get(cache_key) != cached_last_mod)
# This should be another 304, but the last-modified in response and
# cache should have changed.
response = self.client.get(get_url, follow=False,
HTTP_IF_MODIFIED_SINCE=last_mod)
eq_(200, response.status_code)
ok_(last_mod != response['last-modified'])
ok_(cached_last_mod != cache.get(cache_key))
def test_deletion_clears_last_modified(self):
"""Deleting a page clears any last-modified caching"""
# Setup mostly the same as previous test, to get a doc and set
# last-modified info.
doc, rev = doc_rev()
self.url = reverse('wiki.document',
args=[doc.slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
cache_key = doc.last_modified_cache_key
last_mod = cache.get(cache_key)
ok_(last_mod) # exists already because pre-filled
self.client.get(self.url, follow=False)
ok_(cache.get(cache_key) == last_mod)
# Now delete the doc and make sure there's no longer
# last-modified data in the cache for it afterward.
doc.delete()
ok_(not cache.get(cache_key))
def test_deleted_doc_returns_404(self):
"""Requesting a deleted doc returns 404"""
doc, rev = doc_rev()
doc.delete()
DocumentDeletionLog.objects.create(locale=doc.locale, slug=doc.slug,
user=rev.creator, reason="test")
response = self.client.get(doc.get_absolute_url(), follow=False)
eq_(404, response.status_code)
class ReadOnlyTests(UserTestCase, WikiTestCase):
"""Tests readonly scenarios"""
fixtures = UserTestCase.fixtures + ['wiki/documents.json']
localizing_client = True
def setUp(self):
super(ReadOnlyTests, self).setUp()
self.d, r = doc_rev()
self.edit_url = reverse('wiki.edit_document', args=[self.d.slug])
def test_everyone(self):
""" kumaediting: everyone, kumabanned: none """
self.kumaediting_flag.everyone = True
self.kumaediting_flag.save()
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.edit_url)
eq_(200, resp.status_code)
def test_superusers_only(self):
""" kumaediting: superusers, kumabanned: none """
self.kumaediting_flag.everyone = None
self.kumaediting_flag.superusers = True
self.kumaediting_flag.save()
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.edit_url)
eq_(403, resp.status_code)
ok_('The wiki is in read-only mode.' in resp.content)
self.client.logout()
self.client.login(username='admin', password='testpass')
resp = self.client.get(self.edit_url)
eq_(200, resp.status_code)
def test_banned_users(self):
""" kumaediting: everyone, kumabanned: testuser2 """
self.kumaediting_flag.everyone = True
self.kumaediting_flag.save()
# ban testuser2
kumabanned = Flag.objects.create(name='kumabanned')
kumabanned.users = self.user_model.objects.filter(username='testuser2')
kumabanned.save()
# testuser can still access
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.edit_url)
eq_(200, resp.status_code)
self.client.logout()
# testuser2 cannot
self.client.login(username='testuser2', password='testpass')
resp = self.client.get(self.edit_url)
eq_(403, resp.status_code)
ok_('Your profile has been banned from making edits.' in resp.content)
# ban testuser01 and testuser2
kumabanned.users = self.user_model.objects.filter(
Q(username='testuser2') | Q(username='testuser01'))
kumabanned.save()
# testuser can still access
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.edit_url)
eq_(200, resp.status_code)
self.client.logout()
# testuser2 cannot access
self.client.login(username='testuser2', password='testpass')
resp = self.client.get(self.edit_url)
eq_(403, resp.status_code)
ok_('Your profile has been banned from making edits.' in resp.content)
# testuser01 cannot access
self.client.login(username='testuser01', password='testpass')
resp = self.client.get(self.edit_url)
eq_(403, resp.status_code)
ok_('Your profile has been banned from making edits.' in resp.content)
class BannedIPTests(UserTestCase, WikiTestCase):
"""Tests readonly scenarios"""
fixtures = UserTestCase.fixtures + ['wiki/documents.json']
localizing_client = True
def setUp(self):
super(BannedIPTests, self).setUp()
self.ip = '127.0.0.1'
self.ip_ban = IPBan.objects.create(ip=self.ip)
self.doc, rev = doc_rev()
self.edit_url = reverse('wiki.edit_document',
args=[self.doc.slug])
def tearDown(self):
cache.clear()
def test_banned_ip_cant_get_edit(self):
self.client.login(username='testuser', password='testpass')
response = self.client.get(self.edit_url, REMOTE_ADDR=self.ip)
eq_(403, response.status_code)
def test_banned_ip_cant_post_edit(self):
self.client.login(username='testuser', password='testpass')
response = self.client.get(self.edit_url, REMOTE_ADDR=self.ip)
eq_(403, response.status_code)
def test_banned_ip_can_still_get_articles(self):
response = self.client.get(self.doc.get_absolute_url(),
REMOTE_ADDR=self.ip)
eq_(200, response.status_code)
class KumascriptIntegrationTests(UserTestCase, WikiTestCase):
"""
Tests for usage of the kumascript service.
Note that these tests really just check whether or not the service was
used, and are not integration tests meant to exercise the real service.
"""
localizing_client = True
def setUp(self):
super(KumascriptIntegrationTests, self).setUp()
self.d, self.r = doc_rev()
self.r.content = "TEST CONTENT"
self.r.save()
self.d.tags.set('foo', 'bar', 'baz')
self.url = reverse('wiki.document',
args=(self.d.slug,),
locale=self.d.locale)
# TODO: upgrade mock to 0.8.0 so we can do this.
# self.mock_kumascript_get = (
# mock.patch('kuma.wiki.kumascript.get'))
# self.mock_kumascript_get.return_value = self.d.html
def tearDown(self):
super(KumascriptIntegrationTests, self).tearDown()
# TODO: upgrade mock to 0.8.0 so we can do this.
# self.mock_kumascript_get.stop()
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_basic_view(self, mock_kumascript_get):
"""When kumascript timeout is non-zero, the service should be used"""
mock_kumascript_get.return_value = (self.d.html, None)
self.client.get(self.url, follow=False)
ok_(mock_kumascript_get.called,
"kumascript should have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=0.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_disabled(self, mock_kumascript_get):
"""When disabled, the kumascript service should not be used"""
mock_kumascript_get.return_value = (self.d.html, None)
self.client.get(self.url, follow=False)
ok_(not mock_kumascript_get.called,
"kumascript not should have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=0.0)
@mock.patch('kuma.wiki.kumascript.get')
@override_settings(CELERY_ALWAYS_EAGER=True)
def test_disabled_rendering(self, mock_kumascript_get):
"""When disabled, the kumascript service should not be used
in rendering"""
mock_kumascript_get.return_value = (self.d.html, None)
self.d.schedule_rendering('max-age=0')
ok_(not mock_kumascript_get.called,
"kumascript not should have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_nomacros(self, mock_kumascript_get):
mock_kumascript_get.return_value = (self.d.html, None)
self.client.get('%s?nomacros' % self.url, follow=False)
ok_(not mock_kumascript_get.called,
"kumascript should not have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_raw(self, mock_kumascript_get):
mock_kumascript_get.return_value = (self.d.html, None)
self.client.get('%s?raw' % self.url, follow=False)
ok_(not mock_kumascript_get.called,
"kumascript should not have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_raw_macros(self, mock_kumascript_get):
mock_kumascript_get.return_value = (self.d.html, None)
        self.client.get('%s?raw&macros' % self.url, follow=False)
ok_(mock_kumascript_get.called,
"kumascript should have been used")
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=1234)
@mock.patch('requests.get')
def test_ua_max_age_zero(self, mock_requests_get):
"""Authenticated users can request a zero max-age for kumascript"""
trap = {}
def my_requests_get(url, headers=None, timeout=None):
trap['headers'] = headers
return FakeResponse(status_code=200,
headers={}, text='HELLO WORLD')
mock_requests_get.side_effect = my_requests_get
self.client.get(self.url, follow=False,
HTTP_CACHE_CONTROL='no-cache')
eq_('max-age=1234', trap['headers']['Cache-Control'])
self.client.login(username='admin', password='testpass')
self.client.get(self.url, follow=False,
HTTP_CACHE_CONTROL='no-cache')
eq_('no-cache', trap['headers']['Cache-Control'])
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=1234)
@mock.patch('requests.get')
def test_ua_no_cache(self, mock_requests_get):
"""Authenticated users can request no-cache for kumascript"""
trap = {}
def my_requests_get(url, headers=None, timeout=None):
trap['headers'] = headers
return FakeResponse(status_code=200,
headers={}, text='HELLO WORLD')
mock_requests_get.side_effect = my_requests_get
self.client.get(self.url, follow=False,
HTTP_CACHE_CONTROL='no-cache')
eq_('max-age=1234', trap['headers']['Cache-Control'])
self.client.login(username='admin', password='testpass')
self.client.get(self.url, follow=False,
HTTP_CACHE_CONTROL='no-cache')
eq_('no-cache', trap['headers']['Cache-Control'])
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=1234)
@mock.patch('requests.get')
def test_conditional_get(self, mock_requests_get):
"""Ensure conditional GET in requests to kumascript work as expected"""
expected_etag = "8675309JENNY"
expected_modified = "Wed, 14 Mar 2012 22:29:17 GMT"
expected_content = "HELLO THERE, WORLD"
trap = dict(req_cnt=0)
def my_requests_get(url, headers=None, timeout=None):
trap['req_cnt'] += 1
trap['headers'] = headers
if trap['req_cnt'] in [1, 2]:
return FakeResponse(
status_code=200, text=expected_content,
headers={
"etag": expected_etag,
"last-modified": expected_modified,
"age": 456
})
else:
return FakeResponse(
status_code=304, text='',
headers={
"etag": expected_etag,
"last-modified": expected_modified,
"age": 123
})
mock_requests_get.side_effect = my_requests_get
# First request to let the view cache etag / last-modified
response = self.client.get(self.url)
# Clear rendered_html to force another request.
self.d.rendered_html = ''
self.d.save()
# Second request to verify the view sends them back
response = self.client.get(self.url)
eq_(expected_etag, trap['headers']['If-None-Match'])
eq_(expected_modified, trap['headers']['If-Modified-Since'])
# Third request to verify content was cached and served on a 304
response = self.client.get(self.url)
ok_(expected_content in response.content)
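    # The flow above mirrors plain HTTP conditional GET: the first response seeds
    # the cached ETag / Last-Modified validators, later requests echo them back via
    # If-None-Match and If-Modified-Since, and a 304 lets the previously rendered
    # body be served unchanged.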
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=600)
@mock.patch('requests.get')
def test_error_reporting(self, mock_requests_get):
"""Kumascript reports errors in HTTP headers, Kuma should display"""
# Make sure we have enough log messages to ensure there are more than
# 10 lines of Base64 in headers. This ensures that there'll be a
# failure if the view sorts FireLogger sequence number alphabetically
# instead of numerically.
expected_errors = {
"logs": [
{"level": "debug",
"message": "Message #1",
"args": ['TestError', {}, {'name': 'SomeMacro', 'token': {'args': 'arguments here'}}],
"time": "12:32:03 GMT-0400 (EDT)",
"timestamp": "1331829123101000"},
{"level": "warning",
"message": "Message #2",
"args": ['TestError', {}, {'name': 'SomeMacro2'}],
"time": "12:33:58 GMT-0400 (EDT)",
"timestamp": "1331829238052000"},
{"level": "info",
"message": "Message #3",
"args": ['TestError'],
"time": "12:34:22 GMT-0400 (EDT)",
"timestamp": "1331829262403000"},
{"level": "debug",
"message": "Message #4",
"time": "12:32:03 GMT-0400 (EDT)",
"timestamp": "1331829123101000"},
{"level": "warning",
"message": "Message #5",
"time": "12:33:58 GMT-0400 (EDT)",
"timestamp": "1331829238052000"},
{"level": "info",
"message": "Message #6",
"time": "12:34:22 GMT-0400 (EDT)",
"timestamp": "1331829262403000"},
]
}
# Pack it up, get ready to ship it out.
d_json = json.dumps(expected_errors)
d_b64 = base64.encodestring(d_json)
d_lines = [x for x in d_b64.split("\n") if x]
# Headers are case-insensitive, so let's just drive that point home
p = ['firelogger', 'FIRELOGGER', 'FireLogger']
fl_uid = 8675309
headers_out = {}
for i in range(0, len(d_lines)):
headers_out['%s-%s-%s' % (p[i % len(p)], fl_uid, i)] = d_lines[i]
# Now, trap the request from the view.
trap = {}
def my_requests_get(url, headers=None, timeout=None):
trap['headers'] = headers
return FakeResponse(
status_code=200,
text='HELLO WORLD',
headers=headers_out
)
mock_requests_get.side_effect = my_requests_get
# Finally, fire off the request to the view and ensure that the log
# messages were received and displayed on the page. But, only for a
# logged in user.
self.client.login(username='admin', password='testpass')
response = self.client.get(self.url)
eq_(trap['headers']['X-FireLogger'], '1.2')
for error in expected_errors['logs']:
ok_(error['message'] in response.content)
eq_(response.status_code, 200)
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=600)
@mock.patch('requests.post')
def test_preview_nonascii(self, mock_post):
"""POSTing non-ascii to kumascript should encode to utf8"""
content = u'Français'
trap = {}
def my_post(url, timeout=None, headers=None, data=None):
trap['data'] = data
return FakeResponse(status_code=200, headers={},
text=content.encode('utf8'))
mock_post.side_effect = my_post
self.client.login(username='admin', password='testpass')
self.client.post(reverse('wiki.preview'), {'content': content})
try:
trap['data'].decode('utf8')
except UnicodeDecodeError:
self.fail("Data wasn't posted as utf8")
@attr('bug1197971')
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0,
KUMASCRIPT_MAX_AGE=600)
@mock.patch('kuma.wiki.kumascript.post')
def test_dont_render_previews_for_deferred_docs(self, mock_post):
"""
When a user previews a document with deferred rendering,
we want to force the preview to skip the kumascript POST,
so that big previews can't use up too many kumascript connections.
"""
self.d.defer_rendering = True
self.d.save()
def should_not_post(*args, **kwargs):
self.fail("Preview doc with deferred rendering should not "
"post to KumaScript.")
mock_post.side_effect = should_not_post
self.client.login(username='admin', password='testpass')
self.client.post(reverse('wiki.preview'), {'doc_id': self.d.id})
class DocumentSEOTests(UserTestCase, WikiTestCase):
"""Tests for the document seo logic"""
localizing_client = True
@attr('bug1190212')
def test_get_seo_parent_doesnt_throw_404(self):
slug_dict = {'seo_root': 'Root/Does/Not/Exist'}
try:
_get_seo_parent_title(slug_dict, 'bn-BD')
except Http404:
self.fail('Missing parent should not cause 404 from '
'_get_seo_parent_title')
def test_seo_title(self):
self.client.login(username='admin', password='testpass')
# Utility to make a quick doc
def _make_doc(title, aught_titles, slug):
doc = document(save=True, slug=slug, title=title,
locale=settings.WIKI_DEFAULT_LANGUAGE)
revision(save=True, document=doc)
response = self.client.get(reverse('wiki.document', args=[slug],
locale=settings.WIKI_DEFAULT_LANGUAGE))
page = pq(response.content)
ok_(page.find('title').text() in aught_titles)
# Test nested document titles
_make_doc('One', ['One | MDN'], 'one')
_make_doc('Two', ['Two - One | MDN'], 'one/two')
_make_doc('Three', ['Three - One | MDN'], 'one/two/three')
_make_doc(u'Special Φ Char',
[u'Special \u03a6 Char - One | MDN',
u'Special \xce\xa6 Char - One | MDN'],
'one/two/special_char')
# Additional tests for /Web/* changes
_make_doc('Firefox OS', ['Firefox OS | MDN'], 'firefox_os')
_make_doc('Email App', ['Email App - Firefox OS | MDN'],
'firefox_os/email_app')
_make_doc('Web', ['Web | MDN'], 'Web')
_make_doc('HTML', ['HTML | MDN'], 'Web/html')
_make_doc('Fieldset', ['Fieldset - HTML | MDN'], 'Web/html/fieldset')
_make_doc('Legend', ['Legend - HTML | MDN'],
'Web/html/fieldset/legend')
def test_seo_script(self):
self.client.login(username='admin', password='testpass')
def make_page_and_compare_seo(slug, content, aught_preview):
# Create the doc
data = new_document_data()
data.update({'title': 'blah', 'slug': slug, 'content': content})
response = self.client.post(reverse('wiki.new_document',
locale=settings.WIKI_DEFAULT_LANGUAGE),
data)
eq_(302, response.status_code)
# Connect to newly created page
response = self.client.get(reverse('wiki.document', args=[slug],
locale=settings.WIKI_DEFAULT_LANGUAGE))
page = pq(response.content)
meta_content = page.find('meta[name=description]').attr('content')
eq_(str(meta_content).decode('utf-8'),
str(aught_preview).decode('utf-8'))
# Test pages - very basic
good = 'This is the content which should be chosen, man.'
make_page_and_compare_seo('one', '<p>' + good + '</p>', good)
# No content, no seo
make_page_and_compare_seo('two', 'blahblahblahblah<br />', None)
# No summary, no seo
make_page_and_compare_seo('three', '<div><p>You cant see me</p></div>',
None)
# Warning paragraph ignored
make_page_and_compare_seo('four',
'<div class="geckoVersion">'
'<p>No no no</p></div><p>yes yes yes</p>',
'yes yes yes')
# Warning paragraph ignored, first one chosen if multiple matches
make_page_and_compare_seo('five',
'<div class="geckoVersion"><p>No no no</p>'
'</div><p>yes yes yes</p>'
'<p>ignore ignore ignore</p>',
'yes yes yes')
# Don't take legacy crumbs
make_page_and_compare_seo('six', u'<p>« CSS</p><p>I am me!</p>',
'I am me!')
# Take the seoSummary class'd element
make_page_and_compare_seo('seven',
u'<p>I could be taken</p>'
'<p class="seoSummary">I should be though</p>',
'I should be though')
# Two summaries append
make_page_and_compare_seo('eight',
u'<p>I could be taken</p>'
'<p class="seoSummary">a</p>'
'<p class="seoSummary">b</p>',
'a b')
# No brackets
make_page_and_compare_seo('nine',
u'<p>I <em>am</em> awesome.'
' <a href="blah">A link</a> is also <cool></p>',
u'I am awesome. A link is also cool')
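    # Taken together, the cases above pin down the summary extractor's
    # precedence: prefer elements classed 'seoSummary' (concatenating
    # multiples), otherwise take the first <p> outside warning containers
    # such as div.geckoVersion, then strip markup and angle brackets. A
    # hedged pyquery sketch of that precedence (not the actual kuma helper):
    #
    #     from pyquery import PyQuery as pq
    #     def guess_seo_summary(html):
    #         page = pq(html)
    #         seo = page.find('.seoSummary')
    #         if seo:
    #             return ' '.join(pq(el).text() for el in seo)
    #         for p in page.find('p'):
    #             if not pq(p).parents('.geckoVersion'):
    #                 return pq(p).text()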
class DocumentEditingTests(UserTestCase, WikiTestCase):
"""Tests for the document-editing view"""
localizing_client = True
def test_noindex_post(self):
self.client.login(username='admin', password='testpass')
# Go to new document page to ensure no-index header works
response = self.client.get(reverse('wiki.new_document', args=[],
locale=settings.WIKI_DEFAULT_LANGUAGE))
eq_(response['X-Robots-Tag'], 'noindex')
@attr('bug821986')
def test_editor_safety_filter(self):
"""Safety filter should be applied before rendering editor"""
self.client.login(username='admin', password='testpass')
r = revision(save=True, content="""
<svg><circle onload=confirm(3)>
""")
args = [r.document.slug]
urls = (
reverse('wiki.edit_document', args=args),
'%s?tolocale=%s' % (reverse('wiki.translate', args=args), 'fr')
)
for url in urls:
page = pq(self.client.get(url).content)
editor_src = page.find('#id_content').text()
ok_('onload' not in editor_src)
def test_create_on_404(self):
self.client.login(username='admin', password='testpass')
# Create the parent page.
d, r = doc_rev()
# Establish attribs of child page.
locale = settings.WIKI_DEFAULT_LANGUAGE
local_slug = 'Some_New_Title'
slug = '%s/%s' % (d.slug, local_slug)
url = reverse('wiki.document', args=[slug], locale=locale)
# Ensure redirect to create new page on attempt to visit non-existent
# child page.
resp = self.client.get(url)
eq_(302, resp.status_code)
ok_('docs/new' in resp['Location'])
ok_('?slug=%s' % local_slug in resp['Location'])
# Ensure real 404 for visit to non-existent page with params common to
# kumascript and raw content API.
for p_name in ('raw', 'include', 'nocreate'):
sub_url = '%s?%s=1' % (url, p_name)
resp = self.client.get(sub_url)
eq_(404, resp.status_code)
# Ensure root level documents work, not just children
response = self.client.get(reverse('wiki.document',
args=['noExist'], locale=locale))
eq_(302, response.status_code)
response = self.client.get(reverse('wiki.document',
args=['Template:NoExist'],
locale=locale))
eq_(302, response.status_code)
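    # The 'raw', 'include', and 'nocreate' query parameters are used by
    # kumascript and the raw-content API; those clients need a real 404 for
    # a missing page (e.g. GET /en-US/docs/NoSuchPage?raw=1) instead of a
    # redirect into the "create new document" flow.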
def test_new_document_comment(self):
"""Creating a new document with a revision comment saves the comment"""
self.client.login(username='admin', password='testpass')
comment = 'I am the revision comment'
slug = 'Test-doc-comment'
loc = settings.WIKI_DEFAULT_LANGUAGE
# Create a new doc.
data = new_document_data()
data.update({'slug': slug, 'comment': comment})
self.client.post(reverse('wiki.new_document'), data)
doc = Document.objects.get(slug=slug, locale=loc)
eq_(comment, doc.current_revision.comment)
@attr('toc')
def test_toc_initial(self):
self.client.login(username='admin', password='testpass')
resp = self.client.get(reverse('wiki.new_document'))
eq_(200, resp.status_code)
page = pq(resp.content)
toc_select = page.find('#id_toc_depth')
toc_options = toc_select.find('option')
        found_selected = False
        for option in toc_options:
            opt_element = pq(option)
            if opt_element.attr('selected'):
                found_selected = True
eq_(str(Revision.TOC_DEPTH_H4), opt_element.attr('value'))
if not found_selected:
raise AssertionError("No ToC depth initially selected.")
@attr('retitle')
def test_retitling_solo_doc(self):
""" Editing just title of non-parent doc:
* Changes title
* Doesn't cause errors
* Doesn't create redirect
"""
# Not testing slug changes separately; the model tests cover those plus
# slug+title changes. If title changes work in the view, the rest
# should also.
self.client.login(username='admin', password='testpass')
new_title = 'Some New Title'
d, r = doc_rev()
old_title = d.title
data = new_document_data()
data.update({'title': new_title,
'form': 'rev'})
data['slug'] = ''
url = reverse('wiki.edit_document', args=[d.slug])
self.client.post(url, data)
eq_(new_title,
Document.objects.get(slug=d.slug, locale=d.locale).title)
try:
Document.objects.get(title=old_title)
self.fail("Should not find doc by old title after retitling.")
except Document.DoesNotExist:
pass
@attr('retitle')
def test_retitling_parent_doc(self):
""" Editing just title of parent doc:
* Changes title
* Doesn't cause errors
* Doesn't create redirect
"""
# Not testing slug changes separately; the model tests cover those plus
# slug+title changes. If title changes work in the view, the rest
# should also.
self.client.login(username='admin', password='testpass')
# create parent doc & rev along with child doc & rev
d = document(title='parent', save=True)
revision(document=d, content='parent', save=True)
d2 = document(title='child', parent_topic=d, save=True)
revision(document=d2, content='child', save=True)
old_title = d.title
new_title = 'Some New Title'
data = new_document_data()
data.update({'title': new_title,
'form': 'rev'})
data['slug'] = ''
url = reverse('wiki.edit_document', args=[d.slug])
self.client.post(url, data)
eq_(new_title,
Document.objects.get(slug=d.slug, locale=d.locale).title)
try:
Document.objects.get(title=old_title)
self.fail("Should not find doc by old title after retitling.")
except Document.DoesNotExist:
pass
def test_slug_change_ignored_for_iframe(self):
"""When the title of an article is edited in an iframe, the change is
ignored."""
self.client.login(username='admin', password='testpass')
new_slug = 'some_new_slug'
d, r = doc_rev()
old_slug = d.slug
data = new_document_data()
data.update({'title': d.title,
'slug': new_slug,
'form': 'rev'})
self.client.post('%s?iframe=1' % reverse('wiki.edit_document',
args=[d.slug]), data)
eq_(old_slug, Document.objects.get(slug=d.slug,
locale=d.locale).slug)
assert "REDIRECT" not in Document.objects.get(slug=old_slug).html
@attr('clobber')
def test_slug_collision_errors(self):
"""When an attempt is made to retitle an article and another with that
title already exists, there should be form errors"""
self.client.login(username='admin', password='testpass')
exist_slug = "existing-doc"
# Create a new doc.
data = new_document_data()
data.update({"slug": exist_slug})
resp = self.client.post(reverse('wiki.new_document'), data)
eq_(302, resp.status_code)
# Create another new doc.
data = new_document_data()
data.update({"slug": 'some-new-title'})
resp = self.client.post(reverse('wiki.new_document'), data)
eq_(302, resp.status_code)
# Now, post an update with duplicate slug
data.update({
'form': 'rev',
'slug': exist_slug
})
resp = self.client.post(reverse('wiki.edit_document',
args=['some-new-title']), data)
eq_(200, resp.status_code)
p = pq(resp.content)
ok_(p.find('.errorlist').length > 0)
ok_(p.find('.errorlist a[href="#id_slug"]').length > 0)
@attr('clobber')
def test_redirect_can_be_clobbered(self):
"""When an attempt is made to retitle an article, and another article
with that title exists but is a redirect, there should be no errors and
the redirect should be replaced."""
self.client.login(username='admin', password='testpass')
exist_title = "Existing doc"
exist_slug = "existing-doc"
changed_title = 'Changed title'
changed_slug = 'changed-title'
# Create a new doc.
data = new_document_data()
data.update({"title": exist_title, "slug": exist_slug})
resp = self.client.post(reverse('wiki.new_document'), data)
eq_(302, resp.status_code)
# Change title and slug
data.update({'form': 'rev',
'title': changed_title,
'slug': changed_slug})
resp = self.client.post(reverse('wiki.edit_document',
args=[exist_slug]),
data)
eq_(302, resp.status_code)
# Change title and slug back to originals, clobbering the redirect
data.update({'form': 'rev',
'title': exist_title,
'slug': exist_slug})
resp = self.client.post(reverse('wiki.edit_document',
args=[changed_slug]),
data)
eq_(302, resp.status_code)
def test_invalid_slug(self):
"""Slugs cannot contain "$", but can contain "/"."""
self.client.login(username='admin', password='testpass')
data = new_document_data()
data['title'] = 'valid slug'
data['slug'] = 'valid'
response = self.client.post(reverse('wiki.new_document'), data)
self.assertRedirects(response,
reverse('wiki.document', args=[data['slug']],
locale=settings.WIKI_DEFAULT_LANGUAGE))
new_url = reverse('wiki.new_document')
invalid_slugs = [
'va/lid', # slashes
'inva$lid', # dollar signs
'inva?lid', # question marks
'inva%lid', # percentage sign
'"invalid\'', # quotes
'in valid', # whitespace
]
for invalid_slug in invalid_slugs:
data['title'] = 'invalid with %s' % invalid_slug
data['slug'] = invalid_slug
response = self.client.post(new_url, data)
self.assertContains(response, 'The slug provided is not valid.')
def test_invalid_reserved_term_slug(self):
"""Slugs should not collide with reserved URL patterns"""
self.client.login(username='admin', password='testpass')
data = new_document_data()
# TODO: This is info derived from urls.py, but unsure how to DRY it
reserved_slugs = (
'ckeditor_config.js',
'watch-ready-for-review',
'unwatch-ready-for-review',
'watch-approved',
'unwatch-approved',
'.json',
'new',
'all',
'preview-wiki-content',
'category/10',
'needs-review/technical',
'needs-review/',
'feeds/atom/all/',
'feeds/atom/needs-review/technical',
'feeds/atom/needs-review/',
'tag/tasty-pie'
)
for term in reserved_slugs:
data['title'] = 'invalid with %s' % term
data['slug'] = term
response = self.client.post(reverse('wiki.new_document'), data)
self.assertContains(response, 'The slug provided is not valid.')
def test_slug_revamp(self):
self.client.login(username='admin', password='testpass')
def _createAndRunTests(slug):
# Create some vars
locale = settings.WIKI_DEFAULT_LANGUAGE
foreign_locale = 'es'
new_doc_url = reverse('wiki.new_document')
invalid_slug = "some/thing"
invalid_slugs = [
"some/thing",
"some?thing",
"some thing",
"some%thing",
"$omething",
]
child_slug = 'kiddy'
grandchild_slug = 'grandkiddy'
# Create the document data
doc_data = new_document_data()
doc_data['title'] = slug + ' Doc'
doc_data['slug'] = slug
doc_data['content'] = 'This is the content'
doc_data['is_localizable'] = True
""" NEW DOCUMENT CREATION, CHILD CREATION """
# Create the document, validate it exists
response = self.client.post(new_doc_url, doc_data)
eq_(302, response.status_code) # 302 = good, forward to new page
ok_(slug in response['Location'])
self.assertRedirects(response, reverse('wiki.document',
locale=locale, args=[slug]))
doc_url = reverse('wiki.document', locale=locale, args=[slug])
eq_(self.client.get(doc_url).status_code, 200)
doc = Document.objects.get(locale=locale, slug=slug)
eq_(doc.slug, slug)
            eq_(0, len(Document.objects.filter(
                title=doc_data['title'] + ' Redirect 1')))
# Create child document data
child_data = new_document_data()
child_data['title'] = slug + ' Child Doc'
child_data['slug'] = invalid_slug
child_data['content'] = 'This is the content'
child_data['is_localizable'] = True
# Attempt to create the child with invalid slug, validate it fails
def test_invalid_slug(inv_slug, url, data, doc):
data['slug'] = inv_slug
response = self.client.post(url, data)
page = pq(response.content)
eq_(200, response.status_code) # 200 = bad, invalid data
# Slug doesn't add parent
eq_(inv_slug, page.find('input[name=slug]')[0].value)
eq_(doc.get_absolute_url(),
page.find('.metadataDisplay').attr('href'))
self.assertContains(response,
'The slug provided is not valid.')
for invalid_slug in invalid_slugs:
test_invalid_slug(invalid_slug,
new_doc_url + '?parent=' + str(doc.id),
child_data, doc)
# Attempt to create the child with *valid* slug,
# should succeed and redirect
child_data['slug'] = child_slug
full_child_slug = slug + '/' + child_data['slug']
response = self.client.post(new_doc_url + '?parent=' + str(doc.id),
child_data)
eq_(302, response.status_code)
self.assertRedirects(response, reverse('wiki.document',
locale=locale,
args=[full_child_slug]))
child_doc = Document.objects.get(locale=locale,
slug=full_child_slug)
eq_(child_doc.slug, full_child_slug)
eq_(0, len(Document.objects.filter(
title=child_data['title'] + ' Redirect 1',
locale=locale)))
# Create grandchild data
grandchild_data = new_document_data()
grandchild_data['title'] = slug + ' Grandchild Doc'
grandchild_data['slug'] = invalid_slug
grandchild_data['content'] = 'This is the content'
grandchild_data['is_localizable'] = True
# Attempt to create the child with invalid slug, validate it fails
response = self.client.post(
new_doc_url + '?parent=' + str(child_doc.id), grandchild_data)
page = pq(response.content)
eq_(200, response.status_code) # 200 = bad, invalid data
# Slug doesn't add parent
eq_(invalid_slug, page.find('input[name=slug]')[0].value)
eq_(child_doc.get_absolute_url(),
page.find('.metadataDisplay').attr('href'))
self.assertContains(response, 'The slug provided is not valid.')
# Attempt to create the child with *valid* slug,
# should succeed and redirect
grandchild_data['slug'] = grandchild_slug
full_grandchild_slug = (full_child_slug + '/' +
grandchild_data['slug'])
response = self.client.post(
new_doc_url + '?parent=' + str(child_doc.id),
grandchild_data)
eq_(302, response.status_code)
self.assertRedirects(response,
reverse('wiki.document', locale=locale,
args=[full_grandchild_slug]))
grandchild_doc = Document.objects.get(locale=locale,
slug=full_grandchild_slug)
eq_(grandchild_doc.slug, full_grandchild_slug)
missing_title = grandchild_data['title'] + ' Redirect 1'
eq_(0, len(Document.objects.filter(title=missing_title,
locale=locale)))
def _run_edit_tests(edit_slug, edit_data, edit_doc,
edit_parent_path):
"""EDIT DOCUMENT TESTING"""
# Load "Edit" page for the root doc, ensure no "/" in the slug
# Also ensure the 'parent' link is not present
response = self.client.get(reverse('wiki.edit_document',
args=[edit_doc.slug], locale=locale))
eq_(200, response.status_code)
page = pq(response.content)
eq_(edit_data['slug'], page.find('input[name=slug]')[0].value)
eq_(edit_parent_path,
page.find('.metadataDisplay').attr('href'))
# Attempt an invalid edit of the root,
# ensure the slug stays the same (i.e. no parent prepending)
def test_invalid_slug_edit(inv_slug, url, data):
data['slug'] = inv_slug
data['form'] = 'rev'
response = self.client.post(url, data)
eq_(200, response.status_code) # 200 = bad, invalid data
page = pq(response.content)
# Slug doesn't add parent
eq_(inv_slug, page.find('input[name=slug]')[0].value)
eq_(edit_parent_path,
page.find('.metadataDisplay').attr('href'))
self.assertContains(response,
'The slug provided is not valid.')
# Ensure no redirect
redirect_title = data['title'] + ' Redirect 1'
eq_(0, len(Document.objects.filter(title=redirect_title,
locale=locale)))
# Push a valid edit, without changing the slug
edit_data['slug'] = edit_slug
edit_data['form'] = 'rev'
response = self.client.post(reverse('wiki.edit_document',
args=[edit_doc.slug],
locale=locale),
edit_data)
eq_(302, response.status_code)
# Ensure no redirect
redirect_title = edit_data['title'] + ' Redirect 1'
eq_(0, len(Document.objects.filter(title=redirect_title,
locale=locale)))
self.assertRedirects(response,
reverse('wiki.document',
locale=locale,
args=[edit_doc.slug]))
def _run_translate_tests(translate_slug, translate_data,
translate_doc):
"""TRANSLATION DOCUMENT TESTING"""
foreign_url = (reverse('wiki.translate',
args=[translate_doc.slug],
locale=locale) +
'?tolocale=' + foreign_locale)
foreign_doc_url = reverse('wiki.document',
args=[translate_doc.slug],
locale=foreign_locale)
# Verify translate page form is populated correctly
response = self.client.get(foreign_url)
eq_(200, response.status_code)
page = pq(response.content)
eq_(translate_data['slug'],
page.find('input[name=slug]')[0].value)
# Attempt an invalid edit of the root
# ensure the slug stays the same (i.e. no parent prepending)
def test_invalid_slug_translate(inv_slug, url, data):
data['slug'] = inv_slug
data['form'] = 'both'
response = self.client.post(url, data)
eq_(200, response.status_code) # 200 = bad, invalid data
page = pq(response.content)
# Slug doesn't add parent
eq_(inv_slug, page.find('input[name=slug]')[0].value)
self.assertContains(response,
'The slug provided is not valid.')
# Ensure no redirect
eq_(0, len(Document.objects.filter(title=data['title'] +
' Redirect 1',
locale=foreign_locale)))
# Push a valid translation
translate_data['slug'] = translate_slug
translate_data['form'] = 'both'
response = self.client.post(foreign_url, translate_data)
eq_(302, response.status_code)
# Ensure no redirect
redirect_title = translate_data['title'] + ' Redirect 1'
eq_(0, len(Document.objects.filter(title=redirect_title,
locale=foreign_locale)))
self.assertRedirects(response, foreign_doc_url)
return Document.objects.get(locale=foreign_locale,
slug=translate_doc.slug)
_run_translate_tests(slug, doc_data, doc)
_run_translate_tests(child_slug, child_data, child_doc)
_run_translate_tests(grandchild_slug, grandchild_data,
grandchild_doc)
def _run_translate_edit_tests(edit_slug, edit_data, edit_doc):
"""TEST BASIC EDIT OF TRANSLATION"""
# Hit the initial URL
response = self.client.get(reverse('wiki.edit_document',
args=[edit_doc.slug],
locale=foreign_locale))
eq_(200, response.status_code)
page = pq(response.content)
eq_(edit_data['slug'], page.find('input[name=slug]')[0].value)
# Attempt an invalid edit of the root, ensure the slug stays
# the same (i.e. no parent prepending)
edit_data['slug'] = invalid_slug
edit_data['form'] = 'both'
response = self.client.post(reverse('wiki.edit_document',
args=[edit_doc.slug],
locale=foreign_locale),
edit_data)
eq_(200, response.status_code) # 200 = bad, invalid data
page = pq(response.content)
# Slug doesn't add parent
eq_(invalid_slug, page.find('input[name=slug]')[0].value)
            self.assertContains(
                response,
                page.find('ul.errorlist li a[href="#id_slug"]').text())
# Ensure no redirect
eq_(0, len(Document.objects.filter(title=edit_data['title'] +
' Redirect 1',
locale=foreign_locale)))
# Push a valid edit, without changing the slug
edit_data['slug'] = edit_slug
response = self.client.post(reverse('wiki.edit_document',
args=[edit_doc.slug],
locale=foreign_locale),
edit_data)
eq_(302, response.status_code)
# Ensure no redirect
eq_(0, len(Document.objects.filter(title=edit_data['title'] +
' Redirect 1',
locale=foreign_locale)))
self.assertRedirects(response, reverse('wiki.document',
locale=foreign_locale,
args=[edit_doc.slug]))
""" TEST EDITING SLUGS AND TRANSLATIONS """
def _run_slug_edit_tests(edit_slug, edit_data, edit_doc, loc):
edit_data['slug'] = edit_data['slug'] + '_Updated'
edit_data['form'] = 'rev'
response = self.client.post(reverse('wiki.edit_document',
args=[edit_doc.slug],
locale=loc),
edit_data)
eq_(302, response.status_code)
            # HACK: the es doc's redirect gets a localized 'Redirigen 1'
            # title, hence the ' Redir' prefix match below.
            # Ensure *1* redirect
eq_(1,
len(Document.objects.filter(
title__contains=edit_data['title'] + ' Redir',
locale=loc)))
self.assertRedirects(response,
reverse('wiki.document',
locale=loc,
args=[edit_doc.slug.replace(
edit_slug,
edit_data['slug'])]))
# Run all of the tests
_createAndRunTests("parent")
        # Test that slugs with the same "specific" slug but at different
        # levels in the hierarchy are validated properly upon submission
# Create base doc
parent_doc = document(title='Length',
slug='length',
is_localizable=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
parent_doc.save()
r = revision(document=parent_doc)
r.save()
# Create child, try to use same slug, should work
child_data = new_document_data()
child_data['title'] = 'Child Length'
child_data['slug'] = 'length'
child_data['content'] = 'This is the content'
child_data['is_localizable'] = True
child_url = (reverse('wiki.new_document') +
'?parent=' +
str(parent_doc.id))
response = self.client.post(child_url, child_data)
eq_(302, response.status_code)
self.assertRedirects(response,
reverse('wiki.document',
args=['length/length'],
locale=settings.WIKI_DEFAULT_LANGUAGE))
# Editing "length/length" document doesn't cause errors
child_data['form'] = 'rev'
child_data['slug'] = ''
edit_url = reverse('wiki.edit_document', args=['length/length'],
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.post(edit_url, child_data)
eq_(302, response.status_code)
self.assertRedirects(response, reverse('wiki.document',
args=['length/length'],
locale=settings.WIKI_DEFAULT_LANGUAGE))
# Creating a new translation of "length" and "length/length"
# doesn't cause errors
child_data['form'] = 'both'
child_data['slug'] = 'length'
translate_url = reverse('wiki.document', args=[child_data['slug']],
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.post(translate_url + '$translate?tolocale=es',
child_data)
eq_(302, response.status_code)
self.assertRedirects(response, reverse('wiki.document',
args=[child_data['slug']],
locale='es'))
translate_url = reverse('wiki.document', args=['length/length'],
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.post(translate_url + '$translate?tolocale=es',
child_data)
eq_(302, response.status_code)
slug = 'length/' + child_data['slug']
self.assertRedirects(response, reverse('wiki.document',
args=[slug],
locale='es'))
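    # Note the URL convention exercised above: wiki actions are addressed by
    # appending a '$'-prefixed verb to the document URL (e.g.
    # /en-US/docs/length$translate?tolocale=es) rather than living in a
    # separate URL namespace.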
def test_translate_keeps_topical_parent(self):
self.client.login(username='admin', password='testpass')
en_doc, de_doc = make_translation()
en_child_doc = document(parent_topic=en_doc, slug='en-child',
save=True)
en_child_rev = revision(document=en_child_doc, save=True)
de_child_doc = document(parent_topic=de_doc, locale='de',
slug='de-child', parent=en_child_doc,
save=True)
revision(document=de_child_doc, save=True)
post_data = {}
post_data['slug'] = de_child_doc.slug
post_data['title'] = 'New title'
post_data['form'] = 'both'
post_data['content'] = 'New translation'
post_data['tolocale'] = 'de'
post_data['toc_depth'] = 0
post_data['based_on'] = en_child_rev.id
post_data['parent_id'] = en_child_doc.id
translate_url = reverse('wiki.edit_document',
args=[de_child_doc.slug],
locale='de')
self.client.post(translate_url, post_data)
de_child_doc = Document.objects.get(locale='de', slug='de-child')
eq_(en_child_doc, de_child_doc.parent)
eq_(de_doc, de_child_doc.parent_topic)
eq_('New translation', de_child_doc.current_revision.content)
def test_translate_keeps_toc_depth(self):
self.client.login(username='admin', password='testpass')
locale = settings.WIKI_DEFAULT_LANGUAGE
original_slug = 'eng-doc'
foreign_locale = 'es'
foreign_slug = 'es-doc'
en_doc = document(title='Eng Doc', slug=original_slug,
is_localizable=True, locale=locale)
en_doc.save()
r = revision(document=en_doc, toc_depth=1)
r.save()
post_data = new_document_data()
post_data['title'] = 'ES Doc'
post_data['slug'] = foreign_slug
post_data['content'] = 'This is the content'
post_data['is_localizable'] = True
post_data['form'] = 'both'
post_data['toc_depth'] = r.toc_depth
translate_url = reverse('wiki.document', args=[original_slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
translate_url += '$translate?tolocale=' + foreign_locale
response = self.client.post(translate_url, post_data)
self.assertRedirects(response, reverse('wiki.document',
args=[foreign_slug],
locale=foreign_locale))
es_d = Document.objects.get(locale=foreign_locale, slug=foreign_slug)
eq_(r.toc_depth, es_d.current_revision.toc_depth)
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
def test_translate_rebuilds_source_json(self):
self.client.login(username='admin', password='testpass')
# Create an English original and a Spanish translation.
en_slug = 'en-doc'
es_locale = 'es'
es_slug = 'es-doc'
en_doc = document(title='EN Doc',
slug=en_slug,
is_localizable=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
en_doc.save()
en_doc.render()
en_doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=en_slug)
json.loads(en_doc.json)
r = revision(document=en_doc)
r.save()
translation_data = new_document_data()
translation_data['title'] = 'ES Doc'
translation_data['slug'] = es_slug
translation_data['content'] = 'This is the content'
translation_data['is_localizable'] = False
translation_data['form'] = 'both'
translate_url = reverse('wiki.document', args=[en_slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
translate_url += '$translate?tolocale=' + es_locale
response = self.client.post(translate_url, translation_data)
# Sanity to make sure the translate succeeded.
self.assertRedirects(response, reverse('wiki.document',
args=[es_slug],
locale=es_locale))
es_doc = Document.objects.get(locale=es_locale,
slug=es_slug)
es_doc.render()
new_en_json = json.loads(Document.objects.get(pk=en_doc.pk).json)
ok_('translations' in new_en_json)
ok_(translation_data['title'] in [t['title'] for t in
new_en_json['translations']])
es_translation_json = [t for t in new_en_json['translations'] if
t['title'] == translation_data['title']][0]
eq_(es_translation_json['last_edit'],
es_doc.current_revision.created.isoformat())
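    # Rendering the parent regenerates its cached JSON; from the assertions
    # above, its shape is roughly the following (fields beyond those
    # asserted here are not guaranteed):
    #
    #     {
    #         "translations": [
    #             {"title": "ES Doc",
    #              "last_edit": "2015-01-01T00:00:00",
    #              ...},
    #             ...
    #         ]
    #     }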
def test_slug_translate(self):
"""Editing a translated doc keeps the correct slug"""
self.client.login(username='admin', password='testpass')
# Settings
original_slug = 'eng-doc'
child_slug = 'child-eng-doc'
foreign_locale = 'es'
foreign_slug = 'es-doc'
foreign_child_slug = 'child-es-doc'
# Create the one-level English Doc
en_doc = document(title='Eng Doc',
slug=original_slug,
is_localizable=True,
locale=settings.WIKI_DEFAULT_LANGUAGE)
en_doc.save()
r = revision(document=en_doc)
r.save()
# Translate to ES
parent_data = new_document_data()
parent_data['title'] = 'ES Doc'
parent_data['slug'] = foreign_slug
parent_data['content'] = 'This is the content'
parent_data['is_localizable'] = True
parent_data['form'] = 'both'
translate_url = reverse('wiki.document', args=[original_slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
translate_url += '$translate?tolocale=' + foreign_locale
response = self.client.post(translate_url, parent_data)
self.assertRedirects(response, reverse('wiki.document',
args=[foreign_slug],
locale=foreign_locale))
        # Go to edit the translation, ensure the slug is correct
response = self.client.get(reverse('wiki.edit_document',
args=[foreign_slug],
locale=foreign_locale))
page = pq(response.content)
eq_(page.find('input[name=slug]')[0].value, foreign_slug)
# Create an English child now
en_doc = document(title='Child Eng Doc',
slug=original_slug + '/' + child_slug,
is_localizable=True,
locale=settings.WIKI_DEFAULT_LANGUAGE,
parent_topic=en_doc)
en_doc.save()
r = revision(document=en_doc)
r.save()
# Translate to ES
child_data = new_document_data()
child_data['title'] = 'ES Child Doc'
child_data['slug'] = foreign_child_slug
child_data['content'] = 'This is the content'
child_data['is_localizable'] = True
child_data['form'] = 'both'
translate_url = reverse('wiki.document',
args=[original_slug + '/' + child_slug],
locale=settings.WIKI_DEFAULT_LANGUAGE)
translate_url += '$translate?tolocale=' + foreign_locale
response = self.client.post(translate_url, child_data)
slug = foreign_slug + '/' + child_data['slug']
self.assertRedirects(response, reverse('wiki.document',
args=[slug],
locale=foreign_locale))
def test_clone(self):
self.client.login(username='admin', password='testpass')
slug = None
title = None
content = '<p>Hello!</p>'
test_revision = revision(save=True, title=title, slug=slug,
content=content)
document = test_revision.document
        response = self.client.get(
            reverse('wiki.new_document', args=[],
                    locale=settings.WIKI_DEFAULT_LANGUAGE) +
            '?clone=' + str(document.id))
page = pq(response.content)
eq_(page.find('input[name=title]')[0].value, title)
eq_(page.find('input[name=slug]')[0].value, slug)
self.assertHTMLEqual(page.find('textarea[name=content]')[0].value, content)
def test_localized_based_on(self):
"""Editing a localized article 'based on' an older revision of the
localization is OK."""
self.client.login(username='admin', password='testpass')
en_r = revision(save=True)
fr_d = document(parent=en_r.document, locale='fr', save=True)
fr_r = revision(document=fr_d, based_on=en_r, save=True)
url = reverse('wiki.new_revision_based_on',
locale='fr', args=(fr_d.slug, fr_r.pk,))
response = self.client.get(url)
input = pq(response.content)('#id_based_on')[0]
eq_(int(input.value), en_r.pk)
def test_restore_translation_source(self):
"""Edit a localized article without an English parent allows user to
set translation parent."""
# Create english doc
self.client.login(username='admin', password='testpass')
data = new_document_data()
self.client.post(reverse('wiki.new_document'), data)
en_d = Document.objects.get(locale=data['locale'], slug=data['slug'])
# Create french doc
data.update({'locale': 'fr',
'title': 'A Tést Articlé',
'content': "C'ést bon."})
self.client.post(reverse('wiki.new_document', locale='fr'), data)
fr_d = Document.objects.get(locale=data['locale'], slug=data['slug'])
# Check edit doc page for choose parent box
url = reverse('wiki.edit_document', args=[fr_d.slug], locale='fr')
response = self.client.get(url)
ok_(pq(response.content)('li.metadata-choose-parent'))
# Set the parent
data.update({'form': 'rev', 'parent_id': en_d.id})
resp = self.client.post(url, data)
eq_(302, resp.status_code)
ok_('fr/docs/a-test-article' in resp['Location'])
# Check the languages drop-down
resp = self.client.get(resp['Location'])
translations = pq(resp.content)('ul#translations li')
ok_('A Test Article' in translations.html())
ok_('English (US)' in translations.text())
def test_translation_source(self):
"""Allow users to change "translation source" settings"""
self.client.login(username='admin', password='testpass')
data = new_document_data()
self.client.post(reverse('wiki.new_document'), data)
parent = Document.objects.get(locale=data['locale'], slug=data['slug'])
data.update({'title': 'Another Test Article',
'content': "Yahoooo!",
'parent_id': parent.id})
self.client.post(reverse('wiki.new_document'), data)
child = Document.objects.get(locale=data['locale'], slug=data['slug'])
url = reverse('wiki.edit_document', args=[child.slug])
response = self.client.get(url)
content = pq(response.content)
ok_(content('li.metadata-choose-parent'))
ok_(str(parent.id) in content.html())
@attr('tags')
@mock.patch.object(Site.objects, 'get_current')
def test_document_tags(self, get_current):
"""Document tags can be edited through revisions"""
data = new_document_data()
locale = data['locale']
slug = data['slug']
path = slug
ts1 = ('JavaScript', 'AJAX', 'DOM')
ts2 = ('XML', 'JSON')
get_current.return_value.domain = 'su.mo.com'
self.client.login(username='admin', password='testpass')
def assert_tag_state(yes_tags, no_tags):
# Ensure the tags are found for the Documents
doc = Document.objects.get(locale=locale, slug=slug)
doc_tags = [x.name for x in doc.tags.all()]
for t in yes_tags:
ok_(t in doc_tags)
for t in no_tags:
ok_(t not in doc_tags)
# Ensure the tags are found in the Document view
response = self.client.get(reverse('wiki.document',
args=[doc.slug]), data)
page = pq(response.content)
            for t in yes_tags:
                eq_(1, page.find('.tags li a:contains("%s")' % t).length,
                    '%s should appear in document view tags' % t)
            for t in no_tags:
                eq_(0, page.find('.tags li a:contains("%s")' % t).length,
                    '%s should NOT appear in document view tags' % t)
# Check for the document slug (title in feeds) in the tag listing
for t in yes_tags:
response = self.client.get(reverse('wiki.tag', args=[t]))
self.assertContains(response, doc.slug, msg_prefix=t)
response = self.client.get(reverse('wiki.feeds.recent_documents',
args=['atom', t]))
self.assertContains(response, doc.title)
for t in no_tags:
response = self.client.get(reverse('wiki.tag', args=[t]))
ok_(doc.slug not in response.content.decode('utf-8'))
response = self.client.get(reverse('wiki.feeds.recent_documents',
args=['atom', t]))
self.assertNotContains(response, doc.title)
# Create a new doc with tags
data.update({'slug': slug, 'tags': ','.join(ts1)})
self.client.post(reverse('wiki.new_document'), data)
assert_tag_state(ts1, ts2)
# Now, update the tags.
data.update({'form': 'rev', 'tags': ', '.join(ts2)})
self.client.post(reverse('wiki.edit_document',
args=[path]), data)
assert_tag_state(ts2, ts1)
@attr('review_tags')
@mock.patch.object(Site.objects, 'get_current')
def test_review_tags(self, get_current):
"""Review tags can be managed on document revisions"""
get_current.return_value.domain = 'su.mo.com'
self.client.login(username='admin', password='testpass')
# Create a new doc with one review tag
data = new_document_data()
data.update({'review_tags': ['technical']})
response = self.client.post(reverse('wiki.new_document'), data)
# Ensure there's now a doc with that expected tag in its newest
# revision
doc = Document.objects.get(slug="a-test-article")
rev = doc.revisions.order_by('-id').all()[0]
review_tags = [x.name for x in rev.review_tags.all()]
eq_(['technical'], review_tags)
# Now, post an update with two tags
data.update({
'form': 'rev',
'review_tags': ['editorial', 'technical'],
})
response = self.client.post(reverse('wiki.edit_document',
args=[doc.slug]), data)
# Ensure the doc's newest revision has both tags.
doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug="a-test-article")
rev = doc.revisions.order_by('-id').all()[0]
review_tags = [x.name for x in rev.review_tags.all()]
review_tags.sort()
eq_(['editorial', 'technical'], review_tags)
# Now, ensure that warning boxes appear for the review tags.
response = self.client.get(reverse('wiki.document',
args=[doc.slug]), data)
page = pq(response.content)
eq_(2, page.find('.warning.warning-review').length)
# Ensure the page appears on the listing pages
response = self.client.get(reverse('wiki.list_review'))
eq_(1, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
response = self.client.get(reverse('wiki.list_review_tag',
args=('technical',)))
eq_(1, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
response = self.client.get(reverse('wiki.list_review_tag',
args=('editorial',)))
eq_(1, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
# Also, ensure that the page appears in the proper feeds
# HACK: Too lazy to parse the XML. Lazy lazy.
response = self.client.get(reverse('wiki.feeds.list_review',
args=('atom',)))
ok_('<entry><title>%s</title>' % doc.title in response.content)
response = self.client.get(reverse('wiki.feeds.list_review_tag',
args=('atom', 'technical', )))
ok_('<entry><title>%s</title>' % doc.title in response.content)
response = self.client.get(reverse('wiki.feeds.list_review_tag',
args=('atom', 'editorial', )))
ok_('<entry><title>%s</title>' % doc.title in response.content)
# Post an edit that removes one of the tags.
data.update({
'form': 'rev',
'review_tags': ['editorial', ]
})
response = self.client.post(reverse('wiki.edit_document',
args=[doc.slug]), data)
# Ensure only one of the tags' warning boxes appears, now.
response = self.client.get(reverse('wiki.document',
args=[doc.slug]), data)
page = pq(response.content)
eq_(1, page.find('.warning.warning-review').length)
# Ensure the page appears on the listing pages
response = self.client.get(reverse('wiki.list_review'))
eq_(1, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
response = self.client.get(reverse('wiki.list_review_tag',
args=('technical',)))
eq_(0, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
response = self.client.get(reverse('wiki.list_review_tag',
args=('editorial',)))
eq_(1, pq(response.content).find("ul.document-list li a:contains('%s')" %
doc.title).length)
# Also, ensure that the page appears in the proper feeds
# HACK: Too lazy to parse the XML. Lazy lazy.
response = self.client.get(reverse('wiki.feeds.list_review',
args=('atom',)))
ok_('<entry><title>%s</title>' % doc.title in response.content)
response = self.client.get(reverse('wiki.feeds.list_review_tag',
args=('atom', 'technical', )))
ok_('<entry><title>%s</title>' % doc.title not in response.content)
response = self.client.get(reverse('wiki.feeds.list_review_tag',
args=('atom', 'editorial', )))
ok_('<entry><title>%s</title>' % doc.title in response.content)
@attr('review-tags')
def test_quick_review(self):
"""Test the quick-review button."""
self.client.login(username='admin', password='testpass')
test_data = [
{
'params': {'approve_technical': 1},
'expected_tags': ['editorial'],
'name': 'technical',
'message_contains': ['Technical review completed.']
},
{
'params': {'approve_editorial': 1},
'expected_tags': ['technical'],
'name': 'editorial',
'message_contains': ['Editorial review completed.']
},
{
'params': {
'approve_technical': 1,
'approve_editorial': 1
},
'expected_tags': [],
'name': 'editorial-technical',
'message_contains': [
'Technical review completed.',
'Editorial review completed.',
]
}
]
for data_dict in test_data:
slug = 'test-quick-review-%s' % data_dict['name']
data = new_document_data()
data.update({'review_tags': ['editorial', 'technical'],
'slug': slug})
resp = self.client.post(reverse('wiki.new_document'), data)
doc = Document.objects.get(slug=slug)
rev = doc.revisions.order_by('-id').all()[0]
review_url = reverse('wiki.quick_review',
args=[doc.slug])
params = dict(data_dict['params'], revision_id=rev.id)
resp = self.client.post(review_url, params)
eq_(302, resp.status_code)
doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=slug)
rev = doc.revisions.order_by('-id').all()[0]
review_tags = [x.name for x in rev.review_tags.all()]
review_tags.sort()
for expected_str in data_dict['message_contains']:
ok_(expected_str in rev.summary)
ok_(expected_str in rev.comment)
eq_(data_dict['expected_tags'], review_tags)
@attr('midair')
def test_edit_midair_collision(self):
self.client.login(username='admin', password='testpass')
# Post a new document.
data = new_document_data()
resp = self.client.post(reverse('wiki.new_document'), data)
doc = Document.objects.get(slug=data['slug'])
# Edit #1 starts...
resp = self.client.get(reverse('wiki.edit_document',
args=[doc.slug]))
page = pq(resp.content)
rev_id1 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 starts...
resp = self.client.get(reverse('wiki.edit_document',
args=[doc.slug]))
page = pq(resp.content)
rev_id2 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 submits successfully
data.update({
'form': 'rev',
'content': 'This edit got there first',
'current_rev': rev_id2
})
resp = self.client.post(reverse('wiki.edit_document',
args=[doc.slug]), data)
eq_(302, resp.status_code)
# Edit #1 submits, but receives a mid-aired notification
data.update({
'form': 'rev',
'content': 'This edit gets mid-aired',
'current_rev': rev_id1
})
resp = self.client.post(reverse('wiki.edit_document',
args=[doc.slug]), data)
eq_(200, resp.status_code)
ok_(unicode(MIDAIR_COLLISION).encode('utf-8') in resp.content,
"Midair collision message should appear")
@attr('toc')
def test_toc_toggle_off(self):
"""Toggling of table of contents in revisions"""
self.client.login(username='admin', password='testpass')
d, _ = doc_rev()
data = new_document_data()
ok_(Document.objects.get(slug=d.slug, locale=d.locale).show_toc)
data['form'] = 'rev'
data['toc_depth'] = 0
data['slug'] = d.slug
data['title'] = d.title
self.client.post(reverse('wiki.edit_document',
args=[d.slug]),
data)
doc = Document.objects.get(slug=d.slug, locale=d.locale)
eq_(0, doc.current_revision.toc_depth)
@attr('toc')
def test_toc_toggle_on(self):
"""Toggling of table of contents in revisions"""
self.client.login(username='admin', password='testpass')
d, r = doc_rev()
new_r = revision(document=d, content=r.content, toc_depth=0,
is_approved=True)
new_r.save()
ok_(not Document.objects.get(slug=d.slug, locale=d.locale).show_toc)
data = new_document_data()
data['form'] = 'rev'
data['slug'] = d.slug
data['title'] = d.title
self.client.post(reverse('wiki.edit_document',
args=[d.slug]),
data)
ok_(Document.objects.get(slug=d.slug, locale=d.locale).show_toc)
def test_parent_topic(self):
"""Selection of a parent topic when creating a document."""
self.client.login(username='admin', password='testpass')
d = document(title='HTML8')
d.save()
r = revision(document=d)
r.save()
data = new_document_data()
data['title'] = 'Replicated local storage'
data['parent_topic'] = d.id
resp = self.client.post(reverse('wiki.new_document'), data)
eq_(302, resp.status_code)
ok_(d.children.count() == 1)
ok_(d.children.all()[0].title == 'Replicated local storage')
def test_repair_breadcrumbs(self):
english_top = document(locale=settings.WIKI_DEFAULT_LANGUAGE,
title='English top',
save=True)
english_mid = document(locale=settings.WIKI_DEFAULT_LANGUAGE,
title='English mid',
parent_topic=english_top,
save=True)
english_bottom = document(locale=settings.WIKI_DEFAULT_LANGUAGE,
title='English bottom',
parent_topic=english_mid,
save=True)
french_top = document(locale='fr',
title='French top',
parent=english_top,
save=True)
french_mid = document(locale='fr',
title='French mid',
parent=english_mid,
parent_topic=english_mid,
save=True)
french_bottom = document(locale='fr',
title='French bottom',
parent=english_bottom,
parent_topic=english_bottom,
save=True)
self.client.login(username='admin', password='testpass')
resp = self.client.get(reverse('wiki.repair_breadcrumbs',
args=[french_bottom.slug],
locale='fr'))
eq_(302, resp.status_code)
ok_(french_bottom.get_absolute_url() in resp['Location'])
french_bottom_fixed = Document.objects.get(locale='fr',
title=french_bottom.title)
eq_(french_mid.id, french_bottom_fixed.parent_topic.id)
eq_(french_top.id, french_bottom_fixed.parent_topic.parent_topic.id)
def test_translate_on_edit(self):
d1 = document(title="Doc1", locale=settings.WIKI_DEFAULT_LANGUAGE,
save=True)
revision(document=d1, save=True)
d2 = document(title="TransDoc1", locale='de', parent=d1, save=True)
revision(document=d2, save=True)
self.client.login(username='admin', password='testpass')
url = reverse('wiki.edit_document', args=(d2.slug,), locale=d2.locale)
resp = self.client.get(url)
eq_(200, resp.status_code)
def test_discard_location(self):
"""Testing that the 'discard' HREF goes to the correct place when it's
explicitely and implicitely set"""
self.client.login(username='admin', password='testpass')
def _create_doc(slug, locale):
doc = document(slug=slug, is_localizable=True, locale=locale)
doc.save()
r = revision(document=doc)
r.save()
return doc
# Test that the 'discard' button on an edit goes to the original page
doc = _create_doc('testdiscarddoc', settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.get(reverse('wiki.edit_document',
args=[doc.slug], locale=doc.locale))
eq_(pq(response.content).find('.btn-discard').attr('href'),
reverse('wiki.document', args=[doc.slug], locale=doc.locale))
        # Test that the 'discard' button on a new translation goes
        # to the en-US page
response = self.client.get(reverse('wiki.translate',
args=[doc.slug], locale=doc.locale) + '?tolocale=es')
eq_(pq(response.content).find('.btn-discard').attr('href'),
reverse('wiki.document', args=[doc.slug], locale=doc.locale))
# Test that the 'discard' button on an existing translation goes
# to the 'es' page
foreign_doc = _create_doc('testdiscarddoc', 'es')
response = self.client.get(reverse('wiki.edit_document',
args=[foreign_doc.slug],
locale=foreign_doc.locale))
eq_(pq(response.content).find('.btn-discard').attr('href'),
reverse('wiki.document', args=[foreign_doc.slug],
locale=foreign_doc.locale))
# Test new
response = self.client.get(reverse('wiki.new_document',
locale=settings.WIKI_DEFAULT_LANGUAGE))
eq_(pq(response.content).find('.btn-discard').attr('href'),
reverse('wiki.new_document',
locale=settings.WIKI_DEFAULT_LANGUAGE))
@override_constance_settings(KUMASCRIPT_TIMEOUT=1.0)
@mock.patch('kuma.wiki.kumascript.get')
def test_revert(self, mock_kumascript_get):
self.client.login(username='admin', password='testpass')
mock_kumascript_get.return_value = (
'lorem ipsum dolor sit amet', None)
data = new_document_data()
data['title'] = 'A Test Article For Reverting'
data['slug'] = 'test-article-for-reverting'
response = self.client.post(reverse('wiki.new_document'), data)
doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug='test-article-for-reverting')
rev = doc.revisions.order_by('-id').all()[0]
data['content'] = 'Not lorem ipsum anymore'
data['comment'] = 'Nobody likes Latin anyway'
response = self.client.post(reverse('wiki.edit_document',
args=[doc.slug]), data)
mock_kumascript_get.called = False
response = self.client.post(reverse('wiki.revert_document',
args=[doc.slug, rev.id]),
{'revert': True, 'comment': 'Blah blah'})
ok_(mock_kumascript_get.called,
"kumascript should have been used")
ok_(302 == response.status_code)
rev = doc.revisions.order_by('-id').all()[0]
ok_('lorem ipsum dolor sit amet' == rev.content)
ok_('Blah blah' in rev.comment)
mock_kumascript_get.called = False
rev = doc.revisions.order_by('-id').all()[1]
response = self.client.post(reverse('wiki.revert_document',
args=[doc.slug, rev.id]),
{'revert': True})
ok_(302 == response.status_code)
rev = doc.revisions.order_by('-id').all()[0]
ok_(': ' not in rev.comment)
ok_(mock_kumascript_get.called,
"kumascript should have been used")
def test_store_revision_ip(self):
self.client.login(username='testuser', password='testpass')
data = new_document_data()
slug = 'test-article-for-storing-revision-ip'
data.update({'title': 'A Test Article For Storing Revision IP',
'slug': slug})
self.client.post(reverse('wiki.new_document'), data)
doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=slug)
data.update({'form': 'rev',
'content': 'This revision should NOT record IP',
'comment': 'This revision should NOT record IP'})
self.client.post(reverse('wiki.edit_document', args=[doc.slug]),
data)
eq_(0, RevisionIP.objects.all().count())
Switch.objects.create(name='store_revision_ips', active=True)
data.update({'content': 'Store the IP address for the revision.',
'comment': 'Store the IP address for the revision.'})
self.client.post(reverse('wiki.edit_document', args=[doc.slug]),
data)
eq_(1, RevisionIP.objects.all().count())
rev = doc.revisions.order_by('-id').all()[0]
rev_ip = RevisionIP.objects.get(revision=rev)
eq_('127.0.0.1', rev_ip.ip)
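    # Revision IPs are recorded only while the 'store_revision_ips' waffle
    # switch is active, so the save path is presumably gated along these
    # lines (the exact call site in the view isn't shown here):
    #
    #     import waffle
    #     if waffle.switch_is_active('store_revision_ips'):
    #         RevisionIP.objects.create(revision=rev,
    #                                   ip=request.META.get('REMOTE_ADDR'))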
@mock.patch.object(Site.objects, 'get_current')
def test_email_for_first_edits(self, get_current):
get_current.return_value.domain = 'dev.mo.org'
self.client.login(username='testuser', password='testpass')
data = new_document_data()
slug = 'test-article-for-storing-revision-ip'
data.update({'title': 'A Test Article For First Edit Emails',
'slug': slug})
self.client.post(reverse('wiki.new_document'), data)
eq_(1, len(mail.outbox))
doc = Document.objects.get(
locale=settings.WIKI_DEFAULT_LANGUAGE, slug=slug)
data.update({'form': 'rev',
'content': 'This edit should not send an email',
'comment': 'This edit should not send an email'})
self.client.post(reverse('wiki.edit_document',
args=[doc.slug]),
data)
eq_(1, len(mail.outbox))
self.client.login(username='admin', password='testpass')
data.update({'content': 'Admin first edit should send an email',
'comment': 'Admin first edit should send an email'})
self.client.post(reverse('wiki.edit_document',
args=[doc.slug]),
data)
eq_(2, len(mail.outbox))
def _check_message_for_headers(message, username):
ok_("%s made their first edit" % username in message.subject)
eq_({'X-Kuma-Document-Url': "https://dev.mo.org%s" % doc.get_absolute_url(),
'X-Kuma-Editor-Username': username}, message.extra_headers)
testuser_message = mail.outbox[0]
admin_message = mail.outbox[1]
_check_message_for_headers(testuser_message, 'testuser')
_check_message_for_headers(admin_message, 'admin')
class DocumentWatchTests(UserTestCase, WikiTestCase):
"""Tests for un/subscribing to document edit notifications."""
localizing_client = True
def setUp(self):
super(DocumentWatchTests, self).setUp()
self.document, self.r = doc_rev()
self.client.login(username='testuser', password='testpass')
def test_watch_GET_405(self):
"""Watch document with HTTP GET results in 405."""
response = get(self.client, 'wiki.subscribe',
args=[self.document.slug])
eq_(405, response.status_code)
def test_unwatch_GET_405(self):
"""Unwatch document with HTTP GET results in 405."""
response = get(self.client, 'wiki.subscribe',
args=[self.document.slug])
eq_(405, response.status_code)
def test_watch_unwatch(self):
"""Watch and unwatch a document."""
user = self.user_model.objects.get(username='testuser')
# Subscribe
response = post(self.client, 'wiki.subscribe', args=[self.document.slug])
eq_(200, response.status_code)
assert EditDocumentEvent.is_notifying(user, self.document), \
'Watch was not created'
# Unsubscribe
response = post(self.client, 'wiki.subscribe', args=[self.document.slug])
eq_(200, response.status_code)
assert not EditDocumentEvent.is_notifying(user, self.document), \
'Watch was not destroyed'
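    # The same 'wiki.subscribe' endpoint toggles the subscription: the first
    # POST creates the EditDocumentEvent watch and the second destroys it,
    # which is why both assertions above go through the same URL.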
class SectionEditingResourceTests(UserTestCase, WikiTestCase):
localizing_client = True
def test_raw_source(self):
"""The raw source for a document can be requested"""
self.client.login(username='admin', password='testpass')
d, r = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
expected = """
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
"""
Switch.objects.create(name='application_ACAO', active=True)
response = self.client.get('%s?raw=true' %
reverse('wiki.document', args=[d.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
ok_('Access-Control-Allow-Origin' in response)
eq_('*', response['Access-Control-Allow-Origin'])
eq_(normalize_html(expected),
normalize_html(response.content))
@attr('bug821986')
def test_raw_editor_safety_filter(self):
"""Safety filter should be applied before rendering editor"""
self.client.login(username='admin', password='testpass')
d, r = doc_rev("""
<p onload=alert(3)>FOO</p>
<svg><circle onload=confirm(3)>HI THERE</circle></svg>
""")
response = self.client.get('%s?raw=true' %
reverse('wiki.document', args=[d.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
ok_('<p onload=' not in response.content)
ok_('<circle onload=' not in response.content)
def test_raw_with_editing_links_source(self):
"""The raw source for a document can be requested, with section editing
links"""
self.client.login(username='admin', password='testpass')
d, r = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
expected = """
<h1 id="s1"><a class="edit-section" data-section-id="s1" data-section-src-url="/en-US/docs/%(slug)s?raw=true&section=s1" href="/en-US/docs/%(slug)s$edit?section=s1&edit_links=true" title="Edit section">Edit</a>s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2"><a class="edit-section" data-section-id="s2" data-section-src-url="/en-US/docs/%(slug)s?raw=true&section=s2" href="/en-US/docs/%(slug)s$edit?section=s2&edit_links=true" title="Edit section">Edit</a>s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3"><a class="edit-section" data-section-id="s3" data-section-src-url="/en-US/docs/%(slug)s?raw=true&section=s3" href="/en-US/docs/%(slug)s$edit?section=s3&edit_links=true" title="Edit section">Edit</a>s3</h1>
<p>test</p>
<p>test</p>
""" % {'slug': d.slug}
response = self.client.get('%s?raw=true&edit_links=true' %
reverse('wiki.document', args=[d.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(response.content))
def test_raw_section_source(self):
"""The raw source for a document section can be requested"""
self.client.login(username='admin', password='testpass')
d, r = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
expected = """
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
"""
response = self.client.get('%s?section=s2&raw=true' %
reverse('wiki.document',
args=[d.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(response.content))
@attr('midair')
@attr('rawsection')
def test_raw_section_edit(self):
self.client.login(username='admin', password='testpass')
d, r = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
replace = """
<h1 id="s2">s2</h1>
<p>replace</p>
"""
expected = """
<h1 id="s2">s2</h1>
<p>replace</p>
"""
response = self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document',
args=[d.slug]),
{"form": "rev",
"slug": d.slug,
"content": replace},
follow=True,
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(response.content))
expected = """
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>replace</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
"""
response = self.client.get('%s?raw=true' %
reverse('wiki.document',
args=[d.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(response.content))
@attr('midair')
def test_midair_section_merge(self):
"""If a page was changed while someone was editing, but the changes
didn't affect the specific section being edited, then ignore the midair
warning"""
self.client.login(username='admin', password='testpass')
doc, rev = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
replace_1 = """
<h1 id="replace1">replace1</h1>
<p>replace</p>
"""
replace_2 = """
<h1 id="replace2">replace2</h1>
<p>replace</p>
"""
expected = """
<h1 id="replace1">replace1</h1>
<p>replace</p>
<h1 id="replace2">replace2</h1>
<p>replace</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
"""
data = {
'form': 'rev',
'content': rev.content,
'slug': ''
}
# Edit #1 starts...
resp = self.client.get('%s?section=s1' %
reverse('wiki.edit_document',
args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
page = pq(resp.content)
rev_id1 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 starts...
resp = self.client.get('%s?section=s2' %
reverse('wiki.edit_document',
args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
page = pq(resp.content)
rev_id2 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 submits successfully
data.update({
'form': 'rev',
'content': replace_2,
'current_rev': rev_id2,
'slug': doc.slug
})
resp = self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document',
args=[doc.slug]),
data,
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(302, resp.status_code)
# Edit #1 submits, but since it's a different section, there's no
# mid-air collision
data.update({
'form': 'rev',
'content': replace_1,
'current_rev': rev_id1
})
resp = self.client.post('%s?section=s1&raw=true' %
reverse('wiki.edit_document', args=[doc.slug]),
data,
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
# No conflict, but we should get a 205 Reset as an indication that the
# page needs a refresh.
eq_(205, resp.status_code)
# Finally, make sure that all the edits landed
response = self.client.get('%s?raw=true' %
reverse('wiki.document',
args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(response.content))
# Also, ensure that the revision is slipped into the headers
eq_(unicode(Document.objects.get(slug=doc.slug, locale=doc.locale)
.current_revision.id),
unicode(response['x-kuma-revision']))
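    # Status-code convention for section edits over the raw API, as
    # exercised here and in the collision test below: 302 is a clean save,
    # 205 (Reset Content) is a successful merge that asks the client to
    # reload the page, and 409 (Conflict) is a true mid-air collision on
    # the same section.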
@attr('midair')
def test_midair_section_collision(self):
"""If both a revision and the edited section has changed, then a
section edit is a collision."""
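# With the raw API there is no conflict-resolution form to render, so a
# true collision is signaled with a bare 409 Conflict status (asserted
# at the end of this test).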
self.client.login(username='admin', password='testpass')
doc, rev = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
replace_1 = """
<h1 id="s2">replace</h1>
<p>replace</p>
"""
replace_2 = """
<h1 id="s2">first replace</h1>
<p>first replace</p>
"""
data = {
'form': 'rev',
'content': rev.content
}
# Edit #1 starts...
resp = self.client.get('%s?section=s2' %
reverse('wiki.edit_document',
args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
page = pq(resp.content)
rev_id1 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 starts...
resp = self.client.get('%s?section=s2' %
reverse('wiki.edit_document',
args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
page = pq(resp.content)
rev_id2 = page.find('input[name="current_rev"]').attr('value')
# Edit #2 submits successfully
data.update({
'form': 'rev',
'content': replace_2,
'slug': doc.slug,
'current_rev': rev_id2
})
resp = self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document',
args=[doc.slug]),
data, HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(302, resp.status_code)
# Edit #1 submits, but since it's the same section, there's a collision
data.update({
'form': 'rev',
'content': replace_1,
'current_rev': rev_id1
})
resp = self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document',
args=[doc.slug]),
data, HTTP_X_REQUESTED_WITH='XMLHttpRequest')
# With the raw API, we should get a 409 Conflict on collision.
eq_(409, resp.status_code)
def test_raw_include_option(self):
doc_src = u"""
<div class="noinclude">{{ XULRefAttr() }}</div>
<dl>
<dt>{{ XULAttr("maxlength") }}</dt>
<dd>Type: <em>integer</em></dd>
<dd>Przykłady 例 예제 示例</dd>
</dl>
<div class="noinclude">
<p>{{ languages( { "ja": "ja/XUL/Attribute/maxlength" } ) }}</p>
</div>
"""
doc, rev = doc_rev(doc_src)
expected = u"""
<dl>
<dt>{{ XULAttr("maxlength") }}</dt>
<dd>Type: <em>integer</em></dd>
<dd>Przykłady 例 예제 示例</dd>
</dl>
"""
resp = self.client.get('%s?raw&include' %
reverse('wiki.document', args=[doc.slug]),
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
eq_(normalize_html(expected),
normalize_html(resp.content.decode('utf-8')))
def test_section_edit_toc(self):
"""show_toc is preserved in section editing."""
self.client.login(username='admin', password='testpass')
doc, rev = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
rev.toc_depth = 1
rev.save()
replace = """
<h1 id="s2">s2</h1>
<p>replace</p>
"""
self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document', args=[doc.slug]),
{"form": "rev", "slug": doc.slug, "content": replace},
follow=True, HTTP_X_REQUESTED_WITH='XMLHttpRequest')
changed = Document.objects.get(pk=doc.id).current_revision
ok_(rev.id != changed.id)
eq_(1, changed.toc_depth)
def test_section_edit_review_tags(self):
"""review tags are preserved in section editing."""
self.client.login(username='admin', password='testpass')
doc, rev = doc_rev("""
<h1 id="s1">s1</h1>
<p>test</p>
<p>test</p>
<h1 id="s2">s2</h1>
<p>test</p>
<p>test</p>
<h1 id="s3">s3</h1>
<p>test</p>
<p>test</p>
""")
tags_to_save = ['bar', 'foo']
rev.save()
rev.review_tags.set(*tags_to_save)
replace = """
<h1 id="s2">s2</h1>
<p>replace</p>
"""
self.client.post('%s?section=s2&raw=true' %
reverse('wiki.edit_document', args=[doc.slug]),
{"form": "rev", "slug": doc.slug, "content": replace},
follow=True, HTTP_X_REQUESTED_WITH='XMLHttpRequest')
changed = Document.objects.get(pk=doc.id).current_revision
ok_(rev.id != changed.id)
eq_(set(tags_to_save),
set([t.name for t in changed.review_tags.all()]))
class MindTouchRedirectTests(UserTestCase, WikiTestCase):
"""
Test that we appropriately redirect old-style MindTouch URLs to
new-style kuma URLs.
"""
# A note on these tests: we could try to use assertRedirects on
# these, but for the most part we're just constructing URLs similar
# enough to the wiki app's own built-in redirects that the wiki app
# will pick up the request and do what we want with it. But it may
# end up issuing its own redirects, which are tricky to sort out
# from the ones the legacy MindTouch handling will emit, so instead
# we just test that A) we did issue a redirect and B) the URL we
# constructed is enough for the document views to go on.
localizing_client = True
server_prefix = 'http://testserver/%s/docs' % settings.WIKI_DEFAULT_LANGUAGE
namespace_urls = (
# One for each namespace.
{'mindtouch': '/Help:Foo',
'kuma': '%s/Help:Foo' % server_prefix},
{'mindtouch': '/Help_talk:Foo',
'kuma': '%s/Help_talk:Foo' % server_prefix},
{'mindtouch': '/Project:En/MDC_editor_guide',
'kuma': '%s/Project:MDC_editor_guide' % server_prefix},
{'mindtouch': '/Project_talk:En/MDC_style_guide',
'kuma': '%s/Project_talk:MDC_style_guide' % server_prefix},
{'mindtouch': '/Special:Foo',
'kuma': '%s/Special:Foo' % server_prefix},
{'mindtouch': '/Talk:en/Foo',
'kuma': '%s/Talk:Foo' % server_prefix},
{'mindtouch': '/Template:Foo',
'kuma': '%s/Template:Foo' % server_prefix},
{'mindtouch': '/User:Foo',
'kuma': '%s/User:Foo' % server_prefix},
)
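# Each old MindTouch namespace URL above should 301 to its kuma
# equivalent under the default locale, per the mindtouch/kuma pairs.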
documents = (
{'title': 'XHTML', 'mt_locale': 'cn', 'kuma_locale': 'zh-CN',
'expected': '/zh-CN/docs/XHTML'},
{'title': 'JavaScript', 'mt_locale': 'zh_cn', 'kuma_locale': 'zh-CN',
'expected': '/zh-CN/docs/JavaScript'},
{'title': 'XHTML6', 'mt_locale': 'zh_tw', 'kuma_locale': 'zh-CN',
'expected': '/zh-TW/docs/XHTML6'},
{'title': 'HTML7', 'mt_locale': 'fr', 'kuma_locale': 'fr',
'expected': '/fr/docs/HTML7'},
)
def test_namespace_urls(self):
new_doc = document()
new_doc.title = 'User:Foo'
new_doc.slug = 'User:Foo'
new_doc.save()
for namespace_test in self.namespace_urls:
resp = self.client.get(namespace_test['mindtouch'], follow=False)
eq_(301, resp.status_code)
eq_(namespace_test['kuma'], resp['Location'])
def test_trailing_slash(self):
d = document()
d.locale = 'zh-CN'
d.slug = 'foofoo'
d.title = 'FooFoo'
d.save()
mt_url = '/cn/%s/' % (d.slug,)
resp = self.client.get(mt_url)
eq_(301, resp.status_code)
eq_('http://testserver%s' % d.get_absolute_url(), resp['Location'])
def test_document_urls(self):
for doc in self.documents:
d = document()
d.title = doc['title']
d.slug = doc['title']
d.locale = doc['kuma_locale']
d.save()
mt_url = '/%s' % '/'.join([doc['mt_locale'], doc['title']])
resp = self.client.get(mt_url)
eq_(301, resp.status_code)
eq_('http://testserver%s' % doc['expected'], resp['Location'])
def test_view_param(self):
d = document()
d.locale = settings.WIKI_DEFAULT_LANGUAGE
d.slug = 'HTML/HTML5'
d.title = 'HTML 5'
d.save()
mt_url = '/en-US/%s?view=edit' % (d.slug,)
resp = self.client.get(mt_url)
eq_(301, resp.status_code)
expected_url = 'http://testserver%s$edit' % d.get_absolute_url()
eq_(expected_url, resp['Location'])
class AutosuggestDocumentsTests(WikiTestCase):
"""
Test that we're properly filtering out redirects from the document list
"""
localizing_client = True
def test_autosuggest_no_term(self):
url = reverse('wiki.autosuggest_documents',
locale=settings.WIKI_DEFAULT_LANGUAGE)
resp = self.client.get(url)
eq_(400, resp.status_code)
def test_document_redirects(self):
# All contain "e", so that will be the search term
invalid_documents = (
{
'title': 'Something Redirect 8',
'html': 'REDIRECT <a class="redirect" href="/blah">Something Redirect</a>',
'is_redirect': 1
},
)
valid_documents = (
{'title': 'e 6', 'html': '<p>Blah text Redirect'},
{'title': 'e 7', 'html': 'AppleTalk'},
{'title': 'Response.Redirect'},
)
for doc in invalid_documents + valid_documents:
d = document()
d.title = doc['title']
if 'html' in doc:
d.html = doc['html']
if 'slug' in doc:
d.slug = doc['slug']
if 'is_redirect' in doc:
d.is_redirect = 1
d.save()
url = reverse('wiki.autosuggest_documents',
locale=settings.WIKI_DEFAULT_LANGUAGE) + '?term=e'
Switch.objects.create(name='application_ACAO', active=True)
resp = self.client.get(url)
ok_('Access-Control-Allow-Origin' in resp)
eq_('*', resp['Access-Control-Allow-Origin'])
eq_(200, resp.status_code)
data = json.loads(resp.content)
eq_(len(data), len(valid_documents))
# Ensure that the valid docs found are all in the valid list
for d in data:
found = False
for v in valid_documents:
if v['title'] in d['title']:
found = True
break
eq_(True, found)
def test_list_no_redirects(self):
Document.objects.all().delete()
invalid_documents = [
{
'title': 'Something Redirect 8',
'slug': 'xx',
'html': 'REDIRECT <a class="redirect" href="%s">yo</a>' % settings.SITE_URL
},
{
'title': 'My Template',
'slug': 'Template:Something',
'html': 'blah',
},
]
valid_documents = [
{'title': 'A Doc', 'slug': 'blah', 'html': 'Blah blah blah'}
]
for doc in invalid_documents + valid_documents:
document(save=True, slug=doc['slug'],
title=doc['title'], html=doc['html'])
resp = self.client.get(reverse('wiki.all_documents',
locale=settings.WIKI_DEFAULT_LANGUAGE))
eq_(len(valid_documents), len(pq(resp.content).find('.document-list li')))
class CodeSampleViewTests(UserTestCase, WikiTestCase):
localizing_client = True
@override_constance_settings(
KUMA_WIKI_IFRAME_ALLOWED_HOSTS='^https?\:\/\/testserver')
def test_code_sample_1(self):
"""The raw source for a document can be requested"""
d, r = doc_rev("""
<p>This is a page. Deal with it.</p>
<div id="sample1" class="code-sample">
<pre class="brush: html">Some HTML</pre>
<pre class="brush: css">.some-css { color: red; }</pre>
<pre class="brush: js">window.alert("HI THERE")</pre>
</div>
<p>test</p>
""")
expecteds = (
'<style type="text/css">.some-css { color: red; }</style>',
'Some HTML',
'<script type="text/javascript">window.alert("HI THERE")</script>',
)
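# The code sample view is expected to assemble the three panes into a
# standalone HTML page: the CSS pane into a <style> tag, the JS pane
# into a <script> tag, and the HTML pane inlined.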
Switch.objects.create(name='application_ACAO', active=True)
response = self.client.get(reverse('wiki.code_sample',
args=[d.slug, 'sample1']),
HTTP_HOST='testserver')
ok_('Access-Control-Allow-Origin' in response)
eq_('*', response['Access-Control-Allow-Origin'])
eq_(200, response.status_code)
normalized = normalize_html(response.content)
# Content checks
ok_('<!DOCTYPE html>' in response.content)
for item in expecteds:
ok_(item in normalized)
@override_constance_settings(
KUMA_WIKI_IFRAME_ALLOWED_HOSTS='^https?\:\/\/sampleserver')
def test_code_sample_host_restriction(self):
d, r = doc_rev("""
<p>This is a page. Deal with it.</p>
<div id="sample1" class="code-sample">
<pre class="brush: html">Some HTML</pre>
<pre class="brush: css">.some-css { color: red; }</pre>
<pre class="brush: js">window.alert("HI THERE")</pre>
</div>
<p>test</p>
""")
response = self.client.get(reverse('wiki.code_sample',
args=[d.slug, 'sample1']),
HTTP_HOST='testserver')
eq_(403, response.status_code)
response = self.client.get(reverse('wiki.code_sample',
args=[d.slug, 'sample1']),
HTTP_HOST='sampleserver')
eq_(200, response.status_code)
@override_constance_settings(
KUMA_WIKI_IFRAME_ALLOWED_HOSTS='^https?\:\/\/sampleserver')
def test_code_sample_iframe_embed(self):
slug = 'test-code-embed'
embed_url = ('https://sampleserver/%s/docs/%s$samples/sample1' %
(settings.WIKI_DEFAULT_LANGUAGE, slug))
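# Only iframes whose src matches KUMA_WIKI_IFRAME_ALLOWED_HOSTS should
# keep their src when the page is rendered; the others should have it
# blanked (asserted below for if2 and if3).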
doc_src = """
<p>This is a page. Deal with it.</p>
<div id="sample1" class="code-sample">
<pre class="brush: html">Some HTML</pre>
<pre class="brush: css">.some-css { color: red; }</pre>
<pre class="brush: js">window.alert("HI THERE")</pre>
</div>
<iframe id="if1" src="%(embed_url)s"></iframe>
<iframe id="if2" src="http://testserver"></iframe>
<iframe id="if3" src="https://some.alien.site.com"></iframe>
<p>test</p>
""" % dict(embed_url=embed_url)
slug = 'test-code-doc'
d, r = doc_rev()
revision(save=True, document=d, title="Test code doc", slug=slug,
content=doc_src)
response = self.client.get(reverse('wiki.document', args=(d.slug,)))
eq_(200, response.status_code)
page = pq(response.content)
if1 = page.find('#if1')
eq_(if1.length, 1)
eq_(if1.attr('src'), embed_url)
if2 = page.find('#if2')
eq_(if2.length, 1)
eq_(if2.attr('src'), '')
if3 = page.find('#if3')
eq_(if3.length, 1)
eq_(if3.attr('src'), '')
class CodeSampleViewFileServingTests(UserTestCase, WikiTestCase):
@override_constance_settings(
KUMA_WIKI_IFRAME_ALLOWED_HOSTS='^https?\:\/\/testserver',
WIKI_ATTACHMENT_ALLOWED_TYPES='text/plain')
@override_settings(ATTACHMENT_HOST='testserver')
def test_code_sample_file_serving(self):
self.client.login(username='admin', password='testpass')
# first let's upload a file
file_for_upload = make_test_file(content='Something something unique')
post_data = {
'title': 'An uploaded file',
'description': 'A unique experience for your file serving needs.',
'comment': 'Yadda yadda yadda',
'file': file_for_upload,
}
response = self.client.post(reverse('attachments.new_attachment'),
data=post_data)
eq_(response.status_code, 302)
# then build the document and revision we need to test
attachment = Attachment.objects.get(title='An uploaded file')
filename = attachment.current_revision.filename()
url_css = 'url("files/%(attachment_id)s/%(filename)s")' % {
'attachment_id': attachment.id,
'filename': filename,
}
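# The CSS pane references the attachment with a relative files/ URL;
# the sample view should pass it through untouched so the file-serving
# view (exercised below) can resolve it.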
doc, rev = doc_rev("""
<p>This is a page. Deal with it.</p>
<div id="sample1" class="code-sample">
<pre class="brush: html">Some HTML</pre>
<pre class="brush: css">.some-css { background: %s }</pre>
<pre class="brush: js">window.alert("HI THERE")</pre>
</div>
<p>test</p>
""" % url_css)
# then check whether the code sample view has successfully found the sample
response = self.client.get(reverse('wiki.code_sample',
args=[doc.slug, 'sample1'],
locale='en-US'))
eq_(response.status_code, 200)
normalized = normalize_html(response.content)
ok_(url_css in normalized)
# and then verify that the code sample file view redirects to the
# attachment's canonical file URL
response = self.client.get(reverse('wiki.raw_code_sample_file',
args=[doc.slug,
'sample1',
attachment.id,
filename],
locale='en-US'))
eq_(response.status_code, 302)
eq_(response['Location'], attachment.get_file_url())
class DeferredRenderingViewTests(UserTestCase, WikiTestCase):
"""Tests for the deferred rendering system and interaction with views"""
localizing_client = True
def setUp(self):
super(DeferredRenderingViewTests, self).setUp()
self.rendered_content = 'HELLO RENDERED CONTENT'
self.raw_content = 'THIS IS RAW CONTENT'
self.d, self.r = doc_rev(self.raw_content)
# Disable TOC, makes content inspection easier.
self.r.toc_depth = 0
self.r.save()
self.d.html = self.raw_content
self.d.rendered_html = self.rendered_content
self.d.save()
self.url = reverse('wiki.document',
args=(self.d.slug,),
locale=self.d.locale)
config.KUMASCRIPT_TIMEOUT = 5.0
config.KUMASCRIPT_MAX_AGE = 600
def tearDown(self):
super(DeferredRenderingViewTests, self).tearDown()
config.KUMASCRIPT_TIMEOUT = 0
config.KUMASCRIPT_MAX_AGE = 0
@mock.patch('kuma.wiki.kumascript.get')
def test_rendered_content(self, mock_kumascript_get):
"""Document view should serve up rendered content when available"""
mock_kumascript_get.return_value = (self.rendered_content, None)
resp = self.client.get(self.url, follow=False)
p = pq(resp.content)
txt = p.find('#wikiArticle').text()
ok_(self.rendered_content in txt)
ok_(self.raw_content not in txt)
eq_(0, p.find('#doc-rendering-in-progress').length)
eq_(0, p.find('#doc-render-raw-fallback').length)
def test_rendering_in_progress_warning(self):
"""Document view should serve up rendered content when available"""
# Make the document look like there's a rendering in progress.
self.d.render_started_at = datetime.datetime.now()
self.d.save()
resp = self.client.get(self.url, follow=False)
p = pq(resp.content)
txt = p.find('#wikiArticle').text()
# Even though a rendering looks like it's in progress, ensure the
# last-known render is displayed.
ok_(self.rendered_content in txt)
ok_(self.raw_content not in txt)
eq_(0, p.find('#doc-rendering-in-progress').length)
# Only for logged-in users, ensure the render-in-progress warning is
# displayed.
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.url, follow=False)
p = pq(resp.content)
eq_(1, p.find('#doc-rendering-in-progress').length)
@mock.patch('kuma.wiki.kumascript.get')
def test_raw_content_during_initial_render(self, mock_kumascript_get):
"""Raw content should be displayed during a document's initial
deferred rendering"""
mock_kumascript_get.return_value = (self.rendered_content, None)
# Make the document look like there's no rendered content, but that a
# rendering is in progress.
self.d.html = self.raw_content
self.d.rendered_html = ''
self.d.render_started_at = datetime.datetime.now()
self.d.save()
# Now, ensure that raw content is shown in the view.
resp = self.client.get(self.url, follow=False)
p = pq(resp.content)
txt = p.find('#wikiArticle').text()
ok_(self.rendered_content not in txt)
ok_(self.raw_content in txt)
eq_(0, p.find('#doc-render-raw-fallback').length)
# Only for logged-in users, ensure that a warning is displayed about
# the fallback
self.client.login(username='testuser', password='testpass')
resp = self.client.get(self.url, follow=False)
p = pq(resp.content)
eq_(1, p.find('#doc-render-raw-fallback').length)
@attr('schedule_rendering')
@mock.patch.object(Document, 'schedule_rendering')
@mock.patch('kuma.wiki.kumascript.get')
def test_schedule_rendering(self, mock_kumascript_get,
mock_document_schedule_rendering):
mock_kumascript_get.return_value = (self.rendered_content, None)
self.client.login(username='testuser', password='testpass')
data = new_document_data()
data.update({
'form': 'rev',
'content': 'This is an update',
})
edit_url = reverse('wiki.edit_document', args=[self.d.slug])
resp = self.client.post(edit_url, data)
eq_(302, resp.status_code)
ok_(mock_document_schedule_rendering.called)
mock_document_schedule_rendering.reset_mock()
data.update({
'form': 'both',
'content': 'This is a translation',
})
translate_url = (reverse('wiki.translate', args=[data['slug']],
locale=settings.WIKI_DEFAULT_LANGUAGE) + '?tolocale=fr')
response = self.client.post(translate_url, data)
eq_(302, response.status_code)
ok_(mock_document_schedule_rendering.called)
@mock.patch('kuma.wiki.kumascript.get')
@mock.patch('requests.post')
def test_alternate_bleach_whitelist(self, mock_requests_post,
mock_kumascript_get):
# Some test content with contentious tags.
test_content = """
<p id="foo">
<a style="position: absolute; border: 1px;" href="http://example.com">This is a test</a>
<textarea name="foo"></textarea>
</p>
"""
# Expected result filtered through old/current Bleach rules
expected_content_old = """
<p id="foo">
<a style="position: absolute; border: 1px;" href="http://example.com">This is a test</a>
<textarea name="foo"></textarea>
</p>
"""
# Expected result filtered through alternate whitelist
expected_content_new = """
<p id="foo">
<a style="border: 1px;" href="http://example.com">This is a test</a>
<textarea name="foo"></textarea>
</p>
"""
# Set up an alternate set of whitelists...
config.BLEACH_ALLOWED_TAGS = json.dumps([
"a", "p"
])
config.BLEACH_ALLOWED_ATTRIBUTES = json.dumps({
"a": ['href', 'style'],
"p": ['id']
})
config.BLEACH_ALLOWED_STYLES = json.dumps([
"border"
])
config.KUMASCRIPT_TIMEOUT = 100
# Rig up a mocked response from KumaScript GET method
mock_kumascript_get.return_value = (test_content, None)
# Rig up a mocked response from KumaScript POST service
# Digging a little deeper into the stack, so that the rest of
# kumascript.post processing happens.
from StringIO import StringIO
m_resp = mock.Mock()
m_resp.status_code = 200
m_resp.text = test_content
m_resp.read = StringIO(test_content).read
mock_requests_post.return_value = m_resp
d, r = doc_rev(test_content)
trials = (
(False, '', expected_content_old),
(False, '&bleach_new', expected_content_old),
(True, '', expected_content_old),
(True, '&bleach_new', expected_content_new),
)
for trial in trials:
do_login, param, expected = trial
if do_login:
self.client.login(username='testuser', password='testpass')
else:
self.client.logout()
url = ('%s?raw&macros%s' % (
reverse('wiki.document', args=(d.slug,), locale=d.locale),
param))
resp = self.client.get(url, follow=True)
eq_(normalize_html(expected),
normalize_html(resp.content),
"Should match? %s %s %s %s" %
(do_login, param, expected, resp.content))
class APITests(UserTestCase, WikiTestCase):
localizing_client = True
def setUp(self):
super(APITests, self).setUp()
self.username = 'tester23'
self.password = 'trustno1'
self.email = 'tester23@example.com'
self.user = user(username=self.username,
email=self.email,
password=self.password,
save=True)
self.key = Key(user=self.user, description='Test Key 1')
self.secret = self.key.generate_secret()
self.key_id = self.key.key
self.key.save()
auth = '%s:%s' % (self.key_id, self.secret)
self.basic_auth = 'Basic %s' % base64.encodestring(auth)
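# Note: in Python 2, base64.encodestring() appends a trailing newline;
# the test client appears to tolerate that in the Authorization header.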
self.d, self.r = doc_rev("""
<h3 id="S1">Section 1</h3>
<p>This is a page. Deal with it.</p>
<h3 id="S2">Section 2</h3>
<p>This is a page. Deal with it.</p>
<h3 id="S3">Section 3</h3>
<p>This is a page. Deal with it.</p>
""")
self.r.tags = "foo, bar, baz"
self.r.review_tags.set('technical', 'editorial')
self.url = self.d.get_absolute_url()
def tearDown(self):
super(APITests, self).tearDown()
Document.objects.filter(current_revision__creator=self.user).delete()
Revision.objects.filter(creator=self.user).delete()
Key.objects.filter(user=self.user).delete()
self.user.delete()
def test_put_existing(self):
"""PUT API should allow overwrite of existing document content"""
data = dict(
summary="Look, I made an edit!",
content="""
<p>This is an edit to the page. We've dealt with it.</p>
""",
)
# No auth key leads to a 403 Forbidden
resp = self._put(self.url, data)
eq_(403, resp.status_code)
# But, this should work, given a proper auth key
resp = self._put(self.url, data,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(205, resp.status_code)
# Verify the edit happened.
curr_d = Document.objects.get(pk=self.d.pk)
eq_(normalize_html(data['content'].strip()),
normalize_html(Document.objects.get(pk=self.d.pk).html))
# Also, verify that this resulted in a new revision.
curr_r = curr_d.current_revision
ok_(self.r.pk != curr_r.pk)
eq_(data['summary'], curr_r.summary)
r_tags = ','.join(sorted(t.name for t in curr_r.review_tags.all()))
eq_('editorial,technical', r_tags)
def test_put_section_edit(self):
"""PUT API should allow overwrite of a specific section of an existing
document"""
data = dict(
content="""
<h3 id="S2">Section 2</h3>
<p>This is an edit to the page. We've dealt with it.</p>
""",
# Along with the section, let's piggyback in some other metadata
# edits just for good measure. They're not tied to the section
# edit, though.
title="Hahah this is a new title!",
tags="hello,quux,xyzzy",
review_tags="technical",
)
resp = self._put('%s?section=S2' % self.url, data,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(205, resp.status_code)
expected = """
<h3 id="S1">Section 1</h3>
<p>This is a page. Deal with it.</p>
<h3 id="S2">Section 2</h3>
<p>This is an edit to the page. We've dealt with it.</p>
<h3 id="S3">Section 3</h3>
<p>This is a page. Deal with it.</p>
"""
# Verify the section edit happened.
curr_d = Document.objects.get(pk=self.d.pk)
eq_(normalize_html(expected.strip()),
normalize_html(curr_d.html))
eq_(data['title'], curr_d.title)
d_tags = ','.join(sorted(t.name for t in curr_d.tags.all()))
eq_(data['tags'], d_tags)
# Also, verify that this resulted in a new revision.
curr_r = curr_d.current_revision
ok_(self.r.pk != curr_r.pk)
r_tags = ','.join(sorted(t.name for t in curr_r.review_tags.all()))
eq_(data['review_tags'], r_tags)
def test_put_new_root(self):
"""PUT API should allow creation of a document whose path would place
it at the root of the topic hierarchy."""
slug = 'new-root-doc'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
data = dict(
title="This is the title of a new page",
content="""
<p>This is a new page, hooray!</p>
""",
tags="hello,quux,xyzzy",
review_tags="technical",
)
resp = self._put(url, data,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
def test_put_new_child(self):
"""PUT API should allow creation of a document whose path would make it
a child of an existing parent."""
data = dict(
title="This is the title of a new page",
content="""
<p>This is a new page, hooray!</p>
""",
tags="hello,quux,xyzzy",
review_tags="technical",
)
# This first attempt should fail; the proposed parent does not exist.
url = '%s/nonexistent/newchild' % self.url
resp = self._put(url, data,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(404, resp.status_code)
# TODO: I suppose we could rework this part to create the chain of
# missing parents with stub content, but currently this demands
# that API users do that themselves.
# Now, fill in the parent gap...
p_doc = document(slug='%s/nonexistent' % self.d.slug,
locale=settings.WIKI_DEFAULT_LANGUAGE,
parent_topic=self.d)
p_doc.save()
p_rev = revision(document=p_doc,
slug='%s/nonexistent' % self.d.slug,
title='I EXIST NOW', save=True)
p_rev.save()
# The creation should work, now.
resp = self._put(url, data,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
new_slug = '%s/nonexistent/newchild' % self.d.slug
new_doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=new_slug)
eq_(p_doc.pk, new_doc.parent_topic.pk)
def test_put_unsupported_content_type(self):
"""PUT API should complain with a 400 Bad Request on an unsupported
content type submission"""
slug = 'new-root-doc'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
data = "I don't even know what this content is."
resp = self._put(url, json.dumps(data),
content_type='x-super-happy-fun-text',
HTTP_AUTHORIZATION=self.basic_auth)
eq_(400, resp.status_code)
def test_put_json(self):
"""PUT API should handle application/json requests"""
slug = 'new-root-json-doc'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
data = dict(
title="This is the title of a new page",
content="""
<p>This is a new page, hooray!</p>
""",
tags="hello,quux,xyzzy",
review_tags="technical",
)
resp = self._put(url, json.dumps(data),
content_type='application/json',
HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
new_doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=slug)
eq_(data['title'], new_doc.title)
eq_(normalize_html(data['content']), normalize_html(new_doc.html))
def test_put_simple_html(self):
"""PUT API should handle text/html requests"""
slug = 'new-root-html-doc-1'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
html = """
<p>This is a new page, hooray!</p>
"""
resp = self._put(url, html, content_type='text/html',
HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
new_doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=slug)
eq_(normalize_html(html), normalize_html(new_doc.html))
def test_put_complex_html(self):
"""PUT API should handle text/html requests with complex HTML documents
and extract document fields from the markup"""
slug = 'new-root-html-doc-2'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
data = dict(
title='This is a complex document',
content="""
<p>This is a new page, hooray!</p>
""",
)
html = """
<html>
<head>
<title>%(title)s</title>
</head>
<body>%(content)s</body>
</html>
""" % data
resp = self._put(url, html, content_type='text/html',
HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
new_doc = Document.objects.get(locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=slug)
eq_(data['title'], new_doc.title)
eq_(normalize_html(data['content']), normalize_html(new_doc.html))
# TODO: Anything else useful to extract from HTML?
# Extract tags from head metadata?
def test_put_track_authkey(self):
"""Revisions modified by PUT API should track the auth key used"""
slug = 'new-root-doc'
url = reverse('wiki.document', args=(slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
data = dict(
title="This is the title of a new page",
content="""
<p>This is a new page, hooray!</p>
""",
tags="hello,quux,xyzzy",
review_tags="technical",
)
resp = self._put(url, data, HTTP_AUTHORIZATION=self.basic_auth)
eq_(201, resp.status_code)
last_log = self.key.history.order_by('-pk').all()[0]
eq_('created', last_log.action)
data['title'] = 'New title for old page'
resp = self._put(url, data, HTTP_AUTHORIZATION=self.basic_auth)
eq_(205, resp.status_code)
last_log = self.key.history.order_by('-pk').all()[0]
eq_('updated', last_log.action)
def test_put_etag_conflict(self):
"""A PUT request with an if-match header throws a 412 Precondition
Failed if the underlying document has been changed."""
resp = self.client.get(self.url)
orig_etag = resp['ETag']
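# Optimistic concurrency sketch: fetch the current ETag, then send it
# back via If-Match, roughly:
#   etag = client.get(url)['ETag']
#   _put(url, data, HTTP_IF_MATCH=etag)  # 205 if fresh, 412 if stale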
content1 = """
<h2 id="s1">Section 1</h2>
<p>New section 1</p>
<h2 id="s2">Section 2</h2>
<p>New section 2</p>
"""
# First update should work.
resp = self._put(self.url, dict(content=content1),
HTTP_IF_MATCH=orig_etag,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(205, resp.status_code)
# Get the new etag, ensure it doesn't match the original.
resp = self.client.get(self.url)
new_etag = resp['ETag']
ok_(orig_etag != new_etag)
# But, the ETag should have changed, so this update shouldn't work.
# Using the old ETag suggests a mid-air edit collision happened.
resp = self._put(self.url, dict(content=content1),
HTTP_IF_MATCH=orig_etag,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(412, resp.status_code)
# Just for good measure, switching to the new ETag should work
resp = self._put(self.url, dict(content=content1),
HTTP_IF_MATCH=new_etag,
HTTP_AUTHORIZATION=self.basic_auth)
eq_(205, resp.status_code)
def _put(self, path, data={}, content_type=MULTIPART_CONTENT,
follow=False, **extra):
"""django.test.client.put() does the wrong thing, here. This does
better, based on post()."""
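# In outline: encode the body (multipart or raw), then hand Django's
# low-level client.request() a hand-built PUT environ, e.g.
#   {'REQUEST_METHOD': 'PUT', 'wsgi.input': FakePayload(post_data), ...}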
if content_type is MULTIPART_CONTENT:
post_data = encode_multipart(BOUNDARY, data)
else:
# Encode the content so that the byte representation is correct.
match = CONTENT_TYPE_RE.match(content_type)
if match:
charset = match.group(1)
else:
charset = settings.DEFAULT_CHARSET
post_data = smart_str(data, encoding=charset)
parsed = urlparse(path)
params = {
'CONTENT_LENGTH': len(post_data),
'CONTENT_TYPE': content_type,
'PATH_INFO': self.client._get_path(parsed),
'QUERY_STRING': parsed[4],
'REQUEST_METHOD': 'PUT',
'wsgi.input': FakePayload(post_data),
}
params.update(extra)
response = self.client.request(**params)
if follow:
response = self.client._handle_redirects(response, **extra)
return response
class PageMoveTests(UserTestCase, WikiTestCase):
localizing_client = True
def setUp(self):
super(PageMoveTests, self).setUp()
page_move_flag = Flag.objects.create(name='page_move')
page_move_flag.users = self.user_model.objects.filter(is_superuser=True)
page_move_flag.save()
def test_move_conflict(self):
parent = revision(title='Test page move views',
slug='test-page-move-views',
is_approved=True,
save=True)
parent_doc = parent.document
child = revision(title='Child of page-move view test',
slug='page-move/test-views',
is_approved=True,
save=True)
child_doc = child.document
child_doc.parent_topic = parent.document
child_doc.save()
revision(title='Conflict for page-move view',
slug='moved/test-page-move-views/test-views',
is_approved=True,
save=True)
data = {'slug': 'moved/test-page-move-views'}
self.client.login(username='admin', password='testpass')
resp = self.client.post(reverse('wiki.move',
args=(parent_doc.slug,),
locale=parent_doc.locale),
data=data)
eq_(200, resp.status_code)
class DocumentZoneTests(UserTestCase, WikiTestCase):
localizing_client = True
def setUp(self):
super(DocumentZoneTests, self).setUp()
root_rev = revision(title='ZoneRoot', slug='ZoneRoot',
content='This is the Zone Root',
is_approved=True, save=True)
self.root_doc = root_rev.document
middle_rev = revision(title='middlePage', slug='middlePage',
content='This is a middlepage',
is_approved=True, save=True)
self.middle_doc = middle_rev.document
self.middle_doc.parent_topic = self.root_doc
self.middle_doc.save()
sub_rev = revision(title='SubPage', slug='SubPage',
content='This is a subpage',
is_approved=True, save=True)
self.sub_doc = sub_rev.document
self.sub_doc.parent_topic = self.middle_doc
self.sub_doc.save()
self.root_zone = DocumentZone(document=self.root_doc)
self.root_zone.styles = """
article { color: blue; }
"""
self.root_zone.save()
self.middle_zone = DocumentZone(document=self.middle_doc)
self.middle_zone.styles = """
article { font-weight: bold; }
"""
self.middle_zone.save()
def test_zone_styles(self):
"""Ensure CSS styles for a zone can be fetched"""
url = reverse('wiki.styles', args=(self.root_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.get(url, follow=True)
eq_(self.root_zone.styles, response.content)
url = reverse('wiki.styles', args=(self.middle_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.get(url, follow=True)
eq_(self.middle_zone.styles, response.content)
url = reverse('wiki.styles', args=(self.sub_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.get(url, follow=True)
eq_(404, response.status_code)
def test_zone_styles_links(self):
"""Ensure link to zone style appears in child document views"""
url = reverse('wiki.document', args=(self.sub_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
response = self.client.get(url, follow=True)
styles_url = reverse('wiki.styles', args=(self.root_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
root_expected = ('<link rel="stylesheet" type="text/css" href="%s"' %
styles_url)
ok_(root_expected in response.content)
styles_url = reverse('wiki.styles', args=(self.middle_doc.slug,),
locale=settings.WIKI_DEFAULT_LANGUAGE)
middle_expected = ('<link rel="stylesheet" type="text/css" href="%s"' %
styles_url)
ok_(middle_expected in response.content)
class ListDocumentTests(UserTestCase, WikiTestCase):
"""Tests for list_documents view"""
localizing_client = True
fixtures = UserTestCase.fixtures + ['wiki/documents.json']
def test_case_insensitive_tags(self):
"""
Bug 976071 - Tags should be case insensitive
https://bugzil.la/976071
"""
lower_tag = DocumentTag.objects.create(name='foo', slug='foo')
lower_tag.save()
doc = Document.objects.get(pk=1)
doc.tags.set(lower_tag)
response = self.client.get(reverse('wiki.tag', args=['foo']))
ok_(doc.slug in response.content.decode('utf-8'))
response = self.client.get(reverse('wiki.tag', args=['Foo']))
ok_(doc.slug in response.content.decode('utf-8'))
| chirilo/kuma | kuma/wiki/tests/test_views.py | Python | mpl-2.0 | 168,228 | ["VisIt"] | 2f7e0c6e63db563bad12338708aa47d9d86b3d8efcfcff877d55d1630804a050 |