The code cells above and below concatenate lists using the ++/2 operator. In the notation `++/2`, `++` is the operator itself and `/2` is its arity (the number of arguments). ---Question $\quad$ What is arity? ---Question $\quad$ Why is list concatenation `++` while string concatenation is `<>`? Is there overriding? Isn't a string a list? Are the length functions separate as well?
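The questions above can largely be answered by experiment. A minimal sketch (save to a file and run with `elixir`, like the other cells): strings are binaries rather than lists, so they get their own concatenation operator and their own length functions — there is no overriding involved.

```elixir
# <> concatenates binaries (strings); ++ concatenates lists (including charlists)
IO.inspect "foo" <> "bar"   # string (binary) concatenation
IO.inspect 'foo' ++ 'bar'   # charlist (list) concatenation

# The length functions are separate as well
IO.puts String.length("あいう")  # graphemes: 3
IO.puts byte_size("あいう")      # bytes: 9 (UTF-8)
IO.puts length('abc')            # list length: 3
```

So `++` and `length` operate on lists, while `<>`, `String.length`, and `byte_size` operate on binaries; a double-quoted string is not a list.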
# List concatenation
!elixir -e 'IO.inspect [1, 2] ++ [3, 4, 1]'

# List subtraction
# The --/2 operator is fine even if you subtract a value that is not in the list
!elixir -e 'IO.inspect ["foo", :bar, 42] -- [42, "bar"]'

# With duplicates, for each element on the right side, the first occurrence of the same value on the left is removed, in order
!elixir -e 'IO.inspect [1,2,2,3,2,3] -- [1,2,3,2]'

# List subtraction uses strict comparison when matching values
!elixir -e 'IO.inspect [2] -- [2.0]'
!elixir -e 'IO.inspect [2.0] -- [2.0]'

# head / tail
!elixir -e 'IO.inspect hd [3.14, :pie, "Apple"]'
!elixir -e 'IO.inspect tl [3.14, :pie, "Apple"]'
3.14
[:pie, "Apple"]
MIT
learnelixir.ipynb
kalz2q/myjupyternotebooks
---A list can also be split into its head and tail using *pattern matching* and the cons operator (`|`).
!elixir -e '[head | tail] = [3.14, :pie, "Apple"]; IO.inspect head; IO.inspect tail'
3.14
[:pie, "Apple"]
Keyword lists

Keyword lists and maps are Elixir's associative collections. A keyword list is a special list of tuples whose first element is an atom, and it has the same performance characteristics as a list.
# Keyword lists
!elixir -e 'IO.inspect [foo: "bar", hello: "world"]'

# The same thing written as a list of tuples
!elixir -e 'IO.inspect [{:foo, "bar"}, {:hello, "world"}]'
!elixir -e 'IO.inspect [foo: "bar", hello: "world"] == [{:foo, "bar"}, {:hello, "world"}]'
[foo: "bar", hello: "world"]
[foo: "bar", hello: "world"]
true
The three characteristics of keyword lists: * keys are atoms; * keys are ordered; * keys are not guaranteed to be unique. For these reasons, keyword lists are most commonly used to pass options to functions.
# Experiment: the list's square brackets can be omitted
!elixir -e 'IO.inspect foo: "bar", hello: "world"'

# Experiment
!elixir -e 'IO.inspect [1, fred: 1, dave: 2]'
!elixir -e 'IO.inspect {1, fred: 1, dave: 2}'
!elixir -e 'IO.inspect {1, [{:fred,1},{:dave, 2}]}'
[1, {:fred, 1}, {:dave, 2}]
{1, [fred: 1, dave: 2]}
{1, [fred: 1, dave: 2]}
Maps * Unlike keyword lists, keys of any type are allowed. * Maps are unordered. * Keys are guaranteed to be unique; if a duplicate key is added, the earlier value is replaced. * Variables can be used as map keys. * Maps are defined with the `%{}` syntax.
!elixir -e 'IO.inspect %{:foo => "bar", "hello" => :world}'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map[:foo]'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map["hello"]'
!echo
!elixir -e 'key = "hello"; IO.inspect %{key => "world"}'
!echo
!elixir -e 'IO.inspect %{:foo => "bar", :foo => "hello world"}'
%{:foo => "bar", "hello" => :world}
"bar"
:world

%{"hello" => "world"}

warning: key :foo will be overridden in map
  nofile:1
%{foo: "hello world"}
There is a special syntax for maps whose keys are all atoms.
!elixir -e 'IO.inspect %{foo: "bar", hello: "world"} == %{:foo => "bar", :hello => "world"}'

# There is also special syntax for accessing atom keys
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect map.hello'
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect map[:hello]'
!elixir -e 'map = %{:foo => "bar", :hello => "world"}; IO.inspect map[:hello]'
"world"
"world"
"world"
---Question $\quad$ About the special map syntax: 1. using a colon `:` instead of `=>`, and 2. using a dot `.` instead of `[]` to read an element — isn't this unnecessary? Perhaps it is redundant but just looks nicer; which one is normally used? It feels like it needlessly complicates the syntax. It probably came about because Python's dict uses a colon `:`, and Ruby uses `=>` but has colon `:` sugar that has become mainstream — appearances matter. If atom keys are assumed, it may actually improve productivity: the colon marking the key as an atom is no longer needed, the colon is shorter than the fat arrow, and the map is defined in one step. The same goes for reading elements with a dot. So in practice this syntax is probably the default.
# There is a syntax for updating a map (a new map is created)
# It only works for keys that already exist in the map
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect %{map | foo: "baz"}'

# To add a new key, use `Map.put/3`
!elixir -e 'map = %{hello: "world"}; IO.inspect Map.put(map, :foo, "baz")'
%{foo: "baz", hello: "world"}
---Question $\quad$ Binaries are not yet well understood, so they are covered separately. Binaries
# Binaries
!elixir -e 'IO.inspect <<1,2>>'
!elixir -e 'IO.inspect <<1,10>>'
!elixir -e 'bin = <<1,10>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect bin'
!elixir -e 'IO.puts Integer.to_string(213,2)'
!elixir -e 'IO.puts 0b11'
!elixir -e 'IO.puts 0b0101'
!echo
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect :io.format("~-8.2b~n",:binary.bin_to_list(bin))'
!elixir -e 'IO.inspect <<1,2>> <> <<3>>'
<<1, 2, 3>>
---- **Date and Time**
# Date and Time
!elixir -e 'IO.inspect Date.new(2021,6,2)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.day_of_week(d1)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.add(d1,7)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1, structs: false'
~D[2021-06-02]
3
~D[2021-06-09]
%{__struct__: Date, calendar: Calendar.ISO, day: 2, month: 6, year: 2021}
`~D[...]` and `~T[...]` are Elixir sigils; they are explained in the strings-and-binaries section. A note on help $\quad$ how to look up functions and use the helpers (h, i, and so on). As the code cell below shows, you can list a module's function names and then read the help for each one in fair detail. The lines are commented out because the output is large. Concretely: put a module name where Enum appears to list its functions, copy the output with Ctrl+A Ctrl+C, and paste it into vscode to read. Then put the function you want to look up where Enum.all?/1 appears, copy that output, and read it in vscode as well.
# !elixir -e 'Enum.__info__(:functions) |> Enum.each(fn({function, arity}) -> IO.puts "#{function}/#{arity}" end)'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h Enum.all?/1'

# h on its own shows the documentation for the helpers themselves
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h'

# There is also i
# !elixir -e 'x = [3,2]; require IEx.Helpers;IEx.Helpers.i x'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h IO'
The Enum module

Enum is a set of algorithms for enumerating over collections such as lists. * all?, any? * chunk_every, chunk_by, map_every * each * map, filter, reduce * min, max * sort, uniq, uniq_by * the capture operator `&`
# all? takes a function and returns true when it is true for every element of the list
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 3 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) > 1 end)'

# any? returns true if at least one element evaluates to true
!elixir -e 'IO.puts Enum.any?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 5 end)'

# chunk_every splits a list into smaller groups (Enum.chunk/2 is deprecated)
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 2)'
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 3)'
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 4)'

# chunk_by splits wherever the return value of the function changes
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five"], fn(x) -> String.length(x) end)'
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five", "six"], fn(x) -> String.length(x) end)'

# map_every applies the function to every nth element
!elixir -e 'IO.inspect Enum.map_every(1..10, 3, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 1, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 0, fn x -> x + 1000 end)'

# each iterates without producing a new value; the return value is the atom :ok
!elixir -e 'IO.inspect Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'
!elixir -e 'IO.puts Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'

# map applies the function to each element and produces a new list
!elixir -e 'IO.inspect Enum.map([0, 1, 2, 3], fn(x) -> x - 1 end)'

# min finds the minimum value; it raises on an empty list,
# so a function producing a fallback minimum can be passed for that case
!elixir -e 'IO.inspect Enum.min([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.min([], fn -> :foo end)'

# max returns the maximum (max/1) value
!elixir -e 'IO.inspect Enum.max([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.max([], fn -> :bar end)'

# filter keeps only the elements for which the function returns true
!elixir -e 'IO.inspect Enum.filter([1, 2, 3, 4], fn(x) -> rem(x, 2) == 0 end)'
!elixir -e 'IO.inspect Enum.filter([], fn(x) -> rem(x, 2) == 0 end)'

# reduce folds the list into a single value; an accumulator can be supplied,
# and when it is not, the first element is used
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], 10, fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce(["a","b","c"], "1", fn(x,acc)-> x <> acc end)'

# sort/1 orders by Erlang's term ordering
!elixir -e 'IO.inspect Enum.sort([5, 6, 1, 3, -1, 4])'
!elixir -e 'IO.inspect Enum.sort([:foo, "bar", Enum, -1, 4])'

# sort/2 accepts a function that decides the order
!elixir -e 'IO.inspect Enum.sort([%{:val => 4}, %{:val => 1}], fn(x, y) -> x[:val] > y[:val] end)'
# without one
!elixir -e 'IO.inspect Enum.sort([%{:count => 4}, %{:count => 1}])'
# sort/2 also accepts :asc or :desc as the sorter
!elixir -e 'IO.inspect Enum.sort([2, 3, 1], :desc)'

# uniq removes duplicate elements
!elixir -e 'IO.inspect Enum.uniq([1, 2, 3, 2, 1, 1, 1, 1, 1])'

# uniq_by also removes duplicates, but lets you pass the function that decides uniqueness
!elixir -e 'IO.inspect Enum.uniq_by([%{x: 1, y: 1}, %{x: 2, y: 1}, %{x: 3, y: 3}], fn coord -> coord.y end)'
[1, 2, 3]
[%{x: 1, y: 1}, %{x: 3, y: 3}]
Enum and anonymous functions with the capture operator `&`. Many functions in Elixir's Enum module take an anonymous function as an argument. These anonymous functions are often written in shorthand using the capture operator `&`.
# Using the capture operator with anonymous functions
!elixir -e 'IO.inspect Enum.map([1,2,3], fn number -> number + 3 end)'
!elixir -e 'IO.inspect Enum.map([1,2,3], &(&1 + 3))'
!elixir -e 'plus_three = &(&1 + 3);IO.inspect Enum.map([1,2,3], plus_three)'

# Can the capture operator also be used with Enum.all??
# all? takes a function and returns true when it is true for the whole list
# !elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 3 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], &(String.length(&1)==3))'
# !elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) >1 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], &(String.length(&1)>1))'
false
true
--- Pattern matching

Pattern matching lets you match values, data structures, and functions. * the match operator * the pin operator
# The match operator
# `=` is the match operator. It assigns a value, and afterwards can match against it.
# On a match the result of the expression is returned; on failure an error is raised.
!elixir -e 'IO.puts x = 1'
!elixir -e 'x = 1;IO.puts 1 = x'
# !elixir -e 'x = 1;IO.puts 2 = x' #=> (MatchError) no match of right hand side value: 1

# The match operator with lists
!elixir -e 'IO.inspect list = [1, 2, 3]'
!elixir -e 'list = [1, 2, 3]; IO.inspect [1, 2, 3] = list'
# !elixir -e 'list = [1, 2, 3]; IO.inspect [] = list' #=> (MatchError) no match of right hand side value: [1, 2, 3]
!elixir -e 'list = [1, 2, 3]; IO.inspect [1 | tail] = list'
!elixir -e 'list = [1, 2, 3]; [1 | tail] = list; IO.inspect tail'

# Tuples and the match operator
!elixir -e 'IO.inspect {:ok, value} = {:ok, "Successful!"}'
!elixir -e '{:ok, value} = {:ok, "Successful!"}; IO.inspect value'
{:ok, "Successful!"}
"Successful!"
---**The pin operator** The match operator performs assignment when the left side contains a variable. Sometimes this rebinding behavior is undesirable; for those situations there is the pin operator `^`. Pinning a variable matches against the existing value rather than rebinding it.
# The pin operator
!elixir -e 'IO.inspect x = 1'
# !elixir -e 'x = 1; IO.inspect ^x = 2' #=> ** (MatchError) no match of right hand side value: 2
!elixir -e 'x = 1; IO.inspect {x, ^x} = {2, 1}'
!elixir -e 'x = 1;{x, ^x} = {2, 1}; IO.inspect x'
!echo
!elixir -e 'IO.inspect key = "hello"'
!elixir -e 'key = "hello"; IO.inspect %{^key => value} = %{"hello" => "world"}'
!elixir -e 'key = "hello"; %{^key => value} = %{"hello" => "world"}; IO.inspect value'

# The pin operator in a function clause
!elixir -e 'IO.inspect greeting = "Hello"'
!elixir -e 'greeting = "Hello"; IO.inspect greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Hello","Sean")'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Mornin","Sean")'
"Hello"
#Function<43.65746770/2 in :erl_eval.expr/5>
"Hi Sean"
"Mornin,Sean"
Control structures * if and unless * case * cond * with

if and unless: Elixir's if and unless work like Ruby's. In Elixir both are defined as macros; the implementation can be seen in the Kernel module. Note that in Elixir the only falsy values are nil and the boolean false.
%%writefile temp.exs
IO.puts (
  if String.valid?("Hello") do
    "Valid string!"
  else
    "Invalid string."
  end)

!elixir temp.exs

%%writefile temp.exs
if "a string value" do
  IO.puts "Truthy"
end

!elixir temp.exs

# unless/2 is the reverse of if/2: it runs only when the condition is negative
%%writefile temp.exs
unless is_integer("hello") do
  IO.puts "Not an Int"
end

!elixir temp.exs

# Experiment: one-liner version; no semicolons are needed around do and end
!elixir -e 'unless is_integer("hello") do IO.puts "Not an Int" end'

# When a value can match several patterns, use case/2
%%writefile temp.exs
IO.puts(
  case {:error, "Hello World"} do
    {:ok, result} -> result
    {:error, _} -> "Uh oh!"
    _ -> "Catch all"
  end
)

!elixir temp.exs

# The underscore _ variable is an important part of a case/2 statement;
# without it, an error is raised when nothing matches
# Example of the error
!elixir -e 'case :even do :odd -> IO.puts "Odd" end'
# Think of the underscore _ as an else that matches "everything else"
!elixir -e 'case :even do :odd -> IO.puts "Odd"; _ -> IO.puts "Not odd" end'

# Because case/2 relies on pattern matching, all of its rules and restrictions apply
# To match against an existing variable, use the pin operator ^
!elixir -e 'pie=3.14; IO.puts(case "cherry pie" do ^pie -> "Not so tasty"; pie -> "I bet #{pie} is tasty" end)'
!elixir -e 'pie=3.14; IO.puts(case "cherry pie" do pie -> "Not so tasty"; pie -> "I bet #{pie} is tasty" end)'

# case/2 supports guard clauses
# See "Expressions allowed in guard clauses" in the official documentation
!elixir -e 'IO.puts(case {1, 2, 3} do {1, x, 3} when x > 0 -> "Will match"; _ -> "Wont match" end)'
Will match
---What is a guard clause? See "Expressions allowed in guard clauses" in the official documentation.
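In short (a sketch, not the full list from the documentation): a guard is an extra `when` condition attached to a pattern, and only a restricted set of expressions is allowed there — comparisons, boolean operators, arithmetic, and built-in predicates such as `is_integer/1`, `rem/2`, and `byte_size/1`. Ordinary user-defined functions cannot appear in a guard.

```elixir
defmodule GuardDemo do
  # Allowed in guards: type checks, arithmetic, comparisons
  def kind(x) when is_integer(x) and rem(x, 2) == 0, do: :even_integer
  def kind(x) when is_integer(x), do: :odd_integer
  def kind(x) when is_binary(x) and byte_size(x) > 0, do: :nonempty_string
  def kind(_), do: :other
end

IO.inspect GuardDemo.kind(4)     # :even_integer
IO.inspect GuardDemo.kind("hi")  # :nonempty_string
```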
# cond
!elixir -e 'IO.puts (cond do 2+2==5 -> "This will not be true"; 2*2==3 -> "Nor this"; 1+1 == 2 -> "But this will" end)'
# Like case, cond raises when nothing matches, so define a condition that evaluates to true
!elixir -e 'IO.puts (cond do 7+1==0 -> "Incorrect"; true -> "Catch all" end)'

# with
# The special form with/1 is handy for nested case/2 statements or situations that cannot cleanly be piped
# A with/1 expression is made of keywords, generators, and an expression
# Generators are covered in detail under list comprehensions
# Pattern matching is used to compare the right side of `<-` with the left
!elixir -e 'user=%{first: "Sean", last: "Callan"}; IO.inspect user'
!elixir -e 'user=%{first: "Sean", last: "Callan"}; with {:ok, first} <- Map.fetch(user, :first), {:ok, last} <- Map.fetch(user, :last), do: IO.puts last <> ", " <> first'

# The one-liner is getting long, so write it to a file
%%writefile temp.exs
user=%{first: "Sean", last: "Callan"}
with {:ok, first} <- Map.fetch(user, :first),
     {:ok, last} <- Map.fetch(user, :last),
     do: IO.puts last <> ", " <> first

!elixir temp.exs

# When an expression fails to match:
# Map.fetch fails and returns :error, first is never bound, and the program stops
%%writefile temp.exs
user = %{first: "doomspork"}
with {:ok, first} <- Map.fetch(user, :first),
     {:ok, last} <- Map.fetch(user, :last),
     do: IO.puts last <> ", " <> first

!elixir temp.exs

# with/1 accepts an else
%%writefile temp.exs
import Integer

m = %{a: 1, c: 3}

a =
  with {:ok, number} <- Map.fetch(m, :a),
       true <- is_even(number) do
    IO.puts "#{number} divided by 2 is #{div(number, 2)}"
    :even
  else
    :error ->
      IO.puts("We don't have this item in map")
      :error
    _ ->
      IO.puts("It is odd")
      :odd
  end

IO.inspect a

!elixir temp.exs
We don't have this item in map
:error
Functions
# In a functional language, functions are first-class objects
# This section covers anonymous functions, named functions, arity, pattern matching, private functions, guards, and default arguments

# Anonymous functions
# Defined with the fn ... end keywords, in the form: arguments `->` body
%%writefile temp.exs
sum = fn (a, b) -> a + b end
IO.puts sum.(2, 3)

!elixir temp.exs

# Written as a shell one-liner
!elixir -e 'sum=fn(a,b)->a+b end;IO.puts sum.(2,3)'

# In Elixir, functions are commonly defined with the shorthand & (the capture operator)
!elixir -e 'sum = &(&1 + &2); IO.puts sum.(2, 3)'
5
---Question $\quad$ How do you pass arguments to an anonymous function and get its result? Writing `&(&1 + &2).(2, 3)` did not work. => It does work with extra parentheses: !elixir -e 'IO.puts ((&(&1 + &2)).(2,3))'
!elixir -e 'IO.puts ((fn (a,b) -> a + b end).(2,3))'
!elixir -e 'IO.puts ((&(&1 + &2)).(2,3))'

# Pattern matching can be used in function definitions
%%writefile temp.exs
handle_result = fn
  {:ok, _result} -> IO.puts "Handling result..."
  {:ok, _} -> IO.puts "This would be never run as previous will be matched beforehand."
  {:error} -> IO.puts "An error has occurred!"
end

some_result = 1
handle_result.({:ok, some_result}) #=> Handling result...
handle_result.({:error}) #=> An error has occured!

!elixir temp.exs

# Named functions
# Named functions are defined inside a module with the def keyword
%%writefile temp.exs
defmodule Greeter do
  def hello(name) do
    "Hello, " <> name
  end
end

IO.puts Greeter.hello("Sean")

!elixir temp.exs

# They can also be written with do:
%%writefile temp.exs
defmodule Greeter do
  def hello(name), do: "Hello, " <> name
end

IO.puts Greeter.hello("Sean")

!elixir temp.exs

# Experiment: does it work as a shell one-liner?
!elixir -e 'defmodule Greeter do def hello(name) do "Hello, " <> name end end;IO.puts Greeter.hello("Sean")'

# Experiment: does the `, do:` syntax work in a one-liner?
!elixir -e 'defmodule Greeter do def hello(name),do: "Hello, " <> name end;IO.puts Greeter.hello("Sean")'

# Recursion
%%writefile temp.exs
defmodule Length do
  def of([]), do: 0
  def of([_ | tail]), do: 1 + of(tail)
end

IO.puts Length.of []
IO.puts Length.of [1, 2, 3]

!elixir temp.exs

# Arity is the number of arguments a function takes
# Functions with different numbers of arguments are different functions
%%writefile temp.exs
defmodule Greeter2 do
  def hello(), do: "Hello, anonymous person!"                 # hello/0
  def hello(name), do: "Hello, " <> name                      # hello/1
  def hello(name1, name2), do: "Hello, #{name1} and #{name2}" # hello/2
end

IO.puts Greeter2.hello()
IO.puts Greeter2.hello("Fred")
IO.puts Greeter2.hello("Fred", "Jane")

!elixir temp.exs

# Functions and pattern matching
%%writefile temp.exs
defmodule Greeter1 do
  def hello(%{name: person_name}) do
    IO.puts "Hello, " <> person_name
  end
end

fred = %{
  name: "Fred",
  age: "95",
  favorite_color: "Taupe"
}

IO.puts Greeter1.hello(fred) #=> prints Hello, Fred
#IO.puts Greeter1.hello(%{age: "95", favorite_color: "Taupe"})
#=> (FunctionClauseError) no function clause matching in Greeter1.hello/1

!elixir temp.exs

# To assign Fred's name to person_name while also keeping the whole person map,
# match the map itself as the argument and both are bound separately
%%writefile temp.exs
defmodule Greeter2 do
  def hello(%{name: person_name} = person) do
    IO.puts "Hello, " <> person_name
    IO.inspect person
  end
end

fred = %{
  name: "Fred",
  age: "95",
  favorite_color: "Taupe"
}

Greeter2.hello(fred)
IO.puts("")
Greeter2.hello(%{name: "Fred"})
IO.puts("")
# Greeter2.hello(%{age: "95", favorite_color: "Taupe"})
#=> (FunctionClauseError) no function clause matching in Greeter2.hello/1

!elixir temp.exs
Hello, Fred
%{age: "95", favorite_color: "Taupe", name: "Fred"}

Hello, Fred
%{name: "Fred"}
# Swapping %{name: person_name} and person gives the same result,
# since each side pattern-matches against fred
%%writefile temp.exs
defmodule Greeter3 do
  def hello(person = %{name: person_name}) do
    IO.puts "Hello, " <> person_name
    IO.inspect person
  end
end

fred = %{
  name: "Fred",
  age: "95",
  favorite_color: "Taupe"
}

Greeter3.hello(fred)
IO.puts("")
Greeter3.hello(%{name: "Fred"})

!elixir temp.exs

# Private functions
# Private functions are defined with defp
# and can only be called from within the module itself
%%writefile temp.exs
defmodule Greeter do
  def hello(name), do: phrase() <> name
  defp phrase, do: "Hello, "
end

IO.puts Greeter.hello("Sean") #=> "Hello, Sean"
# IO.puts Greeter.phrase
#=> (UndefinedFunctionError) function Greeter.phrase/0 is undefined or private

!elixir temp.exs

# Guards
%%writefile temp.exs
defmodule Greeter do
  def hello(names) when is_list(names) do
    names
    |> Enum.join(", ")
    |> hello
  end

  def hello(name) when is_binary(name) do
    phrase() <> name
  end

  defp phrase, do: "Hello, "
end

IO.puts Greeter.hello ["Sean", "Steve"]
IO.puts Greeter.hello "Bill"

!elixir temp.exs
Hello, Sean, Steve
Hello, Bill
---Question $\quad$ Are Elixir's guards the same as Haskell's guards?
# Default arguments
# For a default value, use the `argument \\ default` notation
%%writefile temp.exs
defmodule Greeter do
  def hello(name, language_code \\ "en") do
    phrase(language_code) <> name
  end

  defp phrase("en"), do: "Hello, "
  defp phrase("es"), do: "Hola, "
end

IO.puts Greeter.hello("Sean", "en")
IO.puts Greeter.hello("Sean")
IO.puts Greeter.hello("Sean", "es")

!elixir temp.exs

# When combining guards with default arguments,
# put a function head that declares the defaults first to avoid confusion
%%writefile temp.exs
defmodule Greeter do
  def hello(names, language_code \\ "en")

  def hello(names, language_code) when is_list(names) do
    names
    |> Enum.join(", ")
    |> hello(language_code)
  end

  def hello(name, language_code) when is_binary(name) do
    phrase(language_code) <> name
  end

  defp phrase("en"), do: "Hello, "
  defp phrase("es"), do: "Hola, "
end

IO.puts Greeter.hello ["Sean", "Steve"]        #=> "Hello, Sean, Steve"
IO.puts Greeter.hello ["Sean", "Steve"], "es"  #=> "Hola, Sean, Steve"
IO.puts Greeter.hello "Bob", "es"

!elixir temp.exs

# The pipe operator
# The pipe operator `|>` passes the result of one expression to another
# It exists to make nested function calls easier to follow

# Tokenize a string: split it into words
!elixir -e 'IO.inspect "Elixir rocks" |> String.split()'
!elixir -e 'IO.inspect "Elixir rocks" |> String.upcase() |> String.split()'

# With pipelines it is clearer to keep the function parentheses rather than omit them
!elixir -e 'IO.inspect "elixir" |> String.ends_with?("ixir")'
true
Modules ---Question $\quad$ So far every call has carried a module name, like IO.puts — is that normal in Elixir? And when defining functions we created a module each time; does that add the functions to an existing module?
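A partial answer, as a sketch: fully qualified calls such as IO.puts are indeed the norm in Elixir, and `import` removes the prefix for functions used often. `defmodule` always defines its own module (the `MyHelpers` name below is a made-up example) — it does not append functions to an existing module such as IO or Kernel.

```elixir
import IO, only: [puts: 1]

puts "no module prefix needed"  # the same function as IO.puts/1

# defmodule creates its own module; nothing is added to IO or Kernel
defmodule MyHelpers do
  def shout(s), do: String.upcase(s) <> "!"
end

puts MyHelpers.shout("hello")   # HELLO!
```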
# A basic module example
%%writefile temp.exs
defmodule Example do
  def greeting(name) do
    "Hello #{name}."
  end
end

IO.puts Example.greeting "Sean"

!elixir temp.exs

# Modules can be nested
%%writefile temp.exs
defmodule Example.Greetings do
  def morning(name) do
    "Good morning #{name}."
  end

  def evening(name) do
    "Good night #{name}."
  end
end

IO.puts Example.Greetings.morning "Sean"

!elixir temp.exs

# Module attributes
# Module attributes are commonly used as constants in Elixir
# Some attributes are reserved:
# moduledoc — documents the current module
# doc — documentation for functions and macros
# behaviour — used for an OTP or user-defined behaviour
%%writefile temp.exs
defmodule Example do
  @greeting "Hello"

  def greeting(name) do
    ~s(#{@greeting} #{name}.)
  end
end

IO.puts Example.greeting "tak"

!elixir temp.exs

# Structs
# A struct is a map with a predefined set of keys and default values
# Define one with defstruct
%%writefile temp.exs
defmodule Example.User do
  defstruct name: "Sean", roles: []
end

defmodule Main do
  IO.inspect %Example.User{}
  IO.inspect %Example.User{name: "Steve"}
  IO.inspect %Example.User{name: "Steve", roles: [:manager]}
end

!elixir temp.exs

# Updating a struct
%%writefile temp.exs
defmodule Example.User do
  defstruct name: "Sean", roles: []
end

defmodule Main do
  steve = %Example.User{name: "Steve"}
  IO.inspect %{steve | name: "Sean"}
  IO.inspect steve
end

!elixir temp.exs

# Struct update and matching
%%writefile temp.exs
defmodule Example.User do
  defstruct name: "Sean", roles: []
end

defmodule Main do
  steve = %Example.User{name: "Steve"}
  sean = %{steve | name: "Sean"}
  IO.inspect %{name: "Sean"} = sean
end

!elixir temp.exs

# Changing the output of inspect
%%writefile temp.exs
defmodule Example.User do
  # @derive {Inspect, only: [:name]}
  @derive {Inspect, except: [:roles]}
  defstruct name: "Sean", roles: []
end

defmodule Main do
  steve = %Example.User{name: "Steve"}
  sean = %{steve | name: "Sean"}
  IO.inspect %{name: "Sean"} = sean
end

!elixir temp.exs

# Composition
# Composition adds existing functionality to modules and structs
# alias — aliases a module name
%%writefile temp.exs
defmodule Sayings.Greetings do
  def basic(name), do: "Hi, #{name}"
end

defmodule Example do
  alias Sayings.Greetings

  def greeting(name), do: Greetings.basic(name)
end

IO.puts Example.greeting "Bob!!"

# Without alias:
# defmodule Example do
#   def greeting(name), do: Sayings.Greetings.basic(name)
# end

!elixir temp.exs

# To alias under a different name, use `:as`
%%writefile temp.exs
defmodule Sayings.Greetings do
  def basic(name), do: "Hi, #{name}"
end

defmodule Example do
  alias Sayings.Greetings, as: Hi

  def print_message(name), do: Hi.basic(name)
end

IO.puts Example.print_message "Chris!!"

!elixir temp.exs

# Aliasing several modules at once:
# defmodule Example do
#   alias Sayings.{Greetings, Farewells}
# end

# import
# To bring functions into scope, use import
!elixir -e 'import List; IO.inspect last([1,2,3])'

# Filtering
# By default import brings in all functions and macros, but :only and :except can filter them
# The arity must be given
%%writefile temp.exs
import List, only: [last: 1]

IO.inspect last([1,2,3])
# IO.inspect first([1,2,3])
#=> (CompileError) temp.exs:3: undefined function first/1 (there is no such import)

!elixir temp.exs

# import also has two special atoms, :functions and :macros,
# which import only functions or only macros respectively
# import List, only: :functions
# import List, only: :macros

# How does require differ from import?
# Calling a macro from a module that has not yet been loaded raises an error, hence require
# defmodule Example do
#   require SuperMacros
#
#   SuperMacros.do_stuff
# end

# use
# The use macro lets another module modify the current module's definition
# Calling use actually invokes the __using__/1 callback defined in the given module
%%writefile temp.exs
defmodule Hello do
  defmacro __using__ _ do
    quote do
      def hello(name), do: "Hi, #{name}"
    end
  end
end

defmodule Example do
  use Hello
end

IO.puts Example.hello("Sean")

!elixir temp.exs

# Adding a greeting option
%%writefile temp.exs
defmodule Hello do
  defmacro __using__(opts) do
    greeting = Keyword.get(opts, :greeting, "Hi")

    quote do
      def hello(name), do: unquote(greeting) <> ", " <> name
    end
  end
end

defmodule Example do
  use Hello, greeting: "Hola"
end

IO.puts Example.hello("Sean")

!elixir temp.exs
Hola, Sean
Mix
# mix is like Ruby's Bundler, RubyGems, and Rake combined
# Try it in the colab environment
!mix new example
#=>
# * creating README.md
# * creating .formatter.exs
# * creating .gitignore
# * creating mix.exs
# * creating lib
# * creating lib/example.ex
# * creating test
# * creating test/test_helper.exs
# * creating test/example_test.exs
#
# Your Mix project was created successfully.
# You can use "mix" to compile it, test it, and more:
#
#     cd example
#     mix test
#
# Run "mix help" for more commands.

# In colab, shell commands must be written on one line to operate inside a directory
!cd example; mix test
!cd example; ls -la
!cd example; cat mix.exs
#=> a program in the following form is generated
# defmodule Example.MixProject do
#   use Mix.Project
#   def project do    # the name (app) and dependencies (deps) are declared here
#   def application do
#   defp deps do
# end

# iex -S mix gives an interactive session, but that is not possible in colab
# cd example
# iex -S mix

# compile
# mix compiles code changes automatically; it can also compile explicitly
# !cd example; mix compile
# When run outside the project root, only global mix tasks are available
!cd example; mix compile
!cd example; ls -la
!cd example; ls -laR _build

# Managing dependencies
# To add a new dependency, add it to deps in mix.exs:
# a tuple of the package name as an atom, a version string, and optional options
# As a real example, look at the dependencies of a project such as phoenix_slim:
# def deps do
#   [
#     {:phoenix, "~> 1.1 or ~> 1.2"},
#     {:phoenix_html, "~> 2.3"},
#     {:cowboy, "~> 1.0", only: [:dev, :test]},
#     {:slime, "~> 0.14"}
#   ]
# end
# The cowboy dependency is only needed during development and testing

# Fetching the dependencies is similar to bundle install:
# mix deps.get

!cd example/_build/test/lib/example/ebin; ./example.app #=> Permission denied
# So apps cannot be launched in the colab environment

# Environments
# Much like Bundler, mix supports different environments
# and is configured with three environments out of the box:
# :dev  — the default environment
# :test — the environment used by mix test; covered further in the next lesson
# :prod — the environment used when shipping the application to production
# The current environment can be read with Mix.env
# and changed through the MIX_ENV environment variable:
# MIX_ENV=prod mix compile
Sigils
# A sigil is special Elixir syntax for working with literals; it begins with a tilde ~

# List of sigils:
# ~C generates a character list with no escaping or interpolation
# ~c generates a character list with escaping and interpolation
# ~R generates a regular expression with no escaping or interpolation
# ~r generates a regular expression with escaping and interpolation
# ~S generates a string with no escaping or interpolation
# ~s generates a string with escaping and interpolation
# ~W generates a word list with no escaping or interpolation
# ~w generates a word list with escaping and interpolation
# ~N generates a NaiveDateTime struct

# List of delimiters:
# <...> a pair of angle brackets
# {...} a pair of braces
# [...] a pair of brackets
# (...) a pair of parentheses
# |...| a pair of pipes
# /.../ a pair of slashes
# "..." a pair of double quotes
# '...' a pair of single quotes

# Character lists
#=> the result differs from the tutorial!!!!
!elixir -e 'IO.puts ~c/2 + 7 = #{ 2 + 7 }/'
!elixir -e 'IO.puts ~C/2 + 7 = #{ 2 + 7 }/'

# Regular expressions
!elixir -e 'IO.puts 3 == 3'
!elixir -e 'IO.puts "Elixir" =~ ~r/elixir/'
!elixir -e 'IO.puts "elixir" =~ ~r/elixir/'
!echo
!elixir -e 'IO.puts "Elixir" =~ ~r/elixir/i'
!elixir -e 'IO.puts "elixir" =~ ~r/elixir/i'

# Use Regex.split/2, which is built on Erlang's regular expression library
!elixir -e 'string="100_000_000"; IO.inspect Regex.split(~r/_/, string)'

# Strings
!elixir -e 'IO.puts ~s/welcome to elixir #{String.downcase "SCHOOL"}/'
!elixir -e 'IO.puts ~S/welcome to elixir #{String.downcase "SCHOOL"}/'

# Word lists
!elixir -e 'IO.inspect ~w/i love elixir school/'
!elixir -e 'IO.inspect ~w/i love\telixir school/'
!elixir -e 'IO.inspect ~W/i love\telixir school/'
!elixir -e 'name="Bob"; IO.inspect ~w/i love #{name}lixir school/'
!elixir -e 'name="Bob"; IO.inspect ~W/i love #{name}lixir school/'

# NaiveDateTime
# NaiveDateTime is useful for quickly creating a struct representing a DateTime without a timezone
# Creating a NaiveDateTime struct directly should mostly be avoided,
# but it is useful for pattern matching
!elixir -e 'IO.inspect NaiveDateTime.from_iso8601("2015-01-23 23:50:07") == {:ok, ~N[2015-01-23 23:50:07]}'

# Creating a sigil
%%writefile temp.exs
defmodule MySigils do
  def sigil_u(string, []), do: String.upcase(string)
end

defmodule Main do
  import MySigils
  IO.puts (~u/elixir school/)
end

!elixir temp.exs
ELIXIR SCHOOL
**Documentation** **Attributes for inline documentation** * @moduledoc - for module-level documentation * @doc - for function-level documentation (omitted) **Testing** ExUnit (omitted) Comprehensions
# List comprehensions
# A comprehension is syntactic sugar for looping over enumerables
!elixir -e 'list=[1,2,3,4,5];IO.inspect for x <- list, do: x*x'
# Note how for and the generator are used
# The generator is the `x <- list` part
# In Haskell this would be [x * x | x <- list], close to mathematical set notation,
# but in Elixir it is written as above

# Comprehensions are not limited to lists
# Keyword lists
!elixir -e 'IO.inspect for {_key, val} <- [one: 1, two: 2, three: 3], do: val'
# Maps
!elixir -e 'IO.inspect for {k, v} <- %{"a" => "A", "b" => "B"}, do: {k, v}'
# Binaries
!elixir -e 'IO.inspect for <<c <- "hello">>, do: <<c>>'

# Generators use pattern matching to compare the input set against the left-side variable;
# when no match is found, the value is ignored
!elixir -e 'IO.inspect for {:ok, val} <- [ok: "Hello", error: "Unknown", ok: "World"], do: val'

# Nesting
%%writefile temp.exs
list = [1, 2, 3, 4]

IO.inspect (
  for n <- list, times <- 1..n do
    String.duplicate("*", times)
  end
)

!elixir temp.exs

# Visualizing the loop
!elixir -e 'list = [1, 2, 3, 4]; for n <- list, times <- 1..n, do: IO.puts "#{n} - #{times}"'

# Filters
!elixir -e 'import Integer; IO.inspect for x <- 1..10, is_even(x), do: x'

# Keep only values that are even and divisible by 3
%%writefile temp.exs
import Integer

IO.inspect (
  for x <- 1..100, is_even(x), rem(x, 3) == 0, do: x)

!elixir temp.exs

# Using :into
# To produce something other than a list,
# :into takes a struct that implements the Collectable protocol
# Using :into to build a map from a keyword list
!elixir -e 'IO.inspect for {k, v} <- [one: 1, two: 2, three: 3], into: %{}, do: {k, v}'
!elixir -e 'IO.inspect %{:one => 1, :three => 2, :two => 2}'
!elixir -e 'IO.inspect %{"one" => 1, "three" => 2, "two" => 2}'
# Understandable confusion — presumably the design is inherited from Erlang:
# maps alone were not enough for fast programs, so keyword lists exist,
# a list that behaves like a map

# Bitstrings are enumerable, so :into can also be used to build a string
!elixir -e "IO.inspect for c <- [72, 101, 108, 108, 111], into: \"\", do: <<c>>"
"Hello"
Strings
# Strings
# An Elixir string is a sequence of bytes
!elixir -e 'string = <<104,101,108,108,111>>;IO.puts string'
!elixir -e 'string = <<104,101,108,108,111>>;IO.inspect string'
!elixir -e 'IO.inspect <<104,101,108,108,111>>'
!echo
# Appending a 0 byte to a string makes it display as a binary
!elixir -e 'IO.inspect <<104,101,108,108,111,0>>'
# Question: how do you display a string as a binary?
!elixir -e 'IO.inspect "hello"<> <<0>>'
# Experiment: Japanese
!elixir -e 'IO.inspect "あ"<> <<0>>' #=> <<227, 129, 130, 0>>
!elixir -e 'IO.inspect <<227, 129, 130>>' #=> "あ"

# Charlists
# Besides strings, Elixir has a separate charlist type
# Strings are created with double quotes, charlists with single quotes
# Charlists are lists of UTF-8 code points; strings are binaries
!elixir -e "IO.inspect 'hello'"
!elixir -e "IO.inspect 'hello' ++ [0]"
!elixir -e 'IO.inspect "hello"<> <<0>>'
!echo
!elixir -e "IO.inspect 'hełło' ++ [0]"
!elixir -e 'IO.inspect "hełło"<> <<0>>'
!echo
!elixir -e "IO.inspect 'あ' ++ [0]"
!elixir -e 'IO.inspect "あ"<> <<0>>'

# Getting a code point with the question mark
# Code points are Unicode, so they can be one or more bytes
!elixir -e 'IO.inspect ?Z'
!elixir -e 'IO.inspect ?あ'
!elixir -e 'IO.inspect "áñèane" <> <<0>>'
!elixir -e "IO.inspect 'áñèane' ++ [0]"
!elixir -e "IO.inspect 'あいう' ++ [0]"
# The ? notation can be used for symbols
# When programming in Elixir, strings are normally used, not charlists
# Charlists exist for the sake of Erlang

# The String module has the functions graphemes/1 and codepoints/1 for getting code points
!elixir -e 'string = "\u0061\u0301"; IO.puts string' #=> á
!elixir -e 'string = "\u0061\u0301"; IO.inspect String.codepoints string'
!elixir -e 'string = "\u0061\u0301"; IO.inspect String.graphemes string'
# The experiments below show that á and あ differ:
# á is 1 character in graphemes but 2 in codepoints
# あ is 1 character in both
!elixir -e 'string = "あいう"; IO.puts string'
!elixir -e 'string = "あいう"; IO.inspect String.codepoints string'
!elixir -e 'string = "あいう"; IO.inspect String.graphemes string'

# String functions
# length/1
!elixir -e 'IO.puts String.length "hello"'
!elixir -e 'IO.puts String.length "あいう"'
# replace/3
!elixir -e 'IO.puts String.replace("Hello", "e", "a")'
# duplicate/2
!elixir -e 'IO.puts String.duplicate("Oh my ", 3)'
# split/2
!elixir -e 'IO.inspect String.split("Oh my ", " ")'
# split/1 — this is probably the equivalent of words
!elixir -e 'IO.inspect String.split("Oh my ")'

# Exercise: anagram check
# A = super
# B = perus
# A is an anagram of B if rearranging A yields B
%%writefile temp.exs
defmodule Anagram do
  def anagrams?(a, b) when is_binary(a) and is_binary(b) do
    sort_string(a) == sort_string(b)
  end

  def sort_string(string) do
    string
    |> String.downcase()
    |> String.graphemes()
    |> Enum.sort()
  end
end

defmodule Main do
  IO.puts Anagram.anagrams?("Hello", "ohell")
  IO.puts Anagram.anagrams?("María", "íMara")
  IO.puts Anagram.anagrams?(3, 5) #=> error
end

!elixir temp.exs
_____no_output_____
MIT
learnelixir.ipynb
kalz2q/myjupyternotebooks
Date and Time
# Date and time

# Get the current time
!elixir -e 'IO.puts Time.utc_now'
# Build a Time struct with a sigil
!elixir -e 'IO.puts ~T[21:00:27.472988]'
# hour, minute, second
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.hour'
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.minute'
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.second'

# Date
!elixir -e 'IO.puts Date.utc_today'
# Build a Date struct with a sigil
!elixir -e 'IO.puts ~D[2022-03-22]'
#
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts date'
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts Date.day_of_week date'
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts Date.leap_year? date'
!echo

# NaiveDateTime handles both Date and Time but has no time zone support
!elixir -e 'IO.puts NaiveDateTime.utc_now'
!elixir -e 'IO.puts ~N[2022-03-22 21:14:23.371420]'
!elixir -e 'IO.puts NaiveDateTime.add(~N[2022-03-22 21:14:23.371420],30)'
!elixir -e 'IO.puts NaiveDateTime.add(~N[2022-03-22 21:14:23],30)'

# DateTime
# DateTime handles both Date and Time and supports time zones
# However!!!! Elixir has no time zone database by default
# It uses the time zone database returned by Calendar.get_time_zone_database/0,
# which defaults to Calendar.UTCOnlyTimeZoneDatabase; that one handles only Etc/UTC
# and returns {:error, :utc_only_time_zone_database} for any other time zone

# By supplying a time zone you can create a DateTime instance from a NaiveDateTime
!elixir -e 'IO.inspect DateTime.from_naive(~N[2016-05-24 13:26:08.003], "Etc/UTC")'

# Using time zones
# To use time zones in Elixir, install the tzdata package
# and use Tzdata as the time zone database

# Build a time in the Paris time zone and convert it to New York time
# The time difference between Paris and New York is 6 hours
# %%writefile temp.exs
# config :elixir, :time_zone_database, Tzdata.TimeZoneDatabase
# paris_datetime = DateTime.from_naive!(~N[2019-01-01 12:00:00], "Europe/Paris")
# {:ok, ny_datetime} = DateTime.shift_zone(paris_datetime, "America/New_York")
# IO.inspect paris_datetime
# IO.inspect ny_datetime
Overwriting temp.exs
MIT
learnelixir.ipynb
kalz2q/myjupyternotebooks
Custom Mix Tasks: skipped. (Currently here.) IEx Helpers: skipped.
_____no_output_____
MIT
learnelixir.ipynb
kalz2q/myjupyternotebooks
NeuroTorch Tutorial

**NeuroTorch** is a framework for reconstructing neuronal morphology from optical microscopy images. It interfaces PyTorch with different automated neuron tracing algorithms for fast, accurate, scalable neuronal reconstructions. It uses deep learning to generate an initial segmentation of neurons in optical microscopy images. This segmentation is then traced using various automated neuron tracing algorithms to convert the segmentation into an SWC file, the most common neuronal morphology file format. NeuroTorch is designed with scalability in mind and can handle teravoxel-sized images.

This IPython notebook will outline a brief tutorial for using NeuroTorch to train and predict on image volume datasets.

Creating image datasets

One of NeuroTorch’s key features is its dynamic approach to volumetric datasets, which allows it to handle teravoxel-sized images without worrying about memory concerns and efficiency. Everything is loaded just-in-time based on when it is needed or expected to be needed. To load an image dataset, we need to specify the voxel coordinates of each image file as shown in the files `inputs_spec.json` and `labels_spec.json`.

`inputs_spec.json`

```json
[
    {
        "filename" : "inputs.tif",
        "bounding_box" : [[0, 0, 0], [1024, 512, 50]]
    },
    {
        "filename" : "inputs.tif",
        "bounding_box" : [[0, 0, 50], [1024, 512, 100]]
    }
]
```

`labels_spec.json`

```json
[
    {
        "filename" : "labels.tif",
        "bounding_box" : [[0, 0, 0], [1024, 512, 50]]
    },
    {
        "filename" : "labels.tif",
        "bounding_box" : [[0, 0, 50], [1024, 512, 100]]
    }
]
```

Loading image datasets

Now that the image datasets for the inputs and labels have been specified, these datasets can be loaded with NeuroTorch.
from neurotorch.datasets.specification import JsonSpec import os IMAGE_PATH = '../../tests/images/' json_spec = JsonSpec() # Initialize the JSON specification # Create a dataset containing the inputs inputs = json_spec.open(os.path.join(IMAGE_PATH, "inputs_spec.json")) # Create a dataset containing the labels labels = json_spec.open(os.path.join(IMAGE_PATH, "labels_spec.json"))
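Independent of the NeuroTorch API, the specification files are plain JSON, so they can be sanity-checked with the standard library before loading. This is only a sketch: the spec text is copied from the example above, and the checks are illustrative assumptions about what a valid entry should look like.

```python
import json

# Illustrative spec text matching the format shown above
spec_text = """
[
  {"filename": "inputs.tif", "bounding_box": [[0, 0, 0], [1024, 512, 50]]},
  {"filename": "inputs.tif", "bounding_box": [[0, 0, 50], [1024, 512, 100]]}
]
"""

spec = json.loads(spec_text)
for entry in spec:
    (x0, y0, z0), (x1, y1, z1) = entry["bounding_box"]
    # every bounding box should have a positive extent along each axis
    assert x1 > x0 and y1 > y0 and z1 > z0
print(len(spec), "bounding boxes checked")
```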
_____no_output_____
BSD-3-Clause
docs/tutorials/neurotorch-tutorial.ipynb
jgornet/NeuroTorch
Augmenting datasets

With the image datasets, it is possible to augment data on-the-fly. To implement an augmentation, such as branch occlusion, instantiate an aligned volume and specify the augmentation with the aligned volume.
from neurotorch.datasets.dataset import AlignedVolume
from neurotorch.augmentations.occlusion import Occlusion
from neurotorch.augmentations.blur import Blur
from neurotorch.augmentations.brightness import Brightness
from neurotorch.augmentations.dropped import Drop
from neurotorch.augmentations.duplicate import Duplicate
from neurotorch.augmentations.stitch import Stitch

volume = AlignedVolume([inputs, labels])

# Chain the augmentations so that each one wraps the previous volume
augmented_volume = Occlusion(volume, frequency=0.5)
augmented_volume = Stitch(augmented_volume, frequency=0.5)
augmented_volume = Drop(augmented_volume, frequency=0.5)
augmented_volume = Blur(augmented_volume, frequency=0.5)
augmented_volume = Duplicate(augmented_volume, frequency=0.5)
_____no_output_____
BSD-3-Clause
docs/tutorials/neurotorch-tutorial.ipynb
jgornet/NeuroTorch
Training with the image datasets

To train a neural network using these image datasets, load the neural network architecture and initialize a `Trainer`. To save training checkpoints, add a `CheckpointWriter` to the `Trainer` object. Lastly, call the `Trainer` object to run training.
from neurotorch.core.trainer import Trainer
from neurotorch.nets.RSUNet import RSUNet
from neurotorch.training.checkpoint import CheckpointWriter
from neurotorch.training.logging import ImageWriter, LossWriter

net = RSUNet()  # Initialize the U-Net architecture

# Setup the trainer
trainer = Trainer(net, augmented_volume, max_epochs=10,
                  gpu_device=0)

# Setup the trainer to add a checkpoint every 50 epochs
trainer = LossWriter(trainer, ".", "tutorial_tensorboard")
trainer = ImageWriter(trainer, ".", "tutorial_tensorboard")
trainer = CheckpointWriter(trainer, checkpoint_dir='.',
                           checkpoint_period=50)

trainer.run_training()
_____no_output_____
BSD-3-Clause
docs/tutorials/neurotorch-tutorial.ipynb
jgornet/NeuroTorch
Predicting using NeuroTorch

Once training has completed, we can use the training checkpoints to predict on image datasets. We first have to load the neural network architecture and image volume. We then have to initialize a `Predictor` object and an output volume. Once these have been specified, we can begin prediction.
from neurotorch.nets.RSUNet import RSUNet from neurotorch.core.predictor import Predictor from neurotorch.datasets.filetypes import TiffVolume from neurotorch.datasets.dataset import Array from neurotorch.datasets.datatypes import (BoundingBox, Vector) import numpy as np import tifffile as tif import os IMAGE_PATH = '../../tests/images/' net = RSUNet() # Initialize the U-Net architecture checkpoint = './iteration_1000.ckpt' # Specify the checkpoint path with TiffVolume(os.path.join(IMAGE_PATH, "inputs.tif"), BoundingBox(Vector(0, 0, 0), Vector(1024, 512, 50))) as inputs: predictor = Predictor(net, checkpoint, gpu_device=0) output_volume = Array(np.zeros(inputs.getBoundingBox() .getNumpyDim(), dtype=np.float32)) predictor.run(inputs, output_volume, batch_size=5) tif.imsave("test_prediction.tif", output_volume.getArray().astype(np.float32))
_____no_output_____
BSD-3-Clause
docs/tutorials/neurotorch-tutorial.ipynb
jgornet/NeuroTorch
Displaying the prediction

Predictions are output in logits form. To map this to a probability distribution, we need to apply a sigmoid function to the prediction. We can then evaluate the prediction and ground-truth.
import numpy as np
import matplotlib.pyplot as plt

# Apply sigmoid function to map logits to probabilities
probability_map = 1/(1+np.exp(-output_volume.getArray()))

# Plot prediction and ground-truth
plt.subplot(2, 1, 1)
plt.title('Prediction')
plt.imshow(probability_map[25])
plt.axis('off')

plt.subplot(2, 1, 2)
plt.title('Ground-Truth')
plt.imshow(labels.get(
    BoundingBox(Vector(0, 0, 0),
                Vector(1024, 512, 50))).getArray()[25],
    cmap='gray'
)
plt.axis('off')

plt.show()
_____no_output_____
BSD-3-Clause
docs/tutorials/neurotorch-tutorial.ipynb
jgornet/NeuroTorch
This IPython Notebook introduces the use of the `openmc.mgxs` module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:

* **General equations** for scalar-flux averaged multi-group cross sections
* Creation of multi-group cross sections for an **infinite homogeneous medium**
* Use of **tally arithmetic** to manipulate multi-group cross sections

Introduction to Multi-Group Cross Sections (MGXS)

Many Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use *multi-group cross sections* defined over discretized energy bins or *energy groups*. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.
from IPython.display import Image Image(filename='images/mgxs.png', width=350)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The `openmc.mgxs` Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.

Before proceeding to illustrate how one may use the `openmc.mgxs` module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by `openmc.mgxs` - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.

Introductory Notation

The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. **Note**: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.

Spatial and Energy Discretization

The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in \{1, 2, ..., G\}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa.
The integration over neutron energies across a discrete energy group is commonly referred to as **energy condensation**. Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in \{1, 2, ..., K\}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as **spatial homogenization**.

General Scalar-Flux Weighted MGXS

The multi-group cross sections computed by `openmc.mgxs` are defined as a *scalar flux-weighted average* of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\sigma_{n,x,k,g}$ as follows:

$$\sigma_{n,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$

This scalar flux-weighted average microscopic cross section is computed by `openmc.mgxs` for most multi-group cross sections, including total, absorption, and fission reaction types. These double integrals are stochastically computed with OpenMC's tally system - in particular, [filters](http://openmc.readthedocs.io/en/latest/usersguide/tallies.html#filters) on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.
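To make the flux-weighted average above concrete, here is a purely numerical sketch of energy condensation with made-up fine-bin data (none of these numbers come from OpenMC): the group cross section is just the reaction-rate integral divided by the flux integral over each group's energy bins.

```python
# Made-up fine-bin data: microscopic cross section (barns) and scalar flux
# over six fine energy bins, condensed to two coarse groups of three bins each.
sigma = [2.0, 1.5, 1.0, 8.0, 12.0, 20.0]
flux = [0.5, 1.0, 1.5, 2.0, 1.0, 0.5]
group_bins = [(0, 3), (3, 6)]  # fine-bin index ranges for groups 1 and 2

sigma_g = []
for lo, hi in group_bins:
    # numerator: reaction rate summed over the group's fine bins
    reaction_rate = sum(s * f for s, f in zip(sigma[lo:hi], flux[lo:hi]))
    # denominator: scalar flux summed over the same bins
    total_flux = sum(flux[lo:hi])
    sigma_g.append(reaction_rate / total_flux)  # flux-weighted average

print(sigma_g)
```

The same ratio-of-tallies structure is what the OpenMC tally filters above compute stochastically.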
Multi-Group Scattering Matrices

The general multi-group cross section $\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes.

We denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\sigma_{n,s}(\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\sigma_{n,s,k,g \to g'}$ as follows:

$$\sigma_{n,s,k,g\rightarrow g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,s}(\mathbf{r},E'\rightarrow E'')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$

This scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.

Multi-Group Fission Spectrum

The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum.
The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$. Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n}(\mathbf{r},E)$. The multi-group fission spectrum $\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$. Similar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\chi_{n,k,g}$ as follows:

$$\chi_{n,k,g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n}(\mathbf{r},E'\rightarrow E'')\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$

The fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.

This concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the `openmc.mgxs` module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.

Generate Input Files
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import openmc import openmc.mgxs as mgxs
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
We begin by creating a material for the homogeneous medium.
# Instantiate a Material and register the Nuclides inf_medium = openmc.Material(name='moderator') inf_medium.set_density('g/cc', 5.) inf_medium.add_nuclide('H1', 0.028999667) inf_medium.add_nuclide('O16', 0.01450188) inf_medium.add_nuclide('U235', 0.000114142) inf_medium.add_nuclide('U238', 0.006886019) inf_medium.add_nuclide('Zr90', 0.002116053)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
With our material, we can now create a `Materials` object that can be exported to an actual XML file.
# Instantiate a Materials collection and export to XML materials_file = openmc.Materials([inf_medium]) materials_file.export_to_xml()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
# Instantiate boundary Planes min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63) max_x = openmc.XPlane(boundary_type='reflective', x0=0.63) min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63) max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
# Instantiate a Cell cell = openmc.Cell(cell_id=1, name='cell') # Register bounding Surfaces with the Cell cell.region = +min_x & -max_x & +min_y & -max_y # Fill the Cell with the Material cell.fill = inf_medium
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
# Create root universe root_universe = openmc.Universe(name='root universe', cells=[cell])
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
We now must create a geometry that is assigned a root universe and export it to XML.
# Create Geometry and set root Universe openmc_geometry = openmc.Geometry(root_universe) # Export to "geometry.xml" openmc_geometry.export_to_xml()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
# OpenMC simulation parameters batches = 50 inactive = 10 particles = 2500 # Instantiate a Settings object settings_file = openmc.Settings() settings_file.batches = batches settings_file.inactive = inactive settings_file.particles = particles settings_file.output = {'tallies': True} # Create an initial uniform spatial source distribution over fissionable zones bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63] uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True) settings_file.source = openmc.source.Source(space=uniform_dist) # Export to "settings.xml" settings_file.export_to_xml()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in `EnergyGroups` class.
# Instantiate a 2-group EnergyGroups object groups = mgxs.EnergyGroups() groups.group_edges = np.array([0., 0.625, 20.0e6])
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
We can now use the `EnergyGroups` object, along with our previously created materials and geometry, to instantiate some `MGXS` objects from the `openmc.mgxs` module. In particular, the following are subclasses of the generic and abstract `MGXS` class:

* `TotalXS`
* `TransportXS`
* `AbsorptionXS`
* `CaptureXS`
* `FissionXS`
* `KappaFissionXS`
* `ScatterXS`
* `ScatterMatrixXS`
* `Chi`
* `ChiPrompt`
* `InverseVelocity`
* `PromptNuFissionXS`

Of course, we are aware that the fission cross section (`FissionXS`) can sometimes be paired with the fission neutron multiplication to become $\nu\sigma_f$. This can be accommodated in the `FissionXS` class by setting the `nu` parameter to `True` as shown below.

Additionally, scattering reactions (like (n,2n)) can also be defined to take into account the neutron multiplication to become $\nu\sigma_s$. This can be accommodated in the transport (`TransportXS`), scattering (`ScatterXS`), and scattering-matrix (`ScatterMatrixXS`) cross section types by setting the `nu` parameter to `True` as shown below.

These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.
# Instantiate a few different sections total = mgxs.TotalXS(domain=cell, groups=groups) absorption = mgxs.AbsorptionXS(domain=cell, groups=groups) scattering = mgxs.ScatterXS(domain=cell, groups=groups) # Note that if we wanted to incorporate neutron multiplication in the # scattering cross section we would write the previous line as: # scattering = mgxs.ScatterXS(domain=cell, groups=groups, nu=True)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Each multi-group cross section object stores its tallies in a Python dictionary called `tallies`. We can inspect the tallies in the dictionary for our `AbsorptionXS` object as follows.
absorption.tallies
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
The `AbsorptionXS` object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each `MGXS` object contains the tallies that it needs, we must add these tallies to a `Tallies` object to generate the "tallies.xml" input file for OpenMC.
# Instantiate an empty Tallies object tallies_file = openmc.Tallies() # Add total tallies to the tallies file tallies_file += total.tallies.values() # Add absorption tallies to the tallies file tallies_file += absorption.tallies.values() # Add scattering tallies to the tallies file tallies_file += scattering.tallies.values() # Export to "tallies.xml" tallies_file.export_to_xml()
/home/romano/openmc/openmc/mixin.py:61: IDWarning: Another CellFilter instance already exists with id=3. warn(msg, IDWarning) /home/romano/openmc/openmc/mixin.py:61: IDWarning: Another EnergyFilter instance already exists with id=4. warn(msg, IDWarning)
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Now we have a complete set of inputs, so we can go ahead and run our simulation.
# Run OpenMC openmc.run()
%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%% ############### %%%%%%%%%%%%%%%%%%%%%%%% ################## %%%%%%%%%%%%%%%%%%%%%%% ################### %%%%%%%%%%%%%%%%%%%%%%% #################### %%%%%%%%%%%%%%%%%%%%%% ##################### %%%%%%%%%%%%%%%%%%%%% ###################### %%%%%%%%%%%%%%%%%%%% ####################### %%%%%%%%%%%%%%%%%% ####################### %%%%%%%%%%%%%%%%% ###################### %%%%%%%%%%%%%%%%% #################### %%%%%%%%%%%%%%%%% ################# %%%%%%%%%%%%%%%%% ############### %%%%%%%%%%%%%%%% ############ %%%%%%%%%%%%%%% ######## %%%%%%%%%%%%%% %%%%%%%%%%% | The OpenMC Monte Carlo Code Copyright | 2011-2017 Massachusetts Institute of Technology License | http://openmc.readthedocs.io/en/latest/license.html Version | 0.9.0 Git SHA1 | 9b7cebf7bc34d60e0f1750c3d6cb103df11e8dc4 Date/Time | 2017-12-04 20:56:46 OpenMP Threads | 4 Reading settings XML file... Reading cross sections XML file... Reading materials XML file... Reading geometry XML file... Building neighboring cells lists for each surface... Reading H1 from /home/romano/openmc/scripts/nndc_hdf5/H1.h5 Reading O16 from /home/romano/openmc/scripts/nndc_hdf5/O16.h5 Reading U235 from /home/romano/openmc/scripts/nndc_hdf5/U235.h5 Reading U238 from /home/romano/openmc/scripts/nndc_hdf5/U238.h5 Reading Zr90 from /home/romano/openmc/scripts/nndc_hdf5/Zr90.h5 Maximum neutron transport energy: 2.00000E+07 eV for H1 Reading tallies XML file... Writing summary.h5 file... Initializing source particles... ====================> K EIGENVALUE SIMULATION <==================== Bat./Gen. 
k Average k ========= ======== ==================== 1/1 1.11184 2/1 1.15820 3/1 1.18468 4/1 1.17492 5/1 1.19645 6/1 1.18436 7/1 1.14070 8/1 1.15150 9/1 1.19202 10/1 1.17677 11/1 1.20272 12/1 1.21366 1.20819 +/- 0.00547 13/1 1.15906 1.19181 +/- 0.01668 14/1 1.14687 1.18058 +/- 0.01629 15/1 1.14570 1.17360 +/- 0.01442 16/1 1.13480 1.16713 +/- 0.01343 17/1 1.17680 1.16852 +/- 0.01144 18/1 1.16866 1.16853 +/- 0.00990 19/1 1.19253 1.17120 +/- 0.00913 20/1 1.18124 1.17220 +/- 0.00823 21/1 1.19206 1.17401 +/- 0.00766 22/1 1.17681 1.17424 +/- 0.00700 23/1 1.17634 1.17440 +/- 0.00644 24/1 1.13659 1.17170 +/- 0.00654 25/1 1.17144 1.17169 +/- 0.00609 26/1 1.20649 1.17386 +/- 0.00610 27/1 1.11238 1.17024 +/- 0.00678 28/1 1.18911 1.17129 +/- 0.00647 29/1 1.14681 1.17000 +/- 0.00626 30/1 1.12152 1.16758 +/- 0.00641 31/1 1.12729 1.16566 +/- 0.00639 32/1 1.15399 1.16513 +/- 0.00612 33/1 1.13547 1.16384 +/- 0.00599 34/1 1.17723 1.16440 +/- 0.00576 35/1 1.09296 1.16154 +/- 0.00622 36/1 1.19621 1.16287 +/- 0.00612 37/1 1.12560 1.16149 +/- 0.00605 38/1 1.17872 1.16211 +/- 0.00586 39/1 1.17721 1.16263 +/- 0.00568 40/1 1.13724 1.16178 +/- 0.00555 41/1 1.18526 1.16254 +/- 0.00542 42/1 1.13779 1.16177 +/- 0.00531 43/1 1.15066 1.16143 +/- 0.00516 44/1 1.12174 1.16026 +/- 0.00514 45/1 1.17478 1.16068 +/- 0.00501 46/1 1.14146 1.16014 +/- 0.00489 47/1 1.20464 1.16135 +/- 0.00491 48/1 1.15119 1.16108 +/- 0.00479 49/1 1.17938 1.16155 +/- 0.00468 50/1 1.15798 1.16146 +/- 0.00457 Creating state point statepoint.50.h5... 
=======================> TIMING STATISTICS <======================= Total time for initialization = 4.0504E-01 seconds Reading cross sections = 3.6457E-01 seconds Total time in simulation = 6.3478E+00 seconds Time in transport only = 6.0079E+00 seconds Time in inactive batches = 8.1713E-01 seconds Time in active batches = 5.5307E+00 seconds Time synchronizing fission bank = 5.4640E-03 seconds Sampling source sites = 4.0981E-03 seconds SEND/RECV source sites = 1.2606E-03 seconds Time accumulating tallies = 1.2030E-04 seconds Total time for finalization = 9.6554E-04 seconds Total time elapsed = 6.7713E+00 seconds Calculation Rate (inactive) = 30594.8 neutrons/second Calculation Rate (active) = 18080.8 neutrons/second ============================> RESULTS <============================ k-effective (Collision) = 1.15984 +/- 0.00411 k-effective (Track-length) = 1.16146 +/- 0.00457 k-effective (Absorption) = 1.16177 +/- 0.00380 Combined k-effective = 1.16105 +/- 0.00364 Leakage Fraction = 0.00000 +/- 0.00000
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Tally Data Processing

Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a `StatePoint` object.
# Load the last statepoint file sp = openmc.StatePoint('statepoint.50.h5')
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a `Summary` object is automatically linked when a `StatePoint` is loaded. This is necessary for the `openmc.mgxs` module to properly process the tally data. The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the `StatePoint` into each object as follows and our `MGXS` objects will compute the cross sections for us under-the-hood.
# Load the tallies from the statepoint into each MGXS object total.load_from_statepoint(sp) absorption.load_from_statepoint(sp) scattering.load_from_statepoint(sp)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Voila! Our multi-group cross sections are now ready to rock 'n roll!

Extracting and Storing MGXS Data

Let's first inspect our total cross section by printing it to the screen.
total.print_xs()
Multi-Group XS Reaction Type = total Domain Type = cell Domain ID = 1 Cross Sections [cm^-1]: Group 1 [0.625 - 20000000.0eV]: 6.81e-01 +/- 2.69e-01% Group 2 [0.0 - 0.625 eV]: 1.40e+00 +/- 5.93e-01%
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Since the `openmc.mgxs` module uses [tally arithmetic](http://openmc.readthedocs.io/en/latest/examples/tally-arithmetic.html) under-the-hood, the cross section is stored as a "derived" `Tally` object. This means that it can be queried and manipulated using all of the same methods supported for the `Tally` class in the OpenMC Python API. For example, we can construct a [Pandas](http://pandas.pydata.org/) `DataFrame` of the multi-group cross section data.
df = scattering.get_pandas_dataframe() df.head(10)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
absorption.export_xs_data(filename='absorption-xs', format='excel')
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
The following code snippet shows how to export all three `MGXS` to the same HDF5 binary data store.
total.build_hdf5_store(filename='mgxs', append=True) absorption.build_hdf5_store(filename='mgxs', append=True) scattering.build_hdf5_store(filename='mgxs', append=True)
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Comparing MGXS with Tally Arithmetic

Finally, we illustrate how one can leverage OpenMC's [tally arithmetic](http://openmc.readthedocs.io/en/latest/examples/tally-arithmetic.html) data processing feature with `MGXS` objects. The `openmc.mgxs` module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each `MGXS` object includes an `xs_tally` attribute which is a "derived" `Tally` based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the `TotalXS` is equal to the sum of the `AbsorptionXS` and `ScatterXS` objects.
# Use tally arithmetic to compute the difference between the total, absorption and scattering difference = total.xs_tally - absorption.xs_tally - scattering.xs_tally # The difference is a derived tally which can generate Pandas DataFrames for inspection difference.get_pandas_dataframe()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
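The "automated uncertainty propagation" mentioned above is, for a ratio of independent quantities, the standard first-order quadrature rule: the relative uncertainties add in quadrature. A numpy sketch of the rule itself (not OpenMC's implementation — the tally means and standard deviations below are made up):

```python
import numpy as np

def ratio_with_uncertainty(a, sigma_a, b, sigma_b):
    """Propagate uncertainty through r = a / b assuming independent errors:
    (sigma_r / r)**2 = (sigma_a / a)**2 + (sigma_b / b)**2
    """
    r = a / b
    sigma_r = np.abs(r) * np.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
    return r, sigma_r

# made-up "absorption" and "total" tally means and standard deviations
absorption_mean, absorption_sd = 0.012, 0.0003
total_mean, total_sd = 0.60, 0.006

ratio, ratio_sd = ratio_with_uncertainty(absorption_mean, absorption_sd,
                                         total_mean, total_sd)
print(ratio, ratio_sd)
```

The derived `Tally` objects carry this bookkeeping for every filter/nuclide/score bin at once, which is what makes the one-line ratio expressions above possible.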
Similarly, we can use tally arithmetic to compute the ratio of `AbsorptionXS` and `ScatterXS` to the `TotalXS`.
# Use tally arithmetic to compute the absorption-to-total MGXS ratio absorption_to_total = absorption.xs_tally / total.xs_tally # The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection absorption_to_total.get_pandas_dataframe() # Use tally arithmetic to compute the scattering-to-total MGXS ratio scattering_to_total = scattering.xs_tally / total.xs_tally # The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection scattering_to_total.get_pandas_dataframe()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.
# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity sum_ratio = absorption_to_total + scattering_to_total # The sum ratio is a derived tally which can generate Pandas DataFrames for inspection sum_ratio.get_pandas_dataframe()
_____no_output_____
MIT
examples/jupyter/mgxs-part-i.ipynb
cchaugen/temp_openmc_for_Sterling
Bungee Characterization Lab
PH 211 COCC
Bruce Emerson
1/20/2021

This notebook is meant to provide tools and discussion to support data analysis and presentation as you generate your lab reports: [Bungee Characterization (Bungee I)](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211LabbungeeI.html) and [Bungee I Lab Discussion](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211LabDbungeeI.html).

In this lab we are gathering some data, entering the data into the notebook, plotting the data as a scatterplot, plotting a physics model of the bungee, and finally looking for patterns through normalizing the data. For the formal lab report you will want to create your own description of what you understand the process and intended outcome of the lab to be. Please don't just copy the purpose statement from the lab page.

Dependencies

This is where we load in the various libraries of python tools that are needed for the particular work we are undertaking. ```numpy``` is a numerical tools library, often imported as np. ```numpy``` also contains the statistical tools that we will use in this lab. There are other libraries dedicated to statistical tools but ```numpy``` has everything we need. ```matplotlib``` is a 'MATLAB-like' library. ```matplotlib.pyplot``` is often imported as ```plt``` to make it easier to use. ```matplotlib``` has the plotting tools that we need for this lab. The following code cell will need to be run before any other code cells.
import numpy as np import matplotlib as mplot import matplotlib.pyplot as plt
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
Data Entry (Lists/Vectors)

As we learned last week we can manually enter our data in as lists. See last week's lab for reminders if needed. In this lab we are working with data pairs (x,y data). There are a number of ways of doing this but the most conceptually direct approach is to create an ordered list of the xdata and the ydata separately. Notice that I can 'fold' long lines of data by entering a new line after the comma. This is handy when manually entering data. The data shown here is completely manufactured but has some of the same characteristics as the data you are gathering.

Be aware that you will be gathering two sets of data yourself and getting a third data set from another group. Plan out how you will keep track of each data set with thoughtful naming choices.

Comments in Code:

From this point going forward I will be looking for consistent description of what is happening in the code cells both within and before the code cell. You are of little value to a future employer if they can't hand your work to another employee who can make sense of what you did. A good metric is that you should spend at least as much effort commenting and explaining what you're doing as you do actually doing the work.

In a python code cell any line that starts with a '#' will be ignored by python and interpreted as a comment.

```# this is the actual data from your experiment```

This is a typical format of a comment that is easy to read in the code. It is sometimes helpful to comment at the end of a line to explain particular items in that line.

```ydata2 = [2., 3.] # I can also comment at the end of a line```
# this is the actual data from your experiment
xdata1 = [3.23961446, 12.3658087, 27.08638038, 36.88808393, 48.5373278,
          43.90496472, 75.81073494, 105.42389529, 123.53497036, 158.87537602]
ydata1 = [0.62146893, 1.53513096, 3.97591135, 4.54284862, 6.23415512,
          5.12951366, 6.1733864, 7.9524996, 8.90050684, 10.29383595]

# these are a couple of specific data points I want to scatterplot on top of my plot
xdata2 = [60., 100.]
ydata2 = [2., 3.] # I can also comment at the end of a line

# print out and check my data
print("stretch data:", xdata1)
print("force data:", ydata1)
stretch data: [3.23961446, 12.3658087, 27.08638038, 36.88808393, 48.5373278, 43.90496472, 75.81073494, 105.42389529, 123.53497036, 158.87537602] force data: [0.62146893, 1.53513096, 3.97591135, 4.54284862, 6.23415512, 5.12951366, 6.1733864, 7.9524996, 8.90050684, 10.29383595]
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
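One simple way to put a percentage on the variability asked for in the deliverable below is the ratio of the standard deviation to the mean (the coefficient of variation) of repeated measurements at a single load. The numbers here are hypothetical, not from the lab:

```python
import numpy as np

# hypothetical repeated stretch measurements (cm) at one fixed load
repeats = [12.1, 12.4, 11.9, 12.3, 12.2]

mean_stretch = np.mean(repeats)
sd_stretch = np.std(repeats, ddof=1)  # sample standard deviation
percent_variability = 100.0 * sd_stretch / mean_stretch

print("variability: {:.1f}%".format(percent_variability))
```

Whatever method you choose, the point is to turn a qualitative sense of "scatter" into one defensible number.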
Number of Data Points:

Because we are scatter plotting the data we need to be sure that every x value has a related y value or the plotting routines will complain. Previously we learned to use the ```len()``` function to determine the number of data points in a list. We do that again here.

Extra: Conditional Statements:

It seems reasonable that we could ask python to check whether the two data sets are the same length, and we can. There are a number of what are called conditional statements. The 'if-else' statement is one of these. [if-else examples](https://pythonguides.com/python-if-else/)

```if (xdata1length == ydata1length):
    print("Looks good:)")
else:
    print("Something is wrong here!!!")```

Inside the parentheses is the conditional statement which, in this case, asks if ```xdata1length == ydata1length```. 'If' this statement is true then python will look at the next line(s) to see what it should do. If the conditional statement is false (not true) python will look for an ```else``` command and do whatever is on the lines after the else statement. Python expects that everything related to the ```if-else``` statement will be indented after the line where it begins. The next line of code (or even a blank line) that is NOT indented represents the end of the conditional statement. Just play with a few things in the statement if you have time and see what happens.

*** Lab Deliverable:

For your lab notebook you will include the usual 'header' information we talked about last week in the first cell of the lab (a markdown cell for sure). After the header cell describe the process by which you collected data from your micro-bungee cord. The actual data can be entered directly into the code. Insert an appropriate title and describe how you determined the variability of your data across the range of your data points. At some point in your description you need to articulate, in percentage terms, a numerical value for the variability of your data that matches your description and data.

***
# determine the lengths of the data lists
xdata1length = len(xdata1)
ydata1length = len(ydata1)

# print out the lengths - visually check that they are the same
print("number of data points (x):", xdata1length)
print("number of data points (y):", ydata1length)

if (xdata1length == ydata1length):
    print("Looks good:)")
else:
    print("Something is wrong here!!!")
number of data points (x): 10 number of data points (y): 10 Looks good:)
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
Scatter Plots

Most data that we will want to analyze in physics is (x,y) data. For this type of data the typical plot type is called a scatter plot, which is just what you think of when you plot individual data points.

To begin the process in python we need to create a container for the multiple plots we will be creating. One way (not the only way) to do this is with the ```plt.subplots``` function. This creates a container (called fig1 in this case) and a first set of axes (called ax1 in this case). [pyplot subplots documentation](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html)

We can then layer multiple plots onto these axes (ax1) by plotting and replotting until we are ready to show the whole thing. In this cell I am only creating a single plot of the first data set. [pyplot scatter documentation](https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.scatter.html)

To try to keep things clearer for myself I have typically defined a new figure and a new set of axes for each plot. You will find that if you look at samples from the web many coders just reuse the same labels over and over again. This works from a coding perspective but it violates a core expectation for all sciences that your code be clear in its communication. I encourage you to consider the choices you make in this regard.
# create a figure with a set of axes as we did with histograms
fig1, ax1 = plt.subplots()

# scatter plot data set 1
ax1.scatter(xdata1, ydata1)

# set up labels and titles for the plot and turn on the grid lines
ax1.set(xlabel='independent variable (units)', ylabel='dependent variable (units)',
        title='My Data from Lab')
ax1.grid()

# Set the size of my plot for better visibility
fig1.set_size_inches(10, 9)

# uncomment this line if I want to save a png of the plot for other purposes
#fig1.savefig("myplot.png")

plt.show()
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
Adding more data

When I want to add more data I just make another plot on a new set of axes. I have to start a new container (fig) because the ```plt.show()``` call blocks me from adding more information to the plot (there is something in this that is still not clear to me and perhaps soon I will figure it out).
# a new set of axes
fig2, ax2 = plt.subplots()

ax2.scatter(xdata1, ydata1, color = 'blue')
ax2.scatter(xdata2, ydata2, color = 'red')

ax2.set(xlabel='independent variable (units)', ylabel='dependent variable (units)',
        title='My Data from Lab')
ax2.grid()

# Set the size of my plot for better visibility
fig2.set_size_inches(10, 9)

#fig2.savefig("myplot.png")
plt.show()
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
Discussion: Deliverable 2

The second deliverable asks you to consider the data from your plot(s) and describe whether it has features that are consistent with an ideal physics spring (Hooke's Law). Are some regions linear? ....sort of? Is the spring stiffer at the beginning or at the end of the data? Explain your answer. Do both sets of data show similar behavior? How or how not?

Add physics model...

For the lab you are asked to draw straight lines that 'model' (describe) the behavior of the early and latter parts of your data sets. When we are creating physics models we are now generating 'data points' from a mathematical description. Again, there are a number of ways to do this but what I will show here is typical of physics and engineering models.

It starts by defining a set of x values. ```numpy.linspace()``` is a tool for doing this and because we did ```import numpy as np``` it shows in the code as ```np.linspace()```. [numpy.linspace documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)

What the function does is generate a list of values that are evenly distributed between 'begin' and 'end' in ```np.linspace('begin','end', values)```. In this lab we are exploring linear models (Hooke's Law) for the behavior of the bungee (spring), which means we need a slope and a y intercept. One of the nice features of the arrays that ```np.linspace()``` returns is that if I multiply one by a number I get a new array with the same number of elements, each of which is multiplied by the number. Be careful: the calculation that looks like it's relating a single x and y value is really connecting a whole array of x and y values.
# actual model parameters - slope and intercept
model1slope = .12
model2slope = .045
model1int = 0.
model2int = 3.

# range of x values -- choose lower and upper limits of range
model1x = np.linspace(0.,50.,20)
model2x = np.linspace(30.,170.,20)

# in case you want to check how many values are generated
# modellength = len(model1x)
# print(modellength)

# generate y values from model
model1y = model1slope*model1x + model1int
model2y = model2slope*model2x + model2int
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
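The array-arithmetic caveat is worth seeing directly: multiplying a plain Python list by an integer repeats the list, while multiplying a numpy array (which is what ```np.linspace()``` returns) scales each element — which is why the model calculation above works. The numbers here are just for illustration:

```python
import numpy as np

plain_list = [1.0, 2.0, 3.0]
print(plain_list * 2)            # repetition: [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]

model_x = np.array(plain_list)
model_y = 0.12 * model_x + 3.0   # elementwise, just like the bungee model
print(model_y)                   # [3.12 3.24 3.36]
```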
Plotting References

There are a range of different marks that you can use to plot your data points on the scatter plot. Here is the link... [marker types for scatter plot](https://matplotlib.org/3.1.0/api/markers_api.html#module-matplotlib.markers)

There are also a range of colors that you can use for all plots. I am not yet clear when some can or can't be used but here's the reference if you want to experiment... [matplotlib named colors](https://matplotlib.org/3.1.0/gallery/color/named_colors.html)

When plotting lines (```ax3.plot()```) there are a few line styles you can use, from solid lines to various dashed lines. Here's the reference.... [matplotlib line styles for plot](https://matplotlib.org/gallery/lines_bars_and_markers/line_styles_reference.html)

You will notice that I added a label to each plot. This is then picked up and attached to each plot and displayed in the legend. You can decide where to place the legend on the plot by choosing different values for 'loc'. Play with this to get a helpful placement.
fig3, ax3 = plt.subplots()

# scatter plot of the data
ax3.scatter(xdata1, ydata1, marker = 'x', color = 'black', label = "82 cm Bungee")

# draw the two lines that represent my model
ax3.plot(model1x, model1y, color = 'red', linestyle = ':', linewidth = 3., label = "initial")
ax3.plot(model2x, model2y, color = 'green', linestyle = '--', linewidth = 2., label = "tail")

# set up overall plot labels (on ax3, so they land on this plot)
ax3.set(xlabel='independent variable (units)', ylabel='dependent variable (units)',
        title='data and model')
ax3.grid()

# Set the size of my plot for better visibility
fig3.set_size_inches(10, 9)

# this creates a key to the meaning of the different symbols on the plot
plt.legend(loc= 4)
plt.show()
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
Discussion: Deliverable III

So what does your plot above mean? What explanation of the behavior of the bungee is suggested by the two line fit?

Normalization

Normalization is the process of trying to see if a particular feature of the data has a simple dependence. In this case each bungee is a different length but otherwise they seem like they would behave very similarly. To explore this question we normalize the stretch by dividing it by the original length of the cord. Do this for **both** sets of data and then replot.

The value of this normalization exercise is the impact of plotting data from multiple bungees. What I show here is the normalization of just one bungee. You will need to do 2 or 3 depending on how much data you have and plot them all simultaneously. Using different colors for each data set will help keep track of which ones are which.

You will note that I couldn't normalize by doing the obvious thing - ```xdata1norm = xdata1/length1```. Python doesn't like this (try it and look at the error message) so I had to hunt around and found this useful function. There may be other ways to accomplish this task but this works so that's where I'm going. As usual here is the documentation link: [numpy.true_divide](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.true_divide.html)
length1 = 75.
xdata1norm = np.true_divide(xdata1, length1)

fig, axn = plt.subplots()

axn.scatter(xdata1norm, ydata1)

axn.set(xlabel='independent variable (units)', ylabel='dependent variable (units)',
        title='My Data from Lab')
axn.grid()

#fig.savefig("myplot.png")
plt.show()
_____no_output_____
MIT
Bungee/.ipynb_checkpoints/BungeeCharacLab-checkpoint.ipynb
JNichols-19/PH211
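The error mentioned above is easy to reproduce: dividing a plain Python list by a float raises a TypeError, while ```np.true_divide``` (or converting to a numpy array first) does elementwise division. A small sketch with made-up numbers:

```python
import numpy as np

stretch = [3.2, 12.4, 27.1]   # made-up stretch data (cm)
length = 75.0

try:
    stretch_norm = stretch / length   # plain list: not allowed
except TypeError as err:
    print("list division fails:", err)

# either of these works and gives the same result
norm_a = np.true_divide(stretch, length)
norm_b = np.array(stretch) / length
print(norm_a)
```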
Data Similarity

Previous experiments have had some strange results, with models occasionally performing abnormally well (or badly) on the out of sample set. To make sure that there are no duplicate samples or abnormally similar studies, I made this notebook.
import json

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import yaml
from plotnine import *
from sklearn.metrics.pairwise import euclidean_distances

from saged import utils, datasets, models
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Load the data
dataset_config_file = '../../dataset_configs/refinebio_labeled_dataset.yml'

dataset_config_str = """name: "RefineBioMixedDataset"
compendium_path: "../../data/subset_compendium.pkl"
metadata_path: "../../data/aggregated_metadata.json"
label_path: "../../data/sample_classifications.pkl"
"""

dataset_config = yaml.safe_load(dataset_config_str)
dataset_name = dataset_config.pop('name')

MixedDatasetClass = datasets.RefineBioMixedDataset

all_data = MixedDatasetClass.from_config(**dataset_config)
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Look for samples that are very similar to each other despite having different IDs
sample_names = all_data.get_samples()
assert len(sample_names) == len(set(sample_names))
sample_names[:5]

expression = all_data.get_all_data()
print(len(sample_names))
print(expression.shape)

sample_distance_matrix = euclidean_distances(expression, expression)
# This is unrelated to debugging the data, I'm just curious
gene_distance_matrix = euclidean_distances(expression.T, expression.T)

sample_distance_matrix.shape
sample_distance_matrix

# See if there are any zero distances outside the diagonal
num_zeros = 10234 * 10234 - np.count_nonzero(sample_distance_matrix)
num_zeros
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
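The zero-count check above can be tried on toy data. Computing the pairwise Euclidean distance matrix with numpy broadcasting (the same quantity `euclidean_distances` returns, up to floating-point noise), the only zeros should be the n diagonal entries when every row is distinct — a numpy-only sketch with synthetic "expression" values:

```python
import numpy as np

rng = np.random.default_rng(0)
expression = rng.normal(size=(6, 4))   # 6 toy "samples", 4 "genes"

# pairwise Euclidean distances via broadcasting; the diagonal is exactly zero
diff = expression[:, None, :] - expression[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

n = expression.shape[0]
num_zeros = n * n - np.count_nonzero(dist)
print(num_zeros)   # expect 6: one zero per diagonal entry
```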
Since there are as many zeros as elements in the diagonal, there are no duplicate samples with different IDs (unless noise was added somewhere)

Get all distances

Because we know there aren't any zeros outside of the diagonal, we can zero out the lower diagonal and use the non-zero entries of the upper diagonal to visualize the distance distribution
triangle = np.triu(sample_distance_matrix, k=0)
triangle

distances = triangle.flatten()
nonzero_distances = distances[distances != 0]
nonzero_distances.shape

plt.hist(nonzero_distances, bins=20)
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Distribution looks bimodal, probably due to different platforms having different distances from each other?
plt.hist(nonzero_distances[nonzero_distances < 200])

plt.hist(nonzero_distances[nonzero_distances < 100])
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Looks like there may be some samples that are abnormally close to each other. I wonder whether they're in the same study.

Correspondence between distance and study
# There is almost certainly a vectorized way of doing this but oh well distances = [] first_samples = [] second_samples = [] for row_index in range(sample_distance_matrix.shape[0]): for col_index in range(sample_distance_matrix.shape[0]): distance = sample_distance_matrix[row_index, col_index] if distance == 0: continue distances.append(distance) first_samples.append(sample_names[row_index]) second_samples.append(sample_names[col_index]) distance_df = pd.DataFrame({'distance': distances, 'sample_1': first_samples, 'sample_2': second_samples}) # Free up memory to prevent swapping (probably hopeless if the user has < 32GB) del(triangle) del(sample_distance_matrix) del(distances) del(first_samples) del(second_samples) del(nonzero_distances) distance_df sample_to_study = all_data.sample_to_study del(all_data) distance_df['study_1'] = distance_df['sample_1'].map(sample_to_study) distance_df['study_2'] = distance_df['sample_2'].map(sample_to_study) distance_df['same_study'] = distance_df['study_1'] == distance_df['study_2'] distance_df.head() print(len(distance_df))
104723274
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
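The double loop above visits all n² entries in Python, which is slow at this scale. The same table can be built without Python-level loops using `np.triu_indices`, which pulls out each unordered pair exactly once (half the rows of the loop version, since (i, j) and (j, i) are not both kept). A sketch on a toy symmetric matrix — the real notebook would use `sample_distance_matrix` and `sample_names`:

```python
import numpy as np
import pandas as pd

# toy symmetric distance matrix and sample names
toy = np.array([[0.0, 1.0, 2.0],
                [1.0, 0.0, 3.0],
                [2.0, 3.0, 0.0]])
names = np.array(['s1', 's2', 's3'])

# indices of the strict upper triangle: each pair appears exactly once
rows, cols = np.triu_indices(toy.shape[0], k=1)

pair_df = pd.DataFrame({'distance': toy[rows, cols],
                        'sample_1': names[rows],
                        'sample_2': names[cols]})
print(pair_df)
```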
For some reason my computer didn't want me to make a figure with 50 million points. We'll work with means instead
means_df = distance_df.groupby(['study_1', 'same_study']).mean()
means_df

means_df = means_df.unstack(level='same_study')
means_df = means_df.reset_index()
means_df.head()

# Get rid of the multilevel confusion
means_df.columns = means_df.columns.droplevel()
means_df.columns = ['study_name', 'distance_to_other', 'distance_to_same']

means_df['difference'] = means_df['distance_to_other'] - means_df['distance_to_same']
means_df.head()

plot = ggplot(means_df, aes(x='study_name', y='difference'))
plot += geom_point()
plot += ylab('out of study - in-study mean')
plot

means_df.sort_values(by='difference')
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
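The groupby/unstack pattern above can be seen in miniature: grouping on two keys and unstacking the boolean key turns its values into columns, one per value, so the in-study and out-of-study means end up side by side. A sketch with made-up numbers:

```python
import pandas as pd

toy = pd.DataFrame({'study_1': ['A', 'A', 'B', 'B'],
                    'same_study': [True, False, True, False],
                    'distance': [10.0, 40.0, 20.0, 50.0]})

means = toy.groupby(['study_1', 'same_study']).mean()
wide = means.unstack(level='same_study')          # columns: (distance, False), (distance, True)
wide.columns = wide.columns.droplevel()           # drop the 'distance' level
wide = wide.reset_index()
wide.columns = ['study_name', 'distance_to_other', 'distance_to_same']
wide['difference'] = wide['distance_to_other'] - wide['distance_to_same']
print(wide)
```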
These results indicate that most of the data is behaving as expected (the distance between pairs of samples from different studies is greater than the distance between pairs of samples within the same study).

The outliers are mostly bead-chip, which makes sense (though they shouldn't be in the dataset and I'll need to look more closely at that later). The one exception is SRP049820, which is run on an Illumina Genome Analyzer II. Maybe it's due to the old tech?

With BE Correction
# Calling reset because the notebook runs out of memory otherwise
%reset -f

import json

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import yaml
from plotnine import *
from sklearn.metrics.pairwise import euclidean_distances

from saged import utils, datasets, models

dataset_config_file = '../../dataset_configs/refinebio_labeled_dataset.yml'

dataset_config_str = """name: "RefineBioMixedDataset"
compendium_path: "../../data/subset_compendium.pkl"
metadata_path: "../../data/aggregated_metadata.json"
label_path: "../../data/sample_classifications.pkl"
"""

dataset_config = yaml.safe_load(dataset_config_str)
dataset_name = dataset_config.pop('name')

MixedDatasetClass = datasets.RefineBioMixedDataset

all_data = MixedDatasetClass.from_config(**dataset_config)

# Correct for batch effects
all_data = datasets.correct_batch_effects(all_data, 'limma')
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Look for samples that are very similar to each other despite having different IDs
sample_names = all_data.get_samples()
assert len(sample_names) == len(set(sample_names))
sample_names[:5]

expression = all_data.get_all_data()
print(len(sample_names))
print(expression.shape)

sample_distance_matrix = euclidean_distances(expression, expression)
# This is unrelated to debugging the data, I'm just curious
gene_distance_matrix = euclidean_distances(expression.T, expression.T)

sample_distance_matrix.shape
sample_distance_matrix

# See if there are any zero distances outside the diagonal
num_zeros = 10234 * 10234 - np.count_nonzero(sample_distance_matrix)
num_zeros
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Since there are as many zeros as elements in the diagonal, there are no duplicate samples with different IDs (unless noise was added somewhere)

Get all distances

Because we know there aren't any zeros outside of the diagonal, we can zero out the lower diagonal and use the non-zero entries of the upper diagonal to visualize the distance distribution
triangle = np.triu(sample_distance_matrix, k=0)
triangle

distances = triangle.flatten()
nonzero_distances = distances[distances != 0]
nonzero_distances.shape

plt.hist(nonzero_distances, bins=20)
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Distribution looks bimodal, probably due to different platforms having different distances from each other?
plt.hist(nonzero_distances[nonzero_distances < 200])

plt.hist(nonzero_distances[nonzero_distances < 100])
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Looks like there may be some samples that are abnormally close to each other. I wonder whether they're in the same study.

Correspondence between distance and study
# There is almost certainly a vectorized way of doing this but oh well distances = [] first_samples = [] second_samples = [] for row_index in range(sample_distance_matrix.shape[0]): for col_index in range(sample_distance_matrix.shape[0]): distance = sample_distance_matrix[row_index, col_index] if distance == 0: continue distances.append(distance) first_samples.append(sample_names[row_index]) second_samples.append(sample_names[col_index]) distance_df = pd.DataFrame({'distance': distances, 'sample_1': first_samples, 'sample_2': second_samples}) # Free up memory to prevent swapping (probably hopeless if the user has < 32GB) del(triangle) del(sample_distance_matrix) del(distances) del(first_samples) del(second_samples) del(nonzero_distances) distance_df sample_to_study = all_data.sample_to_study del(all_data) distance_df['study_1'] = distance_df['sample_1'].map(sample_to_study) distance_df['study_2'] = distance_df['sample_2'].map(sample_to_study) distance_df['same_study'] = distance_df['study_1'] == distance_df['study_2'] distance_df.head() print(len(distance_df))
104724522
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
For some reason my computer didn't want me to make a figure with 50 million points. We'll work with means instead
means_df = distance_df.groupby(['study_1', 'same_study']).mean()
means_df

means_df = means_df.unstack(level='same_study')
means_df = means_df.reset_index()
means_df.head()

# Get rid of the multilevel confusion
means_df.columns = means_df.columns.droplevel()
means_df.columns = ['study_name', 'distance_to_other', 'distance_to_same']

means_df['difference'] = means_df['distance_to_other'] - means_df['distance_to_same']
means_df.head()

plot = ggplot(means_df, aes(x='study_name', y='difference'))
plot += geom_point()
plot += ylab('out of study - in-study mean')
plot

means_df.sort_values(by='difference')
_____no_output_____
BSD-3-Clause
notebook/debugging/data_similarity.ipynb
ben-heil/saged
Data Preprocessing Notebook
import pandas as pd
# import modin.pandas as pd
import numpy as np
import os
import re
import warnings

warnings.filterwarnings("ignore")
# os.environ["MODIN_ENGINE"] = "ray"   # Modin will use Ray
# os.environ["MODIN_ENGINE"] = "dask"  # Modin will use Dask

from sklearn.model_selection import train_test_split
from argparse import Namespace
from tqdm.notebook import tqdm as tqdm_notebook

tqdm_notebook.pandas(desc="Preprocessing Data")
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
Define Args
args = Namespace( raw_dataset_csv="data/surnames/surnames.csv", train_proportion=0.7, val_proportion=0.15, test_proportion=0.15, output_munged_csv="data/surnames/surnames_with_splits.csv", seed=1337 )
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
Load Data using pandas (the modin Ray backend imports above are commented out)
surnames = pd.read_csv(args.raw_dataset_csv, header=0)
surnames.head()
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
Data Insights
surnames.describe()

set(surnames.nationality.unique())
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
Split data Train/Test/Val
train_surnames, val_surnames = train_test_split(surnames,
                                                train_size=args.train_proportion,
                                                stratify=surnames.nationality.values,
                                                random_state=args.seed)  # use the configured seed for reproducibility
val_surnames, test_surnames = train_test_split(val_surnames,
                                               train_size=0.5,
                                               stratify=val_surnames.nationality.values,
                                               random_state=args.seed)

len(train_surnames.nationality.value_counts())
len(val_surnames.nationality.value_counts())
len(test_surnames.nationality.value_counts())

train_surnames.reset_index(drop=True, inplace=True)
val_surnames.reset_index(drop=True, inplace=True)
test_surnames.reset_index(drop=True, inplace=True)

train_surnames["split"] = "train"
val_surnames["split"] = "val"
test_surnames["split"] = "test"

final_surnames = pd.concat([train_surnames, val_surnames, test_surnames],
                           axis=0, copy=True)

final_surnames.split.value_counts()
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
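A quick way to confirm that a stratified split preserved the class balance is to compare per-split label proportions. A pandas-only sketch on made-up labels — the real notebook would run this on `final_surnames` with its `nationality` and `split` columns:

```python
import pandas as pd

# made-up stand-in for final_surnames
toy = pd.DataFrame({'nationality': ['English'] * 6 + ['Russian'] * 3,
                    'split': ['train', 'train', 'train', 'train', 'val', 'test',
                              'train', 'val', 'test']})

# rows: split, columns: class, values: within-split proportion
proportions = pd.crosstab(toy['split'], toy['nationality'], normalize='index')
print(proportions)
```

With a well-stratified split, each row of this table should be nearly identical.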
Save Data
final_surnames.to_csv(args.output_munged_csv, index=False)
_____no_output_____
MIT
chapter_4/Surname-MLP/SurName-Dataset.ipynb
ManuLasker/nlp-with-pytorch
Getting started in scikit-learn with the famous iris dataset*From the video series: [Introduction to machine learning with scikit-learn](https://github.com/justmarkham/scikit-learn-videos)*
# environment setup with watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
Agenda

- What is the famous iris dataset, and how does it relate to machine learning?
- How do we load the iris dataset into scikit-learn?
- How do we describe a dataset using machine learning terminology?
- What are scikit-learn's four key requirements for working with data?

Introducing the iris dataset

![Iris](images/03_iris.png)

- 50 samples of 3 different species of iris (150 samples total)
- Measurements: sepal length, sepal width, petal length, petal width
from IPython.display import IFrame
IFrame('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', width=300, height=200)
_____no_output_____
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
Machine learning on the iris dataset

- Framed as a **supervised learning** problem: Predict the species of an iris using the measurements
- Famous dataset for machine learning because prediction is **easy**
- Learn more about the iris dataset: [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/datasets/Iris)

Loading the iris dataset into scikit-learn
# import load_iris function from datasets module
from sklearn.datasets import load_iris

# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
type(iris)

# print the iris data
print(iris.data)
[[ 5.1 3.5 1.4 0.2] [ 4.9 3. 1.4 0.2] [ 4.7 3.2 1.3 0.2] [ 4.6 3.1 1.5 0.2] [ 5. 3.6 1.4 0.2] [ 5.4 3.9 1.7 0.4] [ 4.6 3.4 1.4 0.3] [ 5. 3.4 1.5 0.2] [ 4.4 2.9 1.4 0.2] [ 4.9 3.1 1.5 0.1] [ 5.4 3.7 1.5 0.2] [ 4.8 3.4 1.6 0.2] [ 4.8 3. 1.4 0.1] [ 4.3 3. 1.1 0.1] [ 5.8 4. 1.2 0.2] [ 5.7 4.4 1.5 0.4] [ 5.4 3.9 1.3 0.4] [ 5.1 3.5 1.4 0.3] [ 5.7 3.8 1.7 0.3] [ 5.1 3.8 1.5 0.3] [ 5.4 3.4 1.7 0.2] [ 5.1 3.7 1.5 0.4] [ 4.6 3.6 1. 0.2] [ 5.1 3.3 1.7 0.5] [ 4.8 3.4 1.9 0.2] [ 5. 3. 1.6 0.2] [ 5. 3.4 1.6 0.4] [ 5.2 3.5 1.5 0.2] [ 5.2 3.4 1.4 0.2] [ 4.7 3.2 1.6 0.2] [ 4.8 3.1 1.6 0.2] [ 5.4 3.4 1.5 0.4] [ 5.2 4.1 1.5 0.1] [ 5.5 4.2 1.4 0.2] [ 4.9 3.1 1.5 0.1] [ 5. 3.2 1.2 0.2] [ 5.5 3.5 1.3 0.2] [ 4.9 3.1 1.5 0.1] [ 4.4 3. 1.3 0.2] [ 5.1 3.4 1.5 0.2] [ 5. 3.5 1.3 0.3] [ 4.5 2.3 1.3 0.3] [ 4.4 3.2 1.3 0.2] [ 5. 3.5 1.6 0.6] [ 5.1 3.8 1.9 0.4] [ 4.8 3. 1.4 0.3] [ 5.1 3.8 1.6 0.2] [ 4.6 3.2 1.4 0.2] [ 5.3 3.7 1.5 0.2] [ 5. 3.3 1.4 0.2] [ 7. 3.2 4.7 1.4] [ 6.4 3.2 4.5 1.5] [ 6.9 3.1 4.9 1.5] [ 5.5 2.3 4. 1.3] [ 6.5 2.8 4.6 1.5] [ 5.7 2.8 4.5 1.3] [ 6.3 3.3 4.7 1.6] [ 4.9 2.4 3.3 1. ] [ 6.6 2.9 4.6 1.3] [ 5.2 2.7 3.9 1.4] [ 5. 2. 3.5 1. ] [ 5.9 3. 4.2 1.5] [ 6. 2.2 4. 1. ] [ 6.1 2.9 4.7 1.4] [ 5.6 2.9 3.6 1.3] [ 6.7 3.1 4.4 1.4] [ 5.6 3. 4.5 1.5] [ 5.8 2.7 4.1 1. ] [ 6.2 2.2 4.5 1.5] [ 5.6 2.5 3.9 1.1] [ 5.9 3.2 4.8 1.8] [ 6.1 2.8 4. 1.3] [ 6.3 2.5 4.9 1.5] [ 6.1 2.8 4.7 1.2] [ 6.4 2.9 4.3 1.3] [ 6.6 3. 4.4 1.4] [ 6.8 2.8 4.8 1.4] [ 6.7 3. 5. 1.7] [ 6. 2.9 4.5 1.5] [ 5.7 2.6 3.5 1. ] [ 5.5 2.4 3.8 1.1] [ 5.5 2.4 3.7 1. ] [ 5.8 2.7 3.9 1.2] [ 6. 2.7 5.1 1.6] [ 5.4 3. 4.5 1.5] [ 6. 3.4 4.5 1.6] [ 6.7 3.1 4.7 1.5] [ 6.3 2.3 4.4 1.3] [ 5.6 3. 4.1 1.3] [ 5.5 2.5 4. 1.3] [ 5.5 2.6 4.4 1.2] [ 6.1 3. 4.6 1.4] [ 5.8 2.6 4. 1.2] [ 5. 2.3 3.3 1. ] [ 5.6 2.7 4.2 1.3] [ 5.7 3. 4.2 1.2] [ 5.7 2.9 4.2 1.3] [ 6.2 2.9 4.3 1.3] [ 5.1 2.5 3. 1.1] [ 5.7 2.8 4.1 1.3] [ 6.3 3.3 6. 2.5] [ 5.8 2.7 5.1 1.9] [ 7.1 3. 5.9 2.1] [ 6.3 2.9 5.6 1.8] [ 6.5 3. 5.8 2.2] [ 7.6 3. 
6.6 2.1] [ 4.9 2.5 4.5 1.7] [ 7.3 2.9 6.3 1.8] [ 6.7 2.5 5.8 1.8] [ 7.2 3.6 6.1 2.5] [ 6.5 3.2 5.1 2. ] [ 6.4 2.7 5.3 1.9] [ 6.8 3. 5.5 2.1] [ 5.7 2.5 5. 2. ] [ 5.8 2.8 5.1 2.4] [ 6.4 3.2 5.3 2.3] [ 6.5 3. 5.5 1.8] [ 7.7 3.8 6.7 2.2] [ 7.7 2.6 6.9 2.3] [ 6. 2.2 5. 1.5] [ 6.9 3.2 5.7 2.3] [ 5.6 2.8 4.9 2. ] [ 7.7 2.8 6.7 2. ] [ 6.3 2.7 4.9 1.8] [ 6.7 3.3 5.7 2.1] [ 7.2 3.2 6. 1.8] [ 6.2 2.8 4.8 1.8] [ 6.1 3. 4.9 1.8] [ 6.4 2.8 5.6 2.1] [ 7.2 3. 5.8 1.6] [ 7.4 2.8 6.1 1.9] [ 7.9 3.8 6.4 2. ] [ 6.4 2.8 5.6 2.2] [ 6.3 2.8 5.1 1.5] [ 6.1 2.6 5.6 1.4] [ 7.7 3. 6.1 2.3] [ 6.3 3.4 5.6 2.4] [ 6.4 3.1 5.5 1.8] [ 6. 3. 4.8 1.8] [ 6.9 3.1 5.4 2.1] [ 6.7 3.1 5.6 2.4] [ 6.9 3.1 5.1 2.3] [ 5.8 2.7 5.1 1.9] [ 6.8 3.2 5.9 2.3] [ 6.7 3.3 5.7 2.5] [ 6.7 3. 5.2 2.3] [ 6.3 2.5 5. 1.9] [ 6.5 3. 5.2 2. ] [ 6.2 3.4 5.4 2.3] [ 5.9 3. 5.1 1.8]]
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
Machine learning terminology

- Each row is an **observation** (also known as: sample, example, instance, record)
- Each column is a **feature** (also known as: predictor, attribute, independent variable, input, regressor, covariate)
# print the names of the four features
print(iris.feature_names)

# print integers representing the species of each observation
print(iris.target)

# print the encoding scheme for species: 0 = setosa, 1 = versicolor, 2 = virginica
print(iris.target_names)
['setosa' 'versicolor' 'virginica']
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
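To make the terminology concrete, here is a tiny hard-coded feature matrix (values borrowed from the iris printout above, so this sketch runs without the loader): each row is one observation, each column is one feature.

```python
import numpy as np

# toy feature matrix: 3 observations (rows) x 4 features (columns)
X = np.array([[5.1, 3.5, 1.4, 0.2],
              [4.9, 3.0, 1.4, 0.2],
              [4.7, 3.2, 1.3, 0.2]])
n_observations, n_features = X.shape
print(n_observations, n_features)  # 3 4
```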
- Each value we are predicting is the **response** (also known as: target, outcome, label, dependent variable)
- **Classification** is supervised learning in which the response is categorical
- **Regression** is supervised learning in which the response is ordered and continuous

Requirements for working with data in scikit-learn
1. Features and response are **separate objects**
2. Features and response should be **numeric**
3. Features and response should be **NumPy arrays**
4. Features and response should have **specific shapes**
# check the types of the features and response
print(type(iris.data))
print(type(iris.target))

# check the shape of the features (first dimension = number of observations, second dimension = number of features)
print(iris.data.shape)

# check the shape of the response (single dimension matching the number of observations)
print(iris.target.shape)

# store feature matrix in "X"
X = iris.data

# store response vector in "y"
y = iris.target
_____no_output_____
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
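The four requirements above can be checked mechanically. The helper below is for illustration only and is not part of the scikit-learn API:

```python
import numpy as np

def check_sklearn_ready(X, y):
    # a rough sketch of the four requirements listed above
    assert isinstance(X, np.ndarray) and isinstance(y, np.ndarray)                  # NumPy arrays
    assert np.issubdtype(X.dtype, np.number) and np.issubdtype(y.dtype, np.number)  # numeric
    assert X.ndim == 2 and y.ndim == 1      # feature matrix and response vector shapes
    assert X.shape[0] == y.shape[0]         # one response value per observation
    return True

X = np.array([[5.1, 3.5], [4.9, 3.0], [4.7, 3.2]])
y = np.array([0, 0, 1])
print(check_sklearn_ready(X, y))  # True
```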
Resources

- scikit-learn documentation: [Dataset loading utilities](http://scikit-learn.org/stable/datasets/)
- Jake VanderPlas: Fast Numerical Computing with NumPy ([slides](https://speakerdeck.com/jakevdp/losing-your-loops-fast-numerical-computing-with-numpy-pycon-2015), [video](https://www.youtube.com/watch?v=EEUXKG97YRw))
- Scott Shell: [An Introduction to NumPy](http://www.engr.ucsb.edu/~shell/che210d/numpy.pdf) (PDF)

Comments or Questions?

- Email:
- Website: http://dataschool.io
- Twitter: [@justmarkham](https://twitter.com/justmarkham)
from IPython.core.display import HTML
def css_styling():
    styles = open("styles/custom.css", "r").read()
    return HTML(styles)
css_styling()

test complete; Gopal
_____no_output_____
MIT
tests/scikit-learn/03_getting_started_with_iris.ipynb
gopala-kr/ds-notebooks
How to Use Forecasters in Merlion

This notebook will guide you through using all the key features of forecasters in Merlion. Specifically, we will explain:
1. Initializing a forecasting model (including ensembles and automatic model selectors)
2. Training the model
3. Producing a forecast with the model
4. Visualizing the model's predictions
5. Quantitatively evaluating the model
6. Saving and loading a trained model
7. Simulating the live deployment of a model using a `ForecastEvaluator`

We will be using a single example time series for this whole notebook. We load it now:
import matplotlib.pyplot as plt
import numpy as np

from merlion.utils.time_series import TimeSeries
from ts_datasets.forecast import M4

# Load the time series
# time_series is a time-indexed pandas.DataFrame
# trainval is a time-indexed pandas.Series indicating whether each timestamp is for training or testing
time_series, metadata = M4(subset="Hourly")[5]
trainval = metadata["trainval"]

# Is there any missing data?
timedeltas = np.diff(time_series.index)
print(f"Has missing data: {any(timedeltas != timedeltas[0])}")

# Visualize the time series and draw a dotted line to indicate the train/test split
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.plot(time_series)
ax.axvline(time_series[trainval].index[-1], ls="--", lw="2", c="k")
plt.show()

# Split the time series into train/test splits, and convert it to Merlion format
train_data = TimeSeries.from_pd(time_series[trainval])
test_data = TimeSeries.from_pd(time_series[~trainval])
print(f"{len(train_data)} points in train split, " f"{len(test_data)} points in test split.")
100%|██████████| 414/414 [00:00<00:00, 513.64it/s]
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Model Initialization

In this notebook, we will use three different forecasting models:
1. ARIMA (a classic stochastic process model)
2. Prophet (Facebook's popular time series forecasting model)
3. MSES (the Multi-Scale Exponential Smoothing model, developed in-house)

Let's start by initializing each of them.
# Import models & configs
from merlion.models.forecast.arima import Arima, ArimaConfig
from merlion.models.forecast.prophet import Prophet, ProphetConfig
from merlion.models.forecast.smoother import MSES, MSESConfig

# Import data pre-processing transforms
from merlion.transform.base import Identity
from merlion.transform.resample import TemporalResample

# All models are initialized using the syntax ModelClass(config),
# where config is a model-specific configuration object. This is where
# you specify any algorithm-specific hyperparameters, as well as any
# data pre-processing transforms.

# ARIMA assumes that input data is sampled at a regular interval,
# so we set its transform to resample at that interval. We must also specify
# a maximum prediction horizon.
config1 = ArimaConfig(max_forecast_steps=100, order=(20, 1, 5),
                      transform=TemporalResample(granularity="1h"))
model1 = Arima(config1)

# Prophet has no real assumptions on the input data (and doesn't require
# a maximum prediction horizon), so we skip data pre-processing by using
# the Identity transform.
config2 = ProphetConfig(max_forecast_steps=None, transform=Identity())
model2 = Prophet(config2)

# MSES assumes that the input data is sampled at a regular interval,
# and requires us to specify a maximum prediction horizon. We will
# also specify its look-back hyperparameter to be 60 here
config3 = MSESConfig(max_forecast_steps=100, max_backstep=60,
                     transform=TemporalResample(granularity="1h"))
model3 = MSES(config3)
_____no_output_____
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
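As a rough illustration of what a transform like `TemporalResample(granularity="1h")` accomplishes, plain pandas can resample an irregular series onto a regular hourly grid. This sketch is not Merlion's implementation, and the linear interpolation is just one plausible fill strategy:

```python
import pandas as pd

# an irregular hourly series with the 02:00 point missing
idx = pd.to_datetime(["2021-01-01 00:00", "2021-01-01 01:00", "2021-01-01 03:00"])
ts = pd.Series([1.0, 2.0, 4.0], index=idx)

# resample onto a regular 1-hour grid, then fill the resulting gap
regular = ts.resample("1h").mean().interpolate()
print(regular)
```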
Now that we have initialized the individual models, we will also combine them in two different ensembles: `ensemble` simply takes the mean prediction of each individual model, and `selector` selects the best individual model based on its sMAPE (symmetric Mean Absolute Percentage Error). The sMAPE is a metric used to evaluate the quality of a continuous forecast. For ground truth $y \in \mathbb{R}^T$ and prediction $\hat{y} \in \mathbb{R}^T$, the sMAPE is computed as

$$\mathrm{sMAPE}(y, \hat{y}) = \frac{200}{T} \sum_{t = 1}^{T} \frac{\lvert \hat{y}_t - y_t \rvert}{\lvert\hat{y}_t\rvert + \lvert y_t \rvert}$$
from merlion.evaluate.forecast import ForecastMetric
from merlion.models.ensemble.combine import Mean, ModelSelector
from merlion.models.ensemble.forecast import ForecasterEnsemble, ForecasterEnsembleConfig

# The ForecasterEnsemble is a forecaster, and we treat it as a first-class model.
# Its config takes a combiner object, specifying how you want to combine the
# predictions of individual models in the ensemble. There are two ways to specify
# the actual models in the ensemble, which we cover below.

# The first way to specify the models in the ensemble is to provide their individual
# configs when initializing the ForecasterEnsembleConfig. Note that if using this
# syntax, you must also provide the names of the model classes.
#
# The combiner here will simply take the mean prediction of the ensembles here
ensemble_config = ForecasterEnsembleConfig(
    combiner=Mean(),
    model_configs=[(type(model1).__name__, config1),
                   (type(model2).__name__, config2),
                   (type(model3).__name__, config3)])
ensemble = ForecasterEnsemble(config=ensemble_config)

# Alternatively, you can skip giving the individual model configs to the
# ForecasterEnsembleConfig, and instead directly specify the models when
# initializing the ForecasterEnsemble itself.
#
# The combiner here uses the sMAPE to compare individual models, and
# selects the model with the lowest sMAPE
selector_config = ForecasterEnsembleConfig(
    combiner=ModelSelector(metric=ForecastMetric.sMAPE))
selector = ForecasterEnsemble(
    config=selector_config, models=[model1, model2, model3])
_____no_output_____
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
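The sMAPE formula above transcribes directly into NumPy. This is a standalone sketch of the formula, not Merlion's `ForecastMetric.sMAPE` implementation:

```python
import numpy as np

def smape(y_true, y_pred):
    # direct transcription of the sMAPE formula, on a 0-100 scale
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 200.0 / len(y_true) * np.sum(
        np.abs(y_pred - y_true) / (np.abs(y_pred) + np.abs(y_true)))

print(smape([100.0], [110.0]))  # 200 * 10 / 210 ≈ 9.524
```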
Model Training

All forecasting models (and ensembles) share the same API for training. The `train()` method returns the model's predictions and the standard error of those predictions on the training data. Note that the standard error is just `None` if the model doesn't support uncertainty estimation (this is the case for MSES and ensembles).
print(f"Training {type(model1).__name__}...") forecast1, stderr1 = model1.train(train_data) print(f"\nTraining {type(model2).__name__}...") forecast2, stderr2 = model2.train(train_data) print(f"\nTraining {type(model3).__name__}...") forecast3, stderr3 = model3.train(train_data) print("\nTraining ensemble...") forecast_e, stderr_e = ensemble.train(train_data) print("\nTraining model selector...") forecast_s, stderr_s = selector.train(train_data) print("Done!")
Training Arima...
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Model Inference

To obtain a forecast from a trained model, we simply call `model.forecast()` with the Unix timestamps at which we want the model to generate a forecast. In many cases, you may obtain these directly from a time series, as shown below.
# Truncate the test data to ensure that we are within each model's maximum
# forecast horizon.
sub_test_data = test_data[:50]

# Obtain the time stamps corresponding to the test data
time_stamps = sub_test_data.univariates[sub_test_data.names[0]].time_stamps

# Get the forecast & standard error of each model. These are both
# merlion.utils.TimeSeries objects. Note that the standard error is None for
# models which don't support uncertainty estimation (like MSES and all
# ensembles).
forecast1, stderr1 = model1.forecast(time_stamps=time_stamps)
forecast2, stderr2 = model2.forecast(time_stamps=time_stamps)

# You may optionally specify a time series prefix as context. If one isn't
# specified, the prefix is assumed to be the training data. Here, we just make
# this dependence explicit. More generally, this feature is useful if you want
# to use a pre-trained model to make predictions on data further in the future
# from the last time it was trained.
forecast3, stderr3 = model3.forecast(time_stamps=time_stamps, time_series_prev=train_data)

# The same options are available for ensembles as well, though the stderr is None
forecast_e, stderr_e = ensemble.forecast(time_stamps=time_stamps)
forecast_s, stderr_s = selector.forecast(time_stamps=time_stamps, time_series_prev=train_data)
_____no_output_____
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Model Visualization and Quantitative Evaluation

It is straightforward to visualize a model's forecast and to quantitatively evaluate it using standard metrics like sMAPE. We show examples for all five models below.

Below, we quantitatively evaluate the models using the sMAPE metric. However, the `ForecastMetric` enum includes a number of other options as well. In general, you may use the syntax

```
ForecastMetric.<metric>.value(ground_truth=ground_truth, predict=forecast)
```

where `<metric>` is the name of the evaluation metric (see the API docs for details and more options), `ground_truth` is the original time series, and `forecast` is the forecast returned by the model. We show concrete examples with `ForecastMetric.sMAPE` below.
from merlion.evaluate.forecast import ForecastMetric

# We begin by computing the sMAPE of ARIMA's forecast (scale is 0 to 100)
smape1 = ForecastMetric.sMAPE.value(ground_truth=sub_test_data, predict=forecast1)
print(f"{type(model1).__name__} sMAPE is {smape1:.3f}")

# Next, we can visualize the actual forecast, and understand why it
# attains this particular sMAPE. Since ARIMA supports uncertainty
# estimation, we plot its error bars too.
fig, ax = model1.plot_forecast(time_series=sub_test_data, plot_forecast_uncertainty=True)
plt.show()

# We begin by computing the sMAPE of Prophet's forecast (scale is 0 to 100)
smape2 = ForecastMetric.sMAPE.value(sub_test_data, forecast2)
print(f"{type(model2).__name__} sMAPE is {smape2:.3f}")

# Next, we can visualize the actual forecast, and understand why it
# attains this particular sMAPE. Since Prophet supports uncertainty
# estimation, we plot its error bars too.
# Note that we can specify time_series_prev here as well, though it
# will not be visualized unless we also supply the keyword argument
# plot_time_series_prev=True.
fig, ax = model2.plot_forecast(time_series=sub_test_data, time_series_prev=train_data,
                               plot_forecast_uncertainty=True)
plt.show()

# We begin by computing the sMAPE of MSES's forecast (scale is 0 to 100)
smape3 = ForecastMetric.sMAPE.value(sub_test_data, forecast3)
print(f"{type(model3).__name__} sMAPE is {smape3:.3f}")

# Next, we visualize the actual forecast, and understand why it
# attains this particular sMAPE.
fig, ax = model3.plot_forecast(time_series=sub_test_data, plot_forecast_uncertainty=True)
plt.show()

# Compute the sMAPE of the ensemble's forecast (scale is 0 to 100)
smape_e = ForecastMetric.sMAPE.value(sub_test_data, forecast_e)
print(f"Ensemble sMAPE is {smape_e:.3f}")

# Visualize the forecast.
fig, ax = ensemble.plot_forecast(time_series=sub_test_data, plot_forecast_uncertainty=True)
plt.show()

# Compute the sMAPE of the selector's forecast (scale is 0 to 100)
smape_s = ForecastMetric.sMAPE.value(sub_test_data, forecast_s)
print(f"Selector sMAPE is {smape_s:.3f}")

# Visualize the forecast.
fig, ax = selector.plot_forecast(time_series=sub_test_data, plot_forecast_uncertainty=True)
plt.show()
Selector sMAPE is 3.768
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
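The enum-based metric syntax used above can be sketched in plain Python. This is purely illustrative and is not Merlion's implementation; in particular, Python's `enum.Enum` reserves the name `value`, so this sketch uses a `compute()` method instead:

```python
import enum
import numpy as np

class Metric(enum.Enum):
    # illustrative stand-in for an enum-dispatch metric API; not Merlion's code
    sMAPE = "smape"
    RMSE = "rmse"

    def compute(self, ground_truth, predict):
        y = np.asarray(ground_truth, dtype=float)
        yhat = np.asarray(predict, dtype=float)
        if self is Metric.sMAPE:
            return 200.0 / len(y) * np.sum(np.abs(yhat - y) / (np.abs(yhat) + np.abs(y)))
        return float(np.sqrt(np.mean((yhat - y) ** 2)))

print(Metric.RMSE.compute([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 1.155
```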
Saving & Loading Models

All models have a `save()` method and a `load()` class method. Models may also be loaded with the assistance of the `ModelFactory`, which works for arbitrary models. The `save()` method creates a new directory at the specified path, where it saves a `json` file representing the model's config, as well as a binary file for the model's state.

We will demonstrate these behaviors using our `Prophet` model (`model2`) for concreteness.
import json
import os
import pprint

from merlion.models.factory import ModelFactory

# Save the model
os.makedirs("models", exist_ok=True)
path = os.path.join("models", "prophet")
model2.save(path)

# Print the config saved
pp = pprint.PrettyPrinter()
with open(os.path.join(path, "config.json")) as f:
    print(f"{type(model2).__name__} Config")
    pp.pprint(json.load(f))

# Load the model using Prophet.load()
model2_loaded = Prophet.load(dirname=path)

# Load the model using the ModelFactory
model2_factory_loaded = ModelFactory.load(name="Prophet", model_path=path)
Prophet Config {'add_seasonality': 'auto', 'daily_seasonality': 'auto', 'dim': 1, 'max_forecast_steps': None, 'model_path': '/Users/abhatnagar/Desktop/Merlion_public/examples/forecast/models/prophet/model.pkl', 'target_seq_index': 0, 'transform': {'name': 'Identity'}, 'uncertainty_samples': 100, 'weekly_seasonality': 'auto', 'yearly_seasonality': 'auto'}
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
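The on-disk layout described above (a directory containing a `config.json` plus a binary state file) can be sketched generically. The file names mirror the printed config, but these helpers are illustrative and are not Merlion's internal code:

```python
import json
import os
import pickle
import tempfile

def save_model(state, config, dirname):
    # write a config.json and a pickled state file into a new directory
    os.makedirs(dirname, exist_ok=True)
    with open(os.path.join(dirname, "config.json"), "w") as f:
        json.dump(config, f)
    with open(os.path.join(dirname, "model.pkl"), "wb") as f:
        pickle.dump(state, f)

def load_model(dirname):
    # read the two files back, reversing save_model
    with open(os.path.join(dirname, "config.json")) as f:
        config = json.load(f)
    with open(os.path.join(dirname, "model.pkl"), "rb") as f:
        state = pickle.load(f)
    return state, config

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "prophet")
    save_model({"coef": [1.0, 2.0]}, {"transform": {"name": "Identity"}}, path)
    state, config = load_model(path)
    print(state, config)
```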
We can do the same exact thing with ensembles! Note that the ensemble saves each of its sub-models in a different sub-directory, which it tracks manually. Additionally, the combiner (which is saved in the `ForecasterEnsembleConfig`) keeps track of the sMAPE achieved by each model (the `metric_values` key).
# Save the selector
path = os.path.join("models", "selector")
selector.save(path)

# Print the config saved. Note that we've saved all individual models,
# and their paths are specified under the model_paths key.
pp = pprint.PrettyPrinter()
with open(os.path.join(path, "config.json")) as f:
    print(f"Selector Config")
    pp.pprint(json.load(f))

# Load the selector
selector_loaded = ForecasterEnsemble.load(dirname=path)

# Load the selector using the ModelFactory
selector_factory_loaded = ModelFactory.load(name="ForecasterEnsemble", model_path=path)
Selector Config {'combiner': {'abs_score': False, 'metric': 'ForecastMetric.sMAPE', 'metric_values': [5.479063255045728, 8.611665684950744, 17.72980301555831], 'n_models': 3, 'name': 'ModelSelector'}, 'dim': 1, 'max_forecast_steps': None, 'model_paths': [['Arima', '/Users/abhatnagar/Desktop/Merlion_public/examples/forecast/models/selector/0'], ['Prophet', '/Users/abhatnagar/Desktop/Merlion_public/examples/forecast/models/selector/1'], ['MSES', '/Users/abhatnagar/Desktop/Merlion_public/examples/forecast/models/selector/2']], 'target_seq_index': 0, 'transform': {'name': 'Identity'}}
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Simulating Live Model Deployment

A typical model deployment scenario is as follows:
1. Train an initial model on some recent historical data
2. At a regular interval `cadence`, obtain the model's forecast for a certain `horizon`
3. At a regular interval `retrain_freq`, retrain the entire model on the most recent data
4. Optionally, specify a maximum amount of data (`train_window`) that the model should use for training

We provide a `ForecastEvaluator` object which simulates the above deployment scenario, and also allows a user to evaluate the quality of the forecaster according to an evaluation metric of their choice. We illustrate two examples below, using ARIMA for the first example and the ensemble for the second.
from merlion.evaluate.forecast import ForecastEvaluator, ForecastEvaluatorConfig, ForecastMetric

def create_evaluator(model):
    # Re-initialize the model, so we can re-train it from scratch
    model.reset()

    # Create an evaluation pipeline for the model, where we
    # -- get the model's forecast every hour
    # -- have the model forecast for a horizon of 6 hours
    # -- re-train the model every 12 hours
    # -- when we re-train the model, retrain it on only the past 2 weeks of data
    evaluator = ForecastEvaluator(
        model=model, config=ForecastEvaluatorConfig(
            cadence="1h", horizon="6h", retrain_freq="12h", train_window="14d")
    )
    return evaluator
_____no_output_____
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
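The deployment loop that `ForecastEvaluator` simulates can be sketched with hours as integer steps and a naive mean forecaster standing in for a real model. All names below are hypothetical; Merlion's evaluator works with real timestamps and models:

```python
def simulate(data, train_size, cadence=1, horizon=6, retrain_freq=12, train_window=336):
    # toy rolling-deployment loop: forecast every `cadence` steps,
    # retrain every `retrain_freq` steps on the last `train_window` points
    forecasts = []
    model_mean = None
    for t in range(train_size, len(data) - horizon + 1, cadence):
        if model_mean is None or (t - train_size) % retrain_freq == 0:
            window = data[max(0, t - train_window):t]   # most recent training window
            model_mean = sum(window) / len(window)      # "retrain" the toy mean model
        forecasts.append([model_mean] * horizon)        # forecast the next `horizon` steps
    return forecasts

preds = simulate(list(range(100)), train_size=50)
print(len(preds), preds[0][0])  # 45 forecasts; the first is the mean of points 0..49
```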
First, let's evaluate ARIMA.
# Obtain the results of running the evaluation pipeline for ARIMA.
# These result objects are to be treated as a black box, and should be
# passed directly to the evaluator's evaluate() method.
model1_evaluator = create_evaluator(model1)
model1_train_result, model1_test_result = model1_evaluator.get_predict(
    train_vals=train_data, test_vals=test_data)

# Evaluate ARIMA's sMAPE and RMSE
smape = model1_evaluator.evaluate(
    ground_truth=test_data, predict=model1_test_result, metric=ForecastMetric.sMAPE)
rmse = model1_evaluator.evaluate(
    ground_truth=test_data, predict=model1_test_result, metric=ForecastMetric.RMSE)
print(f"{type(model1).__name__} sMAPE: {smape:.3f}")
print(f"{type(model1).__name__} RMSE: {rmse:.3f}")
Arima sMAPE: 2.015 Arima RMSE: 143.416
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Next, we will evaluate the ensemble (taking the mean prediction of ARIMA, Prophet, and MSES every time the models are called).
# Obtain the results of running the evaluation pipeline for the ensemble.
# These result objects are to be treated as a black box, and should be
# passed directly to the evaluator's evaluate() method.
ensemble_evaluator = create_evaluator(ensemble)
ensemble_train_result, ensemble_test_result = ensemble_evaluator.get_predict(
    train_vals=train_data, test_vals=test_data)

# Evaluate the ensemble's sMAPE and RMSE
smape = ensemble_evaluator.evaluate(
    ground_truth=test_data, predict=ensemble_test_result, metric=ForecastMetric.sMAPE)
rmse = ensemble_evaluator.evaluate(
    ground_truth=test_data, predict=ensemble_test_result, metric=ForecastMetric.RMSE)
print(f"Ensemble sMAPE: {smape:.3f}")
print(f"Ensemble RMSE: {rmse:.3f}")
Ensemble sMAPE: 2.893 Ensemble RMSE: 210.927
BSD-3-Clause
examples/forecast/1_ForecastFeatures.ipynb
ankitakashyap05/Merlion
Strings
name = "Robin"
_____no_output_____
Apache-2.0
Lecture-Notes/2018/Day2.ipynb
unmeshvrije/python-for-beginners
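A few basic operations on the string defined above; this is a plausible continuation of the lesson rather than the original notebook's content:

```python
name = "Robin"
print(len(name))         # 5
print(name.upper())      # ROBIN
print(name[0])           # R
print("Hello, " + name)  # Hello, Robin
```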