Unlike arrays, sequences are flat. The sequence (3) is identical to the integer 3, and (1, (2, 3)) is identical to (1, 2, 3).

A bit more in depth

Variables

You can bind a sequence of values to a (dollar-prefixed) variable, like so:
%%jsoniq let $x := "Bearing 3 1 4 Mark 5. " return concat($x, "Engage!")

%%jsoniq let $x := ("Kirk", "Picard", "Sisko") return string-join($x, " and ")
Took: 0.006165742874145508 ms "Kirk and Picard and Sisko"
Apache-2.0
RumbleSandbox.ipynb
Sparksoniq/sparksoniq
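For readers who think in Python, the string-join query above behaves much like `str.join`. This is an analogy only, not RumbleDB code:

```python
# Python analogy (not JSONiq): let binds a sequence of values to a variable,
# and string-join concatenates the items with a separator.
captains = ("Kirk", "Picard", "Sisko")
result = " and ".join(captains)
print(result)  # Kirk and Picard and Sisko
```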
You can bind as many variables as you want:
%%jsoniq let $x := 1 let $y := $x * 2 let $z := $y + $x return ($x, $y, $z)
Took: 0.006880044937133789 ms 1 2 3
and even reuse the same name to hide formerly declared variables:
%%jsoniq let $x := 1 let $x := $x + 2 let $x := $x + 3 return $x
Took: 0.006127119064331055 ms 6
Iteration

In a way very similar to let, you can iterate over a sequence of values with the "for" keyword. Instead of binding the entire sequence to the variable, it binds each value of the sequence in turn to this variable.
%%jsoniq for $i in 1 to 10 return $i * 2
Took: 0.006555080413818359 ms 2 4 6 8 10 12 14 16 18 20
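As a rough analogy for readers coming from Python, the for expression above maps to a list comprehension (note that JSONiq's 1 to 10 is inclusive on both ends):

```python
# Python analogy of: for $i in 1 to 10 return $i * 2
doubled = [i * 2 for i in range(1, 11)]  # range end is exclusive, hence 11
print(doubled)  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```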
More interestingly, you can combine fors and lets like so:
%%jsoniq let $sequence := 1 to 10 for $value in $sequence let $square := $value * 2 return $square
Took: 0.006516933441162109 ms 2 4 6 8 10 12 14 16 18 20
and even filter out some values:
%%jsoniq let $sequence := 1 to 10 for $value in $sequence let $square := $value * 2 where $square < 10 return $square
Took: 0.0077419281005859375 ms 2 4 6 8
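The for/let/where pipeline above can be sketched in Python as well (analogy only; the walrus operator plays the role of the let clause):

```python
# Python analogy of: for $value in 1 to 10 let $square := $value * 2
#                    where $square < 10 return $square
squares = [square
           for value in range(1, 11)
           if (square := value * 2) < 10]  # let + where in one step
print(squares)  # [2, 4, 6, 8]
```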
Note that you can only iterate over sequences, not arrays. To iterate over an array, you can obtain the sequence of its values with the [] operator, like so:
%%jsoniq [1, 2, 3][]
Took: 0.006000041961669922 ms 1 2 3
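In Python terms (analogy only), a JSONiq array behaves like a list, and the [] operator unboxes the array into a plain stream of items:

```python
# Python analogy of: [1, 2, 3][]  -- unbox the array into a sequence of items
boxed = [1, 2, 3]        # the JSONiq array [1, 2, 3]
unboxed = list(boxed)    # the [] operator; in Python a list is already iterable
print(unboxed)  # [1, 2, 3]
```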
Conditions

You can make the output depend on a condition with an if-then-else construct:
%%jsoniq for $x in 1 to 10 return if ($x < 5) then $x else -$x
Took: 0.0064771175384521484 ms 1 2 3 4 -5 -6 -7 -8 -9 -10
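For comparison (an analogy, not JSONiq), Python's conditional expression does the same per-item branching:

```python
# Python analogy of: for $x in 1 to 10 return if ($x < 5) then $x else -$x
signed = [x if x < 5 else -x for x in range(1, 11)]
print(signed)  # [1, 2, 3, 4, -5, -6, -7, -8, -9, -10]
```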
Note that the else clause is required; however, it can be the empty sequence (), which is often what you need when only the then clause is relevant to you.

Composability of Expressions

Now that you know a couple of elementary JSONiq expressions, you can combine them into more elaborate expressions. For example, you can put any sequence of values in an array:
%%jsoniq [ 1 to 10 ]
Took: 0.007096052169799805 ms [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Or you can dynamically compute the value of object pairs (or their key):
%%jsoniq { "Greeting" : (let $d := "Mister Spock" return concat("Hello, ", $d)), "Farewell" : string-join(("Live", "long", "and", "prosper"), " ") }
Took: 0.007810831069946289 ms {"Greeting": "Hello, Mister Spock", "Farewell": "Live long and prosper"}
You can dynamically generate object singletons (with a single pair):
%%jsoniq { concat("Integer ", 2) : 2 * 2 }
Took: 0.006745100021362305 ms {"Integer 2": 4}
and then merge lots of them into a new object with the {| |} notation:
%%jsoniq {| for $i in 1 to 10 return { concat("Square of ", $i) : $i * $i } |}
Took: 0.006300926208496094 ms {"Square of 1": 1, "Square of 2": 4, "Square of 3": 9, "Square of 4": 16, "Square of 5": 25, "Square of 6": 36, "Square of 7": 49, "Square of 8": 64, "Square of 9": 81, "Square of 10": 100}
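A Python sketch of the {| |} merge (analogy only): build the singleton objects first, then fold them into one dict:

```python
# Python analogy of:
# {| for $i in 1 to 10 return { concat("Square of ", $i) : $i * $i } |}
singletons = [{f"Square of {i}": i * i} for i in range(1, 11)]
merged = {}
for singleton in singletons:
    merged.update(singleton)  # {| |} merges the singleton objects into one object
print(merged["Square of 7"])  # 49
```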
JSON Navigation

Up to now, you have learnt how to compose expressions so as to do some computations and to build objects and arrays. It also works the other way round: if you have some JSON data, you can access and navigate it. All you need to know is that JSONiq views an array as an ordered list of values, and an object as a set of name/value pairs.

Objects

You can use the dot operator to retrieve the value associated with a key. Quotes are optional, except if the key contains special characters such as spaces. It will return the value associated with that key:
%%jsoniq let $person := { "first name" : "Sarah", "age" : 13, "gender" : "female", "friends" : [ "Jim", "Mary", "Jennifer"] } return $person."first name"
Took: 0.009386062622070312 ms "Sarah"
You can also ask for all keys in an object:
%%jsoniq let $person := { "name" : "Sarah", "age" : 13, "gender" : "female", "friends" : [ "Jim", "Mary", "Jennifer"] } return { "keys" : [ keys($person)] }
Took: 0.00790095329284668 ms {"keys": ["name", "age", "gender", "friends"]}
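In Python terms (analogy only), the dot operator is a key lookup and keys() lists an object's keys:

```python
# Python analogy of the dot operator and keys()
person = {"first name": "Sarah", "age": 13, "gender": "female"}
print(person["first name"])  # $person."first name"  ->  Sarah
print(list(person))          # keys($person)  ->  ['first name', 'age', 'gender']
```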
Arrays

The [[]] operator retrieves the entry at the given position:
%%jsoniq let $friends := [ "Jim", "Mary", "Jennifer"] return $friends[[1+1]]
Took: 0.00620579719543457 ms "Mary"
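Note that JSONiq array positions are 1-based, unlike Python's 0-based lists; a quick analogy:

```python
# JSONiq: $friends[[1+1]] returns the second member (positions start at 1)
friends = ["Jim", "Mary", "Jennifer"]
position = 1 + 1                # JSONiq position (1-based)
print(friends[position - 1])    # Mary  -- Python indexes from 0
```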
It is also possible to get the size of an array:
%%jsoniq let $person := { "name" : "Sarah", "age" : 13, "gender" : "female", "friends" : [ "Jim", "Mary", "Jennifer"] } return { "how many friends" : size($person.friends) }
Took: 0.006299018859863281 ms {"how many friends": 3}
Finally, the [] operator returns all elements in an array, as a sequence:
%%jsoniq let $person := { "name" : "Sarah", "age" : 13, "gender" : "female", "friends" : [ "Jim", "Mary", "Jennifer"] } return $person.friends[]
Took: 0.0063228607177734375 ms "Jim" "Mary" "Jennifer"
Relational Algebra

Do you remember SQL's SELECT FROM WHERE statements? JSONiq inherits selection, projection, and join capability from XQuery, too.
%%jsoniq
let $stores := [
  { "store number" : 1, "state" : "MA" },
  { "store number" : 2, "state" : "MA" },
  { "store number" : 3, "state" : "CA" },
  { "store number" : 4, "state" : "CA" }
]
let $sales := [
  { "product" : "broiler", "store number" : 1, "quantity" : 20 },
  { "product" : "toaster", "store number" : 2, "quantity" : 100 },
  { "product" : "toaster", "store number" : 2, "quantity" : 50 },
  { "product" : "toaster", "store number" : 3, "quantity" : 50 },
  { "product" : "blender", "store number" : 3, "quantity" : 100 },
  { "product" : "blender", "store number" : 3, "quantity" : 150 },
  { "product" : "socks", "store number" : 1, "quantity" : 500 },
  { "product" : "socks", "store number" : 2, "quantity" : 10 },
  { "product" : "shirt", "store number" : 3, "quantity" : 10 }
]
let $join :=
  for $store in $stores[], $sale in $sales[]
  where $store."store number" = $sale."store number"
  return { "nb" : $store."store number", "state" : $store.state, "sold" : $sale.product }
return [$join]
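The join above can be sketched as a nested loop in Python (an analogy with a trimmed-down dataset; a real engine would pick a smarter join strategy):

```python
# Python analogy of the JSONiq join: iterate both sequences, keep matching pairs
stores = [{"store number": 1, "state": "MA"},
          {"store number": 3, "state": "CA"}]
sales = [{"product": "broiler", "store number": 1, "quantity": 20},
         {"product": "blender", "store number": 3, "quantity": 100}]
join = [{"nb": store["store number"], "state": store["state"], "sold": sale["product"]}
        for store in stores
        for sale in sales
        if store["store number"] == sale["store number"]]
print(join)
```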
Access datasets

RumbleDB can read input from many file systems and many file formats. If you are using our backend, you can simply use json-doc() with any URI pointing to a JSON file and navigate it as you see fit. You can read data from your local disk, from S3, from HDFS, and also from the Web. For this tutorial, we'll read from the Web because, well, we are already on the Web. We have put a sample at http://rumbledb.org/samples/products-small.json that contains 100,000 small objects like:
%%jsoniq json-file("http://rumbledb.org/samples/products-small.json", 10)[1]
Took: 5.183954954147339 ms {"product": "blender", "store-number": 20, "quantity": 920}
The second parameter to json-file, 10, tells RumbleDB to organize the data in ten partitions after downloading it, and to process them in parallel. If you were reading from HDFS or S3, the parallelization of these partitions would be pushed down to the distributed file system.

JSONiq supports the relational algebra. For example, you can do a selection with a where clause, like so:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) where $product.quantity ge 995 return $product
Took: 5.105026006698608 ms "Warning! The output sequence contains 600 items but its materialization was capped at 200 items. This value can be configured with the result-size parameter in the query string of the HTTP request." {"product": "toaster", "store-number": 97, "quantity": 997} {"product": "phone", "store-number": 100, "quantity": 1000} {"product": "tv", "store-number": 96, "quantity": 996} {"product": "socks", "store-number": 99, "quantity": 999} {"product": "shirt", "store-number": 95, "quantity": 995} {"product": "toaster", "store-number": 98, "quantity": 998} ... (further items omitted)
Notice that by default only the first 200 items are shown. In a typical setup, it is possible to output the result of a query to a distributed system, so it is also possible to output all the results if needed. In this case, however, as this is printed on your screen, it is more convenient not to materialize the entire sequence.

For a projection, there is project():
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) where $product.quantity ge 995 return project($product, ("store-number", "product"))
Took: 8.84467601776123 ms "Warning! The output sequence contains 600 items but its materialization was capped at 200 items. This value can be configured with the result-size parameter in the query string of the HTTP request." {"store-number": 97, "product": "toaster"} {"store-number": 100, "product": "phone"} {"store-number": 96, "product": "tv"} {"store-number": 99, "product": "socks"} {"store-number": 95, "product": "shirt"} {"store-number": 98, "product": "toaster"} ... (further items omitted)
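In Python terms (analogy only), project() keeps just the named keys of each object, much like a dict comprehension over a key list:

```python
# Python analogy of: project($product, ("store-number", "product"))
product = {"product": "toaster", "store-number": 97, "quantity": 997}
projected = {key: product[key] for key in ("store-number", "product")}
print(projected)  # {'store-number': 97, 'product': 'toaster'}
```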
You can also page the results (like OFFSET and LIMIT in SQL) with a count clause followed by a where clause:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) where $product.quantity ge 995 count $c where $c gt 10 and $c le 20 return project($product, ("store-number", "product"))
Took: 11.857532024383545 ms {"store-number": 95, "product": "blender"} {"store-number": 98, "product": "tv"} {"store-number": 97, "product": "shirt"} {"store-number": 100, "product": "toaster"} {"store-number": 96, "product": "blender"} {"store-number": 99, "product": "tv"} {"store-number": 95, "product": "broiler"} {"store-number": 98, "product": "shirt"} {"store-number": 97, "product": "blender"} {"store-number": 100, "product": "tv"}
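The count clause can be mimicked in Python with enumerate (analogy only; in RumbleDB the numbering happens inside the streaming pipeline):

```python
# Python analogy: count numbers the surviving tuples, the second where pages them
filtered = [x for x in range(1, 101) if x % 2 == 0]  # stand-in for the first where
page = [x for c, x in enumerate(filtered, start=1) if 10 < c <= 20]
print(page)  # the 11th through 20th surviving items
```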
JSONiq also supports grouping with a group by clause:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number return { "store" : $store-number, "count" : count($product) }
Took: 7.4556567668914795 ms {"store": 64, "count": 1000} {"store": 68, "count": 1000} {"store": 42, "count": 1000} {"store": 83, "count": 1000} {"store": 54, "count": 1000} {"store": 82, "count": 1000} ... (one object per store, 100 in total)
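You can also filter groups after the group by clause, much like SQL's HAVING. The query below is a sketch against the same sample dataset; the threshold of 500 is arbitrary and chosen only for illustration:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number where count($product) gt 500 return { "store" : $store-number, "count" : count($product) }
Inside the where clause, $product is already bound to the sequence of all items in the group, so aggregate functions like count apply to the whole group.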
You can also order the groups with an order by clause:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number order by $store-number ascending return { "store" : $store-number, "count" : count($product) }
Took: 9.933311939239502 ms {"store": 1, "count": 1000} {"store": 2, "count": 1000} {"store": 3, "count": 1000} {"store": 4, "count": 1000} {"store": 5, "count": 1000} {"store": 6, "count": 1000} {"store": 7, "count": 1000} {"store": 8, "count": 1000} {"store": 9, "count": 1000} {"store": 10, "count": 1000} {"store": 11, "count": 1000} {"store": 12, "count": 1000} {"store": 13, "count": 1000} {"store": 14, "count": 1000} {"store": 15, "count": 1000} {"store": 16, "count": 1000} {"store": 17, "count": 1000} {"store": 18, "count": 1000} {"store": 19, "count": 1000} {"store": 20, "count": 1000} {"store": 21, "count": 1000} {"store": 22, "count": 1000} {"store": 23, "count": 1000} {"store": 24, "count": 1000} {"store": 25, "count": 1000} {"store": 26, "count": 1000} {"store": 27, "count": 1000} {"store": 28, "count": 1000} {"store": 29, "count": 1000} {"store": 30, "count": 1000} {"store": 31, "count": 1000} {"store": 32, "count": 1000} {"store": 33, "count": 1000} {"store": 34, "count": 1000} {"store": 35, "count": 1000} {"store": 36, "count": 1000} {"store": 37, "count": 1000} {"store": 38, "count": 1000} {"store": 39, "count": 1000} {"store": 40, "count": 1000} {"store": 41, "count": 1000} {"store": 42, "count": 1000} {"store": 43, "count": 1000} {"store": 44, "count": 1000} {"store": 45, "count": 1000} {"store": 46, "count": 1000} {"store": 47, "count": 1000} {"store": 48, "count": 1000} {"store": 49, "count": 1000} {"store": 50, "count": 1000} {"store": 51, "count": 1000} {"store": 52, "count": 1000} {"store": 53, "count": 1000} {"store": 54, "count": 1000} {"store": 55, "count": 1000} {"store": 56, "count": 1000} {"store": 57, "count": 1000} {"store": 58, "count": 1000} {"store": 59, "count": 1000} {"store": 60, "count": 1000} {"store": 61, "count": 1000} {"store": 62, "count": 1000} {"store": 63, "count": 1000} {"store": 64, "count": 1000} {"store": 65, "count": 1000} {"store": 66, "count": 1000} {"store": 67, "count": 1000} {"store": 68, "count": 1000} {"store": 
69, "count": 1000} {"store": 70, "count": 1000} {"store": 71, "count": 1000} {"store": 72, "count": 1000} {"store": 73, "count": 1000} {"store": 74, "count": 1000} {"store": 75, "count": 1000} {"store": 76, "count": 1000} {"store": 77, "count": 1000} {"store": 78, "count": 1000} {"store": 79, "count": 1000} {"store": 80, "count": 1000} {"store": 81, "count": 1000} {"store": 82, "count": 1000} {"store": 83, "count": 1000} {"store": 84, "count": 1000} {"store": 85, "count": 1000} {"store": 86, "count": 1000} {"store": 87, "count": 1000} {"store": 88, "count": 1000} {"store": 89, "count": 1000} {"store": 90, "count": 1000} {"store": 91, "count": 1000} {"store": 92, "count": 1000} {"store": 93, "count": 1000} {"store": 94, "count": 1000} {"store": 95, "count": 1000} {"store": 96, "count": 1000} {"store": 97, "count": 1000} {"store": 98, "count": 1000} {"store": 99, "count": 1000} {"store": 100, "count": 1000}
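Flipping the direction to descending sorts the stores from highest to lowest. This is a sketch on the same dataset:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number order by $store-number descending return { "store" : $store-number, "count" : count($product) }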
JSONiq supports denormalized data, so you are not forced to aggregate after grouping: you can also nest data, like so:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number order by $store-number ascending return { "store" : $store-number, "products" : [ distinct-values($product.product) ] }
Took: 11.702539920806885 ms {"store": 1, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 2, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 3, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 4, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 5, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 6, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 7, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 8, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 9, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 10, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 11, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 12, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 13, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 14, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 15, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 16, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 17, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 18, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 19, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 20, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 21, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} 
{"store": 22, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 23, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 24, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 25, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 26, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 27, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 28, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 29, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 30, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 31, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 32, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 33, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 34, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 35, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 36, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 37, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 38, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 39, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 40, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 41, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 42, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 43, "products": 
["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 44, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 45, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 46, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 47, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 48, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 49, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 50, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 51, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 52, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 53, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 54, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 55, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 56, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 57, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 58, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 59, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 60, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 61, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 62, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 63, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 64, "products": ["broiler", "shirt", 
"toaster", "phone", "blender", "tv", "socks"]} {"store": 65, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 66, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 67, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 68, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 69, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 70, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 71, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 72, "products": ["shirt", "toaster", "phone", "blender", "tv", "socks", "broiler"]} {"store": 73, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 74, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 75, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 76, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 77, "products": ["toaster", "phone", "blender", "tv", "socks", "broiler", "shirt"]} {"store": 78, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 79, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 80, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 81, "products": ["phone", "blender", "tv", "socks", "broiler", "shirt", "toaster"]} {"store": 82, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 83, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 84, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 85, "products": ["blender", "tv", "socks", "broiler", "shirt", 
"toaster", "phone"]} {"store": 86, "products": ["blender", "tv", "socks", "broiler", "shirt", "toaster", "phone"]} {"store": 87, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 88, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 89, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 90, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 91, "products": ["tv", "socks", "broiler", "shirt", "toaster", "phone", "blender"]} {"store": 92, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 93, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 94, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 95, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 96, "products": ["socks", "broiler", "shirt", "toaster", "phone", "blender", "tv"]} {"store": 97, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 98, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 99, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]} {"store": 100, "products": ["broiler", "shirt", "toaster", "phone", "blender", "tv", "socks"]}
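If you prefer a flat aggregate instead of a nested array, you can count the distinct products directly. This query is a sketch against the same sample dataset:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number order by $store-number ascending return { "store" : $store-number, "distinct-products" : count(distinct-values($product.product)) }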
Or even combine nesting with aggregation in the same query:
%%jsoniq for $product in json-file("http://rumbledb.org/samples/products-small.json", 10) group by $store-number := $product.store-number order by $store-number ascending return { "store" : $store-number, "products" : [ project($product[position() le 10], ("product", "quantity")) ], "inventory" : sum($product.quantity) }
Took: 13.3197660446167 ms {"store": 1, "products": [{"product": "shirt", "quantity": 901}, {"product": "toaster", "quantity": 801}, {"product": "phone", "quantity": 701}, {"product": "blender", "quantity": 601}, {"product": "tv", "quantity": 501}, {"product": "socks", "quantity": 401}, {"product": "broiler", "quantity": 301}, {"product": "shirt", "quantity": 201}, {"product": "toaster", "quantity": 101}, {"product": "phone", "quantity": 1}], "inventory": 451000} {"store": 2, "products": [{"product": "shirt", "quantity": 602}, {"product": "toaster", "quantity": 502}, {"product": "phone", "quantity": 402}, {"product": "blender", "quantity": 302}, {"product": "tv", "quantity": 202}, {"product": "socks", "quantity": 102}, {"product": "broiler", "quantity": 2}, {"product": "shirt", "quantity": 902}, {"product": "toaster", "quantity": 802}, {"product": "phone", "quantity": 702}], "inventory": 452000} {"store": 3, "products": [{"product": "shirt", "quantity": 303}, {"product": "toaster", "quantity": 203}, {"product": "phone", "quantity": 103}, {"product": "blender", "quantity": 3}, {"product": "tv", "quantity": 903}, {"product": "socks", "quantity": 803}, {"product": "broiler", "quantity": 703}, {"product": "shirt", "quantity": 603}, {"product": "toaster", "quantity": 503}, {"product": "phone", "quantity": 403}], "inventory": 453000} {"store": 4, "products": [{"product": "shirt", "quantity": 4}, {"product": "toaster", "quantity": 904}, {"product": "phone", "quantity": 804}, {"product": "blender", "quantity": 704}, {"product": "tv", "quantity": 604}, {"product": "socks", "quantity": 504}, {"product": "broiler", "quantity": 404}, {"product": "shirt", "quantity": 304}, {"product": "toaster", "quantity": 204}, {"product": "phone", "quantity": 104}], "inventory": 454000} {"store": 5, "products": [{"product": "shirt", "quantity": 705}, {"product": "toaster", "quantity": 605}, {"product": "phone", "quantity": 505}, {"product": "blender", "quantity": 405}, {"product": "tv", 
"quantity": 305}, {"product": "socks", "quantity": 205}, {"product": "broiler", "quantity": 105}, {"product": "shirt", "quantity": 5}, {"product": "toaster", "quantity": 905}, {"product": "phone", "quantity": 805}], "inventory": 455000} {"store": 6, "products": [{"product": "toaster", "quantity": 306}, {"product": "phone", "quantity": 206}, {"product": "blender", "quantity": 106}, {"product": "tv", "quantity": 6}, {"product": "socks", "quantity": 906}, {"product": "broiler", "quantity": 806}, {"product": "shirt", "quantity": 706}, {"product": "toaster", "quantity": 606}, {"product": "phone", "quantity": 506}, {"product": "blender", "quantity": 406}], "inventory": 456000} {"store": 7, "products": [{"product": "toaster", "quantity": 7}, {"product": "phone", "quantity": 907}, {"product": "blender", "quantity": 807}, {"product": "tv", "quantity": 707}, {"product": "socks", "quantity": 607}, {"product": "broiler", "quantity": 507}, {"product": "shirt", "quantity": 407}, {"product": "toaster", "quantity": 307}, {"product": "phone", "quantity": 207}, {"product": "blender", "quantity": 107}], "inventory": 457000} {"store": 8, "products": [{"product": "toaster", "quantity": 708}, {"product": "phone", "quantity": 608}, {"product": "blender", "quantity": 508}, {"product": "tv", "quantity": 408}, {"product": "socks", "quantity": 308}, {"product": "broiler", "quantity": 208}, {"product": "shirt", "quantity": 108}, {"product": "toaster", "quantity": 8}, {"product": "phone", "quantity": 908}, {"product": "blender", "quantity": 808}], "inventory": 458000} {"store": 9, "products": [{"product": "toaster", "quantity": 409}, {"product": "phone", "quantity": 309}, {"product": "blender", "quantity": 209}, {"product": "tv", "quantity": 109}, {"product": "socks", "quantity": 9}, {"product": "broiler", "quantity": 909}, {"product": "shirt", "quantity": 809}, {"product": "toaster", "quantity": 709}, {"product": "phone", "quantity": 609}, {"product": "blender", "quantity": 509}], 
"inventory": 459000} {"store": 10, "products": [{"product": "toaster", "quantity": 110}, {"product": "phone", "quantity": 10}, {"product": "blender", "quantity": 910}, {"product": "tv", "quantity": 810}, {"product": "socks", "quantity": 710}, {"product": "broiler", "quantity": 610}, {"product": "shirt", "quantity": 510}, {"product": "toaster", "quantity": 410}, {"product": "phone", "quantity": 310}, {"product": "blender", "quantity": 210}], "inventory": 460000} {"store": 11, "products": [{"product": "phone", "quantity": 711}, {"product": "blender", "quantity": 611}, {"product": "tv", "quantity": 511}, {"product": "socks", "quantity": 411}, {"product": "broiler", "quantity": 311}, {"product": "shirt", "quantity": 211}, {"product": "toaster", "quantity": 111}, {"product": "phone", "quantity": 11}, {"product": "blender", "quantity": 911}, {"product": "tv", "quantity": 811}], "inventory": 461000} {"store": 12, "products": [{"product": "phone", "quantity": 412}, {"product": "blender", "quantity": 312}, {"product": "tv", "quantity": 212}, {"product": "socks", "quantity": 112}, {"product": "broiler", "quantity": 12}, {"product": "shirt", "quantity": 912}, {"product": "toaster", "quantity": 812}, {"product": "phone", "quantity": 712}, {"product": "blender", "quantity": 612}, {"product": "tv", "quantity": 512}], "inventory": 462000} {"store": 13, "products": [{"product": "phone", "quantity": 113}, {"product": "blender", "quantity": 13}, {"product": "tv", "quantity": 913}, {"product": "socks", "quantity": 813}, {"product": "broiler", "quantity": 713}, {"product": "shirt", "quantity": 613}, {"product": "toaster", "quantity": 513}, {"product": "phone", "quantity": 413}, {"product": "blender", "quantity": 313}, {"product": "tv", "quantity": 213}], "inventory": 463000} {"store": 14, "products": [{"product": "phone", "quantity": 814}, {"product": "blender", "quantity": 714}, {"product": "tv", "quantity": 614}, {"product": "socks", "quantity": 514}, {"product": "broiler", 
"quantity": 414}, {"product": "shirt", "quantity": 314}, {"product": "toaster", "quantity": 214}, {"product": "phone", "quantity": 114}, {"product": "blender", "quantity": 14}, {"product": "tv", "quantity": 914}], "inventory": 464000} {"store": 15, "products": [{"product": "phone", "quantity": 515}, {"product": "blender", "quantity": 415}, {"product": "tv", "quantity": 315}, {"product": "socks", "quantity": 215}, {"product": "broiler", "quantity": 115}, {"product": "shirt", "quantity": 15}, {"product": "toaster", "quantity": 915}, {"product": "phone", "quantity": 815}, {"product": "blender", "quantity": 715}, {"product": "tv", "quantity": 615}], "inventory": 465000} {"store": 16, "products": [{"product": "blender", "quantity": 116}, {"product": "tv", "quantity": 16}, {"product": "socks", "quantity": 916}, {"product": "broiler", "quantity": 816}, {"product": "shirt", "quantity": 716}, {"product": "toaster", "quantity": 616}, {"product": "phone", "quantity": 516}, {"product": "blender", "quantity": 416}, {"product": "tv", "quantity": 316}, {"product": "socks", "quantity": 216}], "inventory": 466000} {"store": 17, "products": [{"product": "blender", "quantity": 817}, {"product": "tv", "quantity": 717}, {"product": "socks", "quantity": 617}, {"product": "broiler", "quantity": 517}, {"product": "shirt", "quantity": 417}, {"product": "toaster", "quantity": 317}, {"product": "phone", "quantity": 217}, {"product": "blender", "quantity": 117}, {"product": "tv", "quantity": 17}, {"product": "socks", "quantity": 917}], "inventory": 467000} {"store": 18, "products": [{"product": "blender", "quantity": 518}, {"product": "tv", "quantity": 418}, {"product": "socks", "quantity": 318}, {"product": "broiler", "quantity": 218}, {"product": "shirt", "quantity": 118}, {"product": "toaster", "quantity": 18}, {"product": "phone", "quantity": 918}, {"product": "blender", "quantity": 818}, {"product": "tv", "quantity": 718}, {"product": "socks", "quantity": 618}], "inventory": 468000} 
{"store": 19, "products": [{"product": "blender", "quantity": 219}, {"product": "tv", "quantity": 119}, {"product": "socks", "quantity": 19}, {"product": "broiler", "quantity": 919}, {"product": "shirt", "quantity": 819}, {"product": "toaster", "quantity": 719}, {"product": "phone", "quantity": 619}, {"product": "blender", "quantity": 519}, {"product": "tv", "quantity": 419}, {"product": "socks", "quantity": 319}], "inventory": 469000} {"store": 20, "products": [{"product": "blender", "quantity": 920}, {"product": "tv", "quantity": 820}, {"product": "socks", "quantity": 720}, {"product": "broiler", "quantity": 620}, {"product": "shirt", "quantity": 520}, {"product": "toaster", "quantity": 420}, {"product": "phone", "quantity": 320}, {"product": "blender", "quantity": 220}, {"product": "tv", "quantity": 120}, {"product": "socks", "quantity": 20}], "inventory": 470000} {"store": 21, "products": [{"product": "tv", "quantity": 521}, {"product": "socks", "quantity": 421}, {"product": "broiler", "quantity": 321}, {"product": "shirt", "quantity": 221}, {"product": "toaster", "quantity": 121}, {"product": "phone", "quantity": 21}, {"product": "blender", "quantity": 921}, {"product": "tv", "quantity": 821}, {"product": "socks", "quantity": 721}, {"product": "broiler", "quantity": 621}], "inventory": 471000} {"store": 22, "products": [{"product": "tv", "quantity": 222}, {"product": "socks", "quantity": 122}, {"product": "broiler", "quantity": 22}, {"product": "shirt", "quantity": 922}, {"product": "toaster", "quantity": 822}, {"product": "phone", "quantity": 722}, {"product": "blender", "quantity": 622}, {"product": "tv", "quantity": 522}, {"product": "socks", "quantity": 422}, {"product": "broiler", "quantity": 322}], "inventory": 472000} {"store": 23, "products": [{"product": "tv", "quantity": 923}, {"product": "socks", "quantity": 823}, {"product": "broiler", "quantity": 723}, {"product": "shirt", "quantity": 623}, {"product": "toaster", "quantity": 523}, {"product": 
"phone", "quantity": 423}, {"product": "blender", "quantity": 323}, {"product": "tv", "quantity": 223}, {"product": "socks", "quantity": 123}, {"product": "broiler", "quantity": 23}], "inventory": 473000} {"store": 24, "products": [{"product": "tv", "quantity": 624}, {"product": "socks", "quantity": 524}, {"product": "broiler", "quantity": 424}, {"product": "shirt", "quantity": 324}, {"product": "toaster", "quantity": 224}, {"product": "phone", "quantity": 124}, {"product": "blender", "quantity": 24}, {"product": "tv", "quantity": 924}, {"product": "socks", "quantity": 824}, {"product": "broiler", "quantity": 724}], "inventory": 474000} {"store": 25, "products": [{"product": "socks", "quantity": 225}, {"product": "broiler", "quantity": 125}, {"product": "shirt", "quantity": 25}, {"product": "toaster", "quantity": 925}, {"product": "phone", "quantity": 825}, {"product": "blender", "quantity": 725}, {"product": "tv", "quantity": 625}, {"product": "socks", "quantity": 525}, {"product": "broiler", "quantity": 425}, {"product": "shirt", "quantity": 325}], "inventory": 475000} {"store": 26, "products": [{"product": "socks", "quantity": 926}, {"product": "broiler", "quantity": 826}, {"product": "shirt", "quantity": 726}, {"product": "toaster", "quantity": 626}, {"product": "phone", "quantity": 526}, {"product": "blender", "quantity": 426}, {"product": "tv", "quantity": 326}, {"product": "socks", "quantity": 226}, {"product": "broiler", "quantity": 126}, {"product": "shirt", "quantity": 26}], "inventory": 476000} {"store": 27, "products": [{"product": "socks", "quantity": 627}, {"product": "broiler", "quantity": 527}, {"product": "shirt", "quantity": 427}, {"product": "toaster", "quantity": 327}, {"product": "phone", "quantity": 227}, {"product": "blender", "quantity": 127}, {"product": "tv", "quantity": 27}, {"product": "socks", "quantity": 927}, {"product": "broiler", "quantity": 827}, {"product": "shirt", "quantity": 727}], "inventory": 477000} {"store": 28, "products": 
[{"product": "socks", "quantity": 328}, {"product": "broiler", "quantity": 228}, {"product": "shirt", "quantity": 128}, {"product": "toaster", "quantity": 28}, {"product": "phone", "quantity": 928}, {"product": "blender", "quantity": 828}, {"product": "tv", "quantity": 728}, {"product": "socks", "quantity": 628}, {"product": "broiler", "quantity": 528}, {"product": "shirt", "quantity": 428}], "inventory": 478000} {"store": 29, "products": [{"product": "socks", "quantity": 29}, {"product": "broiler", "quantity": 929}, {"product": "shirt", "quantity": 829}, {"product": "toaster", "quantity": 729}, {"product": "phone", "quantity": 629}, {"product": "blender", "quantity": 529}, {"product": "tv", "quantity": 429}, {"product": "socks", "quantity": 329}, {"product": "broiler", "quantity": 229}, {"product": "shirt", "quantity": 129}], "inventory": 479000} {"store": 30, "products": [{"product": "broiler", "quantity": 630}, {"product": "shirt", "quantity": 530}, {"product": "toaster", "quantity": 430}, {"product": "phone", "quantity": 330}, {"product": "blender", "quantity": 230}, {"product": "tv", "quantity": 130}, {"product": "socks", "quantity": 30}, {"product": "broiler", "quantity": 930}, {"product": "shirt", "quantity": 830}, {"product": "toaster", "quantity": 730}], "inventory": 480000} {"store": 31, "products": [{"product": "broiler", "quantity": 331}, {"product": "shirt", "quantity": 231}, {"product": "toaster", "quantity": 131}, {"product": "phone", "quantity": 31}, {"product": "blender", "quantity": 931}, {"product": "tv", "quantity": 831}, {"product": "socks", "quantity": 731}, {"product": "broiler", "quantity": 631}, {"product": "shirt", "quantity": 531}, {"product": "toaster", "quantity": 431}], "inventory": 481000} {"store": 32, "products": [{"product": "broiler", "quantity": 32}, {"product": "shirt", "quantity": 932}, {"product": "toaster", "quantity": 832}, {"product": "phone", "quantity": 732}, {"product": "blender", "quantity": 632}, {"product": "tv", 
"quantity": 532}, {"product": "socks", "quantity": 432}, {"product": "broiler", "quantity": 332}, {"product": "shirt", "quantity": 232}, {"product": "toaster", "quantity": 132}], "inventory": 482000} {"store": 33, "products": [{"product": "broiler", "quantity": 733}, {"product": "shirt", "quantity": 633}, {"product": "toaster", "quantity": 533}, {"product": "phone", "quantity": 433}, {"product": "blender", "quantity": 333}, {"product": "tv", "quantity": 233}, {"product": "socks", "quantity": 133}, {"product": "broiler", "quantity": 33}, {"product": "shirt", "quantity": 933}, {"product": "toaster", "quantity": 833}], "inventory": 483000} {"store": 34, "products": [{"product": "broiler", "quantity": 434}, {"product": "shirt", "quantity": 334}, {"product": "toaster", "quantity": 234}, {"product": "phone", "quantity": 134}, {"product": "blender", "quantity": 34}, {"product": "tv", "quantity": 934}, {"product": "socks", "quantity": 834}, {"product": "broiler", "quantity": 734}, {"product": "shirt", "quantity": 634}, {"product": "toaster", "quantity": 534}], "inventory": 484000} {"store": 35, "products": [{"product": "shirt", "quantity": 35}, {"product": "toaster", "quantity": 935}, {"product": "phone", "quantity": 835}, {"product": "blender", "quantity": 735}, {"product": "tv", "quantity": 635}, {"product": "socks", "quantity": 535}, {"product": "broiler", "quantity": 435}, {"product": "shirt", "quantity": 335}, {"product": "toaster", "quantity": 235}, {"product": "phone", "quantity": 135}], "inventory": 485000} {"store": 36, "products": [{"product": "shirt", "quantity": 736}, {"product": "toaster", "quantity": 636}, {"product": "phone", "quantity": 536}, {"product": "blender", "quantity": 436}, {"product": "tv", "quantity": 336}, {"product": "socks", "quantity": 236}, {"product": "broiler", "quantity": 136}, {"product": "shirt", "quantity": 36}, {"product": "toaster", "quantity": 936}, {"product": "phone", "quantity": 836}], "inventory": 486000} {"store": 37, 
"products": [{"product": "shirt", "quantity": 437}, {"product": "toaster", "quantity": 337}, {"product": "phone", "quantity": 237}, {"product": "blender", "quantity": 137}, {"product": "tv", "quantity": 37}, {"product": "socks", "quantity": 937}, {"product": "broiler", "quantity": 837}, {"product": "shirt", "quantity": 737}, {"product": "toaster", "quantity": 637}, {"product": "phone", "quantity": 537}], "inventory": 487000} {"store": 38, "products": [{"product": "shirt", "quantity": 138}, {"product": "toaster", "quantity": 38}, {"product": "phone", "quantity": 938}, {"product": "blender", "quantity": 838}, {"product": "tv", "quantity": 738}, {"product": "socks", "quantity": 638}, {"product": "broiler", "quantity": 538}, {"product": "shirt", "quantity": 438}, {"product": "toaster", "quantity": 338}, {"product": "phone", "quantity": 238}], "inventory": 488000} {"store": 39, "products": [{"product": "shirt", "quantity": 839}, {"product": "toaster", "quantity": 739}, {"product": "phone", "quantity": 639}, {"product": "blender", "quantity": 539}, {"product": "tv", "quantity": 439}, {"product": "socks", "quantity": 339}, {"product": "broiler", "quantity": 239}, {"product": "shirt", "quantity": 139}, {"product": "toaster", "quantity": 39}, {"product": "phone", "quantity": 939}], "inventory": 489000} {"store": 40, "products": [{"product": "toaster", "quantity": 440}, {"product": "phone", "quantity": 340}, {"product": "blender", "quantity": 240}, {"product": "tv", "quantity": 140}, {"product": "socks", "quantity": 40}, {"product": "broiler", "quantity": 940}, {"product": "shirt", "quantity": 840}, {"product": "toaster", "quantity": 740}, {"product": "phone", "quantity": 640}, {"product": "blender", "quantity": 540}], "inventory": 490000} {"store": 41, "products": [{"product": "toaster", "quantity": 141}, {"product": "phone", "quantity": 41}, {"product": "blender", "quantity": 941}, {"product": "tv", "quantity": 841}, {"product": "socks", "quantity": 741}, {"product": 
"broiler", "quantity": 641}, {"product": "shirt", "quantity": 541}, {"product": "toaster", "quantity": 441}, {"product": "phone", "quantity": 341}, {"product": "blender", "quantity": 241}], "inventory": 491000} {"store": 42, "products": [{"product": "toaster", "quantity": 842}, {"product": "phone", "quantity": 742}, {"product": "blender", "quantity": 642}, {"product": "tv", "quantity": 542}, {"product": "socks", "quantity": 442}, {"product": "broiler", "quantity": 342}, {"product": "shirt", "quantity": 242}, {"product": "toaster", "quantity": 142}, {"product": "phone", "quantity": 42}, {"product": "blender", "quantity": 942}], "inventory": 492000} {"store": 43, "products": [{"product": "toaster", "quantity": 543}, {"product": "phone", "quantity": 443}, {"product": "blender", "quantity": 343}, {"product": "tv", "quantity": 243}, {"product": "socks", "quantity": 143}, {"product": "broiler", "quantity": 43}, {"product": "shirt", "quantity": 943}, {"product": "toaster", "quantity": 843}, {"product": "phone", "quantity": 743}, {"product": "blender", "quantity": 643}], "inventory": 493000} {"store": 44, "products": [{"product": "phone", "quantity": 144}, {"product": "blender", "quantity": 44}, {"product": "tv", "quantity": 944}, {"product": "socks", "quantity": 844}, {"product": "broiler", "quantity": 744}, {"product": "shirt", "quantity": 644}, {"product": "toaster", "quantity": 544}, {"product": "phone", "quantity": 444}, {"product": "blender", "quantity": 344}, {"product": "tv", "quantity": 244}], "inventory": 494000} {"store": 45, "products": [{"product": "phone", "quantity": 845}, {"product": "blender", "quantity": 745}, {"product": "tv", "quantity": 645}, {"product": "socks", "quantity": 545}, {"product": "broiler", "quantity": 445}, {"product": "shirt", "quantity": 345}, {"product": "toaster", "quantity": 245}, {"product": "phone", "quantity": 145}, {"product": "blender", "quantity": 45}, {"product": "tv", "quantity": 945}], "inventory": 495000} {"store": 46, 
"products": [{"product": "phone", "quantity": 546}, {"product": "blender", "quantity": 446}, {"product": "tv", "quantity": 346}, {"product": "socks", "quantity": 246}, {"product": "broiler", "quantity": 146}, {"product": "shirt", "quantity": 46}, {"product": "toaster", "quantity": 946}, {"product": "phone", "quantity": 846}, {"product": "blender", "quantity": 746}, {"product": "tv", "quantity": 646}], "inventory": 496000} {"store": 47, "products": [{"product": "phone", "quantity": 247}, {"product": "blender", "quantity": 147}, {"product": "tv", "quantity": 47}, {"product": "socks", "quantity": 947}, {"product": "broiler", "quantity": 847}, {"product": "shirt", "quantity": 747}, {"product": "toaster", "quantity": 647}, {"product": "phone", "quantity": 547}, {"product": "blender", "quantity": 447}, {"product": "tv", "quantity": 347}], "inventory": 497000} {"store": 48, "products": [{"product": "phone", "quantity": 948}, {"product": "blender", "quantity": 848}, {"product": "tv", "quantity": 748}, {"product": "socks", "quantity": 648}, {"product": "broiler", "quantity": 548}, {"product": "shirt", "quantity": 448}, {"product": "toaster", "quantity": 348}, {"product": "phone", "quantity": 248}, {"product": "blender", "quantity": 148}, {"product": "tv", "quantity": 48}], "inventory": 498000} {"store": 49, "products": [{"product": "blender", "quantity": 549}, {"product": "tv", "quantity": 449}, {"product": "socks", "quantity": 349}, {"product": "broiler", "quantity": 249}, {"product": "shirt", "quantity": 149}, {"product": "toaster", "quantity": 49}, {"product": "phone", "quantity": 949}, {"product": "blender", "quantity": 849}, {"product": "tv", "quantity": 749}, {"product": "socks", "quantity": 649}], "inventory": 499000} {"store": 50, "products": [{"product": "blender", "quantity": 250}, {"product": "tv", "quantity": 150}, {"product": "socks", "quantity": 50}, {"product": "broiler", "quantity": 950}, {"product": "shirt", "quantity": 850}, {"product": "toaster", 
"quantity": 750}, {"product": "phone", "quantity": 650}, {"product": "blender", "quantity": 550}, {"product": "tv", "quantity": 450}, {"product": "socks", "quantity": 350}], "inventory": 500000} {"store": 51, "products": [{"product": "blender", "quantity": 951}, {"product": "tv", "quantity": 851}, {"product": "socks", "quantity": 751}, {"product": "broiler", "quantity": 651}, {"product": "shirt", "quantity": 551}, {"product": "toaster", "quantity": 451}, {"product": "phone", "quantity": 351}, {"product": "blender", "quantity": 251}, {"product": "tv", "quantity": 151}, {"product": "socks", "quantity": 51}], "inventory": 501000} {"store": 52, "products": [{"product": "blender", "quantity": 652}, {"product": "tv", "quantity": 552}, {"product": "socks", "quantity": 452}, {"product": "broiler", "quantity": 352}, {"product": "shirt", "quantity": 252}, {"product": "toaster", "quantity": 152}, {"product": "phone", "quantity": 52}, {"product": "blender", "quantity": 952}, {"product": "tv", "quantity": 852}, {"product": "socks", "quantity": 752}], "inventory": 502000} {"store": 53, "products": [{"product": "blender", "quantity": 353}, {"product": "tv", "quantity": 253}, {"product": "socks", "quantity": 153}, {"product": "broiler", "quantity": 53}, {"product": "shirt", "quantity": 953}, {"product": "toaster", "quantity": 853}, {"product": "phone", "quantity": 753}, {"product": "blender", "quantity": 653}, {"product": "tv", "quantity": 553}, {"product": "socks", "quantity": 453}], "inventory": 503000} {"store": 54, "products": [{"product": "tv", "quantity": 954}, {"product": "socks", "quantity": 854}, {"product": "broiler", "quantity": 754}, {"product": "shirt", "quantity": 654}, {"product": "toaster", "quantity": 554}, {"product": "phone", "quantity": 454}, {"product": "blender", "quantity": 354}, {"product": "tv", "quantity": 254}, {"product": "socks", "quantity": 154}, {"product": "broiler", "quantity": 54}], "inventory": 504000} {"store": 55, "products": [{"product": "tv", 
"quantity": 655}, {"product": "socks", "quantity": 555}, {"product": "broiler", "quantity": 455}, {"product": "shirt", "quantity": 355}, {"product": "toaster", "quantity": 255}, {"product": "phone", "quantity": 155}, {"product": "blender", "quantity": 55}, {"product": "tv", "quantity": 955}, {"product": "socks", "quantity": 855}, {"product": "broiler", "quantity": 755}], "inventory": 505000} {"store": 56, "products": [{"product": "tv", "quantity": 356}, {"product": "socks", "quantity": 256}, {"product": "broiler", "quantity": 156}, {"product": "shirt", "quantity": 56}, {"product": "toaster", "quantity": 956}, {"product": "phone", "quantity": 856}, {"product": "blender", "quantity": 756}, {"product": "tv", "quantity": 656}, {"product": "socks", "quantity": 556}, {"product": "broiler", "quantity": 456}], "inventory": 506000} {"store": 57, "products": [{"product": "tv", "quantity": 57}, {"product": "socks", "quantity": 957}, {"product": "broiler", "quantity": 857}, {"product": "shirt", "quantity": 757}, {"product": "toaster", "quantity": 657}, {"product": "phone", "quantity": 557}, {"product": "blender", "quantity": 457}, {"product": "tv", "quantity": 357}, {"product": "socks", "quantity": 257}, {"product": "broiler", "quantity": 157}], "inventory": 507000} {"store": 58, "products": [{"product": "tv", "quantity": 758}, {"product": "socks", "quantity": 658}, {"product": "broiler", "quantity": 558}, {"product": "shirt", "quantity": 458}, {"product": "toaster", "quantity": 358}, {"product": "phone", "quantity": 258}, {"product": "blender", "quantity": 158}, {"product": "tv", "quantity": 58}, {"product": "socks", "quantity": 958}, {"product": "broiler", "quantity": 858}], "inventory": 508000} {"store": 59, "products": [{"product": "socks", "quantity": 359}, {"product": "broiler", "quantity": 259}, {"product": "shirt", "quantity": 159}, {"product": "toaster", "quantity": 59}, {"product": "phone", "quantity": 959}, {"product": "blender", "quantity": 859}, {"product": "tv", 
"quantity": 759}, {"product": "socks", "quantity": 659}, {"product": "broiler", "quantity": 559}, {"product": "shirt", "quantity": 459}], "inventory": 509000} {"store": 60, "products": [{"product": "socks", "quantity": 60}, {"product": "broiler", "quantity": 960}, {"product": "shirt", "quantity": 860}, {"product": "toaster", "quantity": 760}, {"product": "phone", "quantity": 660}, {"product": "blender", "quantity": 560}, {"product": "tv", "quantity": 460}, {"product": "socks", "quantity": 360}, {"product": "broiler", "quantity": 260}, {"product": "shirt", "quantity": 160}], "inventory": 510000} {"store": 61, "products": [{"product": "socks", "quantity": 761}, {"product": "broiler", "quantity": 661}, {"product": "shirt", "quantity": 561}, {"product": "toaster", "quantity": 461}, {"product": "phone", "quantity": 361}, {"product": "blender", "quantity": 261}, {"product": "tv", "quantity": 161}, {"product": "socks", "quantity": 61}, {"product": "broiler", "quantity": 961}, {"product": "shirt", "quantity": 861}], "inventory": 511000} {"store": 62, "products": [{"product": "socks", "quantity": 462}, {"product": "broiler", "quantity": 362}, {"product": "shirt", "quantity": 262}, {"product": "toaster", "quantity": 162}, {"product": "phone", "quantity": 62}, {"product": "blender", "quantity": 962}, {"product": "tv", "quantity": 862}, {"product": "socks", "quantity": 762}, {"product": "broiler", "quantity": 662}, {"product": "shirt", "quantity": 562}], "inventory": 512000} {"store": 63, "products": [{"product": "broiler", "quantity": 63}, {"product": "shirt", "quantity": 963}, {"product": "toaster", "quantity": 863}, {"product": "phone", "quantity": 763}, {"product": "blender", "quantity": 663}, {"product": "tv", "quantity": 563}, {"product": "socks", "quantity": 463}, {"product": "broiler", "quantity": 363}, {"product": "shirt", "quantity": 263}, {"product": "toaster", "quantity": 163}], "inventory": 513000} {"store": 64, "products": [{"product": "broiler", "quantity": 
764}, {"product": "shirt", "quantity": 664}, {"product": "toaster", "quantity": 564}, {"product": "phone", "quantity": 464}, {"product": "blender", "quantity": 364}, {"product": "tv", "quantity": 264}, {"product": "socks", "quantity": 164}, {"product": "broiler", "quantity": 64}, {"product": "shirt", "quantity": 964}, {"product": "toaster", "quantity": 864}], "inventory": 514000} {"store": 65, "products": [{"product": "broiler", "quantity": 465}, {"product": "shirt", "quantity": 365}, {"product": "toaster", "quantity": 265}, {"product": "phone", "quantity": 165}, {"product": "blender", "quantity": 65}, {"product": "tv", "quantity": 965}, {"product": "socks", "quantity": 865}, {"product": "broiler", "quantity": 765}, {"product": "shirt", "quantity": 665}, {"product": "toaster", "quantity": 565}], "inventory": 515000} {"store": 66, "products": [{"product": "broiler", "quantity": 166}, {"product": "shirt", "quantity": 66}, {"product": "toaster", "quantity": 966}, {"product": "phone", "quantity": 866}, {"product": "blender", "quantity": 766}, {"product": "tv", "quantity": 666}, {"product": "socks", "quantity": 566}, {"product": "broiler", "quantity": 466}, {"product": "shirt", "quantity": 366}, {"product": "toaster", "quantity": 266}], "inventory": 516000} {"store": 67, "products": [{"product": "broiler", "quantity": 867}, {"product": "shirt", "quantity": 767}, {"product": "toaster", "quantity": 667}, {"product": "phone", "quantity": 567}, {"product": "blender", "quantity": 467}, {"product": "tv", "quantity": 367}, {"product": "socks", "quantity": 267}, {"product": "broiler", "quantity": 167}, {"product": "shirt", "quantity": 67}, {"product": "toaster", "quantity": 967}], "inventory": 517000} {"store": 68, "products": [{"product": "shirt", "quantity": 468}, {"product": "toaster", "quantity": 368}, {"product": "phone", "quantity": 268}, {"product": "blender", "quantity": 168}, {"product": "tv", "quantity": 68}, {"product": "socks", "quantity": 968}, {"product": 
"broiler", "quantity": 868}, {"product": "shirt", "quantity": 768}, {"product": "toaster", "quantity": 668}, {"product": "phone", "quantity": 568}], "inventory": 518000} {"store": 69, "products": [{"product": "shirt", "quantity": 169}, {"product": "toaster", "quantity": 69}, {"product": "phone", "quantity": 969}, {"product": "blender", "quantity": 869}, {"product": "tv", "quantity": 769}, {"product": "socks", "quantity": 669}, {"product": "broiler", "quantity": 569}, {"product": "shirt", "quantity": 469}, {"product": "toaster", "quantity": 369}, {"product": "phone", "quantity": 269}], "inventory": 519000} {"store": 70, "products": [{"product": "shirt", "quantity": 870}, {"product": "toaster", "quantity": 770}, {"product": "phone", "quantity": 670}, {"product": "blender", "quantity": 570}, {"product": "tv", "quantity": 470}, {"product": "socks", "quantity": 370}, {"product": "broiler", "quantity": 270}, {"product": "shirt", "quantity": 170}, {"product": "toaster", "quantity": 70}, {"product": "phone", "quantity": 970}], "inventory": 520000} {"store": 71, "products": [{"product": "shirt", "quantity": 571}, {"product": "toaster", "quantity": 471}, {"product": "phone", "quantity": 371}, {"product": "blender", "quantity": 271}, {"product": "tv", "quantity": 171}, {"product": "socks", "quantity": 71}, {"product": "broiler", "quantity": 971}, {"product": "shirt", "quantity": 871}, {"product": "toaster", "quantity": 771}, {"product": "phone", "quantity": 671}], "inventory": 521000} {"store": 72, "products": [{"product": "shirt", "quantity": 272}, {"product": "toaster", "quantity": 172}, {"product": "phone", "quantity": 72}, {"product": "blender", "quantity": 972}, {"product": "tv", "quantity": 872}, {"product": "socks", "quantity": 772}, {"product": "broiler", "quantity": 672}, {"product": "shirt", "quantity": 572}, {"product": "toaster", "quantity": 472}, {"product": "phone", "quantity": 372}], "inventory": 522000} {"store": 73, "products": [{"product": "toaster", 
"quantity": 873}, {"product": "phone", "quantity": 773}, {"product": "blender", "quantity": 673}, {"product": "tv", "quantity": 573}, {"product": "socks", "quantity": 473}, {"product": "broiler", "quantity": 373}, {"product": "shirt", "quantity": 273}, {"product": "toaster", "quantity": 173}, {"product": "phone", "quantity": 73}, {"product": "blender", "quantity": 973}], "inventory": 523000} {"store": 74, "products": [{"product": "toaster", "quantity": 574}, {"product": "phone", "quantity": 474}, {"product": "blender", "quantity": 374}, {"product": "tv", "quantity": 274}, {"product": "socks", "quantity": 174}, {"product": "broiler", "quantity": 74}, {"product": "shirt", "quantity": 974}, {"product": "toaster", "quantity": 874}, {"product": "phone", "quantity": 774}, {"product": "blender", "quantity": 674}], "inventory": 524000} {"store": 75, "products": [{"product": "toaster", "quantity": 275}, {"product": "phone", "quantity": 175}, {"product": "blender", "quantity": 75}, {"product": "tv", "quantity": 975}, {"product": "socks", "quantity": 875}, {"product": "broiler", "quantity": 775}, {"product": "shirt", "quantity": 675}, {"product": "toaster", "quantity": 575}, {"product": "phone", "quantity": 475}, {"product": "blender", "quantity": 375}], "inventory": 525000} {"store": 76, "products": [{"product": "toaster", "quantity": 976}, {"product": "phone", "quantity": 876}, {"product": "blender", "quantity": 776}, {"product": "tv", "quantity": 676}, {"product": "socks", "quantity": 576}, {"product": "broiler", "quantity": 476}, {"product": "shirt", "quantity": 376}, {"product": "toaster", "quantity": 276}, {"product": "phone", "quantity": 176}, {"product": "blender", "quantity": 76}], "inventory": 526000} {"store": 77, "products": [{"product": "toaster", "quantity": 677}, {"product": "phone", "quantity": 577}, {"product": "blender", "quantity": 477}, {"product": "tv", "quantity": 377}, {"product": "socks", "quantity": 277}, {"product": "broiler", "quantity": 177}, 
{"product": "shirt", "quantity": 77}, {"product": "toaster", "quantity": 977}, {"product": "phone", "quantity": 877}, {"product": "blender", "quantity": 777}], "inventory": 527000} {"store": 78, "products": [{"product": "phone", "quantity": 278}, {"product": "blender", "quantity": 178}, {"product": "tv", "quantity": 78}, {"product": "socks", "quantity": 978}, {"product": "broiler", "quantity": 878}, {"product": "shirt", "quantity": 778}, {"product": "toaster", "quantity": 678}, {"product": "phone", "quantity": 578}, {"product": "blender", "quantity": 478}, {"product": "tv", "quantity": 378}], "inventory": 528000} {"store": 79, "products": [{"product": "phone", "quantity": 979}, {"product": "blender", "quantity": 879}, {"product": "tv", "quantity": 779}, {"product": "socks", "quantity": 679}, {"product": "broiler", "quantity": 579}, {"product": "shirt", "quantity": 479}, {"product": "toaster", "quantity": 379}, {"product": "phone", "quantity": 279}, {"product": "blender", "quantity": 179}, {"product": "tv", "quantity": 79}], "inventory": 529000} {"store": 80, "products": [{"product": "phone", "quantity": 680}, {"product": "blender", "quantity": 580}, {"product": "tv", "quantity": 480}, {"product": "socks", "quantity": 380}, {"product": "broiler", "quantity": 280}, {"product": "shirt", "quantity": 180}, {"product": "toaster", "quantity": 80}, {"product": "phone", "quantity": 980}, {"product": "blender", "quantity": 880}, {"product": "tv", "quantity": 780}], "inventory": 530000} {"store": 81, "products": [{"product": "phone", "quantity": 381}, {"product": "blender", "quantity": 281}, {"product": "tv", "quantity": 181}, {"product": "socks", "quantity": 81}, {"product": "broiler", "quantity": 981}, {"product": "shirt", "quantity": 881}, {"product": "toaster", "quantity": 781}, {"product": "phone", "quantity": 681}, {"product": "blender", "quantity": 581}, {"product": "tv", "quantity": 481}], "inventory": 531000} {"store": 82, "products": [{"product": "blender", 
"quantity": 982}, {"product": "tv", "quantity": 882}, {"product": "socks", "quantity": 782}, {"product": "broiler", "quantity": 682}, {"product": "shirt", "quantity": 582}, {"product": "toaster", "quantity": 482}, {"product": "phone", "quantity": 382}, {"product": "blender", "quantity": 282}, {"product": "tv", "quantity": 182}, {"product": "socks", "quantity": 82}], "inventory": 532000} {"store": 83, "products": [{"product": "blender", "quantity": 683}, {"product": "tv", "quantity": 583}, {"product": "socks", "quantity": 483}, {"product": "broiler", "quantity": 383}, {"product": "shirt", "quantity": 283}, {"product": "toaster", "quantity": 183}, {"product": "phone", "quantity": 83}, {"product": "blender", "quantity": 983}, {"product": "tv", "quantity": 883}, {"product": "socks", "quantity": 783}], "inventory": 533000} {"store": 84, "products": [{"product": "blender", "quantity": 384}, {"product": "tv", "quantity": 284}, {"product": "socks", "quantity": 184}, {"product": "broiler", "quantity": 84}, {"product": "shirt", "quantity": 984}, {"product": "toaster", "quantity": 884}, {"product": "phone", "quantity": 784}, {"product": "blender", "quantity": 684}, {"product": "tv", "quantity": 584}, {"product": "socks", "quantity": 484}], "inventory": 534000} {"store": 85, "products": [{"product": "blender", "quantity": 85}, {"product": "tv", "quantity": 985}, {"product": "socks", "quantity": 885}, {"product": "broiler", "quantity": 785}, {"product": "shirt", "quantity": 685}, {"product": "toaster", "quantity": 585}, {"product": "phone", "quantity": 485}, {"product": "blender", "quantity": 385}, {"product": "tv", "quantity": 285}, {"product": "socks", "quantity": 185}], "inventory": 535000} {"store": 86, "products": [{"product": "blender", "quantity": 786}, {"product": "tv", "quantity": 686}, {"product": "socks", "quantity": 586}, {"product": "broiler", "quantity": 486}, {"product": "shirt", "quantity": 386}, {"product": "toaster", "quantity": 286}, {"product": "phone", 
"quantity": 186}, {"product": "blender", "quantity": 86}, {"product": "tv", "quantity": 986}, {"product": "socks", "quantity": 886}], "inventory": 536000} {"store": 87, "products": [{"product": "tv", "quantity": 387}, {"product": "socks", "quantity": 287}, {"product": "broiler", "quantity": 187}, {"product": "shirt", "quantity": 87}, {"product": "toaster", "quantity": 987}, {"product": "phone", "quantity": 887}, {"product": "blender", "quantity": 787}, {"product": "tv", "quantity": 687}, {"product": "socks", "quantity": 587}, {"product": "broiler", "quantity": 487}], "inventory": 537000} {"store": 88, "products": [{"product": "tv", "quantity": 88}, {"product": "socks", "quantity": 988}, {"product": "broiler", "quantity": 888}, {"product": "shirt", "quantity": 788}, {"product": "toaster", "quantity": 688}, {"product": "phone", "quantity": 588}, {"product": "blender", "quantity": 488}, {"product": "tv", "quantity": 388}, {"product": "socks", "quantity": 288}, {"product": "broiler", "quantity": 188}], "inventory": 538000} {"store": 89, "products": [{"product": "tv", "quantity": 789}, {"product": "socks", "quantity": 689}, {"product": "broiler", "quantity": 589}, {"product": "shirt", "quantity": 489}, {"product": "toaster", "quantity": 389}, {"product": "phone", "quantity": 289}, {"product": "blender", "quantity": 189}, {"product": "tv", "quantity": 89}, {"product": "socks", "quantity": 989}, {"product": "broiler", "quantity": 889}], "inventory": 539000} {"store": 90, "products": [{"product": "tv", "quantity": 490}, {"product": "socks", "quantity": 390}, {"product": "broiler", "quantity": 290}, {"product": "shirt", "quantity": 190}, {"product": "toaster", "quantity": 90}, {"product": "phone", "quantity": 990}, {"product": "blender", "quantity": 890}, {"product": "tv", "quantity": 790}, {"product": "socks", "quantity": 690}, {"product": "broiler", "quantity": 590}], "inventory": 540000} {"store": 91, "products": [{"product": "tv", "quantity": 191}, {"product": "socks", 
"quantity": 91}, {"product": "broiler", "quantity": 991}, {"product": "shirt", "quantity": 891}, {"product": "toaster", "quantity": 791}, {"product": "phone", "quantity": 691}, {"product": "blender", "quantity": 591}, {"product": "tv", "quantity": 491}, {"product": "socks", "quantity": 391}, {"product": "broiler", "quantity": 291}], "inventory": 541000} {"store": 92, "products": [{"product": "socks", "quantity": 792}, {"product": "broiler", "quantity": 692}, {"product": "shirt", "quantity": 592}, {"product": "toaster", "quantity": 492}, {"product": "phone", "quantity": 392}, {"product": "blender", "quantity": 292}, {"product": "tv", "quantity": 192}, {"product": "socks", "quantity": 92}, {"product": "broiler", "quantity": 992}, {"product": "shirt", "quantity": 892}], "inventory": 542000} {"store": 93, "products": [{"product": "socks", "quantity": 493}, {"product": "broiler", "quantity": 393}, {"product": "shirt", "quantity": 293}, {"product": "toaster", "quantity": 193}, {"product": "phone", "quantity": 93}, {"product": "blender", "quantity": 993}, {"product": "tv", "quantity": 893}, {"product": "socks", "quantity": 793}, {"product": "broiler", "quantity": 693}, {"product": "shirt", "quantity": 593}], "inventory": 543000} {"store": 94, "products": [{"product": "socks", "quantity": 194}, {"product": "broiler", "quantity": 94}, {"product": "shirt", "quantity": 994}, {"product": "toaster", "quantity": 894}, {"product": "phone", "quantity": 794}, {"product": "blender", "quantity": 694}, {"product": "tv", "quantity": 594}, {"product": "socks", "quantity": 494}, {"product": "broiler", "quantity": 394}, {"product": "shirt", "quantity": 294}], "inventory": 544000} {"store": 95, "products": [{"product": "socks", "quantity": 895}, {"product": "broiler", "quantity": 795}, {"product": "shirt", "quantity": 695}, {"product": "toaster", "quantity": 595}, {"product": "phone", "quantity": 495}, {"product": "blender", "quantity": 395}, {"product": "tv", "quantity": 295}, {"product": 
"socks", "quantity": 195}, {"product": "broiler", "quantity": 95}, {"product": "shirt", "quantity": 995}], "inventory": 545000} {"store": 96, "products": [{"product": "socks", "quantity": 596}, {"product": "broiler", "quantity": 496}, {"product": "shirt", "quantity": 396}, {"product": "toaster", "quantity": 296}, {"product": "phone", "quantity": 196}, {"product": "blender", "quantity": 96}, {"product": "tv", "quantity": 996}, {"product": "socks", "quantity": 896}, {"product": "broiler", "quantity": 796}, {"product": "shirt", "quantity": 696}], "inventory": 546000} {"store": 97, "products": [{"product": "broiler", "quantity": 197}, {"product": "shirt", "quantity": 97}, {"product": "toaster", "quantity": 997}, {"product": "phone", "quantity": 897}, {"product": "blender", "quantity": 797}, {"product": "tv", "quantity": 697}, {"product": "socks", "quantity": 597}, {"product": "broiler", "quantity": 497}, {"product": "shirt", "quantity": 397}, {"product": "toaster", "quantity": 297}], "inventory": 547000} {"store": 98, "products": [{"product": "broiler", "quantity": 898}, {"product": "shirt", "quantity": 798}, {"product": "toaster", "quantity": 698}, {"product": "phone", "quantity": 598}, {"product": "blender", "quantity": 498}, {"product": "tv", "quantity": 398}, {"product": "socks", "quantity": 298}, {"product": "broiler", "quantity": 198}, {"product": "shirt", "quantity": 98}, {"product": "toaster", "quantity": 998}], "inventory": 548000} {"store": 99, "products": [{"product": "broiler", "quantity": 599}, {"product": "shirt", "quantity": 499}, {"product": "toaster", "quantity": 399}, {"product": "phone", "quantity": 299}, {"product": "blender", "quantity": 199}, {"product": "tv", "quantity": 99}, {"product": "socks", "quantity": 999}, {"product": "broiler", "quantity": 899}, {"product": "shirt", "quantity": 799}, {"product": "toaster", "quantity": 699}], "inventory": 549000} {"store": 100, "products": [{"product": "broiler", "quantity": 300}, {"product": "shirt", 
"quantity": 200}, {"product": "toaster", "quantity": 100}, {"product": "phone", "quantity": 1000}, {"product": "blender", "quantity": 900}, {"product": "tv", "quantity": 800}, {"product": "socks", "quantity": 700}, {"product": "broiler", "quantity": 600}, {"product": "shirt", "quantity": 500}, {"product": "toaster", "quantity": 400}], "inventory": 550000}
Aggregating statistics
import pandas as pd

air_quality = pd.read_pickle('air_quality.pkl')
air_quality.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 95685 entries, 0 to 95684
Data columns (total 27 columns):
 #   Column                 Non-Null Count  Dtype
---  ------                 --------------  -----
 0   date_time              95685 non-null  datetime64[ns]
 1   PM2.5                  95685 non-null  float64
 2   PM10                   95685 non-null  float64
 3   SO2                    95685 non-null  float64
 4   NO2                    95685 non-null  float64
 5   CO                     95685 non-null  float64
 6   O3                     95685 non-null  float64
 7   TEMP                   95685 non-null  float64
 8   PRES                   95685 non-null  float64
 9   DEWP                   95685 non-null  float64
 10  RAIN                   95685 non-null  float64
 11  wd                     95685 non-null  object
 12  WSPM                   95685 non-null  float64
 13  station                95685 non-null  object
 14  year                   95685 non-null  int64
 15  month                  95685 non-null  int64
 16  day                    95685 non-null  int64
 17  hour                   95685 non-null  int64
 18  quarter                95685 non-null  int64
 19  day_of_week_num        95685 non-null  int64
 20  day_of_week_name       95685 non-null  object
 21  time_until_2022        95685 non-null  timedelta64[ns]
 22  time_until_2022_days   95685 non-null  float64
 23  time_until_2022_weeks  95685 non-null  float64
 24  prior_2016_ind         95685 non-null  bool
 25  PM2.5_category         95685 non-null  category
 26  TEMP_category          95685 non-null  category
dtypes: bool(1), category(2), datetime64[ns](1), float64(13), int64(6), object(3), timedelta64[ns](1)
memory usage: 17.8+ MB
MIT
Exploring+data+(Exploratory+Data+Analysis)+(1).ipynb
PacktPublishing/Python-for-Data-Analysis-step-by-step-with-projects-
Series/one column of a DataFrame
air_quality['TEMP'].count()
air_quality['TEMP'].mean()
air_quality['TEMP'].std()
air_quality['TEMP'].min()
air_quality['TEMP'].max()
air_quality['TEMP'].quantile(0.25)
air_quality['TEMP'].median()
air_quality['TEMP'].describe()
air_quality['RAIN'].sum()
air_quality['PM2.5_category'].mode()
air_quality['PM2.5_category'].nunique()
air_quality['PM2.5_category'].describe()
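Because the real dataset is large, it can help to sanity-check these summary methods on a tiny synthetic Series first. The values below are made up for illustration and are not taken from `air_quality`:

```python
import pandas as pd

# Tiny synthetic Series (not from the air_quality data) with
# easy-to-verify statistics.
s = pd.Series([1.0, 2.0, 3.0, 4.0])

print(s.count())          # number of non-null values: 4
print(s.mean())           # 2.5
print(s.min(), s.max())   # 1.0 4.0
print(s.quantile(0.25))   # 1.75 (linear interpolation between 1.0 and 2.0)
print(s.median())         # 2.5
print(s.sum())            # 10.0
```

The same calls applied to `air_quality['TEMP']` behave identically, just over ~95k values instead of four.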
DataFrame by columns
air_quality.count()
air_quality.mean()
air_quality.mean(numeric_only=True)
air_quality[['PM2.5', 'TEMP']].mean()
air_quality[['PM2.5', 'TEMP']].min()
air_quality[['PM2.5', 'TEMP']].max()
air_quality.describe().T
air_quality.describe(include=['object', 'category', 'bool'])
air_quality[['PM2.5_category', 'TEMP_category', 'hour']].mode()
air_quality['hour'].value_counts()
air_quality[['PM2.5', 'TEMP']].agg('mean')
air_quality[['PM2.5', 'TEMP']].mean()
air_quality[['PM2.5', 'TEMP']].agg(['min', 'max', 'mean'])
air_quality[['PM2.5', 'PM2.5_category']].agg(['min', 'max', 'mean', 'nunique'])
air_quality[['PM2.5', 'PM2.5_category']].agg({'PM2.5': 'mean', 'PM2.5_category': 'nunique'})
air_quality.agg({'PM2.5': ['min', 'max', 'mean'], 'PM2.5_category': 'nunique'})

def max_minus_min(s):
    return s.max() - s.min()

max_minus_min(air_quality['TEMP'])
air_quality[['PM2.5', 'TEMP']].agg(['min', 'max', max_minus_min])
41.6 - (-16.8)
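The `agg` patterns above follow one rule: a list of functions produces one result row per function, while a dict maps each column to its own function(s). A minimal illustration on made-up data:

```python
import pandas as pd

# Small synthetic frame, not related to air_quality.
df = pd.DataFrame({'a': [1, 2, 3], 'b': [10.0, 20.0, 30.0]})

# A dict applies a different aggregation to each column.
result = df.agg({'a': 'max', 'b': 'mean'})
print(result['a'], result['b'])   # 3 20.0

# Custom functions mix freely with built-in names.
def max_minus_min(s):
    return s.max() - s.min()

ranges = df.agg([max_minus_min])
print(ranges)   # one row labelled 'max_minus_min': a=2, b=20.0
```

This is the same mechanism used with `air_quality` above; only the column names differ.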
DataFrame by rows
air_quality[['PM2.5', 'PM10']]
air_quality[['PM2.5', 'PM10']].min()
air_quality[['PM2.5', 'PM10']].min(axis=1)
air_quality[['PM2.5', 'PM10']].mean(axis=1)
air_quality[['PM2.5', 'PM10']].sum(axis=1)
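The `axis=1` argument flips the direction of aggregation: instead of one value per column, you get one value per row. A sketch with made-up values:

```python
import pandas as pd

# Two rows, two columns of invented measurements.
df = pd.DataFrame({'PM2.5': [10.0, 30.0], 'PM10': [20.0, 40.0]})

# Default (axis=0): aggregate down each column.
print(df.min().tolist())          # [10.0, 20.0]

# axis=1: aggregate across the columns of each row.
print(df.min(axis=1).tolist())    # [10.0, 30.0]
print(df.mean(axis=1).tolist())   # [15.0, 35.0]
```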
Grouping by
air_quality.groupby(by='PM2.5_category')
air_quality.groupby(by='PM2.5_category').groups
air_quality['PM2.5_category'].head(20)
air_quality.groupby(by='PM2.5_category').groups.keys()
air_quality.groupby(by='PM2.5_category').get_group('Good')
air_quality.sort_values('date_time')
air_quality.sort_values('date_time').groupby(by='year').first()
air_quality.sort_values('date_time').groupby(by='year').last()
air_quality.groupby('TEMP_category').size()
air_quality['TEMP_category'].value_counts(sort=False)
air_quality.groupby('quarter').mean()
# air_quality[['PM2.5', 'TEMP']].groupby('quarter').mean()  # KeyError: 'quarter'
air_quality[['PM2.5', 'TEMP', 'quarter']].groupby('quarter').mean()
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].mean()
air_quality.groupby('quarter').mean()[['PM2.5', 'TEMP']]
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].describe()
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].agg(['min', 'max'])
air_quality.groupby('day_of_week_name')[['PM2.5', 'TEMP', 'RAIN']].agg({'PM2.5': ['min', 'max', 'mean'], 'TEMP': 'mean', 'RAIN': 'mean'})
air_quality.groupby(['quarter', 'TEMP_category'])[['PM2.5', 'TEMP']].mean()
air_quality.groupby(['TEMP_category', 'quarter'])[['PM2.5', 'TEMP']].mean()
air_quality.groupby(['year', 'quarter', 'month'])['TEMP'].agg(['min', 'max'])
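`groupby` follows the split-apply-combine pattern: rows are split into groups by a key, a function is applied per group, and the results are combined into one object. A minimal sketch with synthetic data:

```python
import pandas as pd

# Invented values: two quarters, two readings each.
df = pd.DataFrame({
    'quarter': [1, 1, 2, 2],
    'PM2.5':   [10.0, 30.0, 20.0, 40.0],
})

# Split rows by quarter, average each group, combine into a Series.
means = df.groupby('quarter')['PM2.5'].mean()
print(means.to_dict())                          # {1: 20.0, 2: 30.0}

# size() counts rows per group, like value_counts on the key column.
print(df.groupby('quarter').size().to_dict())   # {1: 2, 2: 2}
```

Grouping by a list of keys (as in the `['year', 'quarter', 'month']` call above) works the same way, producing one group per unique key combination.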
Pivoting tables
import pandas as pd

student = pd.read_csv('student.csv')
student.info()
student
pd.pivot_table(student, index='sex')
pd.pivot_table(student, index=['sex', 'internet'])
pd.pivot_table(student, index=['sex', 'internet'], values='score')
pd.pivot_table(student, index=['sex', 'internet'], values='score', aggfunc='mean')
pd.pivot_table(student, index=['sex', 'internet'], values='score', aggfunc='median')
pd.pivot_table(student, index=['sex', 'internet'], values='score', aggfunc=['min', 'mean', 'max'])
pd.pivot_table(student, index=['sex', 'internet'], values='score', aggfunc='mean', columns='studytime')
student[(student['sex']=='M') & (student['internet']=='no') & (student['studytime']=='4. >10 hours')]
pd.pivot_table(student, index=['sex', 'internet'], values='score', aggfunc='mean', columns='studytime', fill_value=-999)
pd.pivot_table(student, index=['sex', 'internet'], values=['score', 'age'], aggfunc='mean', columns='studytime', fill_value=-999)
pd.pivot_table(student, index=['sex'], values='score', aggfunc='mean', columns=['internet', 'studytime'], fill_value=-999)
pd.pivot_table(student, index='familysize', values='score', aggfunc='mean', columns='sex')
pd.pivot_table(student, index='familysize', values='score', aggfunc='mean', columns='sex', margins=True, margins_name='Average score total')
student[student['sex']=='F'].mean()
pd.pivot_table(student, index='studytime', values=['age', 'score'], aggfunc={'age': ['min', 'max'], 'score': 'median'}, columns='sex')
pd.pivot_table(student, index='studytime', values='score', aggfunc=lambda s: s.max() - s.min(), columns='sex')
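A pivot table is essentially a groupby whose second key is spread across the columns. The moving parts (`index`, `columns`, `values`, `aggfunc`) are easiest to see on a four-row frame; the column names below mirror the student data, but the values are invented:

```python
import pandas as pd

# Synthetic stand-in for the student dataset.
toy = pd.DataFrame({
    'sex':      ['F', 'F', 'M', 'M'],
    'internet': ['yes', 'no', 'yes', 'no'],
    'score':    [80, 60, 70, 50],
})

# Rows indexed by sex, one column per internet value,
# cells hold the mean score of each (sex, internet) pair.
table = pd.pivot_table(toy, index='sex', columns='internet',
                       values='score', aggfunc='mean')
print(table.loc['F', 'yes'])   # 80.0
print(table.loc['M', 'no'])    # 50.0
```

With only one row per (sex, internet) pair, each cell is just that row's score; on the real data, each cell averages over all matching students.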
_____no_output_____
MIT
Exploring+data+(Exploratory+Data+Analysis)+(1).ipynb
PacktPublishing/Python-for-Data-Analysis-step-by-step-with-projects-
Getting Started with CREST

CREST is a hybrid modelling DSL (domain-specific language) that focuses on the flow of resources within cyber-physical systems (CPS). CREST is implemented in the Python programming language as the `crestdsl` internal DSL and shipped as a Python package. `crestdsl`'s source code is hosted on GitHub: https://github.com/stklik/CREST/ You can also visit the [documentation](https://crestdsl.readthedocs.io) for more information.

This Notebook

The purpose of this notebook is to provide a small showcase of modelling with `crestdsl`. The system to be modelled is a growing lamp that produces light and heat, if the lamp is turned on and electricity is provided.

How to use this Jupyter notebook: Select a code cell (such as the one directly below) and click the `Run` button in the menu bar above to execute it. (Alternatively, you can use the keyboard combination `Ctrl+Enter`.) **Output:** will be shown directly underneath the cell, if there is any. To **run all cells**, you can iteratively execute individual cells, or execute all at once via the menu item `Cell` -> `Run all`. Remember that the order in which you execute cells is important, not the placement of a cell within the notebook. For a more profound introduction, visit the [Project Jupyter](http://jupyter.org/) website.
print("Try executing this cell, so you get a feeling for it.") 2 + 2 # this should print "Out[X]: 4" directly underneath (X will be an index)
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Defining a `crestdsl` Model

Import `crestdsl`

In order to use `crestdsl`, you have to import it. Initially, we will work towards creating a system model, so let's import the `model` subpackage.
import crestdsl.model as crest
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Define Resources

First, it is necessary to define the resource types that will be used in the application. In CREST and `crestdsl`, resources are combinations of resource names and their value domains. Value domains can be infinite, such as the Reals and Integers, or discrete, such as `["on", "off"]`, as shown for the switch.
electricity = crest.Resource("Watt", crest.REAL) switch = crest.Resource("switch", ["on", "off"]) light = crest.Resource("Lumen", crest.INTEGER) counter = crest.Resource("Count", crest.INTEGER) time = crest.Resource("minutes", crest.REAL) celsius = crest.Resource("Celsius", crest.REAL) fahrenheit = crest.Resource("Fahrenheit", crest.REAL)
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Our First Entity

In CREST, any system or component is modelled as an Entity. Entities can be composed hierarchically (as we will see later). To model an entity, we define a Python class that inherits from `crest.Entity`. Entities can define:

- `Input`, `Output` and `Local` ports (variables),
- `State` objects and a `current` state,
- `Transition`s between states,
- `Influence`s between ports (to express value dependencies between ports),
- `Update`s that are continuously executed and write values to a port,
- and `Action`s, which allow the modelling of discrete changes during transition firings.

Below, we define the `LightElement` entity, which models the component that is responsible for producing light from electricity. It defines one input and one output port.
class LightElement(crest.Entity): """This is a definition of a new Entity type. It derives from CREST's Entity base class.""" """we define ports - each has a resource and an initial value""" electricity_in = crest.Input(resource=electricity, value=0) light_out = crest.Output(resource=light, value=0) """automaton states - don't forget to specify one as the current state""" on = crest.State() off = current = crest.State() """transitions and guards (as lambdas)""" off_to_on = crest.Transition(source=off, target=on, guard=(lambda self: self.electricity_in.value >= 100)) on_to_off = crest.Transition(source=on, target=off, guard=(lambda self: self.electricity_in.value < 100)) """ update functions. They are related to a state, define the port to be updated and return the port's new value Remember that updates need two parameters: self and dt. """ @crest.update(state=on, target=light_out) def set_light_on(self, dt=0): return 800 @crest.update(state=off, target=light_out) def set_light_off(self, dt=0): return 0
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Visualising Entities

CREST is a graphical language at heart, so it makes sense to implement a graphical visualisation of `crestdsl` systems. One of the plotting engines is defined in the `crestdsl.ui` module. The code below produces an interactive HTML output. You can easily interact with the model to explore it:

- Move objects around if the automatic layout does not produce a sufficiently good arrangement.
- Select ports and states to see their outgoing arcs (blue) and incoming arcs (red).
- Hover over transitions, influences and actions to display their name and a short summary.
- Double-click on transitions, influences and actions to see their source code.
- There is a *hot corner* on the top left of each entity. You can double-click it to collapse the entity. This feature is useful for CREST diagrams with many entities. *Unfortunately, a software issue prevents the expand/collapse icon from being displayed. It still works, though (notice your cursor changing to a pointer).*

**GO AHEAD AND TRY IT**
# import the plotting libraries that can visualise the CREST systems from crestdsl.ui import plot plot(LightElement())
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Define Another Entity (The HeatElement)

It's time to model the heating component of our growing lamp. Its functionality is simple: if the `switch_in` input is `on`, 1% of the electricity is converted to additional heat under the lamp. Thus, for example, providing 100 Watt raises the temperature underneath the lamp by 1 degree centigrade.
class HeatElement(crest.Entity): """ Ports """ electricity_in = crest.Input(resource=electricity, value=0) switch_in = crest.Input(resource=switch, value="off") # the heatelement has its own switch heat_out = crest.Output(resource=celsius, value=0) # and produces a celsius value (i.e. the temperature increase underneath the lamp) """ Automaton (States) """ state = current = crest.State() # the only state of this entity """Update""" @crest.update(state=state, target=heat_out) def heat_output(self, dt): # When the lamp is on, then we convert electricity to temperature at a rate of 100Watt = 1Celsius if self.switch_in.value == "on": return self.electricity_in.value / 100 else: return 0 # show us what it looks like plot(HeatElement())
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Adder - A Logical Entity

CREST does not specify a special connector type that defines what happens when multiple influences arrive at the same port. Instead, standard entities are used to define add, minimum and maximum calculations, whose result is then written to the actual target port using an influence. We call such entities *logical*, since they don't have a real-world counterpart.
# a logical entity can inherit from LogicalEntity, # to emphasize that it does not relate to the real world class Adder(crest.LogicalEntity): heat_in = crest.Input(resource=celsius, value=0) room_temp_in = crest.Input(resource=celsius, value=22) temperature_out = crest.Output(resource=celsius, value=22) state = current = crest.State() @crest.update(state=state, target=temperature_out) def add(self, dt): return self.heat_in.value + self.room_temp_in.value plot(Adder()) # try adding the display option 'show_update_ports=True' and see what happens!
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Put it all together - Create the `GrowLamp`

Finally, we create the entire `GrowLamp` entity based on the components we already created. We define subentities in a similar way to all other definitions - as class variables. Additionally, we use influences to connect the ports to each other.
class GrowLamp(crest.Entity): """ - - - - - - - PORTS - - - - - - - - - - """ electricity_in = crest.Input(resource=electricity, value=0) switch_in = crest.Input(resource=switch, value="off") heat_switch_in = crest.Input(resource=switch, value="on") room_temperature_in = crest.Input(resource=fahrenheit, value=71.6) light_out = crest.Output(resource=light, value=3.1415*1000) # note that these are bogus values for now temperature_out = crest.Output(resource=celsius, value=4242424242) # yes, nonsense..., they are updated when simulated on_time = crest.Local(resource=time, value=0) on_count = crest.Local(resource=counter, value=0) """ - - - - - - - SUBENTITIES - - - - - - - - - - """ lightelement = LightElement() heatelement = HeatElement() adder = Adder() """ - - - - - - - INFLUENCES - - - - - - - - - - """ """ Influences specify a source port and a target port. They are always executed, independent of the automaton's state. Since they are called directly with the source-port's value, a self-parameter is not necessary. """ @crest.influence(source=room_temperature_in, target=adder.room_temp_in) def celsius_to_fahrenheit(value): return (value - 32) * 5 / 9 # we can also define updates and influences with lambda functions... heat_to_add = crest.Influence(source=heatelement.heat_out, target=adder.heat_in, function=(lambda val: val)) # if the lambda function doesn't do anything (like the one above) we can omit it entirely... 
add_to_temp = crest.Influence(source=adder.temperature_out, target=temperature_out) light_to_light = crest.Influence(source=lightelement.light_out, target=light_out) heat_switch_influence = crest.Influence(source=heat_switch_in, target=heatelement.switch_in) """ - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """ on = crest.State() off = current = crest.State() error = crest.State() off_to_on = crest.Transition(source=off, target=on, guard=(lambda self: self.switch_in.value == "on" and self.electricity_in.value >= 100)) on_to_off = crest.Transition(source=on, target=off, guard=(lambda self: self.switch_in.value == "off" or self.electricity_in.value < 100)) # transition to error state if the lamp ran for more than 1000.5 time units @crest.transition(source=on, target=error) def to_error(self): """More complex transitions can be defined as a function. We can use variables and calculations""" timeout = self.on_time.value >= 1000.5 heat_is_on = self.heatelement.switch_in.value == "on" return timeout and heat_is_on """ - - - - - - - UPDATES - - - - - - - - - - """ # LAMP is OFF or ERROR @crest.update(state=[off, error], target=lightelement.electricity_in) def update_light_elec_off(self, dt): # no electricity return 0 @crest.update(state=[off, error], target=heatelement.electricity_in) def update_heat_elec_off(self, dt): # no electricity return 0 # LAMP is ON @crest.update(state=on, target=lightelement.electricity_in) def update_light_elec_on(self, dt): # the lightelement gets the first 100Watt return 100 @crest.update(state=on, target=heatelement.electricity_in) def update_heat_elec_on(self, dt): # the heatelement gets the rest return self.electricity_in.value - 100 @crest.update(state=on, target=on_time) def update_time(self, dt): # also update the on_time so we know whether we overheat return self.on_time.value + dt """ - - - - - - - ACTIONS - - - - - - - - - - """ # let's add an action that counts the number of times we switch to state "on" 
@crest.action(transition=off_to_on, target=on_count) def count_switching_on(self): """ Actions are functions that are executed when the related transition is fired. Note that actions do not have a dt. """ return self.on_count.value + 1 # create an instance and plot it plot(GrowLamp())
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Simulation

Simulation allows us to execute the model and see its evolution. `crestdsl`'s simulator is located in the `simulation` module. In order to use it, we have to import it.
# import the simulator from crestdsl.simulation import Simulator
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
After the import, we can use a simulator by initialising it with a system model. In our case, we will explore the `GrowLamp` system that we defined above.
gl = GrowLamp() sim = Simulator(gl)
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Stabilisation

The simulator will execute the system's transitions, updates and influences until reaching a fixpoint. This process is referred to as *stabilisation*. Once stable, no more transitions can be triggered and all updates/influences/actions have been executed. After stabilisation, all ports have their correct values, calculated from the ports preceding them.

In the GrowLamp, we see that the values of the `temperature_out` and `light_out` ports are wrong (based on the dummy values we defined as their initial values). After triggering the stabilisation, these values have been corrected.

The simulator also has a convenience API `plot()` that allows the direct plotting of the entity, without having to import and call the `elk` library.
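Conceptually (this is a simplification, not `crestdsl`'s actual implementation), stabilisation is a fixpoint iteration: keep applying the system's updates and influences until the state stops changing. A minimal sketch with a made-up `step` function:

```python
def stabilise(step, state):
    """Apply step repeatedly until a fixpoint is reached."""
    while True:
        nxt = step(state)
        if nxt == state:      # nothing changed any more: the state is stable
            return state
        state = nxt

# Example: repeated integer halving reaches the fixpoint 0.
result = stabilise(lambda s: s // 2, 40)   # 40 -> 20 -> 10 -> 5 -> 2 -> 1 -> 0
```

The real simulator does this over ports and transitions rather than a single value, but the termination condition is the same: a state on which a further step has no effect.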
sim.stabilise() sim.plot()
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Stabilisation also has to be called after the modification of input values, such that the new values are used to update any dependent ports. Further, all transitions have to be checked for whether they are enabled, and fired if they are. Below, we show the modification of the grow lamp and its stabilisation. Compare the plot below to the plot above to see that the information has been updated.
# modify the growlamp instance's inputs directly, the simulator points to that object and will use it gl.electricity_in.value = 500 gl.switch_in.value = "on" sim.stabilise() sim.plot()
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
Time advance

Evidently, we also want to simulate the behaviour over time. The simulator's `advance(dt)` method does precisely that, by advancing `dt` time units. Below we advance 500 time units. The effect is that the global system time is now `t=500` (see the growing lamp's title bar). Additionally, the local variable `on_time`, which sums up the total amount of time the automaton has spent in the `on` state, has a value of 500 too - just as expected!
sim.advance(500) sim.plot()
_____no_output_____
MIT
GettingStarted.ipynb
stklik/crestdsl-docker
train_features = convert_examples_to_features(train_examples, MAX_SEQ_LENGTH, tokenizer)

Create an input function for training. drop_remainder = True for using TPUs.

train_input_fn = input_fn_builder(features=train_features, seq_length=MAX_SEQ_LENGTH, is_training=True, drop_remainder=False)
# Compute # train and warmup steps from batch size num_train_steps = int(len(train_examples) / BATCH_SIZE * NUM_TRAIN_EPOCHS) num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION) file_based_convert_examples_to_features( train_examples, MAX_SEQ_LENGTH, tokenizer, train_file) tf.logging.info("***** Running training *****") tf.logging.info(" Num examples = %d", len(train_examples)) tf.logging.info(" Batch size = %d", BATCH_SIZE) tf.logging.info(" Num steps = %d", num_train_steps) train_input_fn = file_based_input_fn_builder( input_file=train_file, seq_length=MAX_SEQ_LENGTH, is_training=True, drop_remainder=True) bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG) model_fn = model_fn_builder( bert_config=bert_config, num_labels= len(LABEL_COLUMNS), init_checkpoint=BERT_INIT_CHKPNT, learning_rate=LEARNING_RATE, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, use_tpu=False, use_one_hot_embeddings=False) estimator = tf.estimator.Estimator( model_fn=model_fn, config=run_config, params={"batch_size": BATCH_SIZE}) print(f'Beginning Training!') current_time = datetime.now() estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) print("Training took time ", datetime.now() - current_time) eval_file = os.path.join('../working', "eval.tf_record") #filename = Path(train_file) if not os.path.exists(eval_file): open(eval_file, 'w').close() eval_examples = create_examples(x_val) file_based_convert_examples_to_features( eval_examples, MAX_SEQ_LENGTH, tokenizer, eval_file) # This tells the estimator to run through the entire set. eval_steps = None eval_drop_remainder = False eval_input_fn = file_based_input_fn_builder( input_file=eval_file, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
INFO:tensorflow:Calling model_fn.
MIT
main/toxic-comment-classification-using-bert.ipynb
anshulwadhawan/FastAI
x_eval = train[100000:]

Use the InputExample class from BERT's run_classifier code to create examples from the data:

eval_examples = create_examples(x_val)
eval_features = convert_examples_to_features(eval_examples, MAX_SEQ_LENGTH, tokenizer)

This tells the estimator to run through the entire set.

eval_steps = None
eval_drop_remainder = False
eval_input_fn = input_fn_builder(features=eval_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=eval_drop_remainder)
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
output_eval_file = os.path.join("../working", "eval_results.txt") with tf.gfile.GFile(output_eval_file, "w") as writer: tf.logging.info("***** Eval results *****") for key in sorted(result.keys()): tf.logging.info(" %s = %s", key, str(result[key])) writer.write("%s = %s\n" % (key, str(result[key]))) x_test = test#[125000:140000] x_test = x_test.reset_index(drop=True) test_file = os.path.join('../working', "test.tf_record") #filename = Path(train_file) if not os.path.exists(test_file): open(test_file, 'w').close() test_examples = create_examples(x_test, False) file_based_convert_examples_to_features( test_examples, MAX_SEQ_LENGTH, tokenizer, test_file) predict_input_fn = file_based_input_fn_builder( input_file=test_file, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) print('Begin predictions!') current_time = datetime.now() predictions = estimator.predict(predict_input_fn) print("Predicting took time ", datetime.now() - current_time)
Begin predictions! Predicting took time 0:00:00.000069
MIT
main/toxic-comment-classification-using-bert.ipynb
anshulwadhawan/FastAI
x_test = test[125000:140000]
x_test = x_test.reset_index(drop=True)
predict_examples = create_examples(x_test, False)
test_features = convert_examples_to_features(predict_examples, MAX_SEQ_LENGTH, tokenizer)

print(f'Beginning Training!')
current_time = datetime.now()
predict_input_fn = input_fn_builder(features=test_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
print("Training took time ", datetime.now() - current_time)
def create_output(predictions): probabilities = [] for (i, prediction) in enumerate(predictions): preds = prediction["probabilities"] probabilities.append(preds) dff = pd.DataFrame(probabilities) dff.columns = LABEL_COLUMNS return dff output_df = create_output(predictions) merged_df = pd.concat([x_test, output_df], axis=1) submission = merged_df.drop(['comment_text'], axis=1) submission.to_csv("sample_submission.csv", index=False) submission.tail()
_____no_output_____
MIT
main/toxic-comment-classification-using-bert.ipynb
anshulwadhawan/FastAI
Tests on PDA
import sys sys.path[0:0] = ['../..', '../../3rdparty'] # Append to the beginning of the search path from jove.SystemImports import * from jove.DotBashers import * from jove.Def_md2mc import * from jove.Def_PDA import *
You may use any of these help commands: help(ResetStNum) help(NxtStateStr) You may use any of these help commands: help(md2mc) .. and if you want to dig more, then .. help(default_line_attr) help(length_ok_input_items) help(union_line_attr_list_fld) help(extend_rsltdict) help(form_delta) help(get_machine_components) You may use any of these help commands: help(explore_pda) help(run_pda) help(classify_l_id_path) help(h_run_pda) help(interpret_w_eps) help(step_pda) help(suvivor_id) help(term_id) help(final_id) help(cvt_str_to_sym) help(is_surv_id) help(subsumed) help(is_term_id) help(is_final_id)
Unlicense
notebooks/driver/Drive_PDA_Based_Parsing.ipynb
Thanhson89/JA
__IMPORTANT: Must time-bound `explore_pda`, `run_pda`, `explore_tm`, etc., so that loops are caught__
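One generic way to impose such a time bound from the notebook side (nothing below is part of Jove; `run_with_timeout` and the stand-in functions are hypothetical illustrations, and the sketch assumes a Unix host where `multiprocessing` supports the fork start method):

```python
import multiprocessing as mp
import time

def run_with_timeout(fn, args=(), seconds=2.0):
    """Run fn(*args) in a child process and kill it if it exceeds the bound.

    Returns True if fn finished within `seconds`, False if it had to be
    terminated (e.g. because an exploration loops forever).
    """
    ctx = mp.get_context("fork")        # assumes a Unix host
    proc = ctx.Process(target=fn, args=args)
    proc.start()
    proc.join(seconds)
    if proc.is_alive():                 # still running: assume a runaway loop
        proc.terminate()
        proc.join()
        return False
    return True

def looping():                          # stand-in for a non-terminating exploration
    while True:
        time.sleep(0.05)

def well_behaved():                     # stand-in for a terminating exploration
    time.sleep(0.05)
```

With such a helper, a call like `run_with_timeout(lambda: explore_pda("0", repda), seconds=5)` caps the running time; note that `STKMAX` bounds stack growth during exploration, but not time.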
repda = md2mc('''PDA !!R -> R R | R + R | R* | ( R ) | 0 | 1 | e I : '', # ; R# -> M M : '', R ; RR -> M M : '', R ; R+R -> M M : '', R ; R* -> M M : '', R ; (R) -> M M : '', R ; 0 -> M M : '', R ; 1 -> M M : '', R ; e -> M M : 0, 0 ; '' -> M M : 1, 1 ; '' -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : +, + ; '' -> M M : e, e ; '' -> M M : '', # ; # -> F ''' ) repda DO_repda = dotObj_pda(repda, FuseEdges=True) DO_repda explore_pda("0", repda, STKMAX=4) explore_pda("00", repda) explore_pda("(0)", repda) explore_pda("(00)", repda) explore_pda("(0)(0)", repda) explore_pda("(0)(0)", repda) explore_pda("0+0", repda, STKMAX=3) explore_pda("0+0", repda) explore_pda("(0)(0)", repda) explore_pda("(0)+(0)", repda) explore_pda("00+0", repda) explore_pda("000", repda, STKMAX=3) explore_pda("00+00", repda, STKMAX=4) explore_pda("00+00", repda, STKMAX=5) explore_pda("0000+0", repda, STKMAX=5) brpda = md2mc('''PDA I : '', '' ; S -> M M : '', S ; (S) -> M M : '', S ; SS -> M M : '', S ; e -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : e, e ; '' -> M M : '', # ; '' -> F''') dotObj_pda(brpda, FuseEdges=True) explore_pda("(e)", brpda, STKMAX=3) brpda1 = md2mc('''PDA I : '', '' ; S -> M M : '', S ; (S) -> M M : '', S ; SS -> M M : '', S ; '' -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : '', '' ; '' -> M M : '', # ; '' -> F''') dotObj_pda(brpda1, FuseEdges=True) explore_pda("('')", brpda1, STKMAX=0) brpda2 = md2mc('''PDA I : a, #; '' -> I I : '', '' ; '' -> I''') dotObj_pda(brpda2, FuseEdges=True) explore_pda("a", brpda2, STKMAX=1) explore_pda("a", brpda1, STKMAX=1) brpda3 = md2mc('''PDA I : a, #; '' -> I I : '', '' ; b -> I''') dotObj_pda(brpda3, FuseEdges=True) explore_pda("a", brpda3, STKMAX=7) # Parsing an arithmetic expression pdaEamb = md2mc('''PDA !!E -> E * E | E + E | ~E | ( E ) | 2 | 3 I : '', # ; E# -> M M : '', E ; ~E -> M M : '', E ; E+E -> M M : '', E ; E*E -> M M : '', E ; (E) -> M M : '', E ; 2 -> M M : '', E ; 3 -> M M : ~, ~ ; '' -> M M : 2, 2 ; '' -> M M : 3, 3 ; '' 
-> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : +, + ; '' -> M M : *, * ; '' -> M M : '', # ; # -> F ''' ) DOpdaEamb = dotObj_pda(pdaEamb, FuseEdges=True) DOpdaEamb DOpdaEamb.source explore_pda("3+2*3", pdaEamb, STKMAX=5) explore_pda("3+2*3+2*3", pdaEamb, STKMAX=7) # Parsing an arithmetic expression pdaE = md2mc('''PDA !!E -> E+T | T !!T -> T*F | F !!F -> 2 | 3 | ~F | (E) I : '', # ; E# -> M M : '', E ; E+T -> M M : '', E ; T -> M M : '', T ; T*F -> M M : '', T ; F -> M M : '', F ; 2 -> M M : '', F ; 3 -> M M : '', F ; ~F -> M M : '', F ; (E) -> M M : ~, ~ ; '' -> M M : 2, 2 ; '' -> M M : 3, 3 ; '' -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : +, + ; '' -> M M : *, * ; '' -> M M : '', # ; # -> F ''' ) DOpdaE = dotObj_pda(pdaE, FuseEdges=True) DOpdaE DOpdaE.source explore_pda("2+2*3", pdaE, STKMAX=7) explore_pda("3+2*3+2*3", pdaE, STKMAX=7) explore_pda("3*2*~3+~~3*~3", pdaE, STKMAX=10) explore_pda("3*2*~3+~~3*~3", pdaEamb, STKMAX=8)
*** Exploring wrt STKMAX= 8 ; increase it if needed *** *** Exploring wrt STKMAX = 8 ; increase it if needed *** String 3*2*~3+~~3*~3 accepted by your PDA in 36 ways :-) Here are the ways: Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E*E#') -> ('M', '2*~3+~~3*~3', '2*E*E#') -> ('M', '*~3+~~3*~3', '*E*E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', 'E+E*E#') -> ('M', '~3+~~3*~3', '~E+E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E*E#') -> ('M', '2*~3+~~3*~3', '2*E*E#') -> ('M', '*~3+~~3*~3', '*E*E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', '~E*E#') -> ('M', '3+~~3*~3', 'E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', 'E+E*E#') -> ('M', '~3+~~3*~3', '~E+E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', '~E*E#') -> ('M', '3+~~3*~3', 'E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', '~E#') -> ('M', '3*~3', 'E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', '~E#') -> ('M', '3+~~3*~3', 'E#') -> ('M', '3+~~3*~3', 'E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', '~E#') -> ('M', '3+~~3*~3', 'E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', '~E#') -> ('M', '3+~~3*~3', 'E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', '~E#') -> ('M', '3+~~3*~3', 'E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', '~E#') -> ('M', '3*~3', 'E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', 'E*E*E#') -> ('M', '2*~3+~~3*~3', '2*E*E#') -> ('M', '*~3+~~3*~3', '*E*E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', 'E+E*E#') -> ('M', '~3+~~3*~3', '~E+E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', 'E*E*E#') -> ('M', '2*~3+~~3*~3', '2*E*E#') -> ('M', '*~3+~~3*~3', '*E*E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', '~E*E#') -> ('M', '3+~~3*~3', 'E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', 'E*E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E*E#') -> ('M', '*2*~3+~~3*~3', '*E*E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', 'E+E*E#') -> ('M', '2*~3+~~3*~3', 'E*E+E*E#') -> ('M', '2*~3+~~3*~3', '2*E+E*E#') -> ('M', '*~3+~~3*~3', '*E+E*E#') -> ('M', '~3+~~3*~3', 'E+E*E#') -> ('M', '~3+~~3*~3', '~E+E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E#') -> ('M', '*2*~3+~~3*~3', '*E#') -> ('M', '2*~3+~~3*~3', 'E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', 'E+E*E#') -> ('M', '~3+~~3*~3', '~E+E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E#') -> ('M', '*2*~3+~~3*~3', '*E#') -> ('M', '2*~3+~~3*~3', 'E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E*E#') -> ('M', '~3+~~3*~3', '~E*E#') -> ('M', '3+~~3*~3', 'E*E#') -> ('M', '3+~~3*~3', 'E+E*E#') -> ('M', '3+~~3*~3', '3+E*E#') -> ('M', '+~~3*~3', '+E*E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E#') -> ('M', '*2*~3+~~3*~3', '*E#') -> ('M', '2*~3+~~3*~3', 'E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', 'E*E#') -> ('M', '~~3*~3', '~E*E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E#') -> ('M', '*2*~3+~~3*~3', '*E#') -> ('M', '2*~3+~~3*~3', 'E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', 'E*E#') -> ('M', '~3*~3', '~E*E#') -> ('M', '3*~3', 'E*E#') -> ('M', '3*~3', '3*E#') -> ('M', '*~3', '*E#') -> ('M', '~3', 'E#') -> ('M', '~3', '~E#') -> ('M', '3', 'E#') -> ('M', '3', '3#') -> ('M', '', '#') -> ('F', '', '#') . 
Final state ('F', '', '#') Reached as follows: -> ('I', '3*2*~3+~~3*~3', '#') -> ('M', '3*2*~3+~~3*~3', 'E#') -> ('M', '3*2*~3+~~3*~3', 'E*E#') -> ('M', '3*2*~3+~~3*~3', '3*E#') -> ('M', '*2*~3+~~3*~3', '*E#') -> ('M', '2*~3+~~3*~3', 'E#') -> ('M', '2*~3+~~3*~3', 'E*E#') -> ('M', '2*~3+~~3*~3', '2*E#') -> ('M', '*~3+~~3*~3', '*E#') -> ('M', '~3+~~3*~3', 'E#') -> ('M', '~3+~~3*~3', 'E+E#') -> ('M', '~3+~~3*~3', '~E+E#') -> ('M', '3+~~3*~3', 'E+E#') -> ('M', '3+~~3*~3', '3+E#') -> ('M', '+~~3*~3', '+E#') -> ('M', '~~3*~3', 'E#') -> ('M', '~~3*~3', '~E#') -> ('M', '~3*~3', 'E#') -> ('M', '~3*~3', '~E#') -> ('M', '3*~3', 'E#')
Unlicense
notebooks/driver/Drive_PDA_Based_Parsing.ipynb
Thanhson89/JA
This notebook shows the outcome of the experiments I've conducted as well as the code used to read the 'log.txt' file in real time.
%matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-bright') import pandas as pd
_____no_output_____
BSD-2-Clause
Read log and experiment outcome.ipynb
flothesof/RiceCookerExperiments
Real-time plotting of log.txt Let's write a function that reads the current data and plots it:
def read_plot(): "Reads data and plots it." df = pd.read_csv('log.txt', parse_dates=['time']) df = df.set_index(df.pop('time')) df.temperature.plot.line(title='temperature in the rice cooker') df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature') plt.figure(figsize=(10, 6)) read_plot() plt.legend(loc='upper left') plt.grid()
_____no_output_____
BSD-2-Clause
Read log and experiment outcome.ipynb
flothesof/RiceCookerExperiments
First experiment

Timings of the experiment:
- Start at 12:20:30 (button on).
- End at 12:44:00 (button turns itself off).
df = pd.read_csv('log_20160327_v1.txt', parse_dates=['time']) df = df.set_index(df.pop('time')) df.temperature.plot.line(title='2016-03-27 rice cooking experiment 1') df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature') plt.figure(figsize=(10, 6)) df.temperature.plot.line(title='2016-03-27 rice cooking experiment 1') df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature') plt.ylabel('degrees Celsius') plt.legend(loc='lower right')
_____no_output_____
BSD-2-Clause
Read log and experiment outcome.ipynb
flothesof/RiceCookerExperiments
Second experiment

I've wrapped the probe in a thin plastic layer this time. I'll also let the temperature stabilize before running the experiment. Starting temperature: 20.6 degrees. I started the log when I pushed the button. The push button pops back at 18:58, marking the end of cooking; the cooker then switches to keep-warm mode.
df = pd.read_csv('log_20160327_v2.txt', parse_dates=['time']) df = df.set_index(df.pop('time')) df.temperature.plot.line(title='2016-03-27 rice cooking experiment 2') df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature') plt.figure(figsize=(10, 6)) df.temperature.plot.line(title='2016-03-27 rice cooking experiment 2') df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature') plt.xlim(1459189976.0, 1459191985.0) plt.ylim(15, 115) plt.ylabel('degrees Celsius') plt.legend(loc='lower right')
_____no_output_____
BSD-2-Clause
Read log and experiment outcome.ipynb
flothesof/RiceCookerExperiments
esBERTus: evaluation of the models' results

In this notebook, an evaluation of the results obtained by the two models will be performed. The idea here is not so much to measure a benchmarking metric on the models as to understand the qualitative differences between them.

Keyword extraction

In order to understand what the "hot topics" of the corpuses used to train the models are, a keyword extraction is performed. Although extracting keywords with a word-embeddings approach has been considered, TF-IDF has been chosen over any other approach to model the discussion topics of the different corpuses, due to its interpretability.

Cleaning the texts

For this, a spaCy pipeline is used to speed up the cleaning process.
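As a reminder of why TF-IDF is considered interpretable, here is a minimal, self-contained sketch of the scoring. The toy documents and the unsmoothed IDF formula are illustrative assumptions, not scikit-learn's smoothed variant; the sketch also shows the effect discussed later: a word that occurs in every document scores exactly zero.

```python
import math
from collections import Counter

def tfidf(docs):
    """Score each word in each document by term frequency times
    inverse document frequency (simple, unsmoothed variant)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents does each word appear?
    df = Counter(word for tokens in tokenized for word in set(tokens))
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({w: (tf[w] / len(tokens)) * math.log(n / df[w]) for w in tf})
    return scores

docs = ["the pandemic response", "the economic response", "the vaccine rollout"]
scores = tfidf(docs)
# "the" appears in every document, so its IDF is log(3/3) = 0
print(scores[0]["the"], scores[2]["vaccine"])
```

Every score is directly traceable to two counts, which is the interpretability argument; the downside is that corpus-wide terms are wiped out entirely.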
from spacy.language import Language import re @Language.component("clean_lemmatize") def clean_lemmatize(doc): text = doc.text text = re.sub(r'\w*\d\w*', r'', text) # remove words containing digits text = re.sub(r'[^a-z\s]', '', text) # remove anything that is not a letter or a space return nlp.make_doc(text) print('Done!') import spacy # Instantiate the pipeline, disable ner component for perfomance reasons nlp = spacy.load("en_core_web_sm", disable=['ner']) # Add custom text cleaning function nlp.add_pipe('clean_lemmatize', before="tok2vec") # Apply to EU data with open('../data/02_preprocessed/full_eu_text.txt') as f: eu_texts = f.readlines() nlp.max_length = max([len(text)+1 for text in eu_texts]) eu_texts = [' '.join([token.lemma_ for token in doc]) for doc in nlp.pipe(eu_texts, n_process=10)] # Get lemmas with open('../data/04_evaluation/full_eu_text_for_tfidf.txt', 'w+') as f: for text in eu_texts: f.write(text) f.write('\n') print('Done EU!') # Apply to US data with open('../data/02_preprocessed/full_us_text.txt') as f: us_texts = f.readlines() nlp.max_length = max([len(text)+1 for text in us_texts]) us_texts = [' '.join([token.lemma_ for token in doc]) for doc in nlp.pipe(us_texts, n_process=10)] # Get lemmas with open('../data/04_evaluation/full_us_text_for_tfidf.txt', 'w+') as f: for text in us_texts: f.write(text) f.write('\n') print('Done US!') print('Done!')
Done EU! Done US! Done!
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Keyword extraction

Due to the differences in lengths and numbers of texts, it is not possible to use a standard approach to keyword extraction. TF-IDF has been considered, but it takes away most of the very interesting keywords such as "pandemic" or "covid". This is the reason why a hybrid approach across both the European and US corpuses has been chosen.

The approach takes the top n words from one corpus that intersect with the top n words from the other corpus. In order to find the most relevant words, a simple count vector is used, which counts the frequency of the words. This keeps only the words that are really relevant in both cases, even though it is a relatively naive approach.
from sklearn.feature_extraction.text import CountVectorizer import numpy as np # Read the processed data with open('../data/04_evaluation/full_eu_text_for_tfidf.txt') as f: eu_texts = f.readlines() with open('../data/04_evaluation/full_us_text_for_tfidf.txt') as f: us_texts = f.readlines() # Join the texts together from nltk.corpus import stopwords stopwords = set(stopwords.words('english')) max_df = 0.9 max_features = 1000 cv_eu=CountVectorizer(max_df=max_df, stop_words=stopwords , max_features=max_features) word_count_vector=cv_eu.fit_transform(eu_texts) cv_us=CountVectorizer(max_df=max_df, stop_words=stopwords , max_features=max_features) word_count_vector=cv_us.fit_transform(us_texts) n_words = 200 keywords = [word for word in list(cv_eu.vocabulary_.keys())[:n_words] if word in list(cv_us.vocabulary_.keys())[:n_words]] keywords
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Measure the models performance on masked tokens Extract sentences where the keywords appear
keywords = ['coronavirus', 'covid', 'covid-19', 'virus', 'influenza', 'flu', 'pandemic',
            'epidemic', 'outbreak', 'crisis', 'emergency', 'vaccine', 'vaccinated', 'mask',
            'quarantine', 'symptoms', 'antibody', 'immunity', 'distance', 'isolation', 'test',
            'positive', 'negative', 'nurse', 'doctor', 'health', 'healthcare']

import spacy
from spacy.matcher import PhraseMatcher

with open('../data/02_preprocessed/full_eu_text.txt') as f:
    eu_texts = f.readlines()
with open('../data/02_preprocessed/full_us_text.txt') as f:
    us_texts = f.readlines()

nlp = spacy.load("en_core_web_sm", disable=['ner'])
texts = [item for sublist in [eu_texts, us_texts] for item in sublist]
nlp.max_length = max([len(text) for text in texts])

phrase_matcher = PhraseMatcher(nlp.vocab)
patterns = [nlp(text) for text in keywords]
phrase_matcher.add('KEYWORDS', None, *patterns)

docs = nlp.pipe(texts, n_process=12)
sentences = []
block_size = 350

# Truncate the output file, then parse the docs for sentences
open('../data/04_evaluation/sentences.txt', 'wb').close()
print('Starting keyword extraction')
for doc in docs:
    for sent in doc.sents:
        # Check if a keyword occurs in the sentence
        for match_id, start, end in phrase_matcher(nlp(sent.text)):
            if nlp.vocab.strings[match_id] in ["KEYWORDS"]:
                # Create sentences of no more than block_size tokens
                tokens = sent.text.split(' ')
                if len(tokens) <= block_size:
                    sentence = sent.text
                else:
                    sentence = " ".join(tokens[:block_size])
                with open('../data/04_evaluation/sentences.txt', 'ab') as f:
                    f.write(f'{sentence}\n'.encode('UTF-8'))

print(f"There are {len(open('../data/04_evaluation/sentences.txt', 'rb').readlines())} sentences containing keywords")
There are 68086 sentences containing keywords
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Measure the probability of outputing the real token in the sentence
# Define a custom function that feeds the three models an example and returns
# the probability each model assigns to the real (masked) token
def get_masked_token_probability(sentence: str, keywords: list, models_pipelines: list):
    # Find the word in the sentence to mask
    sentence = sentence.lower()
    keywords = [keyword.lower() for keyword in keywords]
    target = None
    for keyword in keywords:
        # Substitute only the first matched keyword
        if keyword in sentence:
            target = keyword
            masked_sentence = sentence.replace(keyword, '{}', 1)
            break
    if target:
        model_pipeline_results = []
        for model_pipeline in models_pipelines:
            # Format a fresh copy for each pipeline so the placeholder is
            # always filled with that pipeline's own mask token
            pipeline_input = masked_sentence.format(model_pipeline.tokenizer.mask_token)
            try:
                result = model_pipeline(pipeline_input, targets=target)
                model_pipeline_results.append(result[0]['score'])
            except Exception:
                model_pipeline_results.append(0)
        return target, model_pipeline_results

from transformers import pipeline, AutoModelForMaskedLM, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('../data/03_models/tokenizer/')

# The best found European model
model = AutoModelForMaskedLM.from_pretrained("../data/03_models/eu_bert_model")
eu_model_pipeline = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# The best found US model
model = AutoModelForMaskedLM.from_pretrained("../data/03_models/us_bert_model")
us_model_pipeline = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# The baseline model from which the training started
model_checkpoint = 'distilbert-base-uncased'
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
base_model_pipeline = pipeline("fill-mask", model=model, tokenizer=model_checkpoint)

results = []
print(f"There are {len(open('../data/04_evaluation/sentences.txt').readlines())} sentences to be evaluated")
for sequence in open('../data/04_evaluation/sentences.txt').readlines():
    results.append(get_masked_token_probability(sequence, keywords,
                                                [eu_model_pipeline, us_model_pipeline, base_model_pipeline]))

import pickle
pickle.dump(results, open('../data/04_evaluation/sentence_token_prediction.pickle', 'wb'))
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Evaluate the results
import pickle results = pickle.load(open('../data/04_evaluation/sentence_token_prediction.pickle', 'rb')) results[0:5]
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Frequencies of masked words in the pipeline
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt

# most_common also sorts the words by frequency
words = Counter([result[0] for result in results if result is not None]).most_common(len(keywords))
labels = [word[0] for word in words]
values = [word[1] for word in words]
indexes = np.arange(len(labels))

fig, ax = plt.subplots(figsize=(10, 5))
ax.set_xticks(range(len(words)))
plt.bar(indexes, values, width=.8, align="center", alpha=.8)
plt.xticks(indexes, labels, rotation=45)
plt.title('Frequencies of masked words in the pipeline')
plt.show()
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Average probability of all the masked keywords by model
n_results = len([result for result in results if result!=None]) eu_results = sum([(result[1][0]) for result in results if result!=None]) / n_results us_results = sum([(result[1][1]) for result in results if result!=None]) / n_results base_results = sum([(result[1][2]) for result in results if result!=None]) / n_results labels = ['EU model', 'US model', 'Base model'] values = [eu_results, us_results, base_results] indexes = np.arange(len(labels)) fix, ax = plt.subplots(figsize=(10,5)) ax.set_xticks(range(len(words))) plt.bar(indexes, values, width=.6, align="center",alpha=.8) plt.xticks(indexes, labels, rotation=45) plt.title('Average probability of all the masked keywords by model') plt.show()
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Get the first predicted token in each sentence after masking the keyword
def get_first_predicted_masked_token(sentence:str, eu_pipeline, us_pipeline, base_pipeline): sentence = sentence.lower() model_pipeline_results = [] eu_model_pipeline_results = eu_pipeline(sentence.format(eu_pipeline.tokenizer.mask_token), top_k=1) us_model_pipeline_results = us_pipeline(sentence.format(us_pipeline.tokenizer.mask_token), top_k=1) base_model_pipeline_results = base_pipeline(sentence.format(base_pipeline.tokenizer.mask_token), top_k=1) return (eu_model_pipeline_results[0]['token_str'].replace(' ', ''), us_model_pipeline_results[0]['token_str'].replace(' ', ''), base_model_pipeline_results[0]['token_str'].replace(' ', '') ) # Create a function that identifies the first keyword in the sentences, masks it and feeds the it to the prediction function results = [] for sequence in open('../data/04_evaluation/sentences.txt').readlines(): target = None for keyword in keywords: if keyword in sequence: target = keyword break if target: masked_sentence = sequence.replace(target, '{}', 1) try: predictions = get_first_predicted_masked_token(masked_sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) results.append({'masked_token': target, 'eu_prediction': predictions[0], 'us_prediction': predictions[1], 'base_prediction': predictions[2]}) except: pass import pickle pickle.dump(results, open('../data/04_evaluation/sentence_first_predicted_tokens.pickle', 'wb'))
Token indices sequence length is longer than the specified maximum sequence length for this model (594 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (514 > 512). Running this sequence through the model will result in indexing errors
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Evaluate the results
import pickle results = pickle.load(open('../data/04_evaluation/sentence_first_predicted_tokens.pickle', 'rb')) print(len(results)) # Group the results by masked token from itertools import groupby from operator import itemgetter from collections import Counter import numpy as np import matplotlib.pyplot as plt n_words = 10 results = sorted(results, key=itemgetter('masked_token')) for keyword, v in groupby(results, key=lambda x: x['masked_token']): token_results = list(v) fig, ax = plt.subplots(1,3, figsize=(25,5)) for idx, (key, name) in enumerate(zip(['eu_prediction', 'us_prediction', 'base_prediction'], ['EU', 'US', 'Base'])): words = Counter([item[key] for item in token_results]).most_common(n_words) labels, values = zip(*words) ax[idx].barh(labels, values, align="center",alpha=.8) ax[idx].set_title(f'Predicted tokens by {name} model for {keyword}') plt.show()
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
Qualitative evaluation of masked token prediction

The objective of this section is not to compare the scores obtained by the models, but to compare their qualitative outputs. This means the comparison is done manually, by inputting phrases that contain words related to the COVID-19 pandemic and comparing the models' outputs with one another, which enables a discussion of these results.

Feeding selected phrases belonging to the European and United States institutions' websites
def get_masked_token(sentence:str, eu_pipeline, us_pipeline, base_pipeline, n_results=1): sentence = sentence.lower() model_pipeline_results = [] eu_prediction = eu_pipeline(sentence.format(eu_pipeline.tokenizer.mask_token), top_k =n_results)[0] us_prediction = us_pipeline(sentence.format(us_pipeline.tokenizer.mask_token), top_k =n_results)[0] base_prediction = base_pipeline(sentence.format(base_pipeline.tokenizer.mask_token), top_k =n_results)[0] token = eu_prediction['token_str'].replace(' ', '') print(f"EUROPEAN MODEL -------> {token}\n\t{eu_prediction['sequence'].replace(token, token.upper())}") token = us_prediction['token_str'].replace(' ', '') print(f"UNITED STATES MODEL -------> {token}\n\t{us_prediction['sequence'].replace(token, token.upper())}") token = base_prediction['token_str'].replace(' ', '') print(f"BASE MODEL -------> {token}\n\t{base_prediction['sequence'].replace(token, token.upper())}") from transformers import pipeline, AutoModelForMaskedLM, DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained('../data/03_models/tokenizer/') # The best found European model model=AutoModelForMaskedLM.from_pretrained("../data/03_models/eu_bert_model") eu_model_pipeline = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) # The best found US model model=AutoModelForMaskedLM.from_pretrained("../data/03_models/us_bert_model") us_model_pipeline = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) model_checkpoint = 'distilbert-base-uncased' # The baseline model from which the trainin model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) base_model_pipeline = pipeline( "fill-mask", model=model, tokenizer=model_checkpoint )
_____no_output_____
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
European institutions sentences
# Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en # Masked token: coronavirus sentence = """The European Commission is coordinating a common European response to the {} outbreak. We are taking resolute action to reinforce our public health sectors and mitigate the socio-economic impact in the European Union. We are mobilising all means at our disposal to help our Member States coordinate their national responses and are providing objective information about the spread of the virus and effective efforts to contain it.""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en # Masked token: vaccine sentence = """A safe and effective {} is our best chance to beat coronavirus and return to our normal lives""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en # Masked token: medicines sentence = """The European Commission is complementing the EU Vaccines Strategy with a strategy on COVID-19 therapeutics to support the development and availability of {}""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://ec.europa.eu/info/strategy/recovery-plan-europe_en # Masked token: recovery sentence = """The EU’s long-term budget, coupled with NextGenerationEU, the temporary instrument designed to boost the {}, will be the largest stimulus package ever financed in Europe. A total of €1.8 trillion will help rebuild a post-COVID-19 Europe. It will be a greener, more digital and more resilient Europe.""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
EUROPEAN MODEL -------> economy the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe. UNITED STATES MODEL -------> economy the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe. BASE MODEL -------> economy the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe.
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
US Government sentences
# Source https://www.usa.gov/covid-unemployment-benefits # Masked token: provide sentence = 'The federal government has allowed states to change their laws to {} COVID-19 unemployment benefits for people whose jobs have been affected by the coronavirus pandemic.' get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://www.usa.gov/covid-passports-and-travel # Masked token: mask-wearing sentence = """Many museums, aquariums, and zoos have restricted access or are closed during the pandemic. And many recreational areas including National Parks have COVID-19 restrictions and {} rules. Check with your destination for the latest information.""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://www.usa.gov/covid-stimulus-checks # Masked token: people sentence = """The American Rescue Plan Act of 2021 provides $1,400 Economic Impact Payments for {} who are eligible. You do not need to do anything to receive your payment. It will arrive by direct deposit to your bank account, or by mail in the form of a paper check or debit card.""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://www.usa.gov/covid-scams # Masked token: scammers sentence = """During the COVID-19 pandemic, {} may try to take advantage of you. They might get in touch by phone, email, postal mail, text, or social media. Protect your money and your identity. Don't share personal information like your bank account number, Social Security number, or date of birth. Learn how to recognize and report a COVID vaccine scam and other types of coronavirus scams. 
""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline) # Source https://www.acf.hhs.gov/coronavirus # Masked token: situation sentence = """With the COVID-19 {} continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time.""" get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
EUROPEAN MODEL -------> crisis with the covid - 19 CRISIS continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time. UNITED STATES MODEL -------> pandemic with the covid - 19 PANDEMIC continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time. BASE MODEL -------> program with the covid - 19 PROGRAM continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time.
Apache-2.0
04_Evaluation/04_evaluation.ipynb
danieldiezmallo/euBERTus
**Introduction to Python**

Prof. Dr. Jan Kirenz, Hochschule der Medien Stuttgart

Table of Contents
1. Import CSV
2. Save CSV
# import module import pandas as pd
_____no_output_____
MIT
1_pandas_import_save_csv.ipynb
kirenz/forst_steps_in_python
* **Pandas** provides a DataFrame object along with a powerful set of methods to manipulate, filter, group, and transform data. A **DataFrame** represents a rectangular table of data and contains an ordered collection of columns, each of which can be a different value type (numeric, string, boolean, etc.). The DataFrame has both a row and a column index.
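A minimal illustration of that description; the column names and values below are invented for the example:

```python
import pandas as pd

# An ordered collection of columns, each with its own value type,
# plus a row index and a column index
df = pd.DataFrame({
    "name": ["Ann", "Ben", "Cem"],       # string column
    "wage": [3200.0, 2800.5, 3900.0],    # numeric column
    "fulltime": [True, False, True],     # boolean column
})

print(df.shape)           # (3, 3): three rows, three columns
print(list(df.columns))   # ['name', 'wage', 'fulltime']
print(df.loc[0, "name"])  # row index 0, column 'name' -> 'Ann'
```

Each column keeps its own dtype, and rows are addressed through the index (here the default 0, 1, 2), which is what the selection and filtering methods build on.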
# Import data from GitHub df = pd.read_csv("https://raw.githubusercontent.com/kirenz/datasets/master/wage.csv")
_____no_output_____
MIT
1_pandas_import_save_csv.ipynb
kirenz/forst_steps_in_python
If you would like to import a local CSV file from your machine, you need to change the path accordingly:
# if you have a Mac, use this code df = pd.read_csv('/Users/.../wage.csv') # if you have Windows, use this code df = pd.read_csv('C://...//wage.csv')
_____no_output_____
MIT
1_pandas_import_save_csv.ipynb
kirenz/forst_steps_in_python
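A portable alternative to the OS-specific paths above is the standard-library `pathlib`; the `data` folder and filename below are placeholders for wherever your CSV actually lives:

```python
from pathlib import Path

# Build the path in an OS-independent way (works on macOS and Windows)
csv_path = Path.home() / "data" / "wage.csv"
print(csv_path.name)   # wage.csv

# pd.read_csv(csv_path) would then accept this path on either platform
```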
Save CSV Save df as a new CSV file
# if you have a Mac, use this code df.to_csv('/Users/.../wage_new.csv') # if you have Windows, use this code df.to_csv('C://...//wage_new.csv')
_____no_output_____
MIT
1_pandas_import_save_csv.ipynb
kirenz/forst_steps_in_python
Objects and Classes - A student, a desk, and a circle are all objects - An object is an instance of a class; you can create many objects, and the process of creating an instance of a class is called instantiation - In Python an object is an instance, and an instance is an object Defining a class class ClassName: do something - class introduces a class just as def introduces a function - Class names should preferably be written in CamelCase - In Python 2 a class had to inherit from the base class object; in Python 3 this is the default and may be omitted - If you think of plain code as skin and functions as underwear, then a class is the coat on top
# A class must be initialized; self refers to the instance being initialized. # The first variable of every function in a class is not an ordinary parameter but a marker for the instance. # If a parameter is needed repeatedly inside a class, store it as a shared attribute. class Joker: def __init__(self,num1,num2): print('initialized') # shared attributes self.num1 = num1 self.num2 = num2 print(self.num1,self.num2) def SUM(self,name): print(name) return self.num1 + self.num2 def cheng(self): return self.num1 * self.num2 huwang = Joker(num1=1,num2=2) # () runs the initializer directly huwang.SUM(name='JJJ') huwang.cheng()
_____no_output_____
Apache-2.0
7.23.ipynb
wsq7777/Python
Defining a simple class without __init__ class ClassName: joker = "Home" def func(): print('Worker') - use this form sparingly Defining a standard class - __init__ stands for initialization and can set up anything - calling the class requires (), which can be read as "start initializing" - attributes set in the initializer are shared by the other functions of the class![](../Photo/85.png) - The first difference between Circle and className_ is the __init__ function - .... The second difference: every function in the class has the "parameter" self What is self? - self is the parameter that points to the object itself - self is only a naming convention; it could be changed, but self is the accepted convention and aids understanding - through self you can access the members defined in the class Using the class Circle Passing arguments to a class - class ClassName: def __init__(self, para1,para2...): self.para1 = para1 self.para2 = para2 EP: - A: Define a class with two features: - 1. generate 3 random numbers and return the maximum - 2. generate 3 random numbers and return the minimum - B: Define a class (nested use of functions inside a class) - 1. the first function reads in a number - 2. the second function squares the number obtained by the first function - 3. the third function subtracts the originally entered number from the squared result and prints it
class Joker2: """ Implement Login Class. """ def __init__(self): """ Initialize the class Arguments: --------- name: xxx None. Returns: -------- None. """ self.account = '123' self.password = '123' def Account(self): """ Input Account value Arguments: --------- None. Returns: -------- None. """ self.acc = input('Enter account:>>') def Password(self): """ Input Password value Arguments: --------- None. Returns: -------- None. """ self.passwor = input('Enter password:>>') def Check(self): """ Check account and password Note: ---- we need an "and" connective: if both account and password are right, the login is OK. else: run the Verify func. """ if self.acc == self.account and self.passwor == self.password: print('Success') else: # running Verify ! self.Verify() def Verify(self): """ Verify .... """ Verify_Var = 123 print('Verification code is:',Verify_Var) while 1: User_Verify = eval(input('Enter verification code:>>')) if User_Verify == Verify_Var: print('Failed') break def Start(self): """ Start the login flow. """ self.Account() self.Password() self.Check() # create an instance of the class a = Joker2() a.Start()
Enter account:>>123 Enter password:>>1 Verification code is: 123 Enter verification code:>>1 Enter verification code:>>1 Enter verification code:>>123 Failed
Apache-2.0
7.23.ipynb
wsq7777/Python
Class inheritance - single inheritance - multiple inheritance - inheritance syntax> class SonClass(FatherClass): def __init__(self): FatherClass.__init__(self)
a = 100 a = 1000 a
_____no_output_____
Apache-2.0
7.23.ipynb
wsq7777/Python
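The single-inheritance pattern shown above (calling FatherClass.__init__(self) inside the child class) can be turned into a small runnable example; the class and attribute names here are purely illustrative:

```python
class Father:
    def __init__(self):
        self.surname = "Smith"

class Son(Father):
    def __init__(self):
        # Explicitly run the parent's initializer, as in the pattern above
        Father.__init__(self)
        self.first_name = "Tom"

s = Son()
print(s.surname, s.first_name)   # Smith Tom
```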
Private variables cannot be inherited and cannot be called from outside the class, but they can be used internally.
class A: def __init__(self): self.__a = 'a' def a_(self): print('aa') print(self.__a) def b(): a() def a(): print('hahah') b()
hahah
Apache-2.0
7.23.ipynb
wsq7777/Python
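A minimal sketch of the behavior described above: a double-underscore attribute is usable inside the class, raises AttributeError from outside, and is actually stored under a name-mangled form:

```python
class Secretive:
    def __init__(self):
        self.__hidden = "secret"      # stored as _Secretive__hidden

    def reveal(self):
        return self.__hidden          # fine inside the class

obj = Secretive()
print(obj.reveal())                   # secret

try:
    obj.__hidden                      # fails outside the class
except AttributeError:
    print("not directly accessible")

print(obj._Secretive__hidden)         # secret (the mangled name)
```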
Private data fields (private variables or private functions) - In Python a double-underscore prefix on a variable or function name marks it private: \__Joker, def \__Joker(): - Private data fields are not inherited - Private data fields can be force-inherited via \__dir__() ![](../Photo/87.png) EP:![](../Photo/88.png)![](../Photo/89.png)![](../Photo/90.png) Other class topics - Encapsulation - grouping related functionality together, which makes future maintenance easier - Inheritance (covered above) - Polymorphism - including decorators, which will be covered later with advanced classes - Benefit of decorators: when functions in many classes need the same functionality, a decorator makes this much more convenient - Decorators have a fixed syntax - They include plain decorators and decorators with arguments Homework The UML class diagrams need not be drawn; UML is essentially a mind map - 1![](../Photo/91.png)
class Rectangle(): def __init__(self,width,height): self.width=width self.height=height def getArea(self,width,height): self.area=width*height print(self.area) def getPerimeter(self,width,height): self.perimeter=(width+height)*2 print(self.perimeter) if __name__=='__main__': r=Rectangle(4,40) r.getArea(4,40) r=Rectangle(4,40) r.getPerimeter(4,40) r=Rectangle(3.5,35.7) r.getArea(3.5,35.7) r=Rectangle(3.5,35.7) r.getPerimeter(3.5,35.7)
160 88 124.95000000000002 78.4
Apache-2.0
7.23.ipynb
wsq7777/Python
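The decorator idea mentioned above (one piece of shared functionality applied to many functions with a fixed syntax) can be sketched like this; `log_call` and `add` are names invented for the example:

```python
def log_call(func):
    # The wrapper adds the shared behaviour around every decorated function
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@log_call
def add(a, b):
    return a + b

print(add(1, 2))   # prints "calling add", then 3
```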
- 2![](../Photo/92.png)
class Account(): def __init__(self): self.id = 0 self.__balance = 100 self.__annuallnterestRate=0 def set_(self,id,balance,annuallnterestRate): self.id = id self.__balance = balance self.__annuallnterestRate=annuallnterestRate def getid(self): return self.id def getbalance(self): return self.__balance def get__annuallnterestRate(self): return self.__annuallnterestRate def getMonthlyInterestRate(self): return self.__annuallnterestRate/12 def getMonthlyInterest(self): return self.__balance*(self.__annuallnterestRate/12) def withdraw(self,number): self.__balance=self.__balance-number def deposit(self,number): self.__balance=self.__balance+number if __name__ == '__main__': acc=Account() id=int(input('Enter account ID:')) balance=float(input('Enter account balance:')) ann=float(input('Annual interest rate:')) acc.set_(id,balance,ann/100) qu=float(input('Withdrawal amount:')) acc.withdraw(qu) cun=float(input('Deposit amount:')) acc.deposit(cun) print('Account ID:%d Remaining balance:%.2f Monthly rate:%.3f Monthly interest:%.2f '%(acc.getid(),acc.getbalance(),acc.getMonthlyInterestRate()*100,acc.getMonthlyInterest()))
Enter account ID:1122 Enter account balance:20000 Annual interest rate:4.5 Withdrawal amount:2500 Deposit amount:3000 Account ID:1122 Remaining balance:20500.00 Monthly rate:0.375 Monthly interest:76.88
Apache-2.0
7.23.ipynb
wsq7777/Python
- 3![](../Photo/93.png)
class Fan(): def __init__(self): self.slow=1 self.medium=2 self.fast=3 self.__speed=1 self.__on=False self.__radius=5 self.__color='blue' def set_(self,speed,on,radius,color): self.__speed=speed self.__on=on self.__radius=radius self.__color=color def getspeed(self): return self.__speed def geton(self): return self.__on def getradius(self): return self.__radius def getcolor(self): return self.__color if __name__ == '__main__': fan=Fan() speed=int(input('Fan speed (1:slow,2:medium,3:fast):')) radius=float(input('Fan radius:')) color=input('Fan color:') on=input('Fan on (True or False):') fan.set_(speed,on,radius,color) fan2=Fan() speed=int(input('Fan speed (1:slow,2:medium,3:fast):')) radius=float(input('Fan radius:')) color=input('Fan color:') on=input('Fan on (True or False):') fan2.set_(speed,on,radius,color) print('Fan 1 speed:',fan.getspeed(),'color:',fan.getcolor(),'radius:',fan.getradius(),'on:',fan.geton()) print('Fan 2 speed:',fan2.getspeed(),'color:',fan2.getcolor(),'radius:',fan2.getradius(),'on:',fan2.geton())
Fan speed (1:slow,2:medium,3:fast):3 Fan radius:10 Fan color:yellow Fan on (True or False):True Fan speed (1:slow,2:medium,3:fast):2 Fan radius:5 Fan color:blue Fan on (True or False):Flase Fan 1 speed: 3 color: yellow radius: 10.0 on: True Fan 2 speed: 2 color: blue radius: 5.0 on: Flase
Apache-2.0
7.23.ipynb
wsq7777/Python
- 4![](../Photo/94.png)![](../Photo/95.png)
import math class RegularPolygon: def __init__(self,n,side,x,y): self.n=n self.side=side self.x=x self.y=y def getArea(self): return (self.n*self.side**2)/4*math.tan(3.14/self.n) def getPerimeter(self): return self.n*self.side if __name__ == "__main__": n,side,x,y=map(float,input('n,side,x,y:>>').split(',')) re=RegularPolygon(n,side,x,y) print(n,side,x,y,re.getArea(),re.getPerimeter())
n,side,x,y:>>10,4,5.6,7.8 10.0 4.0 5.6 7.8 12.989745035699281 40.0
Apache-2.0
7.23.ipynb
wsq7777/Python
- 5![](../Photo/96.png)
class LinearEquation(object): a = 0 b = 0 c = 0 d = 0 e = 0 f = 0 def __init__(self,a,b,c,d,e,f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f def getA(self): return self.a def getB(self): return self.b def getC(self): return self.c def getD(self): return self.d def getE(self): return self.e def getF(self): return self.f def isSolvable(self): if self.a*self.d-self.b*self.c !=0: return True else: return False def getX(self): return (self.e*self.d-self.b*self.f)/(self.a*self.d-self.b*self.c) def getY(self): return (self.a*self.f-self.e*self.c)/(self.a*self.d-self.b*self.c) a,b,c,d,e,f = map(int,input('Enter the values of a,b,c,d,e,f').split(',')) linearEquation=LinearEquation(a,b,c,d,e,f) if linearEquation.isSolvable() == True: print(linearEquation.getX()) print(linearEquation.getY()) else: print('This system of equations has no solution')
Enter the values of a,b,c,d,e,f1,2,3,4,5,6 -4.0 4.5
Apache-2.0
7.23.ipynb
wsq7777/Python
- 6![](../Photo/97.png)
class LinearEquation: def zuobiao(self): import math x1,y1,x2,y2=map(float,input().split(',')) x3,y3,x4,y4=map(float,input().split(',')) u1=(x4-x3)*(y1-y3)-(x1-x3)*(y4-y3) v1=(x4-x3)*(y2-y3)-(x2-x3)*(y4-y3) u=math.fabs(u1) v=math.fabs(v1) x5=(x1*v+x2*u)/(u+v) y5=(y1*v+y2*u)/(u+v) print(x5,y5) re=LinearEquation() re.zuobiao()
2.0,2.0,0,0 0,2.0,2.0,0 1.0 1.0
Apache-2.0
7.23.ipynb
wsq7777/Python
- 7![](../Photo/98.png)
class LinearEquation(object): a = 0 b = 0 c = 0 d = 0 e = 0 f = 0 def __init__(self,a,b,c,d,e,f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f def getA(self): return self.a def getB(self): return self.b def getC(self): return self.c def getD(self): return self.d def getE(self): return self.e def getF(self): return self.f def isSolvable(self): if self.a*self.d-self.b*self.c !=0: return True else: return False def getX(self): return (self.e*self.d-self.b*self.f)/(self.a*self.d-self.b*self.c) def getY(self): return (self.a*self.f-self.e*self.c)/(self.a*self.d-self.b*self.c) a,b,c,d,e,f = map(int,input('Enter the values of a,b,c,d,e,f').split(',')) linearEquation=LinearEquation(a,b,c,d,e,f) if linearEquation.isSolvable() == True: print(linearEquation.getX()) print(linearEquation.getY()) else: print('This system of equations has no solution')
Enter the values of a,b,c,d,e,f4,8,9,3,5,6 0.55 0.35
Apache-2.0
7.23.ipynb
wsq7777/Python
Transfer Learning Template
%load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper from steves_utils.iterable_aggregator import Iterable_Aggregator from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig from steves_utils.torch_sequential_builder import build_sequential from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path) from steves_utils.PTN.utils import independent_accuracy_assesment from torch.utils.data import DataLoader from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory from steves_utils.ptn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.transforms import get_chained_transform
_____no_output_____
MIT
experiments/tl_1v2/wisig-oracle.run1.framed/trials/28/trial.ipynb
stevester94/csc500-notebooks
Allowed Parameters These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "n_shot", "n_query", "n_way", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_net", "datasets", "torch_default_dtype", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "x_shape", } from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) from steves_utils.ORACLE.utils_v2 import ( ALL_DISTANCES_FEET_NARROWED, ALL_RUNS, ALL_SERIAL_NUMBERS, ) standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["n_way"] = 8 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 50 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "source_loss" standalone_parameters["datasets"] = [ { "labels": ALL_SERIAL_NUMBERS, "domains": ALL_DISTANCES_FEET_NARROWED, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"), "source_or_target_dataset": "source", "x_transforms": ["unit_mag", "minus_two"], "episode_transforms": [], "domain_prefix": "ORACLE_" }, { "labels": ALL_NODES, "domains": ALL_DAYS, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), "source_or_target_dataset": "target", "x_transforms": ["unit_power", "times_zero"], "episode_transforms": [], "domain_prefix": "CORES_" } ] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, 
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # Parameters parameters = { "experiment_name": "tl_1v2:wisig-oracle.run1.framed", "device": "cuda", "lr": 0.0001, "n_shot": 3, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": 
"BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "n_way": 16, "datasets": [ { "labels": [ "1-10", "1-12", "1-14", "1-16", "1-18", "1-19", "1-8", "10-11", "10-17", "10-4", "10-7", "11-1", "11-10", "11-19", "11-20", "11-4", "11-7", "12-19", "12-20", "12-7", "13-14", "13-18", "13-19", "13-20", "13-3", "13-7", "14-10", "14-11", "14-12", "14-13", "14-14", "14-19", "14-20", "14-7", "14-8", "14-9", "15-1", "15-19", "15-6", "16-1", "16-16", "16-19", "16-20", "17-10", "17-11", "18-1", "18-10", "18-11", "18-12", "18-13", "18-14", "18-15", "18-16", "18-17", "18-19", "18-2", "18-20", "18-4", "18-5", "18-7", "18-8", "18-9", "19-1", "19-10", "19-11", "19-12", "19-13", "19-14", "19-15", "19-19", "19-2", "19-20", "19-3", "19-4", "19-6", "19-7", "19-8", "19-9", "2-1", "2-13", "2-15", "2-3", "2-4", "2-5", "2-6", "2-7", "2-8", "20-1", "20-12", "20-14", "20-15", "20-16", "20-18", "20-19", "20-20", "20-3", "20-4", "20-5", "20-7", "20-8", "3-1", "3-13", "3-18", "3-2", "3-8", "4-1", "4-10", "4-11", "5-1", "5-5", "6-1", "6-15", "6-6", "7-10", "7-11", "7-12", "7-13", "7-14", "7-7", "7-8", "7-9", "8-1", "8-13", "8-14", "8-18", "8-20", "8-3", "8-8", "9-1", "9-7", ], "domains": [1, 2, 3, 4], "num_examples_per_domain_per_label": -1, "pickle_path": "/root/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl", "source_or_target_dataset": "target", "x_transforms": ["unit_mag"], "episode_transforms": [], "domain_prefix": "Wisig_", }, { "labels": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "domains": [32, 38, 8, 44, 14, 50, 20, 26], "num_examples_per_domain_per_label": 2000, "pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl", "source_or_target_dataset": "source", 
"x_transforms": ["unit_mag"], "episode_transforms": [], "domain_prefix": "ORACLE.run1", }, ], "dataset_seed": 500, "seed": 500, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) if "x_shape" not in p: p.x_shape = [2,256] # Default to this if we dont supply x_shape supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() p.domains_source = [] p.domains_target = [] train_original_source = [] val_original_source = [] test_original_source = [] train_original_target = [] val_original_target = [] test_original_target = [] # global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag # global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag 
def add_dataset( labels, domains, pickle_path, x_transforms, episode_transforms, domain_prefix, num_examples_per_domain_per_label, source_or_target_dataset:str, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), ): if x_transforms == []: x_transform = None else: x_transform = get_chained_transform(x_transforms) if episode_transforms == []: episode_transform = None else: raise Exception("episode_transforms not implemented") episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1]) eaf = Episodic_Accessor_Factory( labels=labels, domains=domains, num_examples_per_domain_per_label=num_examples_per_domain_per_label, iterator_seed=iterator_seed, dataset_seed=dataset_seed, n_shot=n_shot, n_way=n_way, n_query=n_query, train_val_test_k_factors=train_val_test_k_factors, pickle_path=pickle_path, x_transform_func=x_transform, ) train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test() train = Lazy_Iterable_Wrapper(train, episode_transform) val = Lazy_Iterable_Wrapper(val, episode_transform) test = Lazy_Iterable_Wrapper(test, episode_transform) if source_or_target_dataset=="source": train_original_source.append(train) val_original_source.append(val) test_original_source.append(test) p.domains_source.extend( [domain_prefix + str(u) for u in domains] ) elif source_or_target_dataset=="target": train_original_target.append(train) val_original_target.append(val) test_original_target.append(test) p.domains_target.extend( [domain_prefix + str(u) for u in domains] ) else: raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}") for ds in p.datasets: add_dataset(**ds) # from steves_utils.CORES.utils import ( # ALL_NODES, # ALL_NODES_MINIMUM_1000_EXAMPLES, # ALL_DAYS # ) # add_dataset( # labels=ALL_NODES, # domains = ALL_DAYS, # num_examples_per_domain_per_label=100, # 
pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"cores_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle1_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle2_{u}" # ) # add_dataset( # labels=list(range(19)), # domains = [0,1,2], # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"met_{u}" # ) # # from steves_utils.wisig.utils import ( # # ALL_NODES_MINIMUM_100_EXAMPLES, # # ALL_NODES_MINIMUM_500_EXAMPLES, # # ALL_NODES_MINIMUM_1000_EXAMPLES, # # ALL_DAYS # # ) # import steves_utils.wisig.utils as wisig # add_dataset( # labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES, # domains = wisig.ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # 
x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"wisig_{u}" # ) ################################### # Build the dataset ################################### train_original_source = Iterable_Aggregator(train_original_source, p.seed) val_original_source = Iterable_Aggregator(val_original_source, p.seed) test_original_source = Iterable_Aggregator(test_original_source, p.seed) train_original_target = Iterable_Aggregator(train_original_target, p.seed) val_original_target = Iterable_Aggregator(val_original_target, p.seed) test_original_target = Iterable_Aggregator(test_original_target, p.seed) # For CNN We only use X and Y. And we only train on the source. # Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) from steves_utils.transforms import get_average_magnitude, get_average_power print(set([u for u,_ in 
val_original_source])) print(set([u for u,_ in val_original_target])) s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source)) print(s_x) # for ds in [ # train_processed_source, # val_processed_source, # test_processed_source, # train_processed_target, # val_processed_target, # test_processed_target # ]: # for s_x, s_y, q_x, q_y, _ in ds: # for X in (s_x, q_x): # for x in X: # assert np.isclose(get_average_magnitude(x.numpy()), 1.0) # assert np.isclose(get_average_power(x.numpy()), 1.0) ################################### # Build the model ################################### # easfsl only wants a tuple for the shape model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) 
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! # _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", 
experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment)
_____no_output_____
MIT
experiments/tl_1v2/wisig-oracle.run1.framed/trials/28/trial.ipynb
stevester94/csc500-notebooks
Week 4 T-testing and Inferential Statistics Most people turn to IBM SPSS for t-testing, but this programme is very expensive, very old, and not really necessary if you have access to Python tools. It is very focused on point-and-click and is probably more useful to people without a programming background. Libraries
import matplotlib.pyplot as plt import numpy as np import seaborn as sns import pandas as pd import scipy.stats as ss import statsmodels.stats.weightstats as sm_ttest
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
Reading * [Independent t-test using SPSS Statistics on laerd.com](https://statistics.laerd.com/spss-tutorials/independent-t-test-using-spss-statistics.php)* [ScipyStats documentation on ttest_ind](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html)* [StatsModels documentation on ttest_ind](https://www.statsmodels.org/devel/generated/statsmodels.stats.weightstats.ttest_ind.html)* [StatTrek.com, Hypothesis Test: The Difference in Means](https://stattrek.com/hypothesis-test/difference-in-means.aspx)* [Python for Data Science, Independent T-Test](https://pythonfordatascience.org/independent-t-test-python/)* [Dependent t-test using SPSS Statistics on laerd.com](https://statistics.laerd.com/spss-tutorials/dependent-t-test-using-spss-statistics.php)* [StackExchange, When conducting a t-test why would one prefer to assume (or test for) equal variances..?](https://stats.stackexchange.com/questions/305/when-conducting-a-t-test-why-would-one-prefer-to-assume-or-test-for-equal-vari) T-testing **Example:** If I take a sample of males and females from the population and calculate their heights, a question I might ask is: is the mean height of males in the population equal to the mean height of females in the population? T-testing is related to hypothesis testing. Scipy Stats
#Generating random data for the heights of 30 males in my sample m = np.random.normal(1.8, 0.1, 30) #Generating random data for the heights of 30 females in my sample f = np.random.normal(1.6, 0.1, 30) ss.stats.ttest_ind(m, f)
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
The null hypothesis (H0) claims that the average male height in the population is equal to the average female height in the population. Using my sample, I can infer whether H0 should be accepted or rejected. Based on my very small p-value, we can reject the null hypothesis. The p-value is the probability of drawing samples that differ at least this much if the two populations actually had the same mean. We therefore accept our alternative hypothesis (H1), which claims that the average male height is different from the average female height in the population. This is not surprising, as I generated the random sample data with a larger mean for male heights.
# Compare the two sample means
print(np.mean(m))
print(np.mean(f))
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
Statsmodels
from statsmodels.stats import weightstats as sm_ttest

sm_ttest.ttest_ind(m, f)
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
Graphical Analysis
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Seaborn distribution plots to compare the male and female samples
plt.figure()
sns.distplot(m, label='male')    # note: distplot is deprecated in newer seaborn versions
sns.distplot(f, label='female')
plt.legend();

df = pd.DataFrame({'male': m, 'female': f})
df
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
It's typically not a good idea to list the values side by side in columns. It implies a pairing between the rows that doesn't exist, and it breaks down if the male and female sample sizes differ.
# Build a long-format DataFrame: one row per observation
a = ['male'] * 30
b = ['female'] * 30
gender = a + b

# m and f are numpy arrays, so use concatenate rather than +
height = np.concatenate([m, f])

df = pd.DataFrame({'Gender': gender, 'Height': height})
df

# Select just the male or female heights
df[df['Gender'] == 'male']['Height']
df[df['Gender'] == 'female']['Height']

sns.catplot(x='Gender', y='Height', jitter=False, data=df);
sns.catplot(x='Gender', y='Height', kind='box', data=df);
_____no_output_____
Apache-2.0
4_Ttests.ipynb
MarionMcG/machine_learning_statistics
Key phrase prediction model
import numpy as np
import pandas as pd

pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 100)

import joblib

import warnings
warnings.filterwarnings('ignore')

%run ThePropertyPhrases.py
ThePropertyPhrasesGenerator
_____no_output_____
MIT
src/05_simple_demonstration.ipynb
cherninkiy/made-ml-hw4
Source data
train = pd.read_csv('train875.csv')
train.head(2)

reviews = pd.read_csv('reviews875.csv')
reviews.head(2)
_____no_output_____
MIT
src/05_simple_demonstration.ipynb
cherninkiy/made-ml-hw4
Helper functions
def get_from_train_by_index(i):
    return train[train.index == i].to_dict(orient='records')[0]

def get_from_train_by_id(idx):
    return train[train.id == idx].to_dict(orient='records')[0]

def get_reviews_by_index(i):
    idx = train[train.index == i].id.values[0]
    return reviews.loc[reviews.listing_id == idx, :]

def get_reviews_by_id(idx):
    return reviews.loc[reviews.listing_id == idx, :]
_____no_output_____
MIT
src/05_simple_demonstration.ipynb
cherninkiy/made-ml-hw4
Testing the model
phrases_generator = ThePropertyPhrasesGenerator()

for rec_index in [10, 16, 18]:
    d = get_from_train_by_index(rec_index)

    phrases = phrases_generator.generate_key_phrases(d)
    phrases = phrases.reset_index()
    if 'index' in phrases.columns:
        phrases = phrases.drop(columns=['index'], axis=1)

    comments = get_reviews_by_index(rec_index)
    comments = comments.reset_index()
    # Fixed: the original checked phrases.columns here instead of comments.columns
    if 'index' in comments.columns:
        comments = comments.drop(columns=['index'], axis=1)

    columns = list(phrases.columns) + list(comments.columns)
    df = pd.concat([phrases, comments], axis=1, ignore_index=True) \
           .rename(columns=dict(zip(range(len(columns)), columns))) \
           .fillna('')
    display(df)
_____no_output_____
MIT
src/05_simple_demonstration.ipynb
cherninkiy/made-ml-hw4
Connect

Use the `%connect` magic to find your Ascent instance and connect to it.
%connect
_____no_output_____
BSD-3-Clause
src/ascent/python/ascent_jupyter_bridge/notebooks/demo - trackball widget.ipynb
goodbadwolf/ascent
Specify Actions

Specify your actions using a **yaml** or **json** string, or any other method permitted by the Ascent Python API.
yaml = """
- action: "add_scenes"
  scenes:
    s1:
      plots:
        p1:
          type: "volume"
          field: "energy"
          color_table:
            name: "cool to warm"
            control_points:
              - type: "alpha"
                position: 0
                alpha: .3
              - type: "alpha"
                position: 1
                alpha: 1
      renders:
        r1:
          image_width: "1024"
          image_height: "1024"
          bg_color: [1,1,1]
          fg_color: [0,0,0]
- action: "execute"
- action: "reset"
"""

generated = conduit.Generator(yaml, "yaml")
actions = conduit.Node()
generated.walk(actions)
_____no_output_____
BSD-3-Clause
src/ascent/python/ascent_jupyter_bridge/notebooks/demo - trackball widget.ipynb
goodbadwolf/ascent
Execute Actions

Use the builtin `jupyter_ascent` Ascent instance to execute your actions for compatibility with the widgets (below), or create your own Ascent instance. Note that once you are connected, you can use tab completion to find variables and functions in your namespace (e.g. `jupyter_ascent`, `display_images`).
jupyter_ascent.execute(actions)
_____no_output_____
BSD-3-Clause
src/ascent/python/ascent_jupyter_bridge/notebooks/demo - trackball widget.ipynb
goodbadwolf/ascent
Display Images

Display all the images you've generated with the builtin `display_images` function.
# Get info about the generated images from Ascent
info = conduit.Node()
jupyter_ascent.info(info)

# Display the images specified in info
display_images(info)
_____no_output_____
BSD-3-Clause
src/ascent/python/ascent_jupyter_bridge/notebooks/demo - trackball widget.ipynb
goodbadwolf/ascent
The Trackball Widget

Use builtin Jupyter widgets to interact with your images. The trackball widget lets you rotate your image by dragging the control cube. You can also move around with WASD and the provided buttons. Finally, you can advance the simulation to see the next image.
%trackball
_____no_output_____
BSD-3-Clause
src/ascent/python/ascent_jupyter_bridge/notebooks/demo - trackball widget.ipynb
goodbadwolf/ascent
[LINK](https://www.tutorialspoint.com/python/python_variable_types.htm)

Variable Types

Variables are nothing but reserved memory locations to store values. This means that when you create a variable, you reserve some space in memory. Based on the data type of a variable, the interpreter allocates memory and decides what can be stored in the reserved memory. Therefore, by assigning different data types to variables, you can store integers, decimals or characters in these variables.

Assigning Values to Variables

Python variables do not need explicit declaration to reserve memory space. The declaration happens automatically when you assign a value to a variable. The equal sign (=) is used to assign values to variables. The operand to the left of the = operator is the name of the variable, and the operand to the right of the = operator is the value stored in the variable. For example −
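A minimal sketch of the assignments described above (the variable names here are illustrative, not from the source):

```python
counter = 100      # an integer assignment
miles = 1000.0     # a floating-point assignment
name = "John"      # a string assignment

print(counter)  # → 100
print(miles)    # → 1000.0
print(name)     # → John
```

No type declaration is needed; each variable's type follows from the value assigned to it.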
_____no_output_____
Apache-2.0
Python_Course/Variable_Types.ipynb
gu-raime/dental.informatics.org
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

Explore Duplicate Question Matches

Use this dashboard to explore the relationship between duplicate and original questions.

Setup

This section loads needed packages and defines useful functions.
from __future__ import print_function

import math

import ipywidgets as widgets
import pandas as pd
import requests
from azureml.core.webservice import AksWebservice
from azureml.core.workspace import Workspace
from dotenv import get_key, find_dotenv

from utilities import read_questions, text_to_json, get_auth

env_path = find_dotenv(raise_error_if_not_found=True)

ws = Workspace.from_config(auth=get_auth(env_path))
print(ws.name, ws.resource_group, ws.location, sep="\n")

aks_service_name = get_key(env_path, 'aks_service_name')
aks_service = AksWebservice(ws, name=aks_service_name)
aks_service.name
_____no_output_____
MIT
architectures/Python-ML-RealTimeServing/{{cookiecutter.project_name}}/aks/07_RealTimeScoring.ipynb
dciborow/AIArchitecturesAndPractices