diff --git a/annotations/1.json b/annotations/1.json index 6a870e0dac56e17801f27644d5cffc2a2960ca76..f5b8a4117a4a500cd7b8426a3014baed0a8379a4 100644 --- a/annotations/1.json +++ b/annotations/1.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ea3d6bb84f07bc373c73d3dd620c6615ab9209b82e7e405b7dfa41ad90f390b7 -size 599 +oid sha256:58d269453ed555e0a5f293e5ff9ab44642d5bc60c5329d38a14e142c58105f9a +size 598 diff --git a/annotations/10.json b/annotations/10.json index 84f3723a1775f630e45f7b8f10c801934d2a6d32..6dee81d6d893c065eb63be5baa5e3abb3491d06e 100644 --- a/annotations/10.json +++ b/annotations/10.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f9d4dddf343897aa8b1d6ce9ece455af3d184dc02590fe563d9b6313f0bec33e -size 603 +oid sha256:39a24169d8c67ebd91fdb168d7b568b593c62e6179dd11b6ddd5e027ec575ac8 +size 599 diff --git a/annotations/11.json b/annotations/11.json index c6fbd675bd271c1beac1edc3a0f00766351ed444..0bdfa74680c35fdb42c1aed91ddace769a9e4b5e 100644 --- a/annotations/11.json +++ b/annotations/11.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8d850a82fc3b2ed68eabaf456b8636241e65cd6121877cf5f973e37fe5ff7e0d -size 604 +oid sha256:d633bf81fd7e3ab0db33a8ca0fe61a562ecf1baae30cf2272a9585f3c2fcd6e3 +size 599 diff --git a/annotations/12.json b/annotations/12.json index a40bdd9a281b7b40a2b9999f45c31a1d8ebe685f..08807196083cd73506efa32c4884ce727f9795f8 100644 --- a/annotations/12.json +++ b/annotations/12.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fce46135834efc4e5ad8db204ca821cbcbe5b15497838f1a05600feafc0fb8b9 -size 601 +oid sha256:2dc4a5b1db9491a5d2ac99ecd3d4493f99a6da029df18c2ba798b78b08e549fd +size 599 diff --git a/annotations/13.json b/annotations/13.json index 0cd7ee0d712b858decf786a70634a9e1bca3cdd9..d7a79b20fc8f26a4efce93befca931743dbb6577 100644 --- a/annotations/13.json +++ b/annotations/13.json @@ -1,3 +1,3 @@ version 
https://git-lfs.github.com/spec/v1 -oid sha256:d8136388800a71850142ac30537f9a4f86705acbf0aca591fbf4ed35ce61f483 +oid sha256:e79da4895b3681948967cae17eb93c4af6a1f6e35c6569dd9d85bec2f27c57b3 size 603 diff --git a/annotations/14.json b/annotations/14.json index 0b7bc3b8717bbff06f30fcd4103a1da89255cf40..d29cb57369c2242749b3200359fe8856a05fcf81 100644 --- a/annotations/14.json +++ b/annotations/14.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b1c7dffb5f477f184676b3987aa1fcd1980108dd38f47483da864c762c90fa7e -size 603 +oid sha256:69d8c1b6152a5ad595860dab8a7a038ba2363fe898edf703b94c5430850e37df +size 604 diff --git a/annotations/15.json b/annotations/15.json index c9d5ef2316ead5108fe317a9a95e4816e4106c6f..7913b8d75012f9cb8165e9f3b876322f35c3b64b 100644 --- a/annotations/15.json +++ b/annotations/15.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9349798ff4916c38acbc66a5964f779561c104b60ed81fbc1fcf5d1a5f3e3e6c -size 600 +oid sha256:33e2cee2d6dd3f8d24f5e8192e3e9d2b1f75fdb3e5f7005366d95bd0f8c65a16 +size 601 diff --git a/annotations/16.json b/annotations/16.json index 5f8469ce039e36ffd020c3c0745e310177078b59..e18b940f2dc85eb8325391426aae5a90c23fce24 100644 --- a/annotations/16.json +++ b/annotations/16.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:954692483fb6f7b545610611276df76088b0e4a3515d44235bf09d5e7ddc75fc +oid sha256:d9ca17d303cdc0c2699babf52ad9871791173d9125361c4ab5f00011c2d5e98e size 603 diff --git a/annotations/17.json b/annotations/17.json index 83ca3e59002af01539c6f49982f67e175ac9315e..6ea3837a2c0ba7964967f70129a75a5cc0d62bb9 100644 --- a/annotations/17.json +++ b/annotations/17.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9a81d1b1438d70b2c567e8fe94668f4b2830fece3c0c22b0251e19bd3c64d973 -size 601 +oid sha256:2679ff95162447820a529b6bf2de7936a5d935aa0d198a91e7e151497f485fc5 +size 603 diff --git a/annotations/18.json b/annotations/18.json index 
5ccf586e1f07bef44662edd7c752674ba38b750d..c91027cdcf1f52b58fea7c3d1052d38a80612b50 100644 --- a/annotations/18.json +++ b/annotations/18.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e283226729594dcff3bfdf194d9ed0ad2fe56f712591013c0880a9143078a94a -size 604 +oid sha256:44ce2d7df40790c852772554d0724af37dd1a4c539fcf260c92081a969c3fc68 +size 600 diff --git a/annotations/19.json b/annotations/19.json index ff7179c321bbe8af04b679b94406d3ef3aa70a5e..0c1b310c65b3bf632c7a8d60540eef14d8d5e775 100644 --- a/annotations/19.json +++ b/annotations/19.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cc1ac2c4d4210429f483b2b25228d44b83d8167b83c0966fe9ac990c8042cfb6 -size 598 +oid sha256:1fc784a08c2f4d17f9c2ba9b6215f5c0035d8f50bb41cc24afda9d9d1c28eb0f +size 603 diff --git a/annotations/2.json b/annotations/2.json index e6a61e6c7c47807b8e36c4093caccf3f5eb89cd7..d5c978c5ede723054a3f99a4818c877ab9332d46 100644 --- a/annotations/2.json +++ b/annotations/2.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:5d3a5dbf9757745bc0dd38530079aa70a5a3b3b7fb4dff31e36677d93ae830c0 -size 597 +oid sha256:da3874136481ab904c6d96e814b1ccd3b1ddc310ea15aa1d9364dcb43b211448 +size 598 diff --git a/annotations/20.json b/annotations/20.json index 450150ac0c1bc5e257887cd6be9764b2d26197a3..f69e8b9a2abd6a2a6608e64ca458e5176a01906f 100644 --- a/annotations/20.json +++ b/annotations/20.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3187df1334add28ddb2e850fa854698a4633af5b4bc2d680c37eaa9a3297ebfc -size 600 +oid sha256:cf0bf7a7f37b8b38aa52d3ee17a085d61c456447ab88120e87c16c283de49f53 +size 601 diff --git a/annotations/21.json b/annotations/21.json index 951b6ac75acdc608756aeffad3e12786a5d4f8f5..205c93bb11d75f62eac5221545b1cb7f86c27191 100644 --- a/annotations/21.json +++ b/annotations/21.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:db8b3de3de007f9bf8fd83d55638142a921a5cbae2a2868bd5f828160859174a +oid sha256:151d551058cb2ead1137513223e1cb146cd1019d6a12efebaab7bb6a04fffddd size 604 diff --git a/annotations/22.json b/annotations/22.json index a0d907088a0cecec9d4995926f51acbee3b3c25f..8c11c83a2f4387245521cda4ebc78e77824c7c21 100644 --- a/annotations/22.json +++ b/annotations/22.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3a93ca152adb4ad209885da98b24e140551cf4d42ad771540351affd6dc81e63 +oid sha256:63979a137eea0ddf6d00a8dfce7b885eda721875d0b5922f287556573909e472 size 598 diff --git a/annotations/23.json b/annotations/23.json index abc6d9c9e04b71976183c5d8786e5e26974b1016..57f766d158fae70166f3fbc3e69e381e3a770587 100644 --- a/annotations/23.json +++ b/annotations/23.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b351cd4cf356fe1b802c51430d2da0c5f80e569315a384ae7cb818ad8952e337 -size 603 +oid sha256:188adbb9b3b166a0f2b63e7517dbfeb6069e9eb82ffd77b6736072ac878cd07e +size 600 diff --git a/annotations/24.json b/annotations/24.json index 6e5831508d20df10a4664954630f531964004c72..fa2dfbe4ab730302dabbbd621d6a6961c4b1d64d 100644 --- a/annotations/24.json +++ b/annotations/24.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8ad742f2b76963fb4d87133e388ef60276a990d5e01870ab6960acb3493c7d5f +oid sha256:68558a4638d80d423221d1d634bad495af4fbd08259017174d891dcf26e3c685 size 604 diff --git a/annotations/25.json b/annotations/25.json index 223a71c5bd646d398713a2e4bc0bed2e92e17123..ca839ad184ef88f08a47f20c97231b8caa0a0bd0 100644 --- a/annotations/25.json +++ b/annotations/25.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d27a8230b9fa41758c078050e6b0c7a1c078e9d7f79047ccd2c7be1e12111577 -size 597 +oid sha256:ecfee633e322957136a3defc6a3819d6690ca1995c08559afa001b7d17d2594d +size 598 diff --git a/annotations/26.json b/annotations/26.json index 
258eeaec7bb1838b0a280900b1893633b76ec5c2..a9fc59dd3aaacece25118969a84dcfd6f08d38f5 100644 --- a/annotations/26.json +++ b/annotations/26.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b668e80d99a622f3b4e4767e56a8afe5847e6db9f0f36f6cc4d92aa35232ef9f -size 599 +oid sha256:9f1a6f721bde0eae7a2c429c2f897d1041f821c29ec152aa5d38164a6f1aa773 +size 603 diff --git a/annotations/27.json b/annotations/27.json index 7d4285bffb230c14e8d66fe081f28d939a638b81..0c38de7e1f76e43cf535c9e5287d4772db890779 100644 --- a/annotations/27.json +++ b/annotations/27.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4468020d27639e5a4f8853ee8457c2e956f3185deb2002701543d803c80e8851 -size 599 +oid sha256:527a0341f9e053606abc7f90d39d12b34e1466291980be2974b2d7dec6f78f3a +size 604 diff --git a/annotations/28.json b/annotations/28.json index 82cdae8a9b38befe146e3d0fcf240df320544a5f..b922b621421464c15fa4bdd73c77bf6bc7cab927 100644 --- a/annotations/28.json +++ b/annotations/28.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7389ffcd73625187c9a734724385ab02139292353635bc99da752b30d9f665fc -size 603 +oid sha256:3a0ecc4735a4d70050cecd545e96b81728c5eb992bc350875064ea85f4c5b0cc +size 597 diff --git a/annotations/29.json b/annotations/29.json index b229ff6575dc0101740ec61c75584db921814e83..e6af09c5f481ce9a8cde031c57a28fbe6c1434e8 100644 --- a/annotations/29.json +++ b/annotations/29.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:aa62d68d61fc0ce4b530c728b124c813be0d2d8cc81eb5357fa7f37c945b5750 -size 601 +oid sha256:e5033dd30bd01e4265bc580dae418d55f5099488f5dc850b5932423bc179ab49 +size 599 diff --git a/annotations/3.json b/annotations/3.json index 255f4206481e92141367ef0e095421e472a97173..2304b2e2333cb7b008762d3f014ebb6eec07a8a9 100644 --- a/annotations/3.json +++ b/annotations/3.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:34453a00317ae97c9372143c275278f61416076c61762a499668ba60b1aa236a +oid sha256:c21bb39d40481b7a13da31dde7bf6d0e50e60b44000bb1afad3fb4db961eef29 size 598 diff --git a/annotations/30.json b/annotations/30.json index e3e2380e7b895f5d5b367b6a508506c2917ec921..5e81c7d001f3a065bbc17e649aa10de9effd2298 100644 --- a/annotations/30.json +++ b/annotations/30.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d62d4159f1744d576c9ba9540047bb3c8a14c7f132b95e6a0d561bc3672ed521 -size 602 +oid sha256:cf8d06261505718342611a7a1e8428e62e84bbc29230ac598696ba9174e44412 +size 599 diff --git a/annotations/31.json b/annotations/31.json index fbae2784958cc8cffdf96b2d38fe53cd03750352..84e72d28f439a16d66643157b328066499ec3e4d 100644 --- a/annotations/31.json +++ b/annotations/31.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a6ec0e3ee6a799973ec2e90042a67a42845d417273b43dc13d00d60064557ebc -size 599 +oid sha256:a6d8d9add12a1841c2db7ca945d3439477de44ddab6fa637698b700bc30e541a +size 603 diff --git a/annotations/32.json b/annotations/32.json index 102b1cd55face027ec3412b5698aa2e39128dbcd..f566a31fe85981e116f19aa25b1eac2bc0b682d4 100644 --- a/annotations/32.json +++ b/annotations/32.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4f847c95615025edc7da72485e4ce533a63c70c3e23dcab45dc61388b45b029f -size 603 +oid sha256:16643bff93e419d5ed5d97cf40bd982ad00f8c32c60034f6851d0241c1a7a57e +size 601 diff --git a/annotations/33.json b/annotations/33.json index 232270d7633f004ea448024ec557ec204a611020..16ab2b7614c443e4d3e404cb3a380515bb85565f 100644 --- a/annotations/33.json +++ b/annotations/33.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fdc233bd4dde75bea1603e00a50b4c01e918c527945b5448e5501aa9551f50cb -size 603 +oid sha256:0e715b42575c0b3975e235f643d5894ab153869791352424d6d12efda6a6134e +size 602 diff --git a/annotations/34.json b/annotations/34.json index 
77e32babebcd79a0928bb191644823ca3e2770b4..eea0e6cf1fc3a9cd57b657fe0961ed798f515ad8 100644 --- a/annotations/34.json +++ b/annotations/34.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2c27c7e9bcedaa6d08a509178177d0b77f215e152d45d80838130a38f671c867 -size 604 +oid sha256:af3acea266200bf5fe7aa51d8f0e8c3c6e4eb67eb0eb616d71fcd4bdfdc3315a +size 599 diff --git a/annotations/35.json b/annotations/35.json index 5671bcbb745e65fecabad6396fb3eef8904b8d7c..200fda3e234855959019d65c8ad4a2aeda54b396 100644 --- a/annotations/35.json +++ b/annotations/35.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a60e34d977786c2ffba0254d6e4fb124ffcc1ca601069156ce7bb78545837635 -size 599 +oid sha256:e4b005252a3fd75a33bdaa737ce043f42cbecbde62fcb0bb3c25e3c497236039 +size 603 diff --git a/annotations/36.json b/annotations/36.json index ec40ab77b56d4be7cef0060108ac0be021d4ecc5..0b24f387f9fbab9ec637003b7402f65b1552ac99 100644 --- a/annotations/36.json +++ b/annotations/36.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cf6771f817704a982ef8114f70adf033123d7ab738a242f092adc915fc8739a5 -size 602 +oid sha256:0f98b87bf4f7ae0aba45a37ba03815dcc1b22e581c14f3d8cbca6c8df470c2c9 +size 603 diff --git a/annotations/37.json b/annotations/37.json index 5422f5949c5bc8d1cce6c0d23225d9dc124eb148..eae95f1fda6c780cc141ebb93d63df03fd9d36c3 100644 --- a/annotations/37.json +++ b/annotations/37.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:409464faa9b4aa1a0247bf77b8352d1ec30c8e59873536beea1e5850d4a7e76e -size 599 +oid sha256:544a58c6478395ccf93365e65d5d06a7dbbf4c4b5f160ac7284e0305afd86de7 +size 604 diff --git a/annotations/38.json b/annotations/38.json index 8fa1c17da1134e8dc30168e439e991e540e66ed4..9f3537f7a9323027328a5272eba38016eaec3912 100644 --- a/annotations/38.json +++ b/annotations/38.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:f298a3a14fc85a6169349b19c40bf27b3386c21ee59d2c249c9103be9ef7ec01 +oid sha256:d123ebebebe40564ef5a54eabbb525c6accea49a1acce69da276ea02746c9a45 size 599 diff --git a/annotations/39.json b/annotations/39.json index c71a25dbbcf1f568cccf5ba10913b23e7c9f43ba..2a8aa8f454b6bf64b98178acac6f27aa6c2ab950 100644 --- a/annotations/39.json +++ b/annotations/39.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:97a6f9ae7d68a377fb665ebf3951cd61fbc2b64a414679c6150451ca5ee2dd44 -size 604 +oid sha256:7e84937db08c3d9dfc693e3d5f2ff59abde67ffe152dd6ac7ff6ca5ac590e5a1 +size 602 diff --git a/annotations/4.json b/annotations/4.json index 51b13e62fd4ad1633bb893c00cd37c65172cc370..9298c305c29f8e57db3318de2605de9a3b8cd224 100644 --- a/annotations/4.json +++ b/annotations/4.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1205b8abf243df401ec00496b60c0fdafdbfdde34c1d94214159082d0c701c98 -size 596 +oid sha256:fada9d8f64aeac581070bc17c50e4958b747c4ed82db274f1b9ab03b7c2d2a16 +size 599 diff --git a/annotations/40.json b/annotations/40.json index 7a43348b2dd091e5d5e82be4da6a21383fdd3c59..1a4bd393bbc87f9febafb320222f1460cba9fa89 100644 --- a/annotations/40.json +++ b/annotations/40.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1225a8e37afd061674348a96e3f2d82ca1c97ff5d0200ebb9a0f863068089b11 -size 598 +oid sha256:ed11a5d7ecb26ab4472adf5ad1e883ca97eacebb6993274ecc625059864aa506 +size 599 diff --git a/annotations/41.json b/annotations/41.json index 0922283187b4d25da9ab783ac54977ce48db3055..8cbb6bb3be2da695d8ed00300ddd39425d3b6b57 100644 --- a/annotations/41.json +++ b/annotations/41.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:452ca3691935a642ca629eac0313630edcc5f40a9a5a556466d034f09f76e13e +oid sha256:aa3e01838386b3eba88cdd26ddecf11157c63d4de32ca906bb17203d1a0e53f9 size 599 diff --git a/annotations/42.json b/annotations/42.json index 
3a8c962d91faf3118be0f992a127a2a2d239e380..89cb99421cb2fa2ae143b8c7a7ca855e42e588c2 100644 --- a/annotations/42.json +++ b/annotations/42.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:20bd134f5be68b9f9fbed5402ac5ef9ef687b8952f30f36e9655840c7180ad21 -size 603 +oid sha256:12b756b5b068508c6cccc2bd237e4091f4cfb30c550984edce2f0bf53ad52401 +size 604 diff --git a/annotations/43.json b/annotations/43.json index ae70cc4ca8e0d514b328fbd35072837a0ace129d..f67f4ac7da20616abbd7da50c9437cfbddb24335 100644 --- a/annotations/43.json +++ b/annotations/43.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b2dffdb96bbe147c996cfe85f15ced48338564fd56a72ce3cd424f997e0830ee -size 604 +oid sha256:96afa146072eb9e40815537fd7fd172fd87cb3ea11c972779b380918a1d4af94 +size 598 diff --git a/annotations/44.json b/annotations/44.json index 27ffc3f34fe157a26a91b1551fb8aed1baf0d7fe..7487dc8de8b162b235fb2048fc3120c5b49fde0e 100644 --- a/annotations/44.json +++ b/annotations/44.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:662e395f3f368cbf13760a8496d490a0ad61ead79e06422e5d05a73fa45321df -size 596 +oid sha256:5488b28c3ffb8cce3a9876e1fb4ad2f175b48abf021578e6a837864facbff9c5 +size 599 diff --git a/annotations/45.json b/annotations/45.json index 24b24e60b6de07272fa9473370f0a3500684b4bd..dc4364445ed0160ac65b8cb34659d057e99f695c 100644 --- a/annotations/45.json +++ b/annotations/45.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:39177ee7ffdff0a17dc25239acbce5688773b0f1ed1f7beaf09d6ff827e8b867 -size 598 +oid sha256:230f042f7d8903212564402a6572c6285bad0f61e68439e3119e8eeff60944b0 +size 603 diff --git a/annotations/46.json b/annotations/46.json index 51f357a23a7c254b0ace0c10abdc5ef00d00b34e..90b2baf6ffdaf78d8bd004a2d054516edfe48286 100644 --- a/annotations/46.json +++ b/annotations/46.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:38db9b1d00251c895977551eba450a380d4f50330d881e08529e194a982cc6fd -size 598 +oid sha256:77d1d2eea1a599dd0b12a4fbdfac07986882da3a36579edb035f0a92cd072d3f +size 604 diff --git a/annotations/47.json b/annotations/47.json index f66bb0fbe483168c595c64ac69faad13a4967e3f..7d77d79ed5472e510c4f79cd5a7ceba6187442c9 100644 --- a/annotations/47.json +++ b/annotations/47.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:bc5ded61df68f92d6222cda1951e549045ca2cb09abbad13af3802c8e08428f8 -size 603 +oid sha256:e634ae6ea66770e628dd3b1fa0a72d85ebf6972114942e5ceddc87cce8a609d4 +size 596 diff --git a/annotations/48.json b/annotations/48.json index 66faec220890281a1d71dfe4a6fee70f2007d0ca..e47db21ce47c8b51a2fb0f32c2942413933eb87d 100644 --- a/annotations/48.json +++ b/annotations/48.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a2c37e3916273be6b06d6a65593591731c8914359ed64e05c81f9dd79c9513ff -size 602 +oid sha256:0886ed5ebcf67575133b0ec02117a6840e6a2e52f5793e8f462f37c4b855dd92 +size 598 diff --git a/annotations/49.json b/annotations/49.json index 5d572f00a0a69971d0ef792237246cd729fdfd81..f71b73d2b73b27b7725aaac143644fe19d8c737e 100644 --- a/annotations/49.json +++ b/annotations/49.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ef9c895bafbf74ea45d1262c33b18b242aba8c32b53cbc00350579da6b871501 -size 600 +oid sha256:c0dea82e553ced66103aae4d34438e385b51db7775fd0c67608740b8f447421c +size 598 diff --git a/annotations/5.json b/annotations/5.json index 8ca5dadb5a32b22282757130fb302b98a552335f..2655d7958330a973ea6155f8da500169eceb3292 100644 --- a/annotations/5.json +++ b/annotations/5.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e07dead72bde643f847d1e660d706027a4dab73bff07b394d65bced6be8b7bed +oid sha256:b66009c1c769858493b8850dc3e1005df9d93d44a5bbd5ada2fb38760af80b14 size 597 diff --git a/annotations/50.json b/annotations/50.json index 
216673cdd47146e4620deff9c61201a6f739ca64..67af47e7eadc97226236ec2c86431acddb00a60a 100644 --- a/annotations/50.json +++ b/annotations/50.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:bec7218bde2ad1bd73b566118356670061a70ea0185938a8f5f0de2c37bdb04c -size 601 +oid sha256:59b84257140f06d316d1866fc46f9c722b605627e91dad1320d10e2b30271905 +size 603 diff --git a/annotations/51.json b/annotations/51.json index c7695cf929ccf735539cd9321ade808787f8d241..5216882041b2bc42de322328a5473e5466944c43 100644 --- a/annotations/51.json +++ b/annotations/51.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:eb82fe6754382928e042f652b31eb4c1eccfbd6296dfa3a47861f0dd0cc3ef43 -size 598 +oid sha256:90ad78fc1acaab278a7e5e0eabea285f1f1786ab5082157d5b148f75187fceee +size 602 diff --git a/annotations/52.json b/annotations/52.json index 8758bdf3e8f0f8db8fe984935dc1efd3f247be78..5b207d62ad34ef60337c41e4a08338deee609ae6 100644 --- a/annotations/52.json +++ b/annotations/52.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3bf181f7e8f6bf62e6d2516ac5c5bc86ecdf04d51f250823a527773e0df788ec -size 602 +oid sha256:a38da8fdf576341f3cd0a100c5c022e9e1c7769896cb075281dbefec1f8d1758 +size 600 diff --git a/annotations/53.json b/annotations/53.json new file mode 100644 index 0000000000000000000000000000000000000000..fa4ee876e3a365063abbd5dbc0c91d045f4d8699 --- /dev/null +++ b/annotations/53.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d36a9469a185727791f84cd87f461a192b50dce8f067d0db1dc562ce5d2bb027 +size 601 diff --git a/annotations/54.json b/annotations/54.json new file mode 100644 index 0000000000000000000000000000000000000000..68a7b777e18fd4728b0c35b6c831198b7c3abc34 --- /dev/null +++ b/annotations/54.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e899bffbef9237ed13e60aad1967c84b78f4581ed26ab98f2772384fb31cb5a +size 598 diff --git a/annotations/55.json 
b/annotations/55.json new file mode 100644 index 0000000000000000000000000000000000000000..4fe953a1c03604ae0e402f4acbb29aefe31dc6a2 --- /dev/null +++ b/annotations/55.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2035c6be28e5667f46617d6b45cd79a96102a265a5f529156d00f932f08408f9 +size 602 diff --git a/annotations/6.json b/annotations/6.json index 0eac8a32bc8b174cdd2c52d978d4fdbea3e7be66..8cf801cf44fd9a55b7d08c3dfe431f3f7a42f2de 100644 --- a/annotations/6.json +++ b/annotations/6.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:81639e3deb515d3bb8b0fe2f614570c5c9d6bdfb2a798f47b861b416af3ac451 -size 603 +oid sha256:44530418dff6a7dfed752b8912600e540cb2647ab3dc77e4003b0cc305a83bb3 +size 598 diff --git a/annotations/7.json b/annotations/7.json index 7efd75c96bea053efabcf5c0022dcb1bfb0997b6..697ed2f906da3ce43a8428b8fe96695825bbfb38 100644 --- a/annotations/7.json +++ b/annotations/7.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:524ba1a65588e44e267e1734e178f054d67afbee35030aec77d65485972318f9 -size 598 +oid sha256:d1a9d76025e1642936e08a92f7a3cf297f5330f9cb0a1cd666f00b7948c13211 +size 596 diff --git a/annotations/8.json b/annotations/8.json index 81ed6f25273b50a30db4a748ffaeb9114b42cbe1..26a47e59d8b916d16a04f35d75a6c5dbb45e5a53 100644 --- a/annotations/8.json +++ b/annotations/8.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:70206e6343ebd43087e82922b127d692f0905561c0f67224686c17b6712163ac -size 598 +oid sha256:6d2d5d68baeae4fc4ae3b9ede2f2163f1f319e875f1f3a76924b294cc434dfe2 +size 597 diff --git a/annotations/9.json b/annotations/9.json index f2a1218ccc27e2cc8a32548dc1bd8e2697133153..4315c9a7324fa3327d9ba3ee54571a58eda5b627 100644 --- a/annotations/9.json +++ b/annotations/9.json @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7f2f8befed627c7e5ceab0df39b400232a3e3c1ab5dc58622885949a9f666725 -size 598 +oid 
sha256:8c3187f5ea5dca33d7303e0c5804a708643d4df652f0e5970715380d2a5bf33c +size 603 diff --git a/audio/1.mp3 b/audio/1.mp3 index fbcdc9038fd123fe2abca09b199d6f5f26761a6a..6f81c91ad8c1ef9f9afd63e4b7fd8b26e66ac7db 100644 --- a/audio/1.mp3 +++ b/audio/1.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:11bce6545aacbcdecfe285eb4ecc68f2e69a7a0b58ceadd064c1b8532c361c8c -size 1660076 +oid sha256:c2534bb406584118bc5cf4876f71d825f19b2f4033284e8a635cec75a7fafec9 +size 928556 diff --git a/audio/10.mp3 b/audio/10.mp3 index 1a0098266c76f35ea4ab2d00457aa7c0a6cfddbd..a2916ca0fc36e62fb05fa9a94a769ebc748ef280 100644 --- a/audio/10.mp3 +++ b/audio/10.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:76a392c6388ea595796c1fd9abb86c2c516a6c50f470a0ef4eedd2d4d661cfff -size 3564943 +oid sha256:e9db2b716158933fef8b11858970fa98420be5469a86445923a385639c7bb678 +size 510956 diff --git a/audio/11.mp3 b/audio/11.mp3 index e3f71ad42fdc89733112eace5f494e0a44ad2187..7ef6eec65ef5b67735ecb3a0f2f737df0ba857d9 100644 --- a/audio/11.mp3 +++ b/audio/11.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6ccaef0b378bfe19bc7ad44e7fa192a562bfe542b67d70482a4bbe7c83df2940 -size 4774935 +oid sha256:77221f6efed78e4972c91130694dca8486aee363c15b1af614bdcbe7726a88cb +size 1342446 diff --git a/audio/12.mp3 b/audio/12.mp3 index 8ef16f6f2aaba86fb6e0cd94effcbd63795527ae..2b0e29cfa493e2c3d6c90c25c2ad904e244444f8 100644 --- a/audio/12.mp3 +++ b/audio/12.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cdeccb5c396b9e2d031d00e851273417881517ee1350088a5b9d9c3f030af1dd -size 1485388 +oid sha256:f8b9c835f3f6e017515de848502d2d0c2ddb890e39cbd958fc2283d315a9bcc3 +size 1556232 diff --git a/audio/13.mp3 b/audio/13.mp3 index b3382bf26b2518a0afded3620466168f4cd060ac..1a0098266c76f35ea4ab2d00457aa7c0a6cfddbd 100644 --- a/audio/13.mp3 +++ b/audio/13.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:b5c1fca3255f3eaac656486501a35212d0250ad8039c626e30def2d5e0c90f42 -size 9014444 +oid sha256:76a392c6388ea595796c1fd9abb86c2c516a6c50f470a0ef4eedd2d4d661cfff +size 3564943 diff --git a/audio/14.mp3 b/audio/14.mp3 index 7f7599f6ebdba36f583d23f626fbc0e36fe92b8f..e3f71ad42fdc89733112eace5f494e0a44ad2187 100644 --- a/audio/14.mp3 +++ b/audio/14.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:48a2a69901d0ea218a034fe4ade491870c8c2ee323b567cc5b1976e8350be451 -size 2021804 +oid sha256:6ccaef0b378bfe19bc7ad44e7fa192a562bfe542b67d70482a4bbe7c83df2940 +size 4774935 diff --git a/audio/15.mp3 b/audio/15.mp3 index bb2da66bd98d91fb2f280240052bfb891b298ba2..8ef16f6f2aaba86fb6e0cd94effcbd63795527ae 100644 --- a/audio/15.mp3 +++ b/audio/15.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:07a5e2c29496853dd786a57b9f1a0691dd9ab7d02051695b0d0c634df3321660 -size 1177964 +oid sha256:cdeccb5c396b9e2d031d00e851273417881517ee1350088a5b9d9c3f030af1dd +size 1485388 diff --git a/audio/16.mp3 b/audio/16.mp3 index 262b049e2ae103339728c76ee6b436f31158b5a1..b3382bf26b2518a0afded3620466168f4cd060ac 100644 --- a/audio/16.mp3 +++ b/audio/16.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d4061565fdb2cb529f501326e9b63826412eb2bc67320c3c57a3a30216247b3f -size 3313196 +oid sha256:b5c1fca3255f3eaac656486501a35212d0250ad8039c626e30def2d5e0c90f42 +size 9014444 diff --git a/audio/17.mp3 b/audio/17.mp3 index f1159238bfc01428950bb4924e0443d3e6ac9ee8..7f7599f6ebdba36f583d23f626fbc0e36fe92b8f 100644 --- a/audio/17.mp3 +++ b/audio/17.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a2ef12a7de0780156446249ff4ece8225b803aa3dadaf54ec4f61efdba7e8e9a -size 1720556 +oid sha256:48a2a69901d0ea218a034fe4ade491870c8c2ee323b567cc5b1976e8350be451 +size 2021804 diff --git a/audio/18.mp3 b/audio/18.mp3 index 43060c75571659f5d425f123e00767fddc7c82f4..bb2da66bd98d91fb2f280240052bfb891b298ba2 100644 --- a/audio/18.mp3 
+++ b/audio/18.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7ab51a2fd16eed0ac819aa53ae397d0104ad8d803074b554b067f0a79e7b9682 -size 2454956 +oid sha256:07a5e2c29496853dd786a57b9f1a0691dd9ab7d02051695b0d0c634df3321660 +size 1177964 diff --git a/audio/19.mp3 b/audio/19.mp3 index 68cc8a4e3cda505e23ab4ffb98c2b646cd58a437..262b049e2ae103339728c76ee6b436f31158b5a1 100644 --- a/audio/19.mp3 +++ b/audio/19.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:47b7ba73f546f58959ea56d8618464e480bb6f672b5bf5f70481d0b52dde1db5 -size 790316 +oid sha256:d4061565fdb2cb529f501326e9b63826412eb2bc67320c3c57a3a30216247b3f +size 3313196 diff --git a/audio/2.mp3 b/audio/2.mp3 index c00592627ec1c18b4b52a2b3b80e9fbabe5ac9b3..75029337bc3772b9492e5cfb31dc0a07f91a60b0 100644 --- a/audio/2.mp3 +++ b/audio/2.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9ba5b6c02a07d2bc6b71a558982e9789cf064c2bf8d157164d1fbe4c1ef9527c -size 1726316 +oid sha256:4cf03242f70fab523e711d04d65bd7d9c2f541972220018df9ecee41f5bd7b67 +size 933164 diff --git a/audio/20.mp3 b/audio/20.mp3 index 5e59eaa928ebf573ad57eaecaeb673fca66e5207..f1159238bfc01428950bb4924e0443d3e6ac9ee8 100644 --- a/audio/20.mp3 +++ b/audio/20.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:12f10cf027bdfcea3cd5b83232e7506cd80e10ca65d59ae652dc686644259c2a -size 2307304 +oid sha256:a2ef12a7de0780156446249ff4ece8225b803aa3dadaf54ec4f61efdba7e8e9a +size 1720556 diff --git a/audio/21.mp3 b/audio/21.mp3 index 7e502ec47ad069dfe0de20c460c45c3b101b16cc..43060c75571659f5d425f123e00767fddc7c82f4 100644 --- a/audio/21.mp3 +++ b/audio/21.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e946b6ca2eb04be83bb174d3e358090f03e670c0a9133da4a06406a24d96da31 -size 2668076 +oid sha256:7ab51a2fd16eed0ac819aa53ae397d0104ad8d803074b554b067f0a79e7b9682 +size 2454956 diff --git a/audio/22.mp3 b/audio/22.mp3 index 
5e27bedfea3c81a479d3f828a25a3d068a880445..68cc8a4e3cda505e23ab4ffb98c2b646cd58a437 100644 --- a/audio/22.mp3 +++ b/audio/22.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcc72030e7b00e1fe1cc439adc827178a773a020a0a87e41077bc42ccf42126f -size 2587436 +oid sha256:47b7ba73f546f58959ea56d8618464e480bb6f672b5bf5f70481d0b52dde1db5 +size 790316 diff --git a/audio/23.mp3 b/audio/23.mp3 index 1bf9f337fbdc8be5726c5ff504a51444456d59ae..5e59eaa928ebf573ad57eaecaeb673fca66e5207 100644 --- a/audio/23.mp3 +++ b/audio/23.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2ea80e81bd6d3a5fd0addc1a52eca7496af6a382ff83a711675817d274b92300 -size 3605804 +oid sha256:12f10cf027bdfcea3cd5b83232e7506cd80e10ca65d59ae652dc686644259c2a +size 2307304 diff --git a/audio/24.mp3 b/audio/24.mp3 index 1e3e602a76377523a560ac7b7c4165900e7152a5..7e502ec47ad069dfe0de20c460c45c3b101b16cc 100644 --- a/audio/24.mp3 +++ b/audio/24.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a851078868d37b85ca594db6811fa7a9410764bf5e55f8de9ff57d0724843d98 -size 5389962 +oid sha256:e946b6ca2eb04be83bb174d3e358090f03e670c0a9133da4a06406a24d96da31 +size 2668076 diff --git a/audio/25.mp3 b/audio/25.mp3 index f9e8f7a5846a7ac0f5cebad7ee4214b58282fd8a..5e27bedfea3c81a479d3f828a25a3d068a880445 100644 --- a/audio/25.mp3 +++ b/audio/25.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1c914d6e441095bd6a40aa5a558418ae5a321ccf83238b073b8f4f16c6154f39 -size 1672842 +oid sha256:fcc72030e7b00e1fe1cc439adc827178a773a020a0a87e41077bc42ccf42126f +size 2587436 diff --git a/audio/26.mp3 b/audio/26.mp3 index 90557fc63594f7799e1923114ab958ead7e8fb88..1bf9f337fbdc8be5726c5ff504a51444456d59ae 100644 --- a/audio/26.mp3 +++ b/audio/26.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:736e83bcc12261e6d83c46b915a26d9c4fc4fbffdf441a4b6a1bc896300acf83 -size 649051 +oid 
sha256:2ea80e81bd6d3a5fd0addc1a52eca7496af6a382ff83a711675817d274b92300 +size 3605804 diff --git a/audio/27.mp3 b/audio/27.mp3 index 6a753d53c2f383c479cb9973787491507692384c..1e3e602a76377523a560ac7b7c4165900e7152a5 100644 --- a/audio/27.mp3 +++ b/audio/27.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:5df94fb343c121e973b335952996dfee728aa804f5adfae688651654e30a1c1b -size 2566124 +oid sha256:a851078868d37b85ca594db6811fa7a9410764bf5e55f8de9ff57d0724843d98 +size 5389962 diff --git a/audio/28.mp3 b/audio/28.mp3 index 6e2783e42ecbe6946a752ead990ac8b528f8752d..f9e8f7a5846a7ac0f5cebad7ee4214b58282fd8a 100644 --- a/audio/28.mp3 +++ b/audio/28.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cc2e20543f4f7eada7275e0c7b9fc256023320eb7f067edc41e84ff81c5f633c -size 3235436 +oid sha256:1c914d6e441095bd6a40aa5a558418ae5a321ccf83238b073b8f4f16c6154f39 +size 1672842 diff --git a/audio/29.mp3 b/audio/29.mp3 index c26a9908e7546cbac92686b02d7f46eeade3f494..90557fc63594f7799e1923114ab958ead7e8fb88 100644 --- a/audio/29.mp3 +++ b/audio/29.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1a692a5efca0e78afd528ef4edcff7d69e64cbb5989ae20f31ad01cf2faeb271 -size 2577644 +oid sha256:736e83bcc12261e6d83c46b915a26d9c4fc4fbffdf441a4b6a1bc896300acf83 +size 649051 diff --git a/audio/3.mp3 b/audio/3.mp3 index 9f2e2690bb34107e0890e806edcc911d6ba7548d..6e8451ef057d91eed31729b9328208036dfa787c 100644 --- a/audio/3.mp3 +++ b/audio/3.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6cf170fe6f04b0832af6c136ee77092f7b4c99ffe07b7abf8e3a1a97d5a7d860 -size 714284 +oid sha256:1ca7c42ca9ff2e081d43ea1a389b27a0823522541a34c614021ee84b07f4f0ec +size 1009196 diff --git a/audio/30.mp3 b/audio/30.mp3 index 71e19cb0acdc1d576535c3cdd5b6db814c495a7b..6a753d53c2f383c479cb9973787491507692384c 100644 --- a/audio/30.mp3 +++ b/audio/30.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:55a4246d9bf9bfdfc1b28b4add8b4a8c746b7473f10e28ff3f909709017b04eb -size 2070764 +oid sha256:5df94fb343c121e973b335952996dfee728aa804f5adfae688651654e30a1c1b +size 2566124 diff --git a/audio/31.mp3 b/audio/31.mp3 index a42b02acca82456e4c67bb0a1e5e945357a4e720..6e2783e42ecbe6946a752ead990ac8b528f8752d 100644 --- a/audio/31.mp3 +++ b/audio/31.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4abd166ccade3f644d84f0b394d7c64f90c5b8adee074b6b0bcfb53b95b8e07d -size 2814956 +oid sha256:cc2e20543f4f7eada7275e0c7b9fc256023320eb7f067edc41e84ff81c5f633c +size 3235436 diff --git a/audio/32.mp3 b/audio/32.mp3 index 7e502ec47ad069dfe0de20c460c45c3b101b16cc..c26a9908e7546cbac92686b02d7f46eeade3f494 100644 --- a/audio/32.mp3 +++ b/audio/32.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e946b6ca2eb04be83bb174d3e358090f03e670c0a9133da4a06406a24d96da31 -size 2668076 +oid sha256:1a692a5efca0e78afd528ef4edcff7d69e64cbb5989ae20f31ad01cf2faeb271 +size 2577644 diff --git a/audio/33.mp3 b/audio/33.mp3 index 5e27bedfea3c81a479d3f828a25a3d068a880445..71e19cb0acdc1d576535c3cdd5b6db814c495a7b 100644 --- a/audio/33.mp3 +++ b/audio/33.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcc72030e7b00e1fe1cc439adc827178a773a020a0a87e41077bc42ccf42126f -size 2587436 +oid sha256:55a4246d9bf9bfdfc1b28b4add8b4a8c746b7473f10e28ff3f909709017b04eb +size 2070764 diff --git a/audio/34.mp3 b/audio/34.mp3 index 1bf9f337fbdc8be5726c5ff504a51444456d59ae..a42b02acca82456e4c67bb0a1e5e945357a4e720 100644 --- a/audio/34.mp3 +++ b/audio/34.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2ea80e81bd6d3a5fd0addc1a52eca7496af6a382ff83a711675817d274b92300 -size 3605804 +oid sha256:4abd166ccade3f644d84f0b394d7c64f90c5b8adee074b6b0bcfb53b95b8e07d +size 2814956 diff --git a/audio/35.mp3 b/audio/35.mp3 index 1e3e602a76377523a560ac7b7c4165900e7152a5..7e502ec47ad069dfe0de20c460c45c3b101b16cc 100644 --- a/audio/35.mp3 
+++ b/audio/35.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a851078868d37b85ca594db6811fa7a9410764bf5e55f8de9ff57d0724843d98 -size 5389962 +oid sha256:e946b6ca2eb04be83bb174d3e358090f03e670c0a9133da4a06406a24d96da31 +size 2668076 diff --git a/audio/36.mp3 b/audio/36.mp3 index f9e8f7a5846a7ac0f5cebad7ee4214b58282fd8a..5e27bedfea3c81a479d3f828a25a3d068a880445 100644 --- a/audio/36.mp3 +++ b/audio/36.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1c914d6e441095bd6a40aa5a558418ae5a321ccf83238b073b8f4f16c6154f39 -size 1672842 +oid sha256:fcc72030e7b00e1fe1cc439adc827178a773a020a0a87e41077bc42ccf42126f +size 2587436 diff --git a/audio/37.mp3 b/audio/37.mp3 index 90557fc63594f7799e1923114ab958ead7e8fb88..1bf9f337fbdc8be5726c5ff504a51444456d59ae 100644 --- a/audio/37.mp3 +++ b/audio/37.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:736e83bcc12261e6d83c46b915a26d9c4fc4fbffdf441a4b6a1bc896300acf83 -size 649051 +oid sha256:2ea80e81bd6d3a5fd0addc1a52eca7496af6a382ff83a711675817d274b92300 +size 3605804 diff --git a/audio/38.mp3 b/audio/38.mp3 index 6a753d53c2f383c479cb9973787491507692384c..1e3e602a76377523a560ac7b7c4165900e7152a5 100644 --- a/audio/38.mp3 +++ b/audio/38.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:5df94fb343c121e973b335952996dfee728aa804f5adfae688651654e30a1c1b -size 2566124 +oid sha256:a851078868d37b85ca594db6811fa7a9410764bf5e55f8de9ff57d0724843d98 +size 5389962 diff --git a/audio/39.mp3 b/audio/39.mp3 index 41a901c1c0525914a2e5b0920e60a968ae0c312f..f9e8f7a5846a7ac0f5cebad7ee4214b58282fd8a 100644 --- a/audio/39.mp3 +++ b/audio/39.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4249ca4f032cf3a438c3f004a48ed2da00c563e603ef284c892302689999bb96 -size 2980844 +oid sha256:1c914d6e441095bd6a40aa5a558418ae5a321ccf83238b073b8f4f16c6154f39 +size 1672842 diff --git a/audio/4.mp3 b/audio/4.mp3 index 
c4cefc69238fad0f81134efe23f89f982f229c79..fbcdc9038fd123fe2abca09b199d6f5f26761a6a 100644 --- a/audio/4.mp3 +++ b/audio/4.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fe150a963d9b510be1e5aff103d8bf636bc949b7ec719f208fb3c687b4bcb9f3 -size 372716 +oid sha256:11bce6545aacbcdecfe285eb4ecc68f2e69a7a0b58ceadd064c1b8532c361c8c +size 1660076 diff --git a/audio/40.mp3 b/audio/40.mp3 index 6e2783e42ecbe6946a752ead990ac8b528f8752d..90557fc63594f7799e1923114ab958ead7e8fb88 100644 --- a/audio/40.mp3 +++ b/audio/40.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cc2e20543f4f7eada7275e0c7b9fc256023320eb7f067edc41e84ff81c5f633c -size 3235436 +oid sha256:736e83bcc12261e6d83c46b915a26d9c4fc4fbffdf441a4b6a1bc896300acf83 +size 649051 diff --git a/audio/41.mp3 b/audio/41.mp3 index c26a9908e7546cbac92686b02d7f46eeade3f494..6a753d53c2f383c479cb9973787491507692384c 100644 --- a/audio/41.mp3 +++ b/audio/41.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1a692a5efca0e78afd528ef4edcff7d69e64cbb5989ae20f31ad01cf2faeb271 -size 2577644 +oid sha256:5df94fb343c121e973b335952996dfee728aa804f5adfae688651654e30a1c1b +size 2566124 diff --git a/audio/42.mp3 b/audio/42.mp3 index 71e19cb0acdc1d576535c3cdd5b6db814c495a7b..41a901c1c0525914a2e5b0920e60a968ae0c312f 100644 --- a/audio/42.mp3 +++ b/audio/42.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:55a4246d9bf9bfdfc1b28b4add8b4a8c746b7473f10e28ff3f909709017b04eb -size 2070764 +oid sha256:4249ca4f032cf3a438c3f004a48ed2da00c563e603ef284c892302689999bb96 +size 2980844 diff --git a/audio/43.mp3 b/audio/43.mp3 index a42b02acca82456e4c67bb0a1e5e945357a4e720..6e2783e42ecbe6946a752ead990ac8b528f8752d 100644 --- a/audio/43.mp3 +++ b/audio/43.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4abd166ccade3f644d84f0b394d7c64f90c5b8adee074b6b0bcfb53b95b8e07d -size 2814956 +oid 
sha256:cc2e20543f4f7eada7275e0c7b9fc256023320eb7f067edc41e84ff81c5f633c +size 3235436 diff --git a/audio/44.mp3 b/audio/44.mp3 index 7f60b880f0f7e7695a157e9c4c9c588efdffeea4..c26a9908e7546cbac92686b02d7f46eeade3f494 100644 --- a/audio/44.mp3 +++ b/audio/44.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:706082dc880f42aa397f9aee429f2f8a4d62fa19417e106e646d6031f91e4f11 -size 7845164 +oid sha256:1a692a5efca0e78afd528ef4edcff7d69e64cbb5989ae20f31ad01cf2faeb271 +size 2577644 diff --git a/audio/45.mp3 b/audio/45.mp3 index 181b74283416b8efbea4fdc0be68c66a3bd13f2b..71e19cb0acdc1d576535c3cdd5b6db814c495a7b 100644 --- a/audio/45.mp3 +++ b/audio/45.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:de35c8502abe369ee5876eea1d354a292d135675764ea7606e835a146c7b191c -size 8816183 +oid sha256:55a4246d9bf9bfdfc1b28b4add8b4a8c746b7473f10e28ff3f909709017b04eb +size 2070764 diff --git a/audio/46.mp3 b/audio/46.mp3 index 4a16df73f1dfc033e4c949b7db5e9eb94d953887..a42b02acca82456e4c67bb0a1e5e945357a4e720 100644 --- a/audio/46.mp3 +++ b/audio/46.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e831b32171884e6ac24dd23e3e54973da861f48f44a47a5e8fbecf2bc6720438 -size 4359902 +oid sha256:4abd166ccade3f644d84f0b394d7c64f90c5b8adee074b6b0bcfb53b95b8e07d +size 2814956 diff --git a/audio/47.mp3 b/audio/47.mp3 index fd4876876248f9873fc1e3c60713825b13fde07b..7f60b880f0f7e7695a157e9c4c9c588efdffeea4 100644 --- a/audio/47.mp3 +++ b/audio/47.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e47a89f43f0de0b54b995ac790cf0041677f8944e208b009dc1caa6029fcf414 -size 888236 +oid sha256:706082dc880f42aa397f9aee429f2f8a4d62fa19417e106e646d6031f91e4f11 +size 7845164 diff --git a/audio/48.mp3 b/audio/48.mp3 index c7bf9a38a696b47ded9a2fb62a552b74dddfae4c..181b74283416b8efbea4fdc0be68c66a3bd13f2b 100644 --- a/audio/48.mp3 +++ b/audio/48.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:0fe09dc5bf67af9a6f92fcf38b02508567fb8ce34984e744908386add67de18f -size 3113324 +oid sha256:de35c8502abe369ee5876eea1d354a292d135675764ea7606e835a146c7b191c +size 8816183 diff --git a/audio/49.mp3 b/audio/49.mp3 index 25b62c47313ef75995abe35244594e65650334c3..4a16df73f1dfc033e4c949b7db5e9eb94d953887 100644 --- a/audio/49.mp3 +++ b/audio/49.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6a5c658f0ee134e0c31a9ca939ad7805a9a76a321f5e3728dd575ce734c250ae -size 1484396 +oid sha256:e831b32171884e6ac24dd23e3e54973da861f48f44a47a5e8fbecf2bc6720438 +size 4359902 diff --git a/audio/5.mp3 b/audio/5.mp3 index 29e2bcfe6f1f10527ffc72de84d4637d76f3d39b..c00592627ec1c18b4b52a2b3b80e9fbabe5ac9b3 100644 --- a/audio/5.mp3 +++ b/audio/5.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:36b7b84c16b53682627a8651aedda5861c51b12223a2edc5cf773f9e0bf4e816 -size 2097836 +oid sha256:9ba5b6c02a07d2bc6b71a558982e9789cf064c2bf8d157164d1fbe4c1ef9527c +size 1726316 diff --git a/audio/50.mp3 b/audio/50.mp3 index 34e9c6880398dfe1d777ed32bd0b8c82b9802f0f..fd4876876248f9873fc1e3c60713825b13fde07b 100644 --- a/audio/50.mp3 +++ b/audio/50.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:73519e4c374f1b7aa73fafe009ab248ad470a0a17e9b522d265af6293a246021 -size 1006406 +oid sha256:e47a89f43f0de0b54b995ac790cf0041677f8944e208b009dc1caa6029fcf414 +size 888236 diff --git a/audio/51.mp3 b/audio/51.mp3 index 73bd073932fdb9455d990f341ce98282d850b363..c7bf9a38a696b47ded9a2fb62a552b74dddfae4c 100644 --- a/audio/51.mp3 +++ b/audio/51.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a7ed9e2ba97d3231457b3e699f67130488af59df2827599cecbaa4f054e1ccf1 -size 1524716 +oid sha256:0fe09dc5bf67af9a6f92fcf38b02508567fb8ce34984e744908386add67de18f +size 3113324 diff --git a/audio/52.mp3 b/audio/52.mp3 index 67d92439269479d26ceb87aa36b281d1a75a16c7..25b62c47313ef75995abe35244594e65650334c3 100644 --- a/audio/52.mp3 +++ 
b/audio/52.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ff2b40d06add3f07ca26e609ca0fef0270b9f4e72bbfe33a31bf193bcee7e96b -size 4384556 +oid sha256:6a5c658f0ee134e0c31a9ca939ad7805a9a76a321f5e3728dd575ce734c250ae +size 1484396 diff --git a/audio/53.mp3 b/audio/53.mp3 new file mode 100644 index 0000000000000000000000000000000000000000..34e9c6880398dfe1d777ed32bd0b8c82b9802f0f --- /dev/null +++ b/audio/53.mp3 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73519e4c374f1b7aa73fafe009ab248ad470a0a17e9b522d265af6293a246021 +size 1006406 diff --git a/audio/54.mp3 b/audio/54.mp3 new file mode 100644 index 0000000000000000000000000000000000000000..73bd073932fdb9455d990f341ce98282d850b363 --- /dev/null +++ b/audio/54.mp3 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7ed9e2ba97d3231457b3e699f67130488af59df2827599cecbaa4f054e1ccf1 +size 1524716 diff --git a/audio/55.mp3 b/audio/55.mp3 new file mode 100644 index 0000000000000000000000000000000000000000..67d92439269479d26ceb87aa36b281d1a75a16c7 --- /dev/null +++ b/audio/55.mp3 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff2b40d06add3f07ca26e609ca0fef0270b9f4e72bbfe33a31bf193bcee7e96b +size 4384556 diff --git a/audio/6.mp3 b/audio/6.mp3 index 13fe4b302f03aa0241b0a18d421aae3b3fccfad3..9f2e2690bb34107e0890e806edcc911d6ba7548d 100644 --- a/audio/6.mp3 +++ b/audio/6.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2e3d8d5cc47f680c320d1df5b41f34affd5dbc9b6425f4a822d1118013b93236 -size 2914604 +oid sha256:6cf170fe6f04b0832af6c136ee77092f7b4c99ffe07b7abf8e3a1a97d5a7d860 +size 714284 diff --git a/audio/7.mp3 b/audio/7.mp3 index a2916ca0fc36e62fb05fa9a94a769ebc748ef280..c4cefc69238fad0f81134efe23f89f982f229c79 100644 --- a/audio/7.mp3 +++ b/audio/7.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e9db2b716158933fef8b11858970fa98420be5469a86445923a385639c7bb678 -size 510956 +oid 
sha256:fe150a963d9b510be1e5aff103d8bf636bc949b7ec719f208fb3c687b4bcb9f3 +size 372716 diff --git a/audio/8.mp3 b/audio/8.mp3 index 7ef6eec65ef5b67735ecb3a0f2f737df0ba857d9..29e2bcfe6f1f10527ffc72de84d4637d76f3d39b 100644 --- a/audio/8.mp3 +++ b/audio/8.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:77221f6efed78e4972c91130694dca8486aee363c15b1af614bdcbe7726a88cb -size 1342446 +oid sha256:36b7b84c16b53682627a8651aedda5861c51b12223a2edc5cf773f9e0bf4e816 +size 2097836 diff --git a/audio/9.mp3 b/audio/9.mp3 index 2b0e29cfa493e2c3d6c90c25c2ad904e244444f8..13fe4b302f03aa0241b0a18d421aae3b3fccfad3 100644 --- a/audio/9.mp3 +++ b/audio/9.mp3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f8b9c835f3f6e017515de848502d2d0c2ddb890e39cbd958fc2283d315a9bcc3 -size 1556232 +oid sha256:2e3d8d5cc47f680c320d1df5b41f34affd5dbc9b6425f4a822d1118013b93236 +size 2914604 diff --git a/transcripts/uncorrected/1.txt b/transcripts/uncorrected/1.txt index 97dc205e9d7b77068f580705263f66d3a0ce82b0..b7896c7f96af437ec44fecaba4cd587b9fd8c785 100644 --- a/transcripts/uncorrected/1.txt +++ b/transcripts/uncorrected/1.txt @@ -1 +1 @@ -Go through the website and see any place in which icons have been implemented which were custom designed but which could have been implemented more efficiently through using an existing icon library.

Pay particular attention to icons for common uses such as social media icons which exist in many libraries, as well as emojis which may have been used in place of icons.

This approach should not be followed.

If the user uses an existing icon library that you can identify, then replace the custom coded icons with the most appropriate matches.

If the user hasn't yet implemented an icon library, provide some suggestions to the user, focusing on those libraries which will best match the aesthetic which they are following in their designs. \ No newline at end of file +Okay, so the basic validation of the app is good. It functions according to spec.

I'd like to just remove the emojis and please take a look at the screenshots of the app as it's currently implemented and see if you can think of any design and UI optimizations that would make it even more friendly to use.

For the transcribe and optimize functions, we definitely would like to have labels: Transcribe and Optimize.

Maybe let's have some hover text or an about section where we describe to users the differences between these two functions. \ No newline at end of file diff --git a/transcripts/uncorrected/10.txt b/transcripts/uncorrected/10.txt index 1c658e5f3d7436116c6a372301158c4d76aff497..b57417cde1c5303a489404bcd259f827ea2cf7a6 100644 --- a/transcripts/uncorrected/10.txt +++ b/transcripts/uncorrected/10.txt @@ -1 +1 @@ -Something that would be very useful would be the following. So I use an app called Voice Notes for Android. And it's a voice recording app. It's called Voice Notes. Now, it has one fatal flaw, in my opinion, which is that it doesn't have Bluetooth support. So when I'm out and about, like now, I literally hold the phone up to my mouth, and it certainly gets me much, much, much better recording quality, but I kind of look a little bit goofy and I feel very self-conscious.

So there's two things I've thought about. One is finding a voice recording app that has more robust Bluetooth support. I think there are two options really that I'm thinking of. The first is finding, as I said, a voice recorder with very robust Bluetooth support and using a Bluetooth microphone to record with. The alternative, because I'm seeing these products come to market increasingly, is to use a wearable Android device, which probably wouldn't be that different, maybe even physically. And I think, the more I think about it, the more it comes down to flexibility. Rather than being a Bluetooth accessory, it's running, I guess, Wear OS, and maybe that would give you more flexibility.

I'm trying to think of the pros and cons on which would be better. I veered towards the wearable approach as it seems to be what's where. I don't know where the market is going with this concept, but I'm curious to know what your thoughts are regarding the pros and cons. \ No newline at end of file +The problem is that we looked at this before: when the router reboots, it's not bringing up the Cloudflare tunnel.

So it's working now, but see what can be done to make sure of this; we need to make very certain that this does start automatically on reboot. \ No newline at end of file diff --git a/transcripts/uncorrected/11.txt b/transcripts/uncorrected/11.txt index 59abde81206328bbd33b6fe792b0dcf161a7d148..4b73fda258009e56d8fc1e8ade93312193c751d0 100644 --- a/transcripts/uncorrected/11.txt +++ b/transcripts/uncorrected/11.txt @@ -1 +1 @@ -So there's a lot of these AI voice pins emerging onto the market which are designed to be wearable devices.

So I record as I'm doing now quite a number of voice notes when I'm out of the house.

I use an Android app called Voice Notes that I really like but it doesn't have support yet for Bluetooth microphones.

At least not support that's reliable.

So I have to hold the phone up to my mouth, which really kind of degrades the experience.

As I said, I want to actually start going on walks expressly for this; usually the moment I do this is when I'm already going places.

I just happen to think, but I actually want to start taking walks to jot down some ideas as a healthier way of combining work and getting out and getting some exercise and getting some sunlight.

And for that it would be really nice to not have to, you know, be holding up a phone to your mouth for 30 minutes or an hour or whatever it may be.

So I was thinking about wearable voice recorders, but a lot of them, from what I've seen, are these kind of closed ecosystems in which you can't just buy the hardware.

They'll sell you, they'll do like onboard transcription or they'll sell you like a Cloud Transcription Bundle.

I'm really not a fan of on-device transcription.

I mean I think it works but in my experience it doesn’t make a lot of sense to me just architecturally.

I think why do stuff on device that can be done in the cloud cost effectively?

And you got, you know, you can run vastly more powerful models in the cloud.

You don’t have to worry about quantizing models on a very, very small piece of hardware.

And so I guess, as for what would be great for me: when you're looking at wearables, Android's like the obvious sync partner.

So you just need to get the audio data from the recording device to Android, and from there you can push to the cloud, and then the rest is back-end speech-to-text.

So what I'm saying is that I'd love a modular solution that could do this.

A pin that is just hardware, just recording this audio sync, maybe has its own app, or maybe can be used preferably with third-party apps.

And therefore you can kind of build your own voice recording stack around it, and you can use your existing Speech-to-Docs transcription workflow.

And you don't have to subscribe to these kind of, I forget the word, walled gardens, in which the vendor forces you into a package that's often very unnecessarily expensive and you're paying mostly for overpriced transcription.

I'd prefer to just get, invest in good hardware! \ No newline at end of file +I recently picked up a Samsung Galaxy 6 smartwatch just to try out the idea basically.

And my only need was really for a dual time display, local and UTC, and the day display.

It was about $100, give or take, so a very basic entry-level model that would sync with my OnePlus.

If it turns out that I really like it...

The other requirement was a good microphone for voice recordings.

Even if it's not the best and my phone is better, it would be nice to be able to use it for that, because I take a lot of voice notes during the day.

If I turn out to really like it, what would you suggest as a good upgrade?

I tend to like everything that's about getting under the hood with technology.

So I wasn't thrilled about buying a Samsung, but it was what was available for the price point approximately. \ No newline at end of file diff --git a/transcripts/uncorrected/12.txt b/transcripts/uncorrected/12.txt index 65b441258e93b436681d73b0928dd3ea5da97777..dd20ba87d4e27f321810d0504c0736c4e154d407 100644 --- a/transcripts/uncorrected/12.txt +++ b/transcripts/uncorrected/12.txt @@ -1 +1 @@ -I picked up a Samsung Galaxy FE watch. I checked compatibility, smartwatch. I think it's in the 7 series if I'm not mistaken. What is it exactly? It's a 40mm smartwatch. Where does it fit in their line up? What's the difference between this and the Watch 7? I just went for this one because it was what was in stock.

And is it shower proof, waterproof? And I know it's a glass display. So I'm wondering how tough is the glass? Or is it tough at all? I just asked because it's a fitness watch. I assume they make them a little bit more ruggedized, but maybe that's not the case. What does it say? \ No newline at end of file +I recently picked up a smartwatch from Samsung Galaxy, and I'm curious about one thing I thought of that would be really helpful.

I'm always stressed about losing or potentially losing phone wallet keys.

And for all of these things, phone, wallet, keys, I use Pebblebee trackers now.

So I'm wondering if there's any way or app that can do something like geofencing in which if any of the things are...

Maybe you can turn it on and off at certain times while they're in the zone.

If they move out of the zone you get an alert notification if the smartwatch vibrates or whatever. \ No newline at end of file diff --git a/transcripts/uncorrected/13.txt b/transcripts/uncorrected/13.txt index b51e9cf9eacfa8f539ba2c6270fbbbdcb80adeda..1c658e5f3d7436116c6a372301158c4d76aff497 100644 --- a/transcripts/uncorrected/13.txt +++ b/transcripts/uncorrected/13.txt @@ -1 +1 @@ -So there has been this vast development in multimodal AI recently. I signed up for Replicate and FAL AI. And what really strikes me is not only the diversity and number of models out there, but also the large number of permutations in multimodal AI, meaning what input can go to what output. And I think what I find difficult about it at the moment to navigate as a, let's say, creator. I created a few music videos just as kind of fun experiments. Is that there's so many different models. Like just in, let's say, the one series, there is maybe 20 different models to choose from in FAL, but they all do slightly different things not only in terms of the resolution and the parameter and the max duration but also in terms of the modalities, and they don’t really allow you to filter on this at the moment.

So what I mean by that is if we take an image to video model that animates still images to video, one model in one might create video without audio and another might create video with audio. And that's a very significant difference. But there's also a significant difference in do I prompt for the audio? In other words, is it going to be text to audio and render out audio that then gets added to video? Is it reference audio and reference image? So when you begin opening, all these differences really matter because I might want to filter on ideally, let's say I wanted to look at image to video models, which could generate lip sync to audio from a prompt. That might be one use case as well as the video.

In another use case, I might want to create a dialogue video. Let’s say I have a still image of a crowded market in Jerusalem, and I might want to print something like create a video from this image; the background soundtrack is background conversation noise in a bustling marketplace with vendors yelling out sales prices. That's just an example of the kind of background noise and the ambient noise that we have in this market I'm thinking about.

So what I would like to do, I created this repository which I created here. I'm trying to think of a taxonomy for multimodal, really for my own reference, but also as an open source project. Exploring the permutations of multimodal that are possible. So in the preceding example, we might have one definition of a modality might be still image to video without audio. Another modality, and then the description. Another modality might be still image to audio without lip sync. Another modality might be still image to video with lip sync.

But then you might have some sub modalities being still image to video with lip sync with reference image, that a reference to image. Another sub modality there might be still image to video with reference character reference in video. Another might be still image to video with audio with character reference through a LoRa (L-O-R-A). And I reckon that if we really enumerated the modalities we might get to hundreds if not thousands of different ones. For example, in FAL, just to talk about the long tail, there's music to music, which is music in painting. There's audio in painting, well, yeah, audio in painting, which I'm thinking aloud here is, I guess, distinguished music in painting is a subset of audio in painting, that it's melodic.

So that's the objective. I think that the JSON is the obvious format in which to attempt to denote these. And what I'd like you to do as the task definition is try to do this basically. Try to enumerate, list out a hierarchy, some kind of taxonomy representation that makes sense. We could try to create a baseline and then explore various ways of mapping out the hierarchy, manipulating the JSON so that we look at different ways of organizing it. So I think it would be useful to have like a first entry JSON in which we, and later maybe I, as new modalities come to, and we can maybe have very interesting labels might be their point of maturity, example workflows, use cases, etc. There's an awful lot that could be explored within these parameters. \ No newline at end of file +Something that would be very useful would be the following. So I use an app called Voice Notes for Android. And it's a voice recording app. It's called Voice Notes. Now, it has one fatal flaw, in my opinion, which is that it doesn't have Bluetooth support. So when I'm out and about, like now, I literally hold the phone up to my mouth, and it certainly gets me much, much, much better recording quality, but I kind of look a little bit goofy and I feel very self-conscious.

So there's two things I've thought about. One is finding a voice recording app that has more robust Bluetooth support. I think there are two options really that I'm thinking of. The first is finding, as I said, a voice recorder with very robust Bluetooth support and using a Bluetooth microphone to record with. The alternative, because I'm seeing these products come to market increasingly, is to use a wearable Android device, which probably wouldn't be that different, maybe even physically. And I think, the more I think about it, the more it comes down to flexibility. Rather than being a Bluetooth accessory, it's running, I guess, Wear OS, and maybe that would give you more flexibility.

I'm trying to think of the pros and cons on which would be better. I veered towards the wearable approach as it seems to be what's where. I don't know where the market is going with this concept, but I'm curious to know what your thoughts are regarding the pros and cons. \ No newline at end of file diff --git a/transcripts/uncorrected/14.txt b/transcripts/uncorrected/14.txt index 7691086737e7862b23604ec7c3b5a56071521899..59abde81206328bbd33b6fe792b0dcf161a7d148 100644 --- a/transcripts/uncorrected/14.txt +++ b/transcripts/uncorrected/14.txt @@ -1 +1 @@ -Look at the Facer's, I'm really surprised for no one's made a Hebrew date watch on the Facer creator, but it's probably the developer studio from Samsung is the way to go for that. And I want to edit, like the one that I have slightly, I can't find the perfect one, people put too much on them. I'm looking at the face I got from Facer now and they've added temperature, sunrise, sunset, neither of which work, I guess the integrations don't work, but who wants that on their watch? These are all like anti-simplicity. I just want... It's almost perfect, but they added these stupid unnecessary features.

Maybe on the Facer creator marketplace, I can just create one that I want. Maybe that will actually work. That's probably the easiest way to go. But if that doesn't work, I can create one on Github and open sources, the font that I want, but the Hebrew one would be very special to me. It's definitely possible.

I'm looking at my desktop display. It says 30 Tishra 5786. So for sure from the HIPAA Cal API the data source is there. And I looked last night and it seemed that people only had created sort of ones for from a very different reason.

The VoiceNote data set I really want to create as well. That's actually a very important project, the GUI for adding that I have a backlog of literally thousands and it would form the basis for my classification model which I should probably note out and that's a real model I can build for the idea as well. \ No newline at end of file +So there's a lot of these AI voice pins emerging onto the market which are designed to be wearable devices.

So I record as I'm doing now quite a number of voice notes when I'm out of the house.

I use an Android app called Voice Notes that I really like but it doesn't have support yet for Bluetooth microphones.

At least not support that's reliable.

So I have to hold the phone up to my mouth, which really kind of degrades the experience.

As I said, I want to actually start going on walks expressly for this; usually the moment I do this is when I'm already going places.

I just happen to think, but I actually want to start taking walks to jot down some ideas as a healthier way of combining work and getting out and getting some exercise and getting some sunlight.

And for that it would be really nice to not have to, you know, be holding up a phone to your mouth for 30 minutes or an hour or whatever it may be.

So I was thinking about wearable voice recorders, but a lot of them, from what I've seen, are these kind of closed ecosystems in which you can't just buy the hardware.

They'll sell you, they'll do like onboard transcription or they'll sell you like a Cloud Transcription Bundle.

I'm really not a fan of on-device transcription.

I mean I think it works but in my experience it doesn’t make a lot of sense to me just architecturally.

I think why do stuff on device that can be done in the cloud cost effectively?

And you got, you know, you can run vastly more powerful models in the cloud.

You don’t have to worry about quantizing models on a very, very small piece of hardware.

And so I guess, as for what would be great for me: when you're looking at wearables, Android's like the obvious sync partner.

So you just need to get the audio data from the recording device to Android, and from there you can push to the cloud, and then the rest is back-end speech-to-text.

So what I'm saying is that I'd love a modular solution that could do this.

A pin that is just hardware, just recording this audio sync, maybe has its own app, or maybe can be used preferably with third-party apps.

And therefore you can kind of build your own voice recording stack around it, and you can use your existing Speech-to-Docs transcription workflow.

And you don't have to subscribe to these kind of, I forget the word, walled gardens, in which the vendor forces you into a package that's often very unnecessarily expensive and you're paying mostly for overpriced transcription.

I'd prefer to just get, invest in good hardware! \ No newline at end of file diff --git a/transcripts/uncorrected/15.txt b/transcripts/uncorrected/15.txt index eaea5b9166faabd9642d0c97478ecd6f6fd86d89..65b441258e93b436681d73b0928dd3ea5da97777 100644 --- a/transcripts/uncorrected/15.txt +++ b/transcripts/uncorrected/15.txt @@ -1 +1 @@ -Okay, so I've just configured. VS Code is very, very important. I've just configured automatic updates, and I asked Claude, I said, why am I not getting them? Why do I, it says, you're out of date, download the Debian. And I said, I don't want to have to download a Debian every time, and I really want to keep this updated.

So it says: you need to add the Microsoft repo, their third-party repo, which I had before, but I think I removed it as a duplicate.

So to clarify: it's not that you need to repeat this process. Upgrades are in fact automatic, but you do need to be attached to the Microsoft repo to get them. \ No newline at end of file +I picked up a Samsung Galaxy Watch FE. I checked smartwatch compatibility. I think it's in the 7 series, if I'm not mistaken. What is it exactly? It's a 40mm smartwatch. Where does it fit in their lineup? What's the difference between this and the Watch 7? I just went for this one because it was what was in stock.

And is it shower-proof, waterproof? And I know it's a glass display, so I'm wondering: how tough is the glass? Or is it tough at all? I just ask because it's a fitness watch; I assume they make them a little more ruggedized, but maybe that's not the case. What does it say? \ No newline at end of file diff --git a/transcripts/uncorrected/16.txt b/transcripts/uncorrected/16.txt index ffc57e5992be591a97dbd7ee169ed839fe73e975..b51e9cf9eacfa8f539ba2c6270fbbbdcb80adeda 100644 --- a/transcripts/uncorrected/16.txt +++ b/transcripts/uncorrected/16.txt @@ -1 +1 @@ -I want to add an llms.txt to my DSR Holdings site. It's almost a pity I didn't talk about this with Shlomo; it's a radical idea, and it actually appears to be working. I don't know for sure where it reads from, whether it just parses my home page or reads the txt, but I asked Claude to pull some context data about me into the file, and it seemed to work really well. So the thought I had, for Shlomo and for myself, is inbound LLM marketing, considering AI traffic.

It's a pity I didn't take one; in fact, I'll add a screenshots folder to the DAM, because a perfect example of a screenshot was the last one I saw, and I'm sure I see them almost every day: a sign-up form where they didn't ask whether an LLM was your referral source. I think it's absolute insanity that any company would not have LLMs at the top of their list of referral sources for traffic.

And this actually opens up a whole world of LLM analytics: seeing which LLMs are scraping your site, LLM optimization, and then basically the idea of LLMs as an inbound pipeline. If you did all this well, could you actually treat large language models as an inbound traffic source, saying Google's dead, LLM is where it's at?

I would have to try these approaches on my own site, but all I can do there is keep optimizing and then see whether it worked: if in a month you typed into ChatGPT, I need someone who's good with AI in Jerusalem, Israel, can you find any profiles? That would almost be the opposite of pursuing the outbound track for jobs. But as a complementary angle of attack, I think it would be very interesting to try, even just as an experiment. \ No newline at end of file +So there has been this vast development in multimodal AI recently. I signed up for Replicate and FAL AI. And what really strikes me is not only the diversity and number of models out there, but also the large number of permutations in multimodal AI, meaning what input can go to what output. What I find difficult to navigate at the moment as a, let's say, creator (I created a few music videos just as fun experiments) is that there are so many different models. Just in, let's say, the one series, there are maybe 20 different models to choose from in FAL, but they all do slightly different things, not only in terms of resolution, parameters, and max duration, but also in terms of modalities, and they don't really allow you to filter on this at the moment.

So what I mean by that is, if we take an image-to-video model that animates still images into video, one model might create video without audio and another might create video with audio. And that's a very significant difference. But there's also a significant difference in how I prompt for the audio. In other words, is it going to be text-to-audio, rendering out audio that then gets added to the video? Or is it reference audio plus reference image? When you begin unpacking this, all these differences really matter, because ideally I might want to filter on them: say I wanted to look at image-to-video models which could generate lip sync to audio from a prompt; that might be one use case.

In another use case, I might want to create a dialogue video. Let's say I have a still image of a crowded market in Jerusalem, and I might want to prompt something like: create a video from this image; the soundtrack is background conversation noise in a bustling marketplace, with vendors yelling out sale prices. That's just an example of the kind of ambient noise we have in this market I'm thinking about.

So here's what I would like to do in this repository I created: I'm trying to think of a taxonomy for multimodal AI, really for my own reference, but also as an open-source project exploring the permutations of multimodal that are possible. So from the preceding example, one modality might be still image to video without audio, each with a description. Another modality might be still image to video with audio, without lip sync. Another might be still image to video with lip sync.

But then you might have some sub-modalities: still image to video with lip sync from a reference image. Another sub-modality might be still image to video with a character reference in video. Another might be still image to video with audio, with a character reference through a LoRA. And I reckon that if we really enumerated the modalities we might get to hundreds if not thousands of different ones. For example, in FAL, just to talk about the long tail, there's music-to-music, which is music inpainting. There's audio inpainting too; thinking aloud here, I guess music inpainting is distinguished as a subset of audio inpainting in that it's melodic.
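To make the idea concrete, here is a first sketch of how such modality entries might be denoted. Every field name and value below is a hypothetical placeholder I'm inventing for illustration, not an established schema:

```python
import json

# Hypothetical first-pass taxonomy entries; all field names are placeholders.
taxonomy = {
    "modalities": [
        {
            "name": "image-to-video",
            "inputs": ["still image", "text prompt"],
            "outputs": ["video"],
            "sub_modalities": [
                {"name": "without-audio"},
                {"name": "with-audio", "audio_source": "generated from prompt"},
                {"name": "with-lip-sync", "audio_source": "reference audio"},
                {"name": "with-character-reference",
                 "conditioning": "reference image or LoRA"},
            ],
            # Extra labels of the kind discussed: maturity, workflows, uses.
            "labels": {"maturity": "unknown", "example_workflows": [], "use_cases": []},
        },
        {
            "name": "audio-inpainting",
            "inputs": ["audio", "region to regenerate"],
            "outputs": ["audio"],
            "sub_modalities": [{"name": "music-inpainting", "note": "melodic subset"}],
        },
    ]
}

# Serialize so the file can live in the repository and be diffed over time.
print(json.dumps(taxonomy, indent=2))
```

The nesting (modality, then sub-modalities for conditioning variants) is just one possible way to cut the hierarchy; flattening everything into tagged leaf entries would be another.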

So that's the objective. I think JSON is the obvious format in which to attempt to denote these. And what I'd like you to do, as the task definition, is basically try to do this: enumerate, list out a hierarchy, some kind of taxonomy representation that makes sense. We could create a baseline and then explore various ways of mapping out the hierarchy, manipulating the JSON to look at different ways of organizing it. So I think it would be useful to have a first-draft JSON which we, and later maybe I, can extend as new modalities come along; interesting labels might be their point of maturity, example workflows, use cases, etc. There's an awful lot that could be explored within these parameters. \ No newline at end of file diff --git a/transcripts/uncorrected/17.txt b/transcripts/uncorrected/17.txt index e9383aa5db79a22c214793ffdd4a93fc6ed49a60..7691086737e7862b23604ec7c3b5a56071521899 100644 --- a/transcripts/uncorrected/17.txt +++ b/transcripts/uncorrected/17.txt @@ -1 +1 @@ -Can I just make a suggestion? Before we proceed in this direction, I think this definitely is the right conda environment. But the reason I've created these is so that we have them ready for recurrent use. So LlamaIndex is very, very good and would be useful for a lot of this; it's very versatile.

So before we start, let's update the conda environment to install all the different utilities we might need for tokenizing text, processing markdown, markdown to PDF, PDF splitting, all these different text utilities, even ImageMagick and typesetting utilities. Once we have that ready, we can begin. But let's get that environment right first; we can use a conda.yaml to define it.
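As a sketch of what that conda.yaml could contain, written out by a small script: the package list below is my guess at the utilities mentioned (verify names and conda-forge availability before use):

```python
# Illustrative environment definition; package names are assumptions,
# not a vetted list.
env_yaml = """name: text-tools
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pandoc        # markdown conversions, markdown -> PDF via a LaTeX engine
  - imagemagick   # image manipulation
  - pypdf         # PDF splitting and merging
  - markdown      # markdown processing in Python
  - pip
  - pip:
      - tiktoken  # token counting
"""

# Write the file so it can be applied with `conda env update -f conda.yaml`.
with open("conda.yaml", "w") as f:
    f.write(env_yaml)
```

ROCm-specific builds for the AMD GPU would still need to be chosen by hand when GPU-dependent packages are added.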

In other words, take the existing environment, make a few edits, and then install that. Just remember there's an AMD GPU, so it will affect the choice of packages. \ No newline at end of file +Looking at Facer, I'm really surprised no one's made a Hebrew date watch face in the Facer creator, though the developer studio from Samsung is probably the way to go for that. And I want to edit the one that I have slightly; I can't find the perfect one, people put too much on them. I'm looking at the face I got from Facer now, and they've added temperature, sunrise, and sunset, none of which work (I guess the integrations don't work), but who wants that on their watch anyway? These are all anti-simplicity. It's almost perfect, but they added these stupid, unnecessary features.

Maybe in the Facer creator marketplace I can just create the one that I want. Maybe that will actually work; that's probably the easiest way to go. But if that doesn't work, I can create one on GitHub and open-source it, with the font that I want. The Hebrew one would be very special to me. It's definitely possible.

I'm looking at my desktop display. It says 30 Tishrei 5786. So for sure the data source is there via the Hebcal API. And I looked last night, and it seemed that people had only created ones for a very different reason.
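For reference, pulling that Hebrew date programmatically is a single request; this sketch just builds the request URL for Hebcal's date-converter API. The endpoint path and parameter names are my recollection of that API (verify against Hebcal's docs):

```python
from urllib.parse import urlencode

def hebcal_converter_url(gy: int, gm: int, gd: int) -> str:
    """URL to convert a Gregorian date to a Hebrew date via Hebcal.

    Parameter names (cfg, gy, gm, gd, g2h) follow Hebcal's converter API
    as I understand it; confirm before relying on them.
    """
    params = {"cfg": "json", "gy": gy, "gm": gm, "gd": gd, "g2h": 1}
    return "https://www.hebcal.com/converter?" + urlencode(params)

# Fetching this URL should return JSON describing the Hebrew date.
print(hebcal_converter_url(2025, 10, 22))
```

A watch face would cache the response for the day rather than call it on every redraw.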

The voice note dataset I really want to create as well. That's actually a very important project: a GUI for adding them, because I have a backlog of literally thousands, and it would form the basis for my classification model, which I should probably note down; that's a real model I can build for this idea as well. \ No newline at end of file diff --git a/transcripts/uncorrected/18.txt b/transcripts/uncorrected/18.txt index 68f0272363ffede253054f91243a4d0b8203d19b..eaea5b9166faabd9642d0c97478ecd6f6fd86d89 100644 --- a/transcripts/uncorrected/18.txt +++ b/transcripts/uncorrected/18.txt @@ -1 +1 @@ -Okay, here are just a few more specific things that I want to include. I see you mentioning hydration drinks, which is very important. Electrolyte tablets become very expensive, so there are a few things I'd like to explore: more cost-effective ways of making them. Buying them as a dry powder is one idea; the second is a homemade recipe.

The next set of ideas: I really, really need to always have some kind of foodstuff at home ready to eat. So there are a few things in that regard. A basic pantry shopping list, obviously optimized for all the dietary recommendations we've discussed here. And suggestions or recipes for homemade protein bars (though I think protein bars alone aren't really enough; there need to be carbohydrates as well), for the same reason: they become very expensive to buy individually.

The key thing I'm looking for at the moment is to always have the ingredients on hand, and ideally a kind of backup layer: I make these protein bars as a fallback, but ideally, obviously, I prefer to eat properly, and so on. \ No newline at end of file +Okay, so I've just configured something important. VS Code is very, very important, and I've just configured automatic updates for it. I asked Claude why I'm not getting them; it kept saying, you're out of date, download the .deb. And I said, I don't want to have to download a .deb every time; I really want to keep this updated.

So it says: you need to add the Microsoft repo, their third-party repo, which I had before, but I think I removed it as a duplicate.

So to clarify: it's not that you need to repeat this process. Upgrades are in fact automatic, but you do need to be attached to the Microsoft repo to get them. \ No newline at end of file diff --git a/transcripts/uncorrected/19.txt b/transcripts/uncorrected/19.txt index b373213f419ec9b2e4b9ca165f42170441577ed2..ffc57e5992be591a97dbd7ee169ed839fe73e975 100644 --- a/transcripts/uncorrected/19.txt +++ b/transcripts/uncorrected/19.txt @@ -1 +1 @@ -Okay, there's a bunch of memory layer projects now to explore later. Actually, there's no longer a separation between vector storage and memory, which makes sense because it's basically the same thing, offered as a server via API: mem0, Supermemory, Remember API, memories.api. That's a good starter list, and they can all be integrated and used; they'll do the vector backend. I'm testing it out on the documentary-finding one, just to see the concept and how it works with agents. \ No newline at end of file +I want to add an llms.txt to my DSR Holdings site. It's almost a pity I didn't talk about this with Shlomo; it's a radical idea, and it actually appears to be working. I don't know for sure where it reads from, whether it just parses my home page or reads the txt, but I asked Claude to pull some context data about me into the file, and it seemed to work really well. So the thought I had, for Shlomo and for myself, is inbound LLM marketing, considering AI traffic.

It's a pity I didn't take one; in fact, I'll add a screenshots folder to the DAM, because a perfect example of a screenshot was the last one I saw, and I'm sure I see them almost every day: a sign-up form where they didn't ask whether an LLM was your referral source. I think it's absolute insanity that any company would not have LLMs at the top of their list of referral sources for traffic.

And this actually opens up a whole world of LLM analytics: seeing which LLMs are scraping your site, LLM optimization, and then basically the idea of LLMs as an inbound pipeline. If you did all this well, could you actually treat large language models as an inbound traffic source, saying Google's dead, LLM is where it's at?

I would have to try these approaches on my own site, but all I can do there is keep optimizing and then see whether it worked: if in a month you typed into ChatGPT, I need someone who's good with AI in Jerusalem, Israel, can you find any profiles? That would almost be the opposite of pursuing the outbound track for jobs. But as a complementary angle of attack, I think it would be very interesting to try, even just as an experiment. \ No newline at end of file diff --git a/transcripts/uncorrected/2.txt b/transcripts/uncorrected/2.txt index 0f9f01aeb1efa9b56a188dbecffed93a32cfd7c5..c4062c3c839f500b2242b1b7628a7ef9e4bd26f0 100644 --- a/transcripts/uncorrected/2.txt +++ b/transcripts/uncorrected/2.txt @@ -1 +1 @@ -This repository contains a collection of slash commands which I use with Claude Code.

I capture some of the slash commands using speech to text.

The slash commands that have been captured with dictation frequently lack elements like punctuation and paragraph spacing, and they may occasionally contain words that were mistranscribed.

Please recurse through the directories and correct any slash commands you find that are missing these basic textual features, but do not limit your fixes only to those containing these defects; rather, consider in your editing any slash commands which need to be rewritten for optimal intelligibility. \ No newline at end of file +I would like to create a docs folder in this repository.

The docs folder should be separate from the code and it will be the place in which documentation is gathered.

Ask the user if there are any specific functionalities or aspects of the application that they wish to document in this folder.

The docs folder should be mentioned and linked in the readme, directing users to it for more extensive documentation than can be found in the readme itself. \ No newline at end of file diff --git a/transcripts/uncorrected/20.txt b/transcripts/uncorrected/20.txt index 847a19b97210af5a0d79cb54c259b54cbe8103aa..e9383aa5db79a22c214793ffdd4a93fc6ed49a60 100644 --- a/transcripts/uncorrected/20.txt +++ b/transcripts/uncorrected/20.txt @@ -1 +1 @@ -Now create a meeting minutes producer. It will have the following functionality. The user will upload a recording of a meeting that took place; that's the audio upload section. The next section will be meeting participants: the user will provide the names and identifying characteristics of the people audible in the recording, as Name and Description pairs, for example: Daniel, male voice in the recording; Hannah, female voice in the recording.

Upon receiving both of these things, it will send them to multimodal Gemini to produce two outputs. One is a slightly cleaned-up, diarised transcript. The second is automatically generated minutes, formatted with decisions and action items for each participant.
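The participants table plus the two requested outputs boil down to one instruction string sent alongside the audio; a minimal sketch, where the function name and prompt wording are mine rather than any fixed API:

```python
def build_minutes_prompt(participants: list[dict]) -> str:
    """Compose the instruction text to send to the model with the audio.

    `participants` holds the Name/Description pairs collected from the user.
    """
    roster = "\n".join(f"- {p['name']}: {p['description']}" for p in participants)
    return (
        "You will receive a recording of a meeting. The speakers are:\n"
        f"{roster}\n\n"
        "Produce two outputs: (1) a lightly cleaned-up, diarised transcript, "
        "and (2) minutes formatted with decisions and action items per participant."
    )

print(build_minutes_prompt([
    {"name": "Daniel", "description": "male voice in the recording"},
    {"name": "Hannah", "description": "female voice in the recording"},
]))
```

Keeping the roster as structured data until the last moment makes it easy to reuse for labelling the diarised transcript afterwards.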

And then it should be integrated with Google Drive, so the user can connect their Google Drive, save the outputs to a folder after they've been generated, and view them in the app. \ No newline at end of file +Can I just make a suggestion? Before we proceed in this direction, I think this definitely is the right conda environment. But the reason I've created these is so that we have them ready for recurrent use. So LlamaIndex is very, very good and would be useful for a lot of this; it's very versatile.

So before we start, let's update the conda environment to install all the different utilities we might need for tokenizing text, processing markdown, markdown to PDF, PDF splitting, all these different text utilities, even ImageMagick and typesetting utilities. Once we have that ready, we can begin. But let's get that environment right first; we can use a conda.yaml to define it.

In other words, take the existing environment, make a few edits, and then install that. Just remember there's an AMD GPU, so it will affect the choice of packages. \ No newline at end of file diff --git a/transcripts/uncorrected/21.txt b/transcripts/uncorrected/21.txt index 73f338799a7ffd0c5b0b5fd814b5e3f3a8c78a2c..68f0272363ffede253054f91243a4d0b8203d19b 100644 --- a/transcripts/uncorrected/21.txt +++ b/transcripts/uncorrected/21.txt @@ -1 +1 @@ -I'd like to create a content recommendation app. I'd like to get recommendations for movies to watch, things on Netflix and YouTube, that are up to date. I'm based in Israel. I like watching things that are based on a true story or true stories, and I prefer to watch things that are recent, so it has to be up to date. The pitfall with these apps is that they'll recommend stuff that you've already seen or don't want to watch, so it would have to have some memory. It should make recommendations preferably one at a time, and I can say add to watch list, add to recommendation list, not interested, or I've seen it, and the app would need to remember these responses so that it doesn't serve the same thing over and over again.

I know there's the TMDB API, which is great for getting movies; I have an API key I can provide. And maybe I'd like to say recommend across all categories, or just recommend movies. With Netflix it's very hard to get geo-sensitive recommendations, but that would probably be the ideal: I'm based in Israel, and if stuff isn't available here, that should be considered in recommendations. \ No newline at end of file +Okay, here are just a few more specific things that I want to include. I see you mentioning hydration drinks, which is very important. Electrolyte tablets become very expensive, so there are a few things I'd like to explore: more cost-effective ways of making them. Buying them as a dry powder is one idea; the second is a homemade recipe.

The next set of ideas: I really, really need to always have some kind of foodstuff at home ready to eat. So there are a few things in that regard. A basic pantry shopping list, obviously optimized for all the dietary recommendations we've discussed here. And suggestions or recipes for homemade protein bars (though I think protein bars alone aren't really enough; there need to be carbohydrates as well), for the same reason: they become very expensive to buy individually.

The key thing I'm looking for at the moment is to always have the ingredients on hand, and ideally a kind of backup layer: I make these protein bars as a fallback, but ideally, obviously, I prefer to eat properly, and so on. \ No newline at end of file diff --git a/transcripts/uncorrected/22.txt b/transcripts/uncorrected/22.txt index 24994713fc006cf39dff6433f341d9e5b812c141..b373213f419ec9b2e4b9ca165f42170441577ed2 100644 --- a/transcripts/uncorrected/22.txt +++ b/transcripts/uncorrected/22.txt @@ -1 +1 @@ -So what I would like to do here is create an app really for the purpose of demonstrating the capabilities of audio input as a modality, because I think it's overlooked and it brings a lot of really interesting use cases.

What I'd like to do for this one, as one facet of it: the user uploads a recording, which should be of just one speaker. Upon receiving the recording, it'll be ingested into Gemini, and Gemini will analyse it for the following. It will try to categorise the speaker's accent. It will estimate the words per minute at which they speak. And then it will provide a phonetic analysis, basically a linguistic analysis of their speech: how they pronounce certain sounds, and so on.
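Of those analyses, the words-per-minute estimate at least is plain arithmetic once a transcript and the clip duration are in hand; a minimal sketch, with a helper name of my own choosing:

```python
def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Estimate speaking rate: word count divided by duration in minutes."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return len(transcript.split()) / (duration_seconds / 60.0)

# 150 words spoken over 60 seconds is 150.0 wpm.
```

The accent and phonetic analyses, by contrast, genuinely need the model; only the rate metric can be cross-checked locally like this.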

So: a voice clip goes in, Gemini processes it, and it produces a detailed analysis, nicely displayed. \ No newline at end of file diff --git a/transcripts/uncorrected/23.txt b/transcripts/uncorrected/23.txt index 8eb532b0a713565b3b2fae20960656ec0d9e6e2f..847a19b97210af5a0d79cb54c259b54cbe8103aa 100644 --- a/transcripts/uncorrected/23.txt +++ b/transcripts/uncorrected/23.txt @@ -1 +1 @@ -Okay, what I'd like to do is create an application with Gemini. The user will upload their resume, and upon receiving the resume, the purpose of this application is to ideate jobs and positions that the user might be suitable for. It could be what they've done previously or an extension of that, but it would also try to suggest alternative directions, as in slight pivots or bigger pivots.

It'll frame its suggestions with a job title, as in: if the user uploads their resume, it'll say, oh, you could be an AI product manager, with a salary range for this position. Maybe the user should also provide where they're based, though that should be obvious from the CV. So it should try to contextualize by their area: demand, who hires for it, and an analysis of why this could be a cool job for you. Knowledge gaps slash upskilling: how you might want to upskill to qualify yourself for this job. Keywords with which you might find opportunities. And certifications the user might want to pursue.

Then a kind of Tinder interface: thumbs up, thumbs down, with responses recorded in memory so that the user can go back through the suggestions they liked. So it's really a career pivot ideation tool, for the user to explore alternative directions if they feel they might not be thinking sufficiently widely about what they could be using their skills for. \ No newline at end of file +Now create a meeting minutes producer. It will have the following functionality. The user will upload a recording of a meeting that took place; that's the audio upload section. The next section will be meeting participants: the user will provide the names and identifying characteristics of the people audible in the recording, as Name and Description pairs, for example: Daniel, male voice in the recording; Hannah, female voice in the recording.

Upon receiving both of these things, it will send them to multimodal Gemini to produce two outputs. One is a slightly cleaned-up, diarised transcript. The second is automatically generated minutes, formatted with decisions and action items for each participant.

And then it should be integrated with Google Drive, so the user can connect their Google Drive, save the outputs to a folder after they've been generated, and view them in the app. \ No newline at end of file diff --git a/transcripts/uncorrected/24.txt b/transcripts/uncorrected/24.txt index 492695d3c04244eba8ee90b40f4d0ed8cbb6793b..73f338799a7ffd0c5b0b5fd814b5e3f3a8c78a2c 100644 --- a/transcripts/uncorrected/24.txt +++ b/transcripts/uncorrected/24.txt @@ -1 +1 @@ -Here's an idea for a product I had. Tell me if you think it's ridiculous and if something like this has been attempted. So, speech-to-text transcription is amazing and I've become very dependent on it for voice typing. Unfortunately, on Linux specifically, it's really tricky to find something that works at the operating system level. There are tools for Windows and Mac, and what I really need is something that will do it in any program. Not a browser extension, not an IDE extension, because then you're forever asking, does this tool have voice support? And you end up having, like what I have now, three or four Whisper subscriptions.

And once you free yourself from the keyboard, you begin to want to use it on all your computers. My desktop can run Whisper; my laptop really can't. And you don't want to be spending a bunch of time provisioning separate environments.

So my idea is for a mini PC, think something like the Raspberry Pi or Orange Pi, but presented not as an enthusiast product so much as a little edge device: a box, for all intents and purposes, which runs a very efficient speech model like Whisper on device, doing local inference on hardware where everything is optimized for this one workload. It has a USB out, over which it functions as a HID device and sends the transcribed text. Inference on the device, straight out over USB.

What this means is you can plug your voice keyboard, which I think is the obvious name, into anything. You can have it bound to your desktop most of the time; if you go away traveling for a while, you pack your box. So it's really analogous to a keyboard.

Now, what I was thinking to myself is that maybe this is a stupid idea: yes, you could do this stuff on device, you could use the cloud, maybe it's too niche. But it could be quite compelling for people who are really into voice typing. And if it had Bluetooth support, your little box, your voice typing centerpiece, could also work with your tablet and your phone, and you could sort of extend around it. \ No newline at end of file +I'd like to create a content recommendation app. I'd like to get recommendations for movies to watch, things on Netflix and YouTube, that are up to date. I'm based in Israel. I like watching things that are based on a true story or true stories, and I prefer to watch things that are recent, so it has to be up to date. The pitfall with these apps is that they'll recommend stuff that you've already seen or don't want to watch, so it would have to have some memory. It should make recommendations preferably one at a time, and I can say add to watch list, add to recommendation list, not interested, or I've seen it, and the app would need to remember these responses so that it doesn't serve the same thing over and over again.

I know there's the TMDB API, which is great for getting movies; I have an API key I can provide. And maybe I'd like to say recommend across all categories, or just recommend movies. With Netflix it's very hard to get geo-sensitive recommendations, but that would probably be the ideal: I'm based in Israel, and if stuff isn't available here, that should be considered in recommendations. \ No newline at end of file diff --git a/transcripts/uncorrected/25.txt b/transcripts/uncorrected/25.txt index acadef7c73d2b38c88ec7b03751c008a67eca4fc..24994713fc006cf39dff6433f341d9e5b812c141 100644 --- a/transcripts/uncorrected/25.txt +++ b/transcripts/uncorrected/25.txt @@ -1 +1 @@ -Another idea for a Gemini app: a recipe modifier. You give it a recipe; Gemini parses the recipe and structures the data. Then, using a nutritional database, it attempts to calculate the total fat per serving and the fat per ingredient.

This is an app for people like me who are trying to adhere to a low-fat diet. It remixes a recipe either to achieve a certain fat amount, as in under X grams of fat, or just to make a general reduction within reasonable bounds, while still trying to keep the recipe the recipe. \ No newline at end of file +So what I would like to do here is create an app really for the purpose of demonstrating the capabilities of audio input as a modality, because I think it's overlooked and it brings a lot of really interesting use cases.

What I'd like to do for this one, as one facet of it: the user uploads a recording, which should be of just one speaker. Upon receiving the recording, it'll be ingested into Gemini, and Gemini will analyse it for the following. It will try to categorise the speaker's accent. It will estimate the words per minute at which they speak. And then it will provide a phonetic analysis, basically a linguistic analysis of their speech: how they pronounce certain sounds, and so on.

So: a voice clip goes in, Gemini processes it, and it produces a detailed analysis, nicely displayed. \ No newline at end of file diff --git a/transcripts/uncorrected/26.txt b/transcripts/uncorrected/26.txt index 48df2efb7e5f7af2de5f6a9e6f79c4188a1f5e45..8eb532b0a713565b3b2fae20960656ec0d9e6e2f 100644 --- a/transcripts/uncorrected/26.txt +++ b/transcripts/uncorrected/26.txt @@ -1 +1 @@ -A good idea to try would be one of the apps that connects with the Google Workspace services. Which, I don't know, maybe they've circumvented their general cautiousness.

Like voice-to-email: to send an email, you record a voice memo; it transcribes it, checks your contacts, generates an email, shows you a draft, asks, is that okay?, and then it sends. \ No newline at end of file +Okay, what I'd like to do is create an application with Gemini. The user will upload their resume, and upon receiving the resume, the purpose of this application is to ideate jobs and positions that the user might be suitable for. It could be what they've done previously or an extension of that, but it would also try to suggest alternative directions, as in slight pivots or bigger pivots.

It'll frame its suggestions with a job title, as in: if the user uploads their resume, it'll say, oh, you could be an AI product manager, with a salary range for this position. Maybe the user should also provide where they're based, though that should be obvious from the CV. So it should try to contextualize by their area: demand, who hires for it, and an analysis of why this could be a cool job for you. Knowledge gaps slash upskilling: how you might want to upskill to qualify yourself for this job. Keywords with which you might find opportunities. And certifications the user might want to pursue.

Then a kind of Tinder interface: thumbs up, thumbs down, and those are recorded in memory so that the user can go back through the suggestions they liked. So it's really a career ideation tool, a career-pivot ideation tool, for the user to explore alternative directions if they feel they might not be thinking sufficiently widely about what they could be using their skills for. \ No newline at end of file diff --git a/transcripts/uncorrected/27.txt index 353b380ddee0d6134e7cfc905de9171524ef566e..492695d3c04244eba8ee90b40f4d0ed8cbb6793b 100644 --- a/transcripts/uncorrected/27.txt +++ b/transcripts/uncorrected/27.txt @@ -1 +1 @@ -I'd like to create an app that does the following. The user will paste an image or multiple images into the image upload feature. It'll run it through Gemini and it will attempt to extract the following fields: Serial Number, Model Number, Manufacturer, the OCR-readable text in a text field, and Country of Manufacture.

And then, based upon the detected product, the manufacturer, the part number and the serial number, it will provide a one-line description, a multi-line description, and a spec sheet. It will provide the year it was first released on the market, and its age in years, based on first release minus the current time, correct to one decimal place.
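The age calculation is simple arithmetic; a sketch (my own function name; since only a release year is extracted, January 1 of that year is assumed as the release date):

```python
from datetime import date


def age_in_years(first_release_year: int, today: date) -> float:
    """Age in years since first market release, to one decimal place.

    Assumes Jan 1 of the release year, since only a year is extracted.
    """
    release = date(first_release_year, 1, 1)
    days = (today - release).days
    return round(days / 365.25, 1)  # 365.25 accounts for leap years
```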

And a deprecation level, from almost deprecated to fully deprecated or still on market (the last as a checkbox), plus RRP. So it'll basically take an image and then extract all these fields, based on the initial OCR and then on the web search complementing that. \ No newline at end of file +Here's an idea for a product I had. Tell me if you think it's ridiculous and if something like this has been attempted. So, speech-to-text transcription is amazing and I've become very dependent on it for voice typing. Unfortunately, on Linux specifically, it's really tricky to find something that works at the operating-system level. There are tools for Windows and Mac, and what I really need is something that will do it in any program. Not a browser extension, not an IDE extension, because then you're forever checking whether each tool has voice support. And you end up with, like what I have now, three or four Whisper subscriptions.

And you free yourself from the keyboard, literally; you begin to want to use it on all your computers. And some of them can't: my desktop can run Whisper, my laptop really can't. And you don't want to be spending a bunch of time provisioning separate environments.

So my idea is for a mini PC, think something like the Raspberry Pi or Orange Pi, but presented not so much as an enthusiast product as a little edge device: a box, for all intents and purposes, which runs a very efficient speech model like Whisper and does local inference on the hardware. Everything is optimized for this one workload. It has a USB out, and over that USB out it functions as a HID device and sends the transcribed text. Inference on the device, and straight out over USB.
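The HID leg of this can be sketched as a mapping from characters to USB HID keyboard usage IDs ('a'..'z' are 0x04..0x1D, space is 0x2C, left shift is modifier bit 0x02 per the standard usage table). A real gadget would write these 8-byte reports to an endpoint such as /dev/hidg0, which I'm only assuming here; this is just the pure mapping:

```python
def char_to_hid(ch: str) -> tuple[int, int]:
    """Map a letter or space to a (modifier, usage_id) pair."""
    if ch == " ":
        return (0, 0x2C)
    if "a" <= ch <= "z":
        return (0, 0x04 + ord(ch) - ord("a"))
    if "A" <= ch <= "Z":
        # Left-shift modifier plus the base letter usage ID.
        return (0x02, 0x04 + ord(ch.lower()) - ord("a"))
    raise ValueError(f"unmapped character: {ch!r}")


def text_to_reports(text: str) -> list[bytes]:
    """Render text as 8-byte HID keyboard reports (key-down only, for brevity)."""
    reports = []
    for ch in text:
        mod, usage = char_to_hid(ch)
        reports.append(bytes([mod, 0, usage, 0, 0, 0, 0, 0]))
    return reports
```

A complete implementation would also emit key-release reports and cover punctuation.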

What this means is you can plug your voice keyboard, which I think is the obvious name, into anything. You can have it attached to your desktop most of the time; if you go away travelling for a while, you pack your box. So it's really analogous to a keyboard.

Now, what I was asking myself is whether this is a stupid idea: yes, you could do this stuff on-device, you could use Claude, and maybe it's too niche. But it could be quite attractive for people who are really into voice typing and want a way to do it everywhere. And if it had Bluetooth support, your little box, your voice-typing centerpiece, could also work with your tablets and your phone, and you could sort of extend around it. \ No newline at end of file diff --git a/transcripts/uncorrected/28.txt index 0ec335394a72e80887a3672f290bc5828d8227e0..acadef7c73d2b38c88ec7b03751c008a67eca4fc 100644 --- a/transcripts/uncorrected/28.txt +++ b/transcripts/uncorrected/28.txt @@ -1 +1 @@ -I'd like to create an app that is a meeting documentation assistant and it can provide three outputs from a voice input. So there's a voice recorder, so the user can record a voice note, pause, stop and retake, and then send. Once the voice note is sent, the user selects whether they want to generate a meeting minutes, an agenda for an upcoming meeting, so meeting agenda, or just those two actually.

And then if they do meeting agenda, it'll also generate a short version that can fit in a calendar description, and a suggested meeting title. Upon receiving this from the user, it gets sent to Gemini, which analyses and parses the audio and then generates the minutes or agenda according to what the user selects, with an automatically generated title and a body that is formatted in Markdown but renders in rich text. The user can download the original file; for reruns, the user would just clear the recording and start again.
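The assembled document could look something like this sketch (function and field names are mine; it just combines the structured fields with the Markdown body):

```python
def render_minutes(title: str, start: str, end: str,
                   participants: list[str], action_items: list[str],
                   body: str) -> str:
    """Assemble structured meeting fields plus a Markdown body into one document."""
    lines = [
        f"# {title}",
        "",
        f"**Start:** {start}  ",
        f"**End:** {end}  ",
        f"**Participants:** {', '.join(participants)}",
        "",
        body,
        "",
        "## Action items",
    ]
    lines += [f"- [ ] {item}" for item in action_items]
    return "\n".join(lines)
```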

It should also be able to automatically detect start time, end time, participants, and action items, and it will put those in organized fields in the output; maybe the user can edit those to rectify any mistakes. Then, when they click download, it will combine the corrected or uncorrected version, as the case may be, to generate the actual document for the minutes or the agenda. \ No newline at end of file +Another idea for a Gemini app: a recipe modifier. You get a recipe; Gemini parses the recipe and structures the data. Then, using a nutritional database, it attempts to calculate the total fat per serving and the fat per ingredient.
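The fat arithmetic for the recipe idea is a sketch like this (hypothetical names; the per-ingredient fat figures are assumed to come from the nutritional lookup, and uniform scaling is the simplest possible "remix", where a real app would substitute ingredients):

```python
def fat_per_serving(ingredient_fat: dict[str, float], servings: int) -> float:
    """Total fat per serving, given grams of fat contributed by each ingredient."""
    return round(sum(ingredient_fat.values()) / servings, 1)


def scale_to_target(ingredient_fat: dict[str, float], servings: int,
                    target_per_serving: float) -> dict[str, float]:
    """Uniformly scale fat contributions down to hit a per-serving target."""
    total = sum(ingredient_fat.values())
    target_total = target_per_serving * servings
    if total <= target_total:
        return dict(ingredient_fat)  # already within bounds
    factor = target_total / total
    return {k: round(v * factor, 2) for k, v in ingredient_fat.items()}
```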

Then, this is an app for people like me who are trying to adhere to a low-fat diet. It remixes a recipe to either achieve a certain fat amount, as in under X grams of fat, or to just make a general reduction within reasonable bounds while still trying to keep the recipe the recipe. \ No newline at end of file diff --git a/transcripts/uncorrected/29.txt b/transcripts/uncorrected/29.txt index 243f36cf36c052964af7ebe83a792dae9e67d205..48df2efb7e5f7af2de5f6a9e6f79c4188a1f5e45 100644 --- a/transcripts/uncorrected/29.txt +++ b/transcripts/uncorrected/29.txt @@ -1 +1 @@ -I'd like to create an app which will do the following. It's a voice-to-voice app. The user will record a voice message. The voice recording in the app. The voice recording gets sent to Gemini with a transcript. Gemini's task is to create an abbreviated version of the Voice Message, as short as possible. Essentially cleaning it up. This stage is not shown to the user.

But what happens next is that it gets text to speech, it gets synthesized, the user can choose between a male or a female voice. Yeah, and once that, once the generated audio is created, it presents to the user, the user can download it. So it's essentially taking audio from the user, cleaning it, condensing it, synthesizing it, and then download.

Come up with an imaginative name for this use case. \ No newline at end of file +Google ID8 to Try would be one of the apps that connects with the Google Workspace services. Which I don't know, maybe they've circumvented their general cautiousness.

Like voice to email. You send an email, you record a voice memo, it transcribes it, it checks your contacts, it generates an email, it shows you a draft, is that okay, and then it sends. \ No newline at end of file diff --git a/transcripts/uncorrected/3.txt b/transcripts/uncorrected/3.txt index 35a25c66c27c2d44f0a64ca785442bcb2b03db07..ec565a8b602b1abb235b4c8a5616370d701f5be7 100644 --- a/transcripts/uncorrected/3.txt +++ b/transcripts/uncorrected/3.txt @@ -1 +1 @@ -This repository contains a folder of screenshots.

The intended use of the screenshots is that they will be integrated into the README or other documentation to demonstrate the UI of the app.

It's important therefore that the screenshots have descriptive file names.

Please rename the screenshots for this purpose and integrate them into the README in the most appropriate section. \ No newline at end of file +Please go through the markdown files in this repository to make sure that no emojis have been used.

If you find any emojis, remove them.

If emojis have been used in place of proper icons, then identify an appropriate icon library that could be used to provide the emojis.

Remember that if the icons are well known, such as the icons from major social networks, these should be integrated via a pre-designed library.

Do not attempt to create custom once-off SVGs for any logo that likely already exists in a professional library. \ No newline at end of file diff --git a/transcripts/uncorrected/30.txt index 35a55fa10abb62fbf49bc2c38d73e8cc53fca620..353b380ddee0d6134e7cfc905de9171524ef566e 100644 --- a/transcripts/uncorrected/30.txt +++ b/transcripts/uncorrected/30.txt @@ -1 +1 @@ -This is called Impact Report Finder. The objective is that the user will provide the name of a company, and the AI tool, Gemini, will attempt to find on the internet any voluntary sustainability disclosures or impact disclosures that they've written, and it will list them by year. If they include data about their GHG emissions, there will be a tick symbol, and there will be a link to the result and a direct link to the PDF.

So after the user provides the name of the company, if Gemini needs to disambiguate, it will ask the user in a text box below, "can you clarify?", and the user can hit submit again. Otherwise, rather than an interactive chat app, it just provides those search results in that specific format, with the reports chronologically by year (if there are multiple in a year, by date of release), and then, if they have GHG data, a link to the data sheet if it's separate, or just the PDF. Basically, an annotated table of links. \ No newline at end of file +I'd like to create an app that does the following. The user will paste an image or multiple images into the image upload feature. It'll run it through Gemini and it will attempt to extract the following fields: Serial Number, Model Number, Manufacturer, the OCR-readable text in a text field, and Country of Manufacture.

And then, based upon the detected product, the manufacturer, the part number and the serial number, it will provide a one-line description, a multi-line description, and a spec sheet. It will provide the year it was first released on the market, and its age in years, based on first release minus the current time, correct to one decimal place.

And a deprecation level, from almost deprecated to fully deprecated or still on market (the last as a checkbox), plus RRP. So it'll basically take an image and then extract all these fields, based on the initial OCR and then on the web search complementing that. \ No newline at end of file diff --git a/transcripts/uncorrected/31.txt index e3960e6d457375f71a0aa63d07c4c8ad4af74fc2..0ec335394a72e80887a3672f290bc5828d8227e0 100644 --- a/transcripts/uncorrected/31.txt +++ b/transcripts/uncorrected/31.txt @@ -1 +1 @@ -Okay, I'd like to create a sustainability report parser which will operate as follows. The user will provide a link to a sustainability disclosure or, better, they will upload a PDF. That's the expectation.

Upon receiving the PDF from the user, the app will load the PDF in a frame. Gemini will identify on which page the disclosure data for Scope 3, 2 and 1 emissions is reported. And the PDF will load up in the frame, the viewer, skipped to that page, with the data highlighted with a yellow overlay, a slight highlight.

And beneath it, Gemini will output the table for the top level, in other words the summary of the Scope 3, 2 and 1 emissions, with a short text description of what they were in summary, the units detected, Scope 3, 2 and 1 itemized, then a disclaimer under that, that this detection is based on automated processing and may be incorrect, and so on. \ No newline at end of file +I'd like to create an app that is a meeting documentation assistant and it can provide three outputs from a voice input. So there's a voice recorder, so the user can record a voice note, pause, stop and retake, and then send. Once the voice note is sent, the user selects whether they want to generate a meeting minutes, an agenda for an upcoming meeting, so meeting agenda, or just those two actually.

And then if they do meeting agenda, it'll also generate a short version that can fit in a calendar description, and a suggested meeting title. Upon receiving this from the user, it gets sent to Gemini, which analyses and parses the audio and then generates the minutes or agenda according to what the user selects, with an automatically generated title and a body that is formatted in Markdown but renders in rich text. The user can download the original file; for reruns, the user would just clear the recording and start again.

It should also be able to automatically detect start time, end time, participants, and action items, and it will put those in organized fields in the output; maybe the user can edit those to rectify any mistakes. Then, when they click download, it will combine the corrected or uncorrected version, as the case may be, to generate the actual document for the minutes or the agenda. \ No newline at end of file diff --git a/transcripts/uncorrected/32.txt index 4215c595a95e066a9ecda2a2ae08b9013686c002..243f36cf36c052964af7ebe83a792dae9e67d205 100644 --- a/transcripts/uncorrected/32.txt +++ b/transcripts/uncorrected/32.txt @@ -1 +1 @@ -Okay, I'd like to create an app which does the following. The purpose of the app is to visualize how different countries, ideologies, systems approach common policy challenges. An example of a policy challenge that I'm just providing for explaining how I could see this working is second-hand smoke control. Some countries have very strict regulations, some countries have very lax enforcement. And probably there is not really much distinction by system of government, but the user prompts it. It's called Policy Visualizer, and the user enters a policy challenge. So another example might be minimum alcohol purchasing laws.

Once Gemini receives this prompt, its task will be to research how different countries in the first instance approach this topic. And from that analysis, it can identify commonalities or clusters. The research process happens in the back end. And the user is shown some kind of progress indicators like researching what it's doing basically. Not a huge amount of verbosity but just a few cues so the user knows that it's not stuck or it's actually doing something.

Once Gemini concludes its first pass, it will have grouped not necessarily every country in the world, but, based on the clusters it identifies, it will have formed groups. Each group is given a label. The label might be laissez-faire or permissive. These may be either recognized labels or whatever Gemini feels best describes them. And the countries are displayed with their national flags in alphabetical order.
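The grouped result could be held in a shape like this (labels, countries and rationales are purely illustrative; the validation helper just checks the shape and the alphabetical ordering described above):

```python
# A minimal shape for the clustering result the app expects back from the model.
clusters = [
    {"label": "Strict regulation", "countries": ["Australia", "Ireland"],
     "rationale": "Comprehensive bans with active enforcement."},
    {"label": "Laissez-faire", "countries": ["Austria", "Serbia"],
     "rationale": "Limited rules or lax enforcement."},
]


def validate_clusters(data: list[dict]) -> list[str]:
    """Return cluster labels, checking required keys and alphabetical country order."""
    labels = []
    for cluster in data:
        assert {"label", "countries", "rationale"} <= cluster.keys()
        assert cluster["countries"] == sorted(cluster["countries"])
        labels.append(cluster["label"])
    return labels
```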

The next functionality is that the user can click on a cluster, and Gemini will describe what it is about these laws that made it consider them a cluster; in other words, the way in which they approach the challenge. That's a modal. Then the user can click on any country and see how that country approaches it. So I might click on the flag of Germany, and either an accordion or a modal will show how Germany approaches, in this case, gun control, and its cluster.

Country level is always a tab, and other taxonomies appear only if warranted. By taxonomy I mean that if Gemini says there's a very big difference in how, say, right-wing versus left-wing governments approach the challenge, we'll create one more tab with that. But that should happen only if there's a very compelling reason to do so, or if it has significant data to share. So if it feels like there's enough data about how US states approach an issue at the state level, it might create a tab called US States and then follow the same pattern, in which it groups them into clusters.

The objective is to, rather than searching through Google to see how different countries do different things, to start with your question and then get this visualisation. And I think the icing on the cake would be an analysis. So this is a visual presentation and then there may be analysis showing significant differences, some similarities. So there's like a report, a textual report, but the main tab, because I think it's the most interesting one, is the visualization, the policy visualizer. \ No newline at end of file +I'd like to create an app which will do the following. It's a voice-to-voice app. The user will record a voice message. The voice recording in the app. The voice recording gets sent to Gemini with a transcript. Gemini's task is to create an abbreviated version of the Voice Message, as short as possible. Essentially cleaning it up. This stage is not shown to the user.

But what happens next is that it gets text to speech, it gets synthesized, the user can choose between a male or a female voice. Yeah, and once that, once the generated audio is created, it presents to the user, the user can download it. So it's essentially taking audio from the user, cleaning it, condensing it, synthesizing it, and then download.

Come up with an imaginative name for this use case. \ No newline at end of file diff --git a/transcripts/uncorrected/33.txt b/transcripts/uncorrected/33.txt index 145fac41057e67a2489a588fef1f5d5a4b0df965..35a55fa10abb62fbf49bc2c38d73e8cc53fca620 100644 --- a/transcripts/uncorrected/33.txt +++ b/transcripts/uncorrected/33.txt @@ -1 +1 @@ -Alright, so the plan is for this repository, I want to create an audio media streaming interface for my home network. And there's a few things I want to roll into this one too.

Number 1 is media playback. So I have a volume on the NAS called AudioShare. The NAS is 10.0.0.50. So connect to the NAS, you'll find the AudioShare volume and let's mount that as the media library. It'll have a lot of tracks already populated.

Second thing is a soundboard. So I'll create a folder within that AudioShare volume called soundboard, and in the soundboard I'll just upload some stupid sound effects. I'll do one to start it off, like a laughing sound.

And then I also want to create an intercom system. The functionality for the intercom is that from this computer, sorry, from the interface, which will be audio.residence.jlm.com, I'd like to have push-to-talk and the start and stop controls.

So for the speaker networking this is where I would like you to give me your thoughts on what makes the most sense So I've used before MPD. I've installed MPD clients on... So the devices are, there is a device called Nursery Pi in SSH. Bedroom Pi, R-Pi and Smart TV. Each one is connected to a speaker. That's the network.

I tried MPD, putting an MPD client on each device. MPD has been the most reliable, but it seems kind of a pity to use this when there are protocols like SnapServer that are designed specifically for this use case. However, using Home Assistant, I found SnapServer to be very buggy; I could never really get it to work, and I want a system that's reliable.

I find with MPD, because you need to select the speaker on the client devices, those bindings frequently broke. So I'd like to have something where the speakers are really never going to change, in the sense that I have a sound card for the Raspberry Pi; that's the speaker, and for as long as I use this system, that's going to be the configuration. So I want to set up something that, once it's in place, is pretty much just going to work.

So I leave that call up to you, and please create a folder in the repository, just before you begin, providing your recommendations and what you suggest as the best implementation for the multi-speaker network: whether it is broadcasting to a bunch of MPD clients from the Web UI, or creating a single Snap server, or something else that manages the networking. I don't envision much of a need to select individual speakers, by which I mean that for the most part, on the occasions I'm using this, I'll just play media to the pool, but of course it would be nice to be able to select them! \ No newline at end of file +This is called Impact Report Finder. The objective is that the user will provide the name of a company, and the AI tool, Gemini, will attempt to find on the internet any voluntary sustainability disclosures or impact disclosures that they've written, and it will list them by year. If they include data about their GHG emissions, there will be a tick symbol, and there will be a link to the result and a direct link to the PDF.

So after the user provides the name of the company, if Gemini needs to disambiguate, it will ask the user in a text box below, "can you clarify?", and the user can hit submit again. Otherwise, rather than an interactive chat app, it just provides those search results in that specific format, with the reports chronologically by year (if there are multiple in a year, by date of release), and then, if they have GHG data, a link to the data sheet if it's separate, or just the PDF. Basically, an annotated table of links. \ No newline at end of file diff --git a/transcripts/uncorrected/34.txt index b314f3f74074ca02c2a47132cea688da6abb56d9..e3960e6d457375f71a0aa63d07c4c8ad4af74fc2 100644 --- a/transcripts/uncorrected/34.txt +++ b/transcripts/uncorrected/34.txt @@ -1 +1 @@ -Building a reporting-disclosure tool: I have a few thoughts. One, I could create a model. A model is actually quite feasible, but it's a data annotation project: it's saying, here's a PDF, here are the actual variables; in other words, here's the Scope 3, Scope 2, Scope 1, here are the units; train it like that.

Second thought is if I did want to put together a dataset of sustainability disclosure reports, I think you could argue a public fair use clause for the PDFs being there.

And then the one I did with Gemini the other day, which was basically a parsing AI tool, seemed to work and could probably be used in production. It might even work as a way of trying to get in touch with Google: they definitely have an AI-for-good division who may, let's say, provide Gemini credits for the actual deployment of it on Cloud Run. Because from my first run of it, it was very, very promising for the task of parsing the reports.

And the great feature would be: when it extracts the data, human-in-the-loop review is done by seeing what it extracted and matching it to a known company in the database. Let's take Google itself as an example: it detects its stock ticker, detects its stock exchange. And then you click "add to database", meaning that you're adding the validated data. It could even pull out the metadata from the document, pull out the source, and that would be a great way of building up a human-validated database. In other words, you take the reports and you say either "everything looks good to me" or "this is wrong"; either way, you add it. Then of course you've still got the missing financials and the rest of the world.
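The validated record being added to the database might look like this (my guess at a minimal schema; every field name here is an assumption, not from the transcript):

```python
from dataclasses import dataclass


@dataclass
class ValidatedDisclosure:
    """One human-validated extraction, ready to 'add to database'."""
    company: str
    ticker: str
    exchange: str
    year: int
    scope1_tco2e: float      # tonnes CO2-equivalent
    scope2_tco2e: float
    scope3_tco2e: float
    source_url: str
    verdict: str             # "looks good" or "corrected"

    def total_tco2e(self) -> float:
        """Sum of Scope 1, 2 and 3 emissions."""
        return self.scope1_tco2e + self.scope2_tco2e + self.scope3_tco2e
```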

But that would probably be needed, because there are thousands of sustainability disclosures, especially when you consider, beyond the US, globally. So certainly it's a task for a model, but it's also human-in-the-loop. The ultimate question is whether stock Gemini performs sufficiently well, say 99% of the time, at the task of extracting this data from the sustainability reports. A model might actually not even be necessary, because out of the box it's almost perfect. That is, I suspect, what the case would be. \ No newline at end of file +Okay, I'd like to create a sustainability report parser which will operate as follows. The user will provide a link to a sustainability disclosure or, better, they will upload a PDF. That's the expectation.

Upon receiving the PDF from the user, the app will load the PDF in a frame. Gemini will identify on which page the disclosure data for Scope 3, 2 and 1 emissions is reported. And the PDF will load up in the frame, the viewer, skipped to that page, with the data highlighted with a yellow overlay, a slight highlight.

And beneath it, Gemini will output the table for the top level, in other words the summary of the Scope 3, 2 and 1 emissions, with a short text description of what they were in summary, the units detected, Scope 3, 2 and 1 itemized, then a disclaimer under that, that this detection is based on automated processing and may be incorrect, and so on. \ No newline at end of file +Okay, I'd like to create an app which does the following. The purpose of the app is to visualize how different countries, ideologies, systems approach common policy challenges. An example of a policy challenge that I'm just providing for explaining how I could see this working is second-hand smoke control. Some countries have very strict regulations, some countries have very lax enforcement. And probably there is not really much distinction by system of government, but the user prompts it. It's called Policy Visualizer, and the user enters a policy challenge. So another example might be minimum alcohol purchasing laws.

I think iterative workflow is the best. It suggests to the user what about this agent the user says yes or no, rather than the batch system. Although it could do both, but let's make the defaults the kind of individual review system. \ No newline at end of file +Okay, I'd like to create an app which does the following. The purpose of the app is to visualize how different countries, ideologies, systems approach common policy challenges. An example of a policy challenge that I'm just providing for explaining how I could see this working is second-hand smoke control. Some countries have very strict regulations, some countries have very lax enforcement. And probably there is not really much distinction by system of government but the user prompts it called policy visualizer and the user enters a policy challenge. So another example might be minimum alcohol purchasing laws.

Once Gemini receives this prompt, its task will be to research how different countries in the first instance approach this topic. And from that analysis, it can identify commonalities or clusters. The research process happens in the back end. And the user is shown some kind of progress indicators like researching what it's doing basically. Not a huge amount of verbosity but just a few cues so the user knows that it's not stuck or it's actually doing something.

Once Gemini concludes its first pass, it will have grouped not necessarily every country in the world, but, based on the clusters it identifies, it will have formed groups. Each group is given a label. The label might be laissez-faire or permissive. These may be either recognized labels or whatever Gemini feels best describes them. And the countries are displayed with their national flags in alphabetical order.

The next functionality is that the user can click on a cluster, and Gemini will describe what it is about these laws that made it consider them a cluster; in other words, the way in which they approach the challenge. That's a modal. Then the user can click on any country and see how that country approaches it. So I might click on the flag of Germany, and either an accordion or a modal will show how Germany approaches, in this case, gun control, and its cluster.

Country level is always a tab, and other taxonomies appear only if warranted. By taxonomy I mean that if Gemini says there's a very big difference in how, say, right-wing versus left-wing governments approach the challenge, we'll create one more tab with that. But that should happen only if there's a very compelling reason to do so, or if it has significant data to share. So if it feels like there's enough data about how US states approach an issue at the state level, it might create a tab called US States and then follow the same pattern, in which it groups them into clusters.

The objective is to, rather than searching through Google to see how different countries do different things, to start with your question and then get this visualisation. And I think the icing on the cake would be an analysis. So this is a visual presentation and then there may be analysis showing significant differences, some similarities. So there's like a report, a textual report, but the main tab, because I think it's the most interesting one, is the visualization, the policy visualizer. \ No newline at end of file diff --git a/transcripts/uncorrected/36.txt b/transcripts/uncorrected/36.txt index 2acd54bd254b2cdcc6a5457142eb4e0e917685f0..145fac41057e67a2489a588fef1f5d5a4b0df965 100644 --- a/transcripts/uncorrected/36.txt +++ b/transcripts/uncorrected/36.txt @@ -1 +1 @@ -Okay, I'd like to create an app with Gemini. It's going to do the following. It will be called MyEQCreator. Here's how it works.

There will be a microphone recording interface, or the user can upload a file. Either way, the user should aim to provide a three-minute audio sample. The audio sample goes to Gemini, and Gemini will parse the submitted audio to determine speaker characteristics, namely their vocal range and frequency distribution. Its goal in doing this is to provide an EQ preset for the user.
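The step from frequency distribution to EQ preset could be sketched like this (a crude stand-in, entirely my own, for whatever preset Gemini would actually propose: bands louder than a reference get cut, quieter bands get boosted, clamped to +/-6 dB):

```python
import math


def eq_gains(band_energy: dict[int, float], reference: float) -> dict[int, float]:
    """Map per-band energy (centre frequency Hz -> relative energy) to gain in dB."""
    gains = {}
    for freq, energy in band_energy.items():
        db = 10 * math.log10(reference / energy)  # deviation from reference level
        gains[freq] = round(max(-6.0, min(6.0, db)), 1)
    return gains
```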

I use Audacity for lightweight audio editing, and if I had a "Daniel voice" preset that had these EQ settings built in, or that I could even use via a CLI, I would use it; but that would maybe require a second pass, where Gemini would generate it according to that file spec.

What would be very useful and impressive in addition would be if, after the analysis, a five-second audio sample were visualized with the frequencies highlighted, to illustrate to the user where the frequency distribution falls for their particular voice. \ No newline at end of file +Alright, so the plan is for this repository, I want to create an audio media streaming interface for my home network. And there's a few things I want to roll into this one too.

Number 1 is media playback. So I have a volume on the NAS called AudioShare. The NAS is 10.0.0.50. So connect to the NAS, you'll find the AudioShare volume and let's mount that as the media library. It'll have a lot of tracks already populated.

Second thing is a soundboard. So I'll create a folder within that AudioShare volume called soundboard, and in the soundboard I'll just upload some stupid sound effects. I'll do one to start it off, like a laughing sound.

And then I also want to create an intercom system. The functionality for the intercom is that from this computer, sorry, from the interface, which will be audio.residence.jlm.com, I'd like to have push-to-talk and the start and stop controls.

So for the speaker networking this is where I would like you to give me your thoughts on what makes the most sense So I've used before MPD. I've installed MPD clients on... So the devices are, there is a device called Nursery Pi in SSH. Bedroom Pi, R-Pi and Smart TV. Each one is connected to a speaker. That's the network.

I tried MPD, putting an MPD client on each device. MPD has been the most reliable, but it seems kind of a pity to use this when there are protocols like SnapServer that are designed specifically for this use case. However, using Home Assistant, I found SnapServer to be very buggy; I could never really get it to work, and I want a system that's reliable.

I find with MPD that, because you need to select the output speaker on the client devices, those bindings frequently broke. So I'd like something more fixed: the speakers are really never going to change, in the sense that I have a sound card for the Raspberry Pi, that's the speaker, and for as long as I use this system that's going to be the configuration. So I want to set up something that, once it's in place, is pretty much just going to work.
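
For reference, the "broadcast to every MPD client" option can be sketched with the python-mpd2 package (a real MPD client library; `pip install python-mpd2`). The hostnames are assumptions based on the devices described, and the playback logic is kept separate from connection handling so the bindings live in one place:

```python
# Hostnames are assumptions based on the devices described above.
PLAYERS = ["nursery-pi.local", "bedroom-pi.local", "rpi.local", "smart-tv.local"]

def broadcast(clients, uri):
    """Queue the same track URI on every MPD client and start playback."""
    for c in clients:
        c.clear()   # drop the old queue so stale bindings can't linger
        c.add(uri)
        c.play()

def connect_all(hosts, port=6600):
    """Open an MPD connection to each device (requires python-mpd2)."""
    from mpd import MPDClient  # imported lazily; only needed for real playback
    clients = []
    for host in hosts:
        c = MPDClient()
        c.timeout = 5
        c.connect(host, port)
        clients.append(c)
    return clients

# Example (would contact the real devices):
# broadcast(connect_all(PLAYERS), "soundboard/laugh.mp3")
```

This is a sketch of one candidate design, not a recommendation over Snapcast; the trade-off noted in the transcript (per-client output bindings) is exactly what the `clear()`-then-`add()` pattern tries to sidestep.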

So I leave that call up to you. Before you begin, please create a folder in the repository providing your recommendations and what you suggest as the best implementation for the multi-speaker network: whether it's broadcasting to a bunch of MPD clients from the web UI, creating a single Snapcast server, or something else that manages the networking. I don't envision much need to select individual speakers, by which I mean that on most occasions I'll just play media to the whole pool, but of course it would be nice to be able to select one! \ No newline at end of file diff --git a/transcripts/uncorrected/37.txt b/transcripts/uncorrected/37.txt index b2de03d17424a2fed8639d2dfa09c98e84d864d7..b314f3f74074ca02c2a47132cea688da6abb56d9 100644 --- a/transcripts/uncorrected/37.txt +++ b/transcripts/uncorrected/37.txt @@ -1 +1 @@ -It would be great to run the demo. I'm creating a .env. And it would be useful, so people can see straight away how it works, to have a page that just says demo.

And it'll have... so we'll need to run the audio data through the pipeline just as if we were using it, capture the results into the repo here, and display them on the front end. I've just provided the Gemini API key, so let's try to do that. I also did some deleting; I think we just need one README, and the instructions for the app can be attached to it. \ No newline at end of file +Building a reporting disclosure tool. I have a few thoughts. One, I can create a model. A model is actually quite feasible, but it's a data annotation project: it's saying, here's a PDF, here are the actual variables. In other words, here are Scope 3, Scope 2, Scope 1, here are the units; train it like that.

My second thought is that if I did want to put together a dataset of sustainability disclosure reports, I think you could argue fair use for making the PDFs publicly available.

And then there's the one I did with Gemini the other day, which was basically an AI parsing tool. It seemed to work and could probably be used in production. It might even be a way of trying to get in touch with Google: they definitely have an AI for Good division who might, say, provide Gemini credits for the actual deployment of it on Cloud Run. Because from my first run of it, it was very, very promising for the task of parsing the reports.

And the great feature would be that when it extracts the data, the human-in-the-loop review is done by seeing what it extracted and matching it to a company in the database, or to a known company. Let's take Google itself as an example: it detects its stock ticker, detects its stock exchange. And then you click "add to database", meaning that you're adding the validated data. It could even pull out the metadata from the document, pull out the source, and that would be a great way of building up a human-validated database. In other words, you take the reports, you say either "everything looks good to me" or "this is wrong"; either way, you add it. Then of course you've still got the missing financials and the rest of the world.
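
The "add to database" step described above can be sketched as a tiny merge of the model's extraction with any reviewer corrections. The field names here are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass

# Illustrative record shape for one validated disclosure; not a fixed spec.
@dataclass
class ValidatedDisclosure:
    company: str
    ticker: str
    exchange: str
    year: int
    scope1: float
    scope2: float
    scope3: float
    units: str = "tCO2e"
    source_url: str = ""
    reviewed: bool = False

def approve(extracted, corrections=None):
    """Merge reviewer corrections over the model's extraction and mark it reviewed."""
    merged = {**extracted, **(corrections or {}), "reviewed": True}
    return ValidatedDisclosure(**merged)
```

Whether the reviewer says "everything looks good" (no corrections) or fixes a field, the same call produces a human-validated row, which is the property the transcript is after.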

But that would probably be needed, because there are thousands of sustainability disclosures, especially when you look beyond the US to the rest of the world. So it's certainly a task for a model, but also a human-in-the-loop one. The ultimate question is whether stock Gemini performs sufficiently well, say 99%, at the task of extracting this data from the sustainability reports. A custom model might actually not even be necessary, because out of the box it's almost perfect. That, I suspect, is what the case would be. \ No newline at end of file diff --git a/transcripts/uncorrected/38.txt b/transcripts/uncorrected/38.txt index f2066bdff489a0e7af0c17fa8ccf736412194aad..8d2caf72445f7704d8455a3c2b790fdf76026b9e 100644 --- a/transcripts/uncorrected/38.txt +++ b/transcripts/uncorrected/38.txt @@ -1 +1 @@ -Hello, yeah. Okay, I'm trying to find a phone case for the Nord 3 5G from OnePlus. I want something which has MagSafe, a magnet built into the case itself, something good quality, and just a good protective case for the phone.

Do you know of any recommendations? Any ones on AliExpress or if Otterbox makes a case for this phone or anyone else? It's a slightly older OnePlus, so it's tricky to find a compatible case for it.

So if you happen to know of any products on AliExpress, along with product numbers, please list them. \ No newline at end of file +The purpose of the repository, basically, is to model or suggest the idea of using AI agents to scope out gap-filling and extension of multi-agent networks, based on their inferred understanding of the purpose of the multi-agent network.

I think an iterative workflow is best: it suggests to the user, "what about this agent?", and the user says yes or no, rather than a batch system. It could do both, but let's make the individual review system the default. \ No newline at end of file diff --git a/transcripts/uncorrected/39.txt b/transcripts/uncorrected/39.txt index 73f338799a7ffd0c5b0b5fd814b5e3f3a8c78a2c..2acd54bd254b2cdcc6a5457142eb4e0e917685f0 100644 --- a/transcripts/uncorrected/39.txt +++ b/transcripts/uncorrected/39.txt @@ -1 +1 @@ -I'd like to create a content recommendation app. This will be using... I'd like to get recommendations for movies to watch, things on Netflix and YouTube that are up to date. I'm based in Israel. I like watching things that are based on a true story or true stories. I prefer to watch things that are recent, so it has to be up to date. The pitfall with these apps is that they'll recommend stuff that you've already seen or don't want to watch, so it would have to have some memory. It makes recommendations, preferably one at a time, and I can say add to watch list, add to recommendation list, not interested, or I've seen it, and the app would need to remember these responses so that it doesn't suggest the same thing over and over again.

I know there's the TMDB API, which is great for getting movies, and I have an API key I can provide. And I'd like to be able to say recommend across all categories, or just recommend movies. The Netflix thing: it's very hard to get geo-sensitive recommendations for Netflix, but that would probably be the ideal, meaning that I'm based in Israel and if stuff isn't available here, that should be considered in the recommendations. \ No newline at end of file +Okay, I'd like to create an app with Gemini. It's going to do the following. It will be called MyEQCreator. Here's how it works.

There will be a microphone recording interface, or the user can upload a file. Either way, the user should aim to provide a three-minute audio sample. The sample goes to Gemini, and Gemini will parse the submitted audio to determine speaker characteristics, namely vocal range and frequency distribution. Its goal in doing this is to provide an EQ preset for the user.

I use Audacity for lightweight audio editing, and if I had a "Daniel voice" preset with these EQ settings built in, or that I could even apply via a CLI, I would use it. That would maybe require a second pass, in which Gemini generates the preset according to that file spec.
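
As a sketch of that second pass: Audacity stores EQ presets as `<curve>` entries with `<point>` children (`f` in Hz, `d` in dB) in its EQCurves.xml file. Assuming Gemini returns a list of (frequency, gain) bands, writing the preset is plain string templating. The band values below are placeholders, not real voice measurements:

```python
def eq_curve_xml(name, bands):
    """Render (frequency_hz, gain_db) pairs as an Audacity-style <curve> entry."""
    points = "\n".join(f'  <point f="{f:.6f}" d="{d:.6f}"/>' for f, d in bands)
    return f'<curve name="{name}">\n{points}\n</curve>'

# Placeholder bands, not measurements of any actual voice:
daniel_preset = eq_curve_xml(
    "Daniel Voice", [(120.0, 2.0), (3500.0, 1.5), (9000.0, -1.0)]
)
```

The generated entry would be merged into the user's EQCurves.xml; the exact attribute precision Audacity writes may differ, so treat this as an approximation of the file spec rather than a verified implementation of it.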

What would also be very useful and impressive: after the analysis, a five-second audio sample could be visualized with the frequencies highlighted, to illustrate to the user where the frequency distribution falls for their particular voice. \ No newline at end of file diff --git a/transcripts/uncorrected/4.txt b/transcripts/uncorrected/4.txt index c3e6aec46313e6c703697e4fcc48f050db3015c1..97dc205e9d7b77068f580705263f66d3a0ce82b0 100644 --- a/transcripts/uncorrected/4.txt +++ b/transcripts/uncorrected/4.txt @@ -1 +1 @@ -What's the most professional way to install a package on Linux? If I create an executable and copy that into the directory on path, such that I can call it, is that considered a worse way to install applications than through a Debian package? \ No newline at end of file +Go through the website and find any places where icons have been implemented as custom designs but could have been implemented more efficiently using an existing icon library.

Pay particular attention to icons for common uses such as social media icons which exist in many libraries, as well as emojis which may have been used in place of icons.

This approach (using emojis in place of icons) should not be followed.

If the user uses an existing icon library that you can identify, then replace the custom coded icons with the most appropriate matches.

If the user hasn't yet implemented an icon library, provide some suggestions to the user, focusing on those libraries which will best match the aesthetic which they are following in their designs. \ No newline at end of file diff --git a/transcripts/uncorrected/40.txt b/transcripts/uncorrected/40.txt index 24994713fc006cf39dff6433f341d9e5b812c141..b2de03d17424a2fed8639d2dfa09c98e84d864d7 100644 --- a/transcripts/uncorrected/40.txt +++ b/transcripts/uncorrected/40.txt @@ -1 +1 @@ -So what I would like to do in this is create an app really for the purpose of demonstrating the capabilities of audio input as a modality because I think it's overlooked and it brings a lot of really interesting use cases.

What I'd like to do for this one is, as one facet of it, the user uploads a recording. It should be a recording of just one speaker. Upon receiving the recording, it'll be ingested by Gemini, and Gemini will analyse it for the following. It will try to categorise the speaker's accent. It will estimate the words per minute at which they speak. And then it will provide a phonetic analysis, basically a linguistic analysis of their speech: how they pronounce certain sounds, and so on.

A voice clip goes in, Gemini processes it, and then it produces a detailed analysis in a nicely displayed manner. \ No newline at end of file +It would be great to run the demo. I'm creating a .env. And it would be useful, so people can see straight away how it works, to have a page that just says demo.

And it'll have... so we'll need to run the audio data through the pipeline just as if we were using it, capture the results into the repo here, and display them on the front end. I've just provided the Gemini API key, so let's try to do that. I also did some deleting; I think we just need one README, and the instructions for the app can be attached to it. \ No newline at end of file diff --git a/transcripts/uncorrected/41.txt b/transcripts/uncorrected/41.txt index 5eac1414e49e1b8618ce1ba2193d7d10b91f431a..f2066bdff489a0e7af0c17fa8ccf736412194aad 100644 --- a/transcripts/uncorrected/41.txt +++ b/transcripts/uncorrected/41.txt @@ -1 +1 @@ -I'd like to consider a refactor, and then just give me your thoughts about this. Currently it's a file-based backend. What I was wondering is: would it make more sense to have a lightweight database backend, SQLite let's say? And the important part of the utility, the Hugging Face dataset push that I'm using for the classification model, would actually become a job whereby, locally, it creates the dataset from the local backend.

In other words, rather than having this sit in place as files, the dataset is going to be constructed periodically, basically when I say, okay, I've uploaded another batch, let's push. Would that be easier and more logical to integrate with the front end? \ No newline at end of file +Hello, yeah. Okay, I'm trying to find a phone case for the Nord 3 5G from OnePlus. I want something which has MagSafe, a magnet built into the case itself, something good quality, and just a good protective case for the phone.

Do you know of any recommendations? Any ones on AliExpress or if Otterbox makes a case for this phone or anyone else? It's a slightly older OnePlus, so it's tricky to find a compatible case for it.

So if you happen to know of any products on AliExpress, along with product numbers, please list them. \ No newline at end of file diff --git a/transcripts/uncorrected/42.txt b/transcripts/uncorrected/42.txt index 8eb532b0a713565b3b2fae20960656ec0d9e6e2f..73f338799a7ffd0c5b0b5fd814b5e3f3a8c78a2c 100644 --- a/transcripts/uncorrected/42.txt +++ b/transcripts/uncorrected/42.txt @@ -1 +1 @@ -Okay, what I'd like to do is create an application with Gemini. The user will upload their resume, and upon receiving the resume, the purpose of this application is to ideate jobs: positions that the user might be suitable for. It could be what they've done previously or an extension of that, but it would also try to suggest alternative directions, as in slight pivots or really big pivots.

It'll frame its suggestions with a job title, as in: if the user uploads their resume, it'll say, oh, you could be an AI product manager, with a salary range for this position. The user should maybe also provide where they're based, though that should be obvious from the CV, so it can contextualize by their area: demand, who hires for it, and an analysis of why this could be a cool job for them. Knowledge gaps / upskilling: how they might want to upskill to qualify for this job. Keywords: terms with which they might find opportunities. And certifications that they might want to pursue.

Then a kind of Tinder interface: thumbs up, thumbs down, with those responses recorded in memory so that the user can go back through the suggestions they liked. So it's really a career ideation tool, a career pivot ideation tool, for the user to explore alternative directions if they feel they might not be thinking widely enough about what they could be using their skills for. \ No newline at end of file +I'd like to create a content recommendation app. This will be using... I'd like to get recommendations for movies to watch, things on Netflix and YouTube that are up to date. I'm based in Israel. I like watching things that are based on a true story or true stories. I prefer to watch things that are recent, so it has to be up to date. The pitfall with these apps is that they'll recommend stuff that you've already seen or don't want to watch, so it would have to have some memory. It makes recommendations, preferably one at a time, and I can say add to watch list, add to recommendation list, not interested, or I've seen it, and the app would need to remember these responses so that it doesn't suggest the same thing over and over again.

I know there's the TMDB API, which is great for getting movies, and I have an API key I can provide. And I'd like to be able to say recommend across all categories, or just recommend movies. The Netflix thing: it's very hard to get geo-sensitive recommendations for Netflix, but that would probably be the ideal, meaning that I'm based in Israel and if stuff isn't available here, that should be considered in the recommendations. \ No newline at end of file diff --git a/transcripts/uncorrected/43.txt b/transcripts/uncorrected/43.txt index 492695d3c04244eba8ee90b40f4d0ed8cbb6793b..24994713fc006cf39dff6433f341d9e5b812c141 100644 --- a/transcripts/uncorrected/43.txt +++ b/transcripts/uncorrected/43.txt @@ -1 +1 @@ -Here's an idea for a product I had. Tell me if you think it's ridiculous and if something like this has been attempted. So, speech-to-text transcription is amazing and I've become very dependent on it for voice typing. Unfortunately, on Linux specifically, it's really tricky to find something that works at the operating system level. There are tools for Windows and Mac, and what I really need is something that will do it in any program. Not a browser extension, not an IDE extension, because then you're forever looking for does this tool have voice support. And you end up having, like what I have now, three or four Whisper subscriptions.

And once you free yourself from the keyboard, literally, you begin to want to use it on all your computers. And some of them can't handle it: my desktop can run Whisper, my laptop really can't. And you don't want to be spending a bunch of time provisioning separate environments.

So my idea is for a mini PC, think something like the Raspberry Pi or Orange Pi, but presented not so much as an enthusiast product as a little edge device: a box, for all intents and purposes, which runs a very efficient speech model like Whisper on device and does local, on-hardware inference. Everything is optimized for this one workload. It has a USB out, and on the USB out it functions as a HID device, sending out the transcribed text. Inference on the device, and straight out over USB.

What this means is you can plug your voice keyboard, which I think is the obvious name, into anything. You can have it bound to your desktop most of the time; if you go away traveling for a while, you pack your box. So it's really analogous to a keyboard.

Now, what I was asking myself is whether this is a stupid idea: yes, you could do this stuff on the device itself, or you could use the cloud, and maybe it's too niche. But it could be quite compelling for people who are really into voice typing and want a way to take it everywhere. And if it had Bluetooth support, your little box, your voice typing centerpiece, could also work with your tablets and your phone, and you could sort of extend around it. \ No newline at end of file +So what I would like to do in this is create an app really for the purpose of demonstrating the capabilities of audio input as a modality, because I think it's overlooked and it brings a lot of really interesting use cases.

What I'd like to do for this one is, as one facet of it, the user uploads a recording. It should be a recording of just one speaker. Upon receiving the recording, it'll be ingested by Gemini, and Gemini will analyse it for the following. It will try to categorise the speaker's accent. It will estimate the words per minute at which they speak. And then it will provide a phonetic analysis, basically a linguistic analysis of their speech: how they pronounce certain sounds, and so on.
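
One of these metrics is easy to pin down concretely: assuming the analysis pass yields a transcript and the clip duration, the words-per-minute estimate is straightforward arithmetic:

```python
def words_per_minute(transcript, duration_seconds):
    """Estimate speaking rate from a transcript and the clip length in seconds."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    word_count = len(transcript.split())  # crude whitespace tokenisation
    return round(word_count / (duration_seconds / 60.0), 1)
```

The whitespace split is a simplification; a production version might normalise fillers and punctuation first, but the shape of the calculation is the same.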

A voice clip goes in, Gemini processes it, and then it produces a detailed analysis in a nicely displayed manner. \ No newline at end of file diff --git a/transcripts/uncorrected/44.txt b/transcripts/uncorrected/44.txt index acadef7c73d2b38c88ec7b03751c008a67eca4fc..5eac1414e49e1b8618ce1ba2193d7d10b91f431a 100644 --- a/transcripts/uncorrected/44.txt +++ b/transcripts/uncorrected/44.txt @@ -1 +1 @@ -Another idea for a Gemini app: a recipe modifier. You get a recipe, Gemini parses the recipe and structures the data. Then, using a nutritional database, it attempts to calculate the total fat per serving and the fat per ingredient.

Then, this is an app for people like me who are trying to adhere to a low-fat diet: it remixes a recipe to either achieve a certain fat amount, as in under X grams of fat, or to just make a general reduction within reasonable bounds, while still trying to keep the recipe the recipe. \ No newline at end of file +I'd like to consider a refactor, and then just give me your thoughts about this. Currently it's a file-based backend. What I was wondering is: would it make more sense to have a lightweight database backend, SQLite let's say? And the important part of the utility, the Hugging Face dataset push that I'm using for the classification model, would actually become a job whereby, locally, it creates the dataset from the local backend.

In other words, rather than having this sit in place as files, the dataset is going to be constructed periodically, basically when I say, okay, I've uploaded another batch, let's push. Would that be easier and more logical to integrate with the front end? \ No newline at end of file diff --git a/transcripts/uncorrected/45.txt b/transcripts/uncorrected/45.txt index 48df2efb7e5f7af2de5f6a9e6f79c4188a1f5e45..8eb532b0a713565b3b2fae20960656ec0d9e6e2f 100644 --- a/transcripts/uncorrected/45.txt +++ b/transcripts/uncorrected/45.txt @@ -1 +1 @@ -Google ID8 to Try would be one of the apps that connects with the Google Workspace services. Which I don't know, maybe they've circumvented their general cautiousness.

Like voice-to-email: to send an email, you record a voice memo; it transcribes it, checks your contacts, generates an email, shows you a draft, asks is that okay, and then it sends. \ No newline at end of file +Okay, what I'd like to do is create an application with Gemini. The user will upload their resume, and upon receiving the resume, the purpose of this application is to ideate jobs: positions that the user might be suitable for. It could be what they've done previously or an extension of that, but it would also try to suggest alternative directions, as in slight pivots or really big pivots.

It'll frame its suggestions with a job title, as in: if the user uploads their resume, it'll say, oh, you could be an AI product manager, with a salary range for this position. The user should maybe also provide where they're based, though that should be obvious from the CV, so it can contextualize by their area: demand, who hires for it, and an analysis of why this could be a cool job for them. Knowledge gaps / upskilling: how they might want to upskill to qualify for this job. Keywords: terms with which they might find opportunities. And certifications that they might want to pursue.
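
A minimal sketch of one suggestion card plus the swipe memory, with field names that mirror the description above (illustrative, not a fixed spec):

```python
from dataclasses import dataclass, field

@dataclass
class CareerSuggestion:
    job_title: str
    salary_range: str
    regional_demand: str = ""
    typical_employers: list = field(default_factory=list)
    why_it_fits: str = ""
    knowledge_gaps: list = field(default_factory=list)
    search_keywords: list = field(default_factory=list)
    certifications: list = field(default_factory=list)
    verdict: str = "pending"  # becomes "liked" or "disliked" after the swipe

def record_swipe(suggestion, liked):
    """Persist the thumbs-up / thumbs-down so liked cards can be revisited."""
    suggestion.verdict = "liked" if liked else "disliked"
    return suggestion
```

Keeping the verdict on the card itself is what lets the user page back through everything they liked, which is the "recorded in memory" behaviour the next paragraph asks for.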

Then a kind of Tinder interface: thumbs up, thumbs down, with those responses recorded in memory so that the user can go back through the suggestions they liked. So it's really a career ideation tool, a career pivot ideation tool, for the user to explore alternative directions if they feel they might not be thinking widely enough about what they could be using their skills for. \ No newline at end of file diff --git a/transcripts/uncorrected/46.txt b/transcripts/uncorrected/46.txt index 353b380ddee0d6134e7cfc905de9171524ef566e..492695d3c04244eba8ee90b40f4d0ed8cbb6793b 100644 --- a/transcripts/uncorrected/46.txt +++ b/transcripts/uncorrected/46.txt @@ -1 +1 @@ -I'd like to create an app that does the following. The user will paste an image or multiple images into the image upload feature. It'll run it through Gemini and it will attempt to extract the following fields: Serial Number, Model Number, Manufacturer, in a text field it will OCR readable text, Country of Manufacture.

And then, based upon the detected product, the manufacturer, the part number, and the serial number, it will provide a one-line description, a multi-line description, and a spec sheet. It will provide the year it was first released on the market, and its age in years based on first release minus the current time, correct to one decimal place.

And a deprecation level, from almost deprecated to fully deprecated to still on market, plus RRP, the last as a checkbox. So it'll basically take an image and then extract all these fields, based on the initial OCR and then on a web search complementing that. \ No newline at end of file +Here's an idea for a product I had. Tell me if you think it's ridiculous and if something like this has been attempted. So, speech-to-text transcription is amazing and I've become very dependent on it for voice typing. Unfortunately, on Linux specifically, it's really tricky to find something that works at the operating system level. There are tools for Windows and Mac, and what I really need is something that will do it in any program. Not a browser extension, not an IDE extension, because then you're forever looking for does this tool have voice support. And you end up having, like what I have now, three or four Whisper subscriptions.

And once you free yourself from the keyboard, literally, you begin to want to use it on all your computers. And some of them can't handle it: my desktop can run Whisper, my laptop really can't. And you don't want to be spending a bunch of time provisioning separate environments.

So my idea is for a mini PC, think something like the Raspberry Pi or Orange Pi, but presented not so much as an enthusiast product as a little edge device: a box, for all intents and purposes, which runs a very efficient speech model like Whisper on device and does local, on-hardware inference. Everything is optimized for this one workload. It has a USB out, and on the USB out it functions as a HID device, sending out the transcribed text. Inference on the device, and straight out over USB.

What this means is you can plug your voice keyboard, which I think is the obvious name, into anything. You can have it bound to your desktop most of the time; if you go away traveling for a while, you pack your box. So it's really analogous to a keyboard.

Now, what I was asking myself is whether this is a stupid idea: yes, you could do this stuff on the device itself, or you could use the cloud, and maybe it's too niche. But it could be quite compelling for people who are really into voice typing and want a way to take it everywhere. And if it had Bluetooth support, your little box, your voice typing centerpiece, could also work with your tablets and your phone, and you could sort of extend around it. \ No newline at end of file diff --git a/transcripts/uncorrected/47.txt b/transcripts/uncorrected/47.txt index da218ad130c3c5a5f3ca672509c6c517f4fa87f2..acadef7c73d2b38c88ec7b03751c008a67eca4fc 100644 --- a/transcripts/uncorrected/47.txt +++ b/transcripts/uncorrected/47.txt @@ -1 +1 @@ -I'd like to create an app that does the following. The user will paste a screenshot from their calendar, or there's a text field for calendar entries for a certain time period. Below that there is a voice recorder. The voice recorder will allow the user to record a voice message: record, pause, stop, and/or retake.

The user is instructed to narrate their timesheet for the week, and the user can also select a week-commencing date, just to validate the first date they're referring to in the timesheet. When those three fields are provided by the user, they get sent to Gemini, and Gemini will then generate a timesheet based upon the user's description, with activities per day.

The meeting information that was received will be added, so it might diarize specific meetings that were referenced, combining the two sets of data. And finally, if the user includes a time-spent estimate of how many hours were spent per day on a certain project or task, it will calculate the estimated total hours spent, and then a summary section.
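
The hours arithmetic at the end is simple to sketch. Assuming the parsing step returns per-day entries as dictionaries (the field names here are assumptions), the daily and weekly totals fall out directly:

```python
def hours_by_day(entries):
    """Sum estimated hours per day from entries like {"day": ..., "task": ..., "hours": ...}."""
    totals = {}
    for e in entries:
        totals[e["day"]] = totals.get(e["day"], 0.0) + float(e.get("hours", 0.0))
    return totals

def total_hours(entries):
    """Estimated total hours for the week, for the summary section."""
    return sum(hours_by_day(entries).values())
```

Entries without an hours estimate simply contribute zero, so a partially narrated week still produces a sensible summary.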

This will be provided as a document created in Markdown; it's rendered in rich text on the screen, and the user can click download. If they do, it'll download the timesheet as a Markdown file with an automatically generated file name, "timesheet for week commencing...", in machine-readable casing. \ No newline at end of file +Another idea for a Gemini app: a recipe modifier. You get a recipe, Gemini parses the recipe and structures the data. Then, using a nutritional database, it attempts to calculate the total fat per serving and the fat per ingredient.

Then, this is an app for people like me who are trying to adhere to a low-fat diet. It remixes a recipe to either achieve a certain fat amount, as in under X grams of fat, or to just make a general reduction within reasonable bounds while still trying to keep the recipe the recipe. \ No newline at end of file diff --git a/transcripts/uncorrected/48.txt b/transcripts/uncorrected/48.txt index 0ec335394a72e80887a3672f290bc5828d8227e0..48df2efb7e5f7af2de5f6a9e6f79c4188a1f5e45 100644 --- a/transcripts/uncorrected/48.txt +++ b/transcripts/uncorrected/48.txt @@ -1 +1 @@ -I'd like to create an app that is a meeting documentation assistant and it can provide three outputs from a voice input. So there's a voice recorder, so the user can record a voice note, pause, stop and retake, and then send. Once the voice note is sent, the user selects whether they want to generate a meeting minutes, an agenda for an upcoming meeting, so meeting agenda, or just those two actually.

And then if they choose meeting agenda, it'll also generate a short version that can fit in a calendar description, and a suggested meeting title. Upon receiving this from the user, it gets sent to Gemini, which analyzes and parses the audio, and then generates the minutes or agenda according to what the user selects, with an automatically generated title and a body that is formatted in Markdown but renders in rich text. The user can download the resulting file; to redo it, the user would just clear the recording and start again.

It should also be able to automatically detect start time, end time, participants, and action items, and it will put those in organized fields in the output; maybe the user can edit those to rectify any mistakes. And then when they click download, it will combine the corrected or uncorrected version, as the case may be, to generate the actual document for the minutes or the agenda. \ No newline at end of file +Google ID8 to Try would be one of the apps that connects with the Google Workspace services. Which I don't know, maybe they've circumvented their general cautiousness.

Like voice-to-email: to send an email, you record a voice memo; it transcribes it, checks your contacts, generates an email, shows you a draft, asks is that okay, and then it sends. \ No newline at end of file diff --git a/transcripts/uncorrected/49.txt b/transcripts/uncorrected/49.txt index 243f36cf36c052964af7ebe83a792dae9e67d205..353b380ddee0d6134e7cfc905de9171524ef566e 100644 --- a/transcripts/uncorrected/49.txt +++ b/transcripts/uncorrected/49.txt @@ -1 +1 @@ -I'd like to create an app which will do the following. It's a voice-to-voice app. The user will record a voice message in the app. The voice recording gets sent to Gemini with a transcript. Gemini's task is to create an abbreviated version of the voice message, as short as possible, essentially cleaning it up. This stage is not shown to the user.

But what happens next is that it gets text to speech, it gets synthesized, the user can choose between a male or a female voice. Yeah, and once that, once the generated audio is created, it presents to the user, the user can download it. So it's essentially taking audio from the user, cleaning it, condensing it, synthesizing it, and then download.

Come up with an imaginative name for this use case. \ No newline at end of file +I'd like to create an app that does the following. The user will paste an image or multiple images into the image upload feature. It'll run it through Gemini and it will attempt to extract the following fields: Serial Number, Model Number, Manufacturer, in a text field it will OCR readable text, Country of Manufacture.

And then, based upon the detected product, the manufacturer, the part number, and the serial number, it will provide a one-line description, a multi-line description, and a spec sheet. It will provide the year it was first released on the market, and its age in years based on first release minus the current time, correct to one decimal place.
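
The age field described above is a small calculation. Since a web search will likely yield only a release year, this sketch anchors the age to January 1st of that year, which is an assumption:

```python
from datetime import date

def age_years(first_release_year, today=None):
    """Product age in years, to one decimal place, anchored to Jan 1 of the release year."""
    today = today or date.today()
    days = (today - date(first_release_year, 1, 1)).days
    return round(days / 365.25, 1)
```

Dividing by 365.25 absorbs leap years well enough for a one-decimal display value.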

And a deprecation level, from almost deprecated to fully deprecated to still on market, plus RRP, the last as a checkbox. So it'll basically take an image and then extract all these fields, based on the initial OCR and then on a web search complementing that. \ No newline at end of file diff --git a/transcripts/uncorrected/5.txt b/transcripts/uncorrected/5.txt index 72dd47f2927e95f6a555120604796efb0f7010e8..0f9f01aeb1efa9b56a188dbecffed93a32cfd7c5 100644 --- a/transcripts/uncorrected/5.txt +++ b/transcripts/uncorrected/5.txt @@ -1 +1 @@ -Your task is to take this system prompt and rewrite it for implementation in a structured AI system.

In order to do so, adhere to the following instructions.

Within the text of the prompt itself, define the JSON output that the AI should be constrained to giving.

And instruct the AI tool that it is working in a structured workflow and must only return valid JSON.

Create a folder for the prompt.

And add the following in addition to the rewritten prompt text.

You should also create a .json file containing an OpenAPI-compliant JSON schema, and finally create another JSON file called object.json which contains just the JSON object. \ No newline at end of file +This repository contains a collection of slash commands which I use with Claude Code.

I capture some of the slash commands using speech to text.

The slash commands that have been captured with dictation frequently lack elements like punctuation and paragraph spacing, and they may occasionally contain words that were mistranscribed.

Please recurse through the directories and correct slash commands which you can find which were missing these basic textual features, but do not limit your fixes to only those containing these defects; rather, consider in your editing any slash commands which need to be rewritten for optimal intelligibility. \ No newline at end of file diff --git a/transcripts/uncorrected/50.txt b/transcripts/uncorrected/50.txt index 35a55fa10abb62fbf49bc2c38d73e8cc53fca620..da218ad130c3c5a5f3ca672509c6c517f4fa87f2 100644 --- a/transcripts/uncorrected/50.txt +++ b/transcripts/uncorrected/50.txt @@ -1 +1 @@ -This is called Impact Report Finder. The objective is that the user will provide the name of a company and the AI tool, Gemini, will attempt to find any voluntary sustainability disclosures, impact disclosures that they've written from the internet, and it will sort them by year. If they include data about their GHG emissions there will be a tick symbol, and there will be a link to the result and there will be a direct link to the PDF.

So after the user provides the name of the company, there can be a... if Gemini needs to disambiguate, it will ask the user in a text box below, "can you clarify", and then the user can hit submit again; otherwise, rather than an interactive chat app, it just provides those search results in that specific format with the reports chronologically by year, and if there's multiple ones per year, by date of release, and then if they have GHG data, a link to the data sheet if it's separate, or just the PDF, but basically an annotated table of links. \ No newline at end of file +I'd like to create an app that does the following. The user will paste a screenshot from their calendar or there's a text field for calendar entries for a certain time period. Below that there is a voice recorder. The voice recorder will let the user record a voice message: record, pause, stop, and/or retake.

The user is instructed to narrate their timesheet for the week, and the user can also select a date for week commencing, just to validate what the first date that they're referring to in this timesheet is. When those three fields are provided by the user, they get sent to Gemini, and Gemini will then generate a timesheet based upon the user's description with activities per day.

The meeting information that was received will be added, so it might diarize specific meetings that were referenced, combining the two sets of data. And finally, if the user includes a time-spent estimate, how many hours were spent per day on a certain project or task, it will then calculate the estimated total hours spent, and then a summary section.

This will be provided as a document which is created in Markdown; it's rendered in rich text on the screen, and the user can click download. If they do that, it'll download the timesheet as a Markdown file with the file name automatically set to "timesheet for week commencing" in machine-readable case. \ No newline at end of file diff --git a/transcripts/uncorrected/51.txt b/transcripts/uncorrected/51.txt index e3960e6d457375f71a0aa63d07c4c8ad4af74fc2..0ec335394a72e80887a3672f290bc5828d8227e0 100644 --- a/transcripts/uncorrected/51.txt +++ b/transcripts/uncorrected/51.txt @@ -1 +1 @@ -Okay, I'd like to create a sustainability report parser which will operate as follows. The user will provide a link to a sustainability disclosure or, better, they will upload a PDF. That's the expectation.

Upon receiving the PDF from the user, the app will load the PDF in a frame. Gemini will identify on which page the sustainability disclosure data for Scope 3, 2, 1 emissions is reported. And the PDF will load up in the frame, the viewer, skipped ahead to that page, with the data highlighted with a yellow overlay, a slight highlight.

And beneath it, Gemini will output the table for the top level, in other words the summary of the Scope 3, 2, 1 emissions, with a short text description of what they were in summary, the units detected, Scope 3, 2, 1 itemized, then a disclaimer under that that this detection is based on automated processing and may be incorrect, and so on. \ No newline at end of file +I'd like to create an app that is a meeting documentation assistant and it can provide three outputs from a voice input. So there's a voice recorder, so the user can record a voice note, pause, stop and retake, and then send. Once the voice note is sent, the user selects whether they want to generate meeting minutes or an agenda for an upcoming meeting, so meeting agenda, or just those two actually.

And then if they do meeting agenda, it'll also generate a short version that can fit in a calendar description and a suggested meeting title. Upon receiving this from the user, it gets sent to Gemini; it analyzes and parses the audio, and then generates a, well, minute or agenda according to what the user selects, with an automatically generated title and a body that is formatted in Markdown but renders in rich text. The user can download the original file, and for re-runs the user would just clear the recording and start again.

It should also be able to automatically detect start time, end time, participants, action items, and it can deliver a... It will put those in organized fields in the output, even though the... and maybe the user can edit those to rectify any mistakes. And then when they click download, it will combine the corrected or uncorrected version, as the case may be, to generate the actual document for the minutes or the agenda. \ No newline at end of file diff --git a/transcripts/uncorrected/52.txt b/transcripts/uncorrected/52.txt index 73fdefbd1c2ebcfad9ad59e23523ae1b8526edf2..243f36cf36c052964af7ebe83a792dae9e67d205 100644 --- a/transcripts/uncorrected/52.txt +++ b/transcripts/uncorrected/52.txt @@ -1 +1 @@ -Okay, so I'd like to add to the VoiceNote dataset manager. So I have really annotations, there's two main objectives for this project as I currently conceive of it. And I think on the front end it would be useful, when I'm uploading stuff and annotating, to have two separate sections for it, a little bit more clearly delineated. And so on.

So, if we have delineated, for example, where we have "upload new voice note", that can firstly just be called maybe "Upload"; next section, "Transcripts"; next section (and by "next section" I'm defining the headers), "Classification"; next section, "Annotations".

So in classification, I'll just add a few more recurrent ones that we should have. Prompt General, Development Prompt, Read Me Dictation, Social Media Post, and then in Annotations.

So, content issues: call that Audio Defects, and let's add one for significant background noise in audio quality issues. What I'd like to have, actually, maybe is, and again, in the process of defining the annotations we might have to sort of work backwards initially, but most of them haven't been annotated yet. I'm not going to start annotating until the schema is defined, so it would actually be a lagging annotation process.

The ones that are missing currently are background music. You have background noise, but I think background music is actually very important because from a copyright standpoint that could be an issue. And for multi-language, we don't actually even have English/Hebrew; I'd have to keep it open-ended as to what other languages are present. And I'd like to have one for background conversations, actually, and tagging by language: English, Hebrew, Arabic, Russian, French. These would be the ones that I encounter in my local environments a lot. \ No newline at end of file +I'd like to create an app which will do the following. It's a voice-to-voice app. The user will record a voice message; the voice recording happens in the app. The voice recording gets sent to Gemini with a transcript. Gemini's task is to create an abbreviated version of the voice message, as short as possible. Essentially cleaning it up. This stage is not shown to the user.

But what happens next is that it gets text-to-speech, it gets synthesized; the user can choose between a male or a female voice. And once the generated audio is created, it's presented to the user, and the user can download it. So it's essentially taking audio from the user, cleaning it, condensing it, synthesizing it, and then downloading.

Come up with an imaginative name for this use case. \ No newline at end of file diff --git a/transcripts/uncorrected/53.txt b/transcripts/uncorrected/53.txt new file mode 100644 index 0000000000000000000000000000000000000000..35a55fa10abb62fbf49bc2c38d73e8cc53fca620 --- /dev/null +++ b/transcripts/uncorrected/53.txt @@ -0,0 +1 @@ +This is called Impact Report Finder. The objective is that the user will provide the name of a company and the AI tool, Gemini, will attempt to find any voluntary sustainability disclosures, impact disclosures that they've written from the internet, and it will sort them by year. If they include data about their GHG emissions there will be a tick symbol, and there will be a link to the result and there will be a direct link to the PDF.

So after the user provides the name of the company, there can be a... if Gemini needs to disambiguate, it will ask the user in a text box below, "can you clarify", and then the user can hit submit again; otherwise, rather than an interactive chat app, it just provides those search results in that specific format with the reports chronologically by year, and if there's multiple ones per year, by date of release, and then if they have GHG data, a link to the data sheet if it's separate, or just the PDF, but basically an annotated table of links. \ No newline at end of file diff --git a/transcripts/uncorrected/54.txt b/transcripts/uncorrected/54.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3960e6d457375f71a0aa63d07c4c8ad4af74fc2 --- /dev/null +++ b/transcripts/uncorrected/54.txt @@ -0,0 +1 @@ +Okay, I'd like to create a sustainability report parser which will operate as follows. The user will provide a link to a sustainability disclosure or, better, they will upload a PDF. That's the expectation.

Upon receiving the PDF from the user, the app will load the PDF in a frame. Gemini will identify on which page the sustainability disclosure data for Scope 3, 2, 1 emissions is reported. And the PDF will load up in the frame, the viewer, skipped ahead to that page, with the data highlighted with a yellow overlay, a slight highlight.

And beneath it, Gemini will output the table for the top level, in other words the summary of the Scope 3, 2, 1 emissions, with a short text description of what they were in summary, the units detected, Scope 3, 2, 1 itemized, then a disclaimer under that that this detection is based on automated processing and may be incorrect, and so on. \ No newline at end of file diff --git a/transcripts/uncorrected/55.txt b/transcripts/uncorrected/55.txt new file mode 100644 index 0000000000000000000000000000000000000000..73fdefbd1c2ebcfad9ad59e23523ae1b8526edf2 --- /dev/null +++ b/transcripts/uncorrected/55.txt @@ -0,0 +1 @@ +Okay, so I'd like to add to the VoiceNote dataset manager. So I have really annotations, there's two main objectives for this project as I currently conceive of it. And I think on the front end it would be useful, when I'm uploading stuff and annotating, to have two separate sections for it, a little bit more clearly delineated. And so on.

So, if we have delineated, for example, where we have "upload new voice note", that can firstly just be called maybe "Upload"; next section, "Transcripts"; next section (and by "next section" I'm defining the headers), "Classification"; next section, "Annotations".

So in classification, I'll just add a few more recurrent ones that we should have. Prompt General, Development Prompt, Read Me Dictation, Social Media Post, and then in Annotations.

So, content issues: call that Audio Defects, and let's add one for significant background noise in audio quality issues. What I'd like to have, actually, maybe is, and again, in the process of defining the annotations we might have to sort of work backwards initially, but most of them haven't been annotated yet. I'm not going to start annotating until the schema is defined, so it would actually be a lagging annotation process.

The ones that are missing currently are background music. You have background noise, but I think background music is actually very important because from a copyright standpoint that could be an issue. And for multi-language, we don't actually even have English/Hebrew; I'd have to keep it open-ended as to what other languages are present. And I'd like to have one for background conversations, actually, and tagging by language: English, Hebrew, Arabic, Russian, French. These would be the ones that I encounter in my local environments a lot. \ No newline at end of file diff --git a/transcripts/uncorrected/6.txt b/transcripts/uncorrected/6.txt index 76af9ed38a7f3a464480738293afb78a25ff5929..35a25c66c27c2d44f0a64ca785442bcb2b03db07 100644 --- a/transcripts/uncorrected/6.txt +++ b/transcripts/uncorrected/6.txt @@ -1 +1 @@ -Okay, so here is the type of license that generally works for me for open source projects. I usually open source software because I've created something useful. I think other people might either find it helpful or develop upon the idea, taking my idea and ability further. Attribution is always appreciated, but I'd only want to make it mandatory if that wouldn't really sort of create friction with other people who'd like to use a project.

But attribution really helps me because it opens up the relationship and connectedness of open sourcing, because if someone were to use it downstream, they have a way to sort of get in touch with me. People commercializing open source software doesn't sit very well with me, but again, I'd be very reluctant to add that as a limitation.

Other than that, nothing else really stands out to me as something that I'd require. Like if people took it in any other direction, it's fine. The only one I think about sometimes is obviously no one wants something that they create to be sort of misused or used for harm. And one also doesn't want to end up with lawsuits if something they create is misused, so I don't know if there's any legal language that can create a little bit of protection around those potentials. \ No newline at end of file +This repository contains a folder of screenshots.

The intended use of the screenshots is that they will be integrated into the README or other documentation to demonstrate the UI of the app.

It's important therefore that the screenshots have descriptive file names.

Please rename the screenshots for this purpose and integrate them into the README in the most appropriate section. \ No newline at end of file diff --git a/transcripts/uncorrected/7.txt b/transcripts/uncorrected/7.txt index b57417cde1c5303a489404bcd259f827ea2cf7a6..c3e6aec46313e6c703697e4fcc48f050db3015c1 100644 --- a/transcripts/uncorrected/7.txt +++ b/transcripts/uncorrected/7.txt @@ -1 +1 @@ -The problem is that we looked at this before, and when the router reboots it's not bringing up the Cloudflare tunnel.

So, see, it's working now, but just see what can be done; we need to make very certain that this does start automatically on reboot. \ No newline at end of file +What's the most professional way to install a package on Linux? If I create an executable and copy that into a directory on PATH, such that I can call it, is that considered a worse way to install applications than through a Debian package? \ No newline at end of file diff --git a/transcripts/uncorrected/8.txt b/transcripts/uncorrected/8.txt index 4b73fda258009e56d8fc1e8ade93312193c751d0..72dd47f2927e95f6a555120604796efb0f7010e8 100644 --- a/transcripts/uncorrected/8.txt +++ b/transcripts/uncorrected/8.txt @@ -1 +1 @@ -I recently picked up a Samsung Galaxy 6 smartwatch just to try out the idea basically.

And my only need was really for a dual time display, local and UTC, and the day display.

It was about $100 give or take, so a very basic entry level that would sync with my OnePlus.

If it turns out that I really like it...

The other requirement was a good microphone for voice recordings.

Even if it's not the best and my phone is better, it would be nice to be able to use it for that because I take a lot of voice memos during the day.

If I turn out to really like it, what would you suggest as a good upgrade?

I tend to like everything that involves getting under the hood with technology.

So I wasn't thrilled about buying a Samsung, but it was what was available for the price point approximately. \ No newline at end of file +Your task is to take this system prompt and rewrite it for implementation in a structured AI system.

In order to do so, adhere to the following instructions.

Within the text of the prompt itself, define the JSON output that the AI should be constrained to giving.

And instruct the AI tool that it is working in a structured workflow and must only return valid JSON.

Create a folder for the prompt.

And add the following in addition to the rewritten prompt text.

You should also create a .json file containing an OpenAPI-compliant JSON schema, and finally create another JSON file called object.json which contains just the JSON object. \ No newline at end of file diff --git a/transcripts/uncorrected/9.txt b/transcripts/uncorrected/9.txt index dd20ba87d4e27f321810d0504c0736c4e154d407..76af9ed38a7f3a464480738293afb78a25ff5929 100644 --- a/transcripts/uncorrected/9.txt +++ b/transcripts/uncorrected/9.txt @@ -1 +1 @@ -I recently picked up a smartwatch from Samsung Galaxy and I'm curious about one thing that would be really helpful that I thought of.

I'm always stressed about losing or potentially losing phone, wallet, keys.

And for all of these things, phone, wallet, keys, I use a Pebblebee tracker now.

So I'm wondering if there's any way or app that can do something like geofencing in which if any of the things are...

Maybe you can turn it on and off at certain times but they're in.

If they move out of the zone, you get an alert notification, the smartwatch vibrates, or whatever. \ No newline at end of file +Okay, so here is the type of license that generally works for me for open source projects. I usually open source software because I've created something useful. I think other people might either find it helpful or develop upon the idea, taking my idea and ability further. Attribution is always appreciated, but I'd only want to make it mandatory if that wouldn't really sort of create friction with other people who'd like to use a project.

But attribution really helps me because it opens up the relationship and connectedness of open sourcing, because if someone were to use it downstream, they have a way to sort of get in touch with me. People commercializing open source software doesn't sit very well with me, but again, I'd be very reluctant to add that as a limitation.

Other than that, nothing else really stands out to me as something that I'd require. Like if people took it in any other direction, it's fine. The only one I think about sometimes is obviously no one wants something that they create to be sort of misused or used for harm. And one also doesn't want to end up with lawsuits if something they create is misused, so I don't know if there's any legal language that can create a little bit of protection around those potentials. \ No newline at end of file